fix: copy

This commit is contained in:
0xSage 2023-10-10 14:48:03 +08:00
parent 6be342c51c
commit bac2257989
7 changed files with 308 additions and 357 deletions

View File

@ -1,54 +0,0 @@
---
title: Cloud vs. In-House Servers
---
## How to Decide for Your Business
In recent months, one of the most critical infrastructure decisions for organizations has been whether to opt for cloud-based servers or in-house servers to run LLMs. Finding the right balance for your needs is essential.
As open-source Large Language Models like LLaMA 2 and Falcon gain prominence across various industries, business leaders are grappling with significant infrastructure decisions. The fundamental question arises: should your company host LLMs in the cloud or invest in in-house servers? The cloud offers ease of deployment, flexible scalability, and alleviates maintenance burdens, making it an attractive choice for many mainstream business applications. However, this convenience often comes at the cost of reliance on internet connectivity and relinquishing some control over your data. In contrast, in-house servers necessitate a substantial upfront investment but provide complete autonomy, predictable costs, and the ability to customize your infrastructure.
## In-House Servers
"Great power comes with great responsibility." Running your own in-house server for large language models (LLMs) involves setting up and maintaining your hardware and software infrastructure to run and host LLMs. This can be complex and expensive due to the initial equipment investment.
### Pros of Running LLMs Locally
- **Full Control:** In-house servers provide complete control over hardware, software, and configurations. This level of customization is invaluable for businesses with unique or specialized requirements.
- **Data Privacy:** For organizations handling highly sensitive data, in-house servers provide greater control over data privacy and security.
- **Low Latency:** In-house servers can provide low-latency access to local users, ensuring optimal performance for critical applications.
- **Predictable Costs:** Ongoing maintenance costs can be more predictable, and hardware upgrades can be planned according to the organization's budget and timeline.
### Cons of Running LLMs Locally
- **High Initial Costs:** Building and maintaining an in-house server involves significant capital expenditures. This can be a barrier for small businesses or startups with limited budgets.
- **Disaster Recovery:** In-house servers may not offer the same level of redundancy and disaster recovery capabilities as major cloud providers. Ensuring business continuity in the face of hardware failures or disasters becomes the organization's responsibility.
- **Maintenance Burden:** In-house server management necessitates a dedicated IT team for maintenance, updates, security, and backups, diverting resources from research and development.
- **Limited Scalability:** Scaling up in-house servers can be complex and costly. Additional hardware acquisitions and installations can be time-consuming.
## Cloud Servers
Running LLMs in the cloud means using cloud computing resources to train, deploy, and manage large language models (LLMs). Cloud computing allows access to powerful computational resources on demand, without the need to maintain expensive hardware or infrastructure. This can make it easier and more cost-effective to develop, test, and use advanced AI models like LLMs.
### Pros of Using LLMs in the Cloud
- **Scalability:** One of the foremost advantages of cloud servers is their scalability. Cloud providers offer the ability to scale resources up or down on-demand. This means that businesses can efficiently accommodate fluctuating workloads without the need for significant upfront investments in hardware.
- **Low Initial Costs:** You don't have to invest in on-site hardware or incur capital expenses. This is particularly suitable for smaller companies that might otherwise quickly outgrow their on-premises capacity.
- **Ease of Use:** The cloud platform provides a variety of APIs, tools, and language frameworks that make it significantly easier to create, train, and deploy machine learning models.
- **Accessibility:** Cloud servers are accessible from anywhere with an internet connection. This enables remote work and collaboration across locations. In-house servers require employees to be on-premises.
- **Managed Services:** Cloud providers offer a plethora of managed services, such as automated backups, security solutions, and database management. This offloads many administrative tasks, allowing businesses to focus on their core objectives.
- **Built-in AI Accelerators:** Cloud providers offer hardware accelerators like Nvidia GPUs and Google TPUs that are optimized for AI workloads and challenging for on-prem environments to match.
### Cons of Using LLMs in the Cloud
- **Limited Control:** Cloud users have limited control over the underlying infrastructure. This may not be suitable for businesses with specific hardware or software requirements that cannot be met within the cloud provider's ecosystem.
- **Data Security Concerns:** Entrusting sensitive data to a third-party cloud provider can raise security concerns. While major cloud providers employ robust security measures and comply with industry standards, businesses must still take responsibility for securing their data within the cloud environment.
- **Internet Dependency:** Cloud servers rely on internet connectivity. Any disruptions in internet service can impact access to critical applications and data. Businesses should have contingency plans for such scenarios.
- **Cost Unpredictability:** While cloud bills typically start small, costs for GPUs, data storage, and bandwidth can grow rapidly as workloads scale, making long-term TCO difficult to predict.
- **Lack of Customization:** In-house servers allow full hardware and software customization to meet specific needs. Cloud environments offer less customization and control.
## Conclusion
The decision to run LLMs in the cloud or on in-house servers is not one-size-fits-all. It depends on your business's specific needs, budget, and security considerations. Cloud-based LLMs offer scalability and cost-efficiency but come with potential security concerns, while in-house servers provide greater control, customization, and cost predictability.
In some situations, using a mix of cloud and in-house resources can be the best way to go. Businesses need to assess their needs and assets carefully to pick the right method for using LLMs in the ever-changing world of AI technology.

View File

@ -0,0 +1,56 @@
---
title: Cloud vs. Self-hosting Your AI
---
The choice of where to run your AI - on GPU cloud services, on-premises hardware, or via an API provider - involves various trade-offs. The following is a naive exploration of the pros and cons of renting vs. self-hosting.
## Cost Comparison
The following estimations use these general assumptions:
| | Self-Hosted | GPT 4.0 | GPU Rental |
| ---------- | ---------------------------------------- | -------------- | ------------------ |
| Unit Costs | $10k upfront for 2x4090s (5 year amort.) | $0.00012/token | $4.42 for 1xH100/h |
- An average of 800 tokens (input & output) per request
- Inference speed of 24 tokens per second
When operating at low capacity (on the order of 70 requests per month, so the fixed self-hosted cost is spread over few requests):
| | Self-Hosted | GPT 4.0 | GPU Rental |
| ---------------- | ----------- | ------- | ---------- |
| Cost per Request | $2.33 | $0.10 | $0.04 |
When operating at high capacity, i.e. running 24 hours a day (~77.8k requests per month):
| | Self-Hosted | GPT 4.0 | GPU Rental |
| -------------- | ------------ | ------- | ---------- |
| Cost per Month | $166 (fixed) | $7465 | $3182 |
There is also an incremental cost for large-context use cases. For example, writing a 500-word essay summarizing Tolstoy's "War and Peace":
| | Self-Hosted | GPT 4.0 | GPU Rental |
| ----------------------- | -------------------- | ------- | ---------- |
| Cost of "War and Peace" | (upfront fixed cost) | $94 | $40 |
> **Takeaway**: Renting on cloud or using an API is great for initially scaling. However, it can quickly become expensive when dealing with large datasets and context windows. For predictable costs, self-hosting is an attractive option.
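The arithmetic behind these tables is simple enough to reproduce. The following Python sketch uses only the assumptions stated above; figures are approximate and the variable names are illustrative:

```python
# Assumptions from this page: 800 tokens/request, 24 tokens/s inference,
# $10k hardware amortized over 5 years, $0.00012/token API pricing,
# $4.42/hour for a rented H100.
TOKENS_PER_REQUEST = 800
TOKENS_PER_SECOND = 24

# Self-hosted: a fixed monthly cost, regardless of volume
self_hosted_monthly = 10_000 / (5 * 12)                   # ~$166/month

# API: cost scales linearly with tokens
api_per_request = TOKENS_PER_REQUEST * 0.00012            # ~$0.10

# GPU rental: cost scales with compute time
seconds_per_request = TOKENS_PER_REQUEST / TOKENS_PER_SECOND
rental_per_request = (4.42 / 3600) * seconds_per_request  # ~$0.04

# At full utilization (24h/day, 30 days):
requests_per_month = TOKENS_PER_SECOND * 86_400 * 30 / TOKENS_PER_REQUEST
print(f"{requests_per_month:,.0f} requests/month")        # ~77.8k
print(f"self-hosted: ${self_hosted_monthly:.0f}/month (fixed)")
print(f"API:         ${api_per_request * requests_per_month:,.0f}/month")
print(f"GPU rental:  ${4.42 * 24 * 30:,.0f}/month")
```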
## Business Comparison
Other considerations include:
| | Self-Hosted | GPT 4.0 | GPU Rental |
| ----------------------- | ----------- | ------- | ---------- |
| Data Privacy | ✅ | ❌ | ❌ |
| Offline Mode | ✅ | ❌ | ❌ |
| Customization & Control | ✅ | ❌ | ✅ |
| Auditing | ✅ | ❌ | ✅ |
| Setup Complexity | ❌ | ✅ | ✅ |
| Setup Cost | ❌ | ✅ | ✅ |
| Maintenance | ❌ | ✅ | ❌ |
## Conclusion
The decision to run LLMs via an API, on rented cloud GPUs, or on self-hosted servers is not one-size-fits-all. It depends on your business's specific needs, budget, and security considerations. Cloud and API offerings provide scalability and low upfront cost but come with potential privacy concerns, while self-hosted servers provide greater control, customization, and cost predictability.
In some situations, using a mix of cloud and in-house resources can be the best way to go. Businesses need to assess their needs and assets carefully to pick the right method for using LLMs in the ever-changing world of AI technology.

View File

@ -2,59 +2,7 @@
title: "GPU vs CPU: What's the Difference?"
---
## Introduction
In the realm of machine learning, the choice of hardware can be the difference between slow, inefficient training and lightning-fast model convergence. Central Processing Units (CPUs) and Graphics Processing Units (GPUs) are the two primary players in this computational showdown. In this article, we'll delve deep into the architecture, pros, and cons of CPUs and GPUs for machine learning tasks, with a focus on their application in Large Language Models (LLMs).
## Difference between CPU and GPU
### CPU Basics:
Central Processing Units, or CPUs, are the workhorses of traditional computing. They consist of various components, including the Arithmetic Logic Unit (ALU), registers, and cache. CPUs are renowned for their precision in executing instructions and their versatility across a wide range of computing tasks.
### Pros of CPU for Machine Learning:
**Precision:** CPUs excel in precise numerical calculations, making them ideal for tasks that require high accuracy.
**Versatility:** They can handle a wide variety of tasks, from web browsing to database management.
### Cons of CPU for Machine Learning:
**Limited Parallelism:** CPUs are inherently sequential processors, which makes them less efficient for parallelizable machine learning tasks.
**Slower for Complex ML Tasks:** Deep learning and other complex machine learning algorithms can be slow on CPUs due to their sequential nature.
### GPU Basics:
Graphics Processing Units, or GPUs, were originally designed for rendering graphics, but their architecture has proven to be a game-changer for machine learning. GPUs consist of numerous cores and feature a highly efficient memory hierarchy. Their parallel processing capabilities set them apart.
### Pros of GPU for Machine Learning:
**Massive Parallelism:** GPUs can process thousands of parallel threads simultaneously, making them exceptionally well-suited for machine learning tasks that involve matrix operations.
**Speed:** Deep learning algorithms benefit greatly from GPU acceleration, resulting in significantly faster training times.
### Cons of GPU for Machine Learning:
**Higher Power Consumption:** GPUs can be power-hungry, which might impact operational costs.
**Limited Flexibility:** They are optimized for parallelism and may not be as versatile as CPUs for non-parallel tasks.
![CPU VS GPU](https://media.discordapp.net/attachments/964896173401976932/1157998193741660222/CPU-vs-GPU-rendering.png?ex=651aa55b&is=651953db&hm=a22c80ed108a0d25106a20aa25236f7d0fa74167a50788194470f57ce7f4a6ca&=&width=807&height=426)
## Similarities Between CPUs and GPUs
CPUs (Central Processing Units) and GPUs (Graphics Processing Units) are both integral hardware components that power computers, functioning as the "brains" of these devices. Despite their distinct purposes, they share several key internal components that contribute to their functionality:
**Cores:** Both CPUs and GPUs have cores that perform the actual computation. CPUs have fewer but more powerful cores, while GPUs have many smaller cores built for parallel work.
**Memory:** Both use fast memory as a kind of short-term working store. CPUs and GPUs have different levels of memory, but in each case it keeps frequently used data close to the cores so they can process it quickly.
**Control Unit:** The control unit coordinates instruction execution and keeps everything running smoothly. Higher clock frequencies generally mean faster processing, though CPUs and GPUs remain suited to different kinds of tasks.
In short, CPUs and GPUs share core elements that help them process information quickly, even though they have different roles in a computer.
## Summary of differences: CPU vs. GPU
## CPU vs. GPU
| | CPU | GPU |
| ------------------- | ------------------------------------------------------------------------ | ------------------------------------------------------- |
@ -63,16 +11,4 @@ In short, CPUs and GPUs share core elements that help them process information q
| **Design** | Fewer, more powerful cores | More cores than CPUs, but less powerful than CPU cores |
| **Best suited for** | General-purpose computing applications | High-performance computing applications |
## CPU vs. GPU in Machine Learning
When choosing between CPUs and GPUs for machine learning, the decision often boils down to the specific task at hand. For tasks that rely on precision and versatility, CPUs may be preferred. In contrast, for deep learning and highly parallelizable tasks, GPUs shine.
For example, training a Large Language Model (LLM) like Llama 2 on a CPU would be painfully slow and inefficient. A GPU, on the other hand, can speed up the training process dramatically, making it the preferred choice for LLMs.
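To see this gap concretely, here is a minimal micro-benchmark sketch. It assumes PyTorch is installed and a CUDA-capable GPU is available; the exact speedup depends entirely on your hardware:

```python
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Time one n x n matrix multiplication on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    _ = a @ b  # warm-up: the first CUDA call includes one-time setup cost
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU kernel to actually finish
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu') * 1000:.1f} ms")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda') * 1000:.1f} ms")
```

Matrix multiplication dominates LLM workloads, which is why this single operation is a reasonable proxy for the CPU/GPU gap.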
## Future Trends and Developments
The world of hardware for machine learning is constantly evolving. CPUs and GPUs are becoming more powerful and energy-efficient. Additionally, specialized AI hardware, such as Neural Processing Units (NPUs), is emerging. These developments are poised to further revolutionize machine learning, making it essential for professionals in the field to stay updated with the latest trends.
## Conclusion
In the battle of brains for machine learning, the choice between CPUs and GPUs depends on the specific requirements of your tasks. CPUs offer precision and versatility, while GPUs excel in parallel processing and speed. Understanding the nuances of these architectures is crucial for optimizing machine learning workflows, especially when dealing with Large Language Models (LLMs). As technology advances, the lines between CPU and GPU capabilities may blur, but for now, choosing the right hardware can be the key to unlocking the full potential of your machine learning endeavors. Stay tuned for the ever-evolving landscape of AI hardware, and choose wisely to power your AI-driven future.

View File

@ -2,7 +2,7 @@
title: Recommended AI Hardware by Budget
---
> :warning: **Warning:** Hardware suggestions on this site are only recommendations. Do your own research before any purchase. I'm not liable for compatibility, performance or other issues. Products can become outdated quickly. Use judgement and check return policies. Treat as informative guide only.
> :warning: **Warning:** Do your own research before any purchase. Jan is not liable for compatibility, performance or other issues. Products can become outdated quickly.
## Entry-level PC Build at $1000

View File

@ -1,39 +1,7 @@
---
title: Recommended AI Hardware
title: Selecting AI Hardware
---
## Overview
Large language models (LLMs) have changed how computers handle human-like text and language tasks. However, using them through APIs or cloud services can be costly and limiting. What if you could run these powerful models directly on your personal computer, without the extra expense? Running them on your own machine provides flexibility, control, and cost savings. In this guide, I'll show you how to do just that, unlocking their potential without relying on expensive APIs or cloud services.
To run large language models (LLMs) on a home machine, you will need a computer with a GPU that can handle the large amount of data and computation required for inference.
The GPU is the pivotal component for running LLMs: it carries the bulk of the computation, and its performance directly dictates inference speed.
While some model variants and implementations can run on less powerful hardware, the GPU remains the cornerstone of the system, delivering far higher throughput than a CPU alone.
## GPU Selection
Selecting the optimal GPU for running Large Language Models (LLMs) on your home PC is a decision influenced by your budget and the specific LLMs you intend to work with. Your choice should strike a balance between performance, efficiency, and cost-effectiveness.
In general, the following GPU features are important for running LLMs:
- **High VRAM:** LLMs are typically very large and complex models, so they require a GPU with a high amount of VRAM. This will allow the model to be loaded into memory and processed efficiently.
- **CUDA Compatibility:** When running LLMs on a GPU, CUDA compatibility is paramount. CUDA is NVIDIA's parallel computing platform, and it plays a vital role in accelerating deep learning tasks. LLMs, with their extensive matrix calculations, heavily rely on parallel processing. Ensuring your GPU supports CUDA is like having the right tool for the job. It allows the LLM to leverage the GPU's parallel processing capabilities, significantly speeding up model training and inference.
- **Number of CUDA, Tensor, and RT Cores:** High-performance NVIDIA GPUs have both CUDA and Tensor cores. These cores are responsible for executing the neural network computations that underpin LLMs' language understanding and generation. The more CUDA cores your GPU has, the better equipped it is to handle the massive computational load that LLMs impose. Tensor cores in your GPU, further enhance LLM performance by accelerating the critical matrix operations integral to language modeling tasks.
- **Generation (Series)**: When selecting a GPU for LLMs, consider its generation or series (e.g., RTX 30 series). Newer GPU generations often come with improved architectures and features. For LLM tasks, opting for the latest generation can mean better performance, energy efficiency, and support for emerging AI technologies. Avoid purchasing RTX 2000-series GPUs, which are outdated at this point.
### Here are some of the best GPU options for this purpose:
1. **NVIDIA RTX 3090**: The NVIDIA RTX 3090 is a high-end GPU with a substantial 24GB of VRAM. This copious VRAM capacity makes it exceptionally well-suited for handling large LLMs. Moreover, it's known for its relative efficiency, meaning it won't overheat or strain your home PC's cooling system excessively. The RTX 3090's robust capabilities are a boon for those who need to work with hefty language models.
2. **NVIDIA RTX 4090**: If you're looking for peak performance and can afford the investment, the NVIDIA RTX 4090 represents the pinnacle of GPU power. Boasting 24GB of VRAM and featuring a cutting-edge Tensor Core architecture tailored for AI workloads, it outshines the RTX 3090 in terms of sheer capability. However, it's important to note that the RTX 4090 is also pricier and more power-hungry than its predecessor, the RTX 3090.
3. **AMD Radeon RX 6900 XT**: On the AMD side, the Radeon RX 6900 XT stands out as a high-end GPU with 16GB of VRAM. While it may not quite match the raw power of the RTX 3090 or RTX 4090, it strikes a balance between performance and affordability. Additionally, it tends to be more power-efficient, which could translate to a more sustainable and quieter setup in your home PC.
If budget constraints are a consideration, there are more cost-effective GPU options available:
- **NVIDIA RTX 3070**: The RTX 3070 is a solid mid-range GPU that can handle LLMs effectively. While it may not excel with the most massive or complex language models, it's a reliable choice for users looking for a balance between price and performance.
- **AMD Radeon RX 6800 XT**: Similarly, the RX 6800 XT from AMD offers commendable performance without breaking the bank. It's well-suited for running mid-sized LLMs and provides a competitive option in terms of both power and cost.
When selecting a GPU for LLMs, remember that it's not just about the GPU itself. Consider the synergy with other components in your PC:
- **CPU**: To ensure efficient processing, pair your GPU with a powerful CPU. LLMs benefit from fast processors, so having a capable CPU is essential.
@ -42,6 +10,41 @@ When selecting a GPU for LLMs, remember that it's not just about the GPU itself.
By taking all of these factors into account, you can build a home PC setup that's well-equipped to handle the demands of running LLMs effectively and efficiently.
## GPU Selection
Selecting the optimal GPU for running Large Language Models (LLMs) on your home PC is a decision influenced by your budget and the specific LLMs you intend to work with. Your choice should strike a balance between performance, efficiency, and cost-effectiveness.
### GPU Comparison
| GPU                   | Price ($) | Cores | VRAM (GB) | Bandwidth (TB/s) | Power |
| --------------------- | --------- | ----- | --------- | ---------------- | ----- |
| Nvidia H100 | 40000 | 18432 | 80 | 2 | |
| Nvidia A100 | 15000 | 6912 | 80 | | |
| Nvidia A100 | 7015 | 6912 | 40 | | |
| Nvidia A10 | 2799 | 9216 | 24 | | |
| Nvidia RTX A6000 | 4100 | 10752 | 48 | 0.768 | |
| Nvidia RTX 6000 | 6800 | 4608 | 46 | | |
| Nvidia RTX 4090 Ti | 2000 | 18176 | 24 | | |
| Nvidia RTX 4090 | 1800 | 16384 | 24 | 1.008 | |
| Nvidia RTX 3090 | 1450 | 10496 | 24 | | |
| Nvidia RTX 3080 | 700 | 8704 | 12 | | |
| Nvidia RTX 3070 | 900 | 6144 | 8 | | |
| Nvidia L4 | 2711 | 7424 | 24 | | |
| Nvidia T4 | 2299 | 2560 | 16 | | |
| AMD Radeon RX 6900 XT | 1000 | 5120 | 16 | | |
| AMD Radeon RX 6800 XT | 420 | 4608 | 16 | | |
\*Market prices as of Oct 2023 via Amazon/PCMag
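One rough way to read this table is to rank cards by VRAM per dollar, since VRAM is usually the binding constraint for running LLMs. A small sketch using a few rows copied from the table above (list prices as of Oct 2023; extend it with the rest of the table as needed):

```python
# (price_usd, vram_gb) pairs taken from the table above
gpus = {
    "RTX 4090":          (1800, 24),
    "RTX 3090":          (1450, 24),
    "RTX A6000":         (4100, 48),
    "Radeon RX 6900 XT": (1000, 16),
    "Radeon RX 6800 XT": (420, 16),
}

# Sort by GB of VRAM per dollar, best value first
for name, (price, vram) in sorted(gpus.items(),
                                  key=lambda kv: kv[1][1] / kv[1][0],
                                  reverse=True):
    print(f"{name:<20} {vram / price * 1000:5.1f} GB per $1000")
```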
### Other Considerations
In general, the following GPU features are important for running LLMs:
- **High VRAM:** LLMs are typically very large and complex models, so they require a GPU with a high amount of VRAM. This will allow the model to be loaded into memory and processed efficiently.
- **CUDA Compatibility:** When running LLMs on a GPU, CUDA compatibility is paramount. CUDA is NVIDIA's parallel computing platform, and it plays a vital role in accelerating deep learning tasks. LLMs, with their extensive matrix calculations, heavily rely on parallel processing. Ensuring your GPU supports CUDA is like having the right tool for the job. It allows the LLM to leverage the GPU's parallel processing capabilities, significantly speeding up model training and inference.
- **Number of CUDA, Tensor, and RT Cores:** High-performance NVIDIA GPUs have both CUDA and Tensor cores. These cores are responsible for executing the neural network computations that underpin LLMs' language understanding and generation. The more CUDA cores your GPU has, the better equipped it is to handle the massive computational load that LLMs impose. Tensor cores in your GPU, further enhance LLM performance by accelerating the critical matrix operations integral to language modeling tasks.
- **Generation (Series)**: When selecting a GPU for LLMs, consider its generation or series (e.g., RTX 30 series). Newer GPU generations often come with improved architectures and features. For LLM tasks, opting for the latest generation can mean better performance, energy efficiency, and support for emerging AI technologies. Avoid purchasing RTX 2000-series GPUs, which are outdated at this point.
## CPU Selection
Selecting the right CPU for running Large Language Models (LLMs) on your home PC is contingent on your budget and the specific LLMs you intend to work with. It's a decision that warrants careful consideration, as the CPU plays a pivotal role in determining the overall performance of your system.
@ -132,30 +135,7 @@ Unified Memory Architecture, as implemented in Apple's M1 and M2 series processo
The M1 and M2 Pro/Max chips offer varying levels of unified memory bandwidth, further underscoring their prowess in handling data-intensive tasks like AI processing. The M1/M2 Pro chip boasts an impressive capacity of up to 200 GB/s of unified memory bandwidth, while the M1/M2 Max takes it a step further, supporting up to a staggering 400 GB/s of unified memory bandwidth. This means that regardless of the complexity and demands of the AI tasks at hand, these Apple laptops armed with M1 or M2 processors are well-equipped to handle them with unparalleled efficiency and speed.
## Optimizing Memory Speed for AI Models
To utilize LLMs effectively, you need to understand memory speed, as it plays a critical role in determining inference speed. For every new token (essentially a piece of text) a model generates, its full weights must be read from RAM or VRAM. For instance, a 4-bit 13-billion-parameter CodeLlama model consumes roughly 7.5GB of RAM.
To understand the impact of memory bandwidth, consider an example. If your system has a RAM bandwidth of 50 GB/s (achievable with components like DDR4-3200 in tandem with a Ryzen 5 5600X), you can generate approximately 6 tokens per second. To attain faster speeds, such as 11 tokens per second, you'll need higher memory bandwidth, like DDR5-5600 at around 90 GB/s.
For broader context, a top-tier GPU like the Nvidia RTX 3090 offers an impressive 930 GB/s of VRAM bandwidth, whereas the latest DDR5 RAM provides up to about 100 GB/s. Recognizing and optimizing for memory bandwidth is paramount to running models like CodeLlama efficiently, as it directly influences how fast you can generate tokens during inference.
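That rule of thumb - peak tokens per second is roughly memory bandwidth divided by model size - is easy to turn into a calculator. The sketch below gives an upper bound; real-world throughput is lower once compute and overhead are factored in:

```python
def max_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound: every token requires streaming all weights through memory."""
    return bandwidth_gb_s / model_size_gb

MODEL_GB = 7.5  # 4-bit 13B CodeLlama, per the example above
for name, bandwidth in [("DDR4-3200 (dual channel)", 50),
                        ("DDR5-5600 (dual channel)", 90),
                        ("RTX 3090 VRAM", 930)]:
    print(f"{name:<25} ~{max_tokens_per_second(bandwidth, MODEL_GB):.0f} tokens/s")
```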
## How to choose LLMs for your work
Choosing the right Large Language Model (LLM) doesn't have to be complicated. It's all about finding one that works well for your needs. Here's a simple guide to help you pick the perfect LLM:
1. **Set Up the Basics**: First, get everything ready on your computer. Make sure you have the right software and tools to run these models. Then, give them a try on your system.
2. **Watch Your Memory**: Pay attention to how much memory these models are using. Some are bigger than others, and you need to make sure your computer can handle them.
3. **Find Compatible Models**: Shortlist widely used models that are known to work well with the tools you're using.
4. **Test Them Out**: Take the models on your shortlist and give them a try with your specific task. This is like comparing different cars by taking them for a test drive. It helps you see which one works best for what you need.
5. **Pick the Best Fit**: After testing, you'll have a better idea of which model is the winner for your project. Consider things like how well it performs, how fast it is, if it works with your computer, and the software you're using.
6. **Stay Updated**: Remember that this field is always changing and improving. Keep an eye out for updates and new models that might be even better for your needs.
And the good news is, finding the right LLM is easier than ever. There's a handy online tool called the Extractum LLM Explorer that helps you discover, compare, and rank lots of different LLMs. Check it out at **[Extractum](http://llm.extractum.io/)**, and it'll make your selection process a breeze!
You can also use the [Model Memory Calculator](https://huggingface.co/spaces/hf-accelerate/model-memory-usage), a tool designed to help determine the vRAM required for training and inference with large models hosted on the Hugging Face Hub. The tool identifies the minimum recommended vRAM based on the size of the model's largest layer. Note that training typically requires approximately four times the model's size in memory, especially when using the Adam optimizer, and when performing inference you should expect to add up to an additional 20% on top of the model size, as found by [EleutherAI](https://blog.eleuther.ai/transformer-math/). More tests will be performed in the future to get a more accurate benchmark for each model.
## How to Calculate How Much vRAM is Required to My Selected LLM
## Calculating vRAM Requirements for an LLM
**For example:** Calculating the VRAM required to run a 13-billion-parameter Large Language Model (LLM) involves considering the model size, batch size, sequence length, token size, and any additional overhead. Here's how you can estimate the VRAM required for a 13B LLM:
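A minimal sketch of that estimate, using the heuristics cited earlier on this page (weights take parameters * bits / 8 bytes, plus the ~20% inference overhead reported by EleutherAI; batch size and sequence length add more on top):

```python
def inference_vram_gb(n_params: float, bits_per_param: int,
                      overhead: float = 0.20) -> float:
    """Rough VRAM needed for inference: weights plus ~20% overhead."""
    weight_bytes = n_params * bits_per_param / 8
    return weight_bytes * (1 + overhead) / 1e9

# A 13B model at common quantization levels
for bits in (16, 8, 4):
    print(f"13B @ {bits:>2}-bit: ~{inference_vram_gb(13e9, bits):.1f} GB")
```

At 4 bits this lands near the ~7.5GB figure quoted for the 13B CodeLlama model earlier on this page.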

View File

@ -32,7 +32,12 @@ const sidebars = {
collapsible: true,
collapsed: false,
link: { type: "doc", id: "features/features" },
items: ["features/ai-models", "features/control", "features/acceleration", "features/extensions"],
items: [
"features/ai-models",
"features/control",
"features/acceleration",
"features/extensions",
],
},
],
@ -87,8 +92,8 @@ const sidebars = {
items: [
{
type: "doc",
label: "Cloud vs. Buy",
id: "hardware/overview/cloud-vs-buy",
label: "Cloud vs. Self-Hosting",
id: "hardware/overview/cloud-vs-self-hosting",
},
{
type: "doc",

View File

@ -1250,10 +1250,10 @@
"@docsearch/css" "3.5.2"
algoliasearch "^4.19.1"
"@docusaurus/core@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/core/-/core-2.4.1.tgz"
integrity sha512-SNsY7PshK3Ri7vtsLXVeAJGS50nJN3RgF836zkyUfAD01Fq+sAk5EwWgLw+nnm5KVNGDu7PRR2kRGDsWvqpo0g==
"@docusaurus/core@2.4.3", "@docusaurus/core@^2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/core/-/core-2.4.3.tgz#d86624901386fd8164ce4bff9cc7f16fde57f523"
integrity sha512-dWH5P7cgeNSIg9ufReX6gaCl/TmrGKD38Orbwuz05WPhAQtFXHd5B8Qym1TiXfvUNvwoYKkAJOJuGe8ou0Z7PA==
dependencies:
"@babel/core" "^7.18.6"
"@babel/generator" "^7.18.7"
@ -1265,13 +1265,13 @@
"@babel/runtime" "^7.18.6"
"@babel/runtime-corejs3" "^7.18.6"
"@babel/traverse" "^7.18.8"
"@docusaurus/cssnano-preset" "2.4.1"
"@docusaurus/logger" "2.4.1"
"@docusaurus/mdx-loader" "2.4.1"
"@docusaurus/cssnano-preset" "2.4.3"
"@docusaurus/logger" "2.4.3"
"@docusaurus/mdx-loader" "2.4.3"
"@docusaurus/react-loadable" "5.5.2"
"@docusaurus/utils" "2.4.1"
"@docusaurus/utils-common" "2.4.1"
"@docusaurus/utils-validation" "2.4.1"
"@docusaurus/utils" "2.4.3"
"@docusaurus/utils-common" "2.4.3"
"@docusaurus/utils-validation" "2.4.3"
"@slorber/static-site-generator-webpack-plugin" "^4.0.7"
"@svgr/webpack" "^6.2.1"
autoprefixer "^10.4.7"
@ -1327,33 +1327,33 @@
webpack-merge "^5.8.0"
webpackbar "^5.0.2"
"@docusaurus/cssnano-preset@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/cssnano-preset/-/cssnano-preset-2.4.1.tgz"
integrity sha512-ka+vqXwtcW1NbXxWsh6yA1Ckii1klY9E53cJ4O9J09nkMBgrNX3iEFED1fWdv8wf4mJjvGi5RLZ2p9hJNjsLyQ==
"@docusaurus/cssnano-preset@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/cssnano-preset/-/cssnano-preset-2.4.3.tgz#1d7e833c41ce240fcc2812a2ac27f7b862f32de0"
integrity sha512-ZvGSRCi7z9wLnZrXNPG6DmVPHdKGd8dIn9pYbEOFiYihfv4uDR3UtxogmKf+rT8ZlKFf5Lqne8E8nt08zNM8CA==
dependencies:
cssnano-preset-advanced "^5.3.8"
postcss "^8.4.14"
postcss-sort-media-queries "^4.2.1"
tslib "^2.4.0"
"@docusaurus/logger@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/logger/-/logger-2.4.1.tgz"
integrity sha512-5h5ysIIWYIDHyTVd8BjheZmQZmEgWDR54aQ1BX9pjFfpyzFo5puKXKYrYJXbjEHGyVhEzmB9UXwbxGfaZhOjcg==
"@docusaurus/logger@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/logger/-/logger-2.4.3.tgz#518bbc965fb4ebe8f1d0b14e5f4161607552d34c"
integrity sha512-Zxws7r3yLufk9xM1zq9ged0YHs65mlRmtsobnFkdZTxWXdTYlWWLWdKyNKAsVC+D7zg+pv2fGbyabdOnyZOM3w==
dependencies:
chalk "^4.1.2"
tslib "^2.4.0"
"@docusaurus/mdx-loader@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/mdx-loader/-/mdx-loader-2.4.1.tgz"
integrity sha512-4KhUhEavteIAmbBj7LVFnrVYDiU51H5YWW1zY6SmBSte/YLhDutztLTBE0PQl1Grux1jzUJeaSvAzHpTn6JJDQ==
"@docusaurus/mdx-loader@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/mdx-loader/-/mdx-loader-2.4.3.tgz#e8ff37f30a060eaa97b8121c135f74cb531a4a3e"
integrity sha512-b1+fDnWtl3GiqkL0BRjYtc94FZrcDDBV1j8446+4tptB9BAOlePwG2p/pK6vGvfL53lkOsszXMghr2g67M0vCw==
dependencies:
"@babel/parser" "^7.18.8"
"@babel/traverse" "^7.18.8"
"@docusaurus/logger" "2.4.1"
"@docusaurus/utils" "2.4.1"
"@docusaurus/logger" "2.4.3"
"@docusaurus/utils" "2.4.3"
"@mdx-js/mdx" "^1.6.22"
escape-html "^1.0.3"
file-loader "^6.2.0"
@ -1382,18 +1382,32 @@
react-helmet-async "*"
react-loadable "npm:@docusaurus/react-loadable@5.5.2"
"@docusaurus/plugin-content-blog@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/plugin-content-blog/-/plugin-content-blog-2.4.1.tgz"
integrity sha512-E2i7Knz5YIbE1XELI6RlTnZnGgS52cUO4BlCiCUCvQHbR+s1xeIWz4C6BtaVnlug0Ccz7nFSksfwDpVlkujg5Q==
"@docusaurus/module-type-aliases@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/module-type-aliases/-/module-type-aliases-2.4.3.tgz#d08ef67e4151e02f352a2836bcf9ecde3b9c56ac"
integrity sha512-cwkBkt1UCiduuvEAo7XZY01dJfRn7UR/75mBgOdb1hKknhrabJZ8YH+7savd/y9kLExPyrhe0QwdS9GuzsRRIA==
dependencies:
"@docusaurus/core" "2.4.1"
"@docusaurus/logger" "2.4.1"
"@docusaurus/mdx-loader" "2.4.1"
"@docusaurus/types" "2.4.1"
"@docusaurus/utils" "2.4.1"
"@docusaurus/utils-common" "2.4.1"
"@docusaurus/utils-validation" "2.4.1"
"@docusaurus/react-loadable" "5.5.2"
"@docusaurus/types" "2.4.3"
"@types/history" "^4.7.11"
"@types/react" "*"
"@types/react-router-config" "*"
"@types/react-router-dom" "*"
react-helmet-async "*"
react-loadable "npm:@docusaurus/react-loadable@5.5.2"
"@docusaurus/plugin-content-blog@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-blog/-/plugin-content-blog-2.4.3.tgz#6473b974acab98e967414d8bbb0d37e0cedcea14"
integrity sha512-PVhypqaA0t98zVDpOeTqWUTvRqCEjJubtfFUQ7zJNYdbYTbS/E/ytq6zbLVsN/dImvemtO/5JQgjLxsh8XLo8Q==
dependencies:
"@docusaurus/core" "2.4.3"
"@docusaurus/logger" "2.4.3"
"@docusaurus/mdx-loader" "2.4.3"
"@docusaurus/types" "2.4.3"
"@docusaurus/utils" "2.4.3"
"@docusaurus/utils-common" "2.4.3"
"@docusaurus/utils-validation" "2.4.3"
cheerio "^1.0.0-rc.12"
feed "^4.2.2"
fs-extra "^10.1.0"
@ -1404,18 +1418,18 @@
utility-types "^3.10.0"
webpack "^5.73.0"
"@docusaurus/plugin-content-docs@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/plugin-content-docs/-/plugin-content-docs-2.4.1.tgz"
integrity sha512-Lo7lSIcpswa2Kv4HEeUcGYqaasMUQNpjTXpV0N8G6jXgZaQurqp7E8NGYeGbDXnb48czmHWbzDL4S3+BbK0VzA==
"@docusaurus/plugin-content-docs@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-docs/-/plugin-content-docs-2.4.3.tgz#aa224c0512351e81807adf778ca59fd9cd136973"
integrity sha512-N7Po2LSH6UejQhzTCsvuX5NOzlC+HiXOVvofnEPj0WhMu1etpLEXE6a4aTxrtg95lQ5kf0xUIdjX9sh3d3G76A==
dependencies:
"@docusaurus/core" "2.4.1"
"@docusaurus/logger" "2.4.1"
"@docusaurus/mdx-loader" "2.4.1"
"@docusaurus/module-type-aliases" "2.4.1"
"@docusaurus/types" "2.4.1"
"@docusaurus/utils" "2.4.1"
"@docusaurus/utils-validation" "2.4.1"
"@docusaurus/core" "2.4.3"
"@docusaurus/logger" "2.4.3"
"@docusaurus/mdx-loader" "2.4.3"
"@docusaurus/module-type-aliases" "2.4.3"
"@docusaurus/types" "2.4.3"
"@docusaurus/utils" "2.4.3"
"@docusaurus/utils-validation" "2.4.3"
"@types/react-router-config" "^5.0.6"
combine-promises "^1.1.0"
fs-extra "^10.1.0"
@ -1426,95 +1440,95 @@
utility-types "^3.10.0"
webpack "^5.73.0"
"@docusaurus/plugin-content-pages@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/plugin-content-pages/-/plugin-content-pages-2.4.1.tgz"
integrity sha512-/UjuH/76KLaUlL+o1OvyORynv6FURzjurSjvn2lbWTFc4tpYY2qLYTlKpTCBVPhlLUQsfyFnshEJDLmPneq2oA==
"@docusaurus/plugin-content-pages@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-pages/-/plugin-content-pages-2.4.3.tgz#7f285e718b53da8c8d0101e70840c75b9c0a1ac0"
integrity sha512-txtDVz7y3zGk67q0HjG0gRttVPodkHqE0bpJ+7dOaTH40CQFLSh7+aBeGnPOTl+oCPG+hxkim4SndqPqXjQ8Bg==
dependencies:
"@docusaurus/core" "2.4.1"
"@docusaurus/mdx-loader" "2.4.1"
"@docusaurus/types" "2.4.1"
"@docusaurus/utils" "2.4.1"
"@docusaurus/utils-validation" "2.4.1"
"@docusaurus/core" "2.4.3"
"@docusaurus/mdx-loader" "2.4.3"
"@docusaurus/types" "2.4.3"
"@docusaurus/utils" "2.4.3"
"@docusaurus/utils-validation" "2.4.3"
fs-extra "^10.1.0"
tslib "^2.4.0"
webpack "^5.73.0"
"@docusaurus/plugin-debug@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/plugin-debug/-/plugin-debug-2.4.1.tgz"
integrity sha512-7Yu9UPzRShlrH/G8btOpR0e6INFZr0EegWplMjOqelIwAcx3PKyR8mgPTxGTxcqiYj6hxSCRN0D8R7YrzImwNA==
"@docusaurus/plugin-debug@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/plugin-debug/-/plugin-debug-2.4.3.tgz#2f90eb0c9286a9f225444e3a88315676fe02c245"
integrity sha512-LkUbuq3zCmINlFb+gAd4ZvYr+bPAzMC0hwND4F7V9bZ852dCX8YoWyovVUBKq4er1XsOwSQaHmNGtObtn8Av8Q==
dependencies:
"@docusaurus/core" "2.4.1"
"@docusaurus/types" "2.4.1"
"@docusaurus/utils" "2.4.1"
"@docusaurus/core" "2.4.3"
"@docusaurus/types" "2.4.3"
"@docusaurus/utils" "2.4.3"
fs-extra "^10.1.0"
react-json-view "^1.21.3"
tslib "^2.4.0"
"@docusaurus/plugin-google-analytics@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/plugin-google-analytics/-/plugin-google-analytics-2.4.1.tgz"
integrity sha512-dyZJdJiCoL+rcfnm0RPkLt/o732HvLiEwmtoNzOoz9MSZz117UH2J6U2vUDtzUzwtFLIf32KkeyzisbwUCgcaQ==
"@docusaurus/plugin-google-analytics@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/plugin-google-analytics/-/plugin-google-analytics-2.4.3.tgz#0d19993136ade6f7a7741251b4f617400d92ab45"
integrity sha512-KzBV3k8lDkWOhg/oYGxlK5o9bOwX7KpPc/FTWoB+SfKhlHfhq7qcQdMi1elAaVEIop8tgK6gD1E58Q+XC6otSQ==
dependencies:
"@docusaurus/core" "2.4.1"
"@docusaurus/types" "2.4.1"
"@docusaurus/utils-validation" "2.4.1"
"@docusaurus/core" "2.4.3"
"@docusaurus/types" "2.4.3"
"@docusaurus/utils-validation" "2.4.3"
tslib "^2.4.0"
"@docusaurus/plugin-google-gtag@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/plugin-google-gtag/-/plugin-google-gtag-2.4.1.tgz"
integrity sha512-mKIefK+2kGTQBYvloNEKtDmnRD7bxHLsBcxgnbt4oZwzi2nxCGjPX6+9SQO2KCN5HZbNrYmGo5GJfMgoRvy6uA==
"@docusaurus/plugin-google-gtag@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/plugin-google-gtag/-/plugin-google-gtag-2.4.3.tgz#e1a80b0696771b488562e5b60eff21c9932d9e1c"
integrity sha512-5FMg0rT7sDy4i9AGsvJC71MQrqQZwgLNdDetLEGDHLfSHLvJhQbTCUGbGXknUgWXQJckcV/AILYeJy+HhxeIFA==
dependencies:
"@docusaurus/core" "2.4.1"
"@docusaurus/types" "2.4.1"
"@docusaurus/utils-validation" "2.4.1"
"@docusaurus/core" "2.4.3"
"@docusaurus/types" "2.4.3"
"@docusaurus/utils-validation" "2.4.3"
tslib "^2.4.0"
"@docusaurus/plugin-google-tag-manager@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/plugin-google-tag-manager/-/plugin-google-tag-manager-2.4.1.tgz"
integrity sha512-Zg4Ii9CMOLfpeV2nG74lVTWNtisFaH9QNtEw48R5QE1KIwDBdTVaiSA18G1EujZjrzJJzXN79VhINSbOJO/r3g==
"@docusaurus/plugin-google-tag-manager@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/plugin-google-tag-manager/-/plugin-google-tag-manager-2.4.3.tgz#e41fbf79b0ffc2de1cc4013eb77798cff0ad98e3"
integrity sha512-1jTzp71yDGuQiX9Bi0pVp3alArV0LSnHXempvQTxwCGAEzUWWaBg4d8pocAlTpbP9aULQQqhgzrs8hgTRPOM0A==
dependencies:
"@docusaurus/core" "2.4.1"
"@docusaurus/types" "2.4.1"
"@docusaurus/utils-validation" "2.4.1"
"@docusaurus/core" "2.4.3"
"@docusaurus/types" "2.4.3"
"@docusaurus/utils-validation" "2.4.3"
tslib "^2.4.0"
"@docusaurus/plugin-sitemap@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/plugin-sitemap/-/plugin-sitemap-2.4.1.tgz"
integrity sha512-lZx+ijt/+atQ3FVE8FOHV/+X3kuok688OydDXrqKRJyXBJZKgGjA2Qa8RjQ4f27V2woaXhtnyrdPop/+OjVMRg==
"@docusaurus/plugin-sitemap@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/plugin-sitemap/-/plugin-sitemap-2.4.3.tgz#1b3930900a8f89670ce7e8f83fb4730cd3298c32"
integrity sha512-LRQYrK1oH1rNfr4YvWBmRzTL0LN9UAPxBbghgeFRBm5yloF6P+zv1tm2pe2hQTX/QP5bSKdnajCvfnScgKXMZQ==
dependencies:
"@docusaurus/core" "2.4.1"
"@docusaurus/logger" "2.4.1"
"@docusaurus/types" "2.4.1"
"@docusaurus/utils" "2.4.1"
"@docusaurus/utils-common" "2.4.1"
"@docusaurus/utils-validation" "2.4.1"
"@docusaurus/core" "2.4.3"
"@docusaurus/logger" "2.4.3"
"@docusaurus/types" "2.4.3"
"@docusaurus/utils" "2.4.3"
"@docusaurus/utils-common" "2.4.3"
"@docusaurus/utils-validation" "2.4.3"
fs-extra "^10.1.0"
sitemap "^7.1.1"
tslib "^2.4.0"
"@docusaurus/preset-classic@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/preset-classic/-/preset-classic-2.4.1.tgz"
integrity sha512-P4//+I4zDqQJ+UDgoFrjIFaQ1MeS9UD1cvxVQaI6O7iBmiHQm0MGROP1TbE7HlxlDPXFJjZUK3x3cAoK63smGQ==
"@docusaurus/preset-classic@^2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/preset-classic/-/preset-classic-2.4.3.tgz#074c57ebf29fa43d23bd1c8ce691226f542bc262"
integrity sha512-tRyMliepY11Ym6hB1rAFSNGwQDpmszvWYJvlK1E+md4SW8i6ylNHtpZjaYFff9Mdk3i/Pg8ItQq9P0daOJAvQw==
dependencies:
"@docusaurus/core" "2.4.1"
"@docusaurus/plugin-content-blog" "2.4.1"
"@docusaurus/plugin-content-docs" "2.4.1"
"@docusaurus/plugin-content-pages" "2.4.1"
"@docusaurus/plugin-debug" "2.4.1"
"@docusaurus/plugin-google-analytics" "2.4.1"
"@docusaurus/plugin-google-gtag" "2.4.1"
"@docusaurus/plugin-google-tag-manager" "2.4.1"
"@docusaurus/plugin-sitemap" "2.4.1"
"@docusaurus/theme-classic" "2.4.1"
"@docusaurus/theme-common" "2.4.1"
"@docusaurus/theme-search-algolia" "2.4.1"
"@docusaurus/types" "2.4.1"
"@docusaurus/core" "2.4.3"
"@docusaurus/plugin-content-blog" "2.4.3"
"@docusaurus/plugin-content-docs" "2.4.3"
"@docusaurus/plugin-content-pages" "2.4.3"
"@docusaurus/plugin-debug" "2.4.3"
"@docusaurus/plugin-google-analytics" "2.4.3"
"@docusaurus/plugin-google-gtag" "2.4.3"
"@docusaurus/plugin-google-tag-manager" "2.4.3"
"@docusaurus/plugin-sitemap" "2.4.3"
"@docusaurus/theme-classic" "2.4.3"
"@docusaurus/theme-common" "2.4.3"
"@docusaurus/theme-search-algolia" "2.4.3"
"@docusaurus/types" "2.4.3"
"@docusaurus/react-loadable@5.5.2", "react-loadable@npm:@docusaurus/react-loadable@5.5.2":
version "5.5.2"
@ -1524,23 +1538,23 @@
"@types/react" "*"
prop-types "^15.6.2"
"@docusaurus/theme-classic@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/theme-classic/-/theme-classic-2.4.1.tgz"
integrity sha512-Rz0wKUa+LTW1PLXmwnf8mn85EBzaGSt6qamqtmnh9Hflkc+EqiYMhtUJeLdV+wsgYq4aG0ANc+bpUDpsUhdnwg==
"@docusaurus/theme-classic@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/theme-classic/-/theme-classic-2.4.3.tgz#29360f2eb03a0e1686eb19668633ef313970ee8f"
integrity sha512-QKRAJPSGPfDY2yCiPMIVyr+MqwZCIV2lxNzqbyUW0YkrlmdzzP3WuQJPMGLCjWgQp/5c9kpWMvMxjhpZx1R32Q==
dependencies:
"@docusaurus/core" "2.4.1"
"@docusaurus/mdx-loader" "2.4.1"
"@docusaurus/module-type-aliases" "2.4.1"
"@docusaurus/plugin-content-blog" "2.4.1"
"@docusaurus/plugin-content-docs" "2.4.1"
"@docusaurus/plugin-content-pages" "2.4.1"
"@docusaurus/theme-common" "2.4.1"
"@docusaurus/theme-translations" "2.4.1"
"@docusaurus/types" "2.4.1"
"@docusaurus/utils" "2.4.1"
"@docusaurus/utils-common" "2.4.1"
"@docusaurus/utils-validation" "2.4.1"
"@docusaurus/core" "2.4.3"
"@docusaurus/mdx-loader" "2.4.3"
"@docusaurus/module-type-aliases" "2.4.3"
"@docusaurus/plugin-content-blog" "2.4.3"
"@docusaurus/plugin-content-docs" "2.4.3"
"@docusaurus/plugin-content-pages" "2.4.3"
"@docusaurus/theme-common" "2.4.3"
"@docusaurus/theme-translations" "2.4.3"
"@docusaurus/types" "2.4.3"
"@docusaurus/utils" "2.4.3"
"@docusaurus/utils-common" "2.4.3"
"@docusaurus/utils-validation" "2.4.3"
"@mdx-js/react" "^1.6.22"
clsx "^1.2.1"
copy-text-to-clipboard "^3.0.1"
@ -1555,18 +1569,18 @@
tslib "^2.4.0"
utility-types "^3.10.0"
"@docusaurus/theme-common@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/theme-common/-/theme-common-2.4.1.tgz"
integrity sha512-G7Zau1W5rQTaFFB3x3soQoZpkgMbl/SYNG8PfMFIjKa3M3q8n0m/GRf5/H/e5BqOvt8c+ZWIXGCiz+kUCSHovA==
"@docusaurus/theme-common@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/theme-common/-/theme-common-2.4.3.tgz#bb31d70b6b67d0bdef9baa343192dcec49946a2e"
integrity sha512-7KaDJBXKBVGXw5WOVt84FtN8czGWhM0lbyWEZXGp8AFfL6sZQfRTluFp4QriR97qwzSyOfQb+nzcDZZU4tezUw==
dependencies:
"@docusaurus/mdx-loader" "2.4.1"
"@docusaurus/module-type-aliases" "2.4.1"
"@docusaurus/plugin-content-blog" "2.4.1"
"@docusaurus/plugin-content-docs" "2.4.1"
"@docusaurus/plugin-content-pages" "2.4.1"
"@docusaurus/utils" "2.4.1"
"@docusaurus/utils-common" "2.4.1"
"@docusaurus/mdx-loader" "2.4.3"
"@docusaurus/module-type-aliases" "2.4.3"
"@docusaurus/plugin-content-blog" "2.4.3"
"@docusaurus/plugin-content-docs" "2.4.3"
"@docusaurus/plugin-content-pages" "2.4.3"
"@docusaurus/utils" "2.4.3"
"@docusaurus/utils-common" "2.4.3"
"@types/history" "^4.7.11"
"@types/react" "*"
"@types/react-router-config" "*"
@ -1577,34 +1591,34 @@
use-sync-external-store "^1.2.0"
utility-types "^3.10.0"
"@docusaurus/theme-live-codeblock@^2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/theme-live-codeblock/-/theme-live-codeblock-2.4.1.tgz"
integrity sha512-KBKrm34kcdNbSeEm6RujN5GWWg4F2dmAYZyHMMQM8FXokx8mNShRx6uq17WXi23JNm7niyMhNOBRfZWay+5Hkg==
"@docusaurus/theme-live-codeblock@^2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/theme-live-codeblock/-/theme-live-codeblock-2.4.3.tgz#889eb4e740d2e9f2dc5516f9407f1bc147887387"
integrity sha512-wx+iJCCoSewUkMzFy7pnbhDBCRcJRTLkpx1/zwnHhfiNWVvJ2XjtBKIviRyMhynZYyvO4sLTpCclzK8JOctkxw==
dependencies:
"@docusaurus/core" "2.4.1"
"@docusaurus/theme-common" "2.4.1"
"@docusaurus/theme-translations" "2.4.1"
"@docusaurus/utils-validation" "2.4.1"
"@docusaurus/core" "2.4.3"
"@docusaurus/theme-common" "2.4.3"
"@docusaurus/theme-translations" "2.4.3"
"@docusaurus/utils-validation" "2.4.3"
"@philpl/buble" "^0.19.7"
clsx "^1.2.1"
fs-extra "^10.1.0"
react-live "2.2.3"
tslib "^2.4.0"
"@docusaurus/theme-search-algolia@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/theme-search-algolia/-/theme-search-algolia-2.4.1.tgz"
integrity sha512-6BcqW2lnLhZCXuMAvPRezFs1DpmEKzXFKlYjruuas+Xy3AQeFzDJKTJFIm49N77WFCTyxff8d3E4Q9pi/+5McQ==
"@docusaurus/theme-search-algolia@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/theme-search-algolia/-/theme-search-algolia-2.4.3.tgz#32d4cbefc3deba4112068fbdb0bde11ac51ece53"
integrity sha512-jziq4f6YVUB5hZOB85ELATwnxBz/RmSLD3ksGQOLDPKVzat4pmI8tddNWtriPpxR04BNT+ZfpPUMFkNFetSW1Q==
dependencies:
"@docsearch/react" "^3.1.1"
"@docusaurus/core" "2.4.1"
"@docusaurus/logger" "2.4.1"
"@docusaurus/plugin-content-docs" "2.4.1"
"@docusaurus/theme-common" "2.4.1"
"@docusaurus/theme-translations" "2.4.1"
"@docusaurus/utils" "2.4.1"
"@docusaurus/utils-validation" "2.4.1"
"@docusaurus/core" "2.4.3"
"@docusaurus/logger" "2.4.3"
"@docusaurus/plugin-content-docs" "2.4.3"
"@docusaurus/theme-common" "2.4.3"
"@docusaurus/theme-translations" "2.4.3"
"@docusaurus/utils" "2.4.3"
"@docusaurus/utils-validation" "2.4.3"
algoliasearch "^4.13.1"
algoliasearch-helper "^3.10.0"
clsx "^1.2.1"
@ -1614,10 +1628,10 @@
tslib "^2.4.0"
utility-types "^3.10.0"
"@docusaurus/theme-translations@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/theme-translations/-/theme-translations-2.4.1.tgz"
integrity sha512-T1RAGP+f86CA1kfE8ejZ3T3pUU3XcyvrGMfC/zxCtc2BsnoexuNI9Vk2CmuKCb+Tacvhxjv5unhxXce0+NKyvA==
"@docusaurus/theme-translations@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/theme-translations/-/theme-translations-2.4.3.tgz#91ac73fc49b8c652b7a54e88b679af57d6ac6102"
integrity sha512-H4D+lbZbjbKNS/Zw1Lel64PioUAIT3cLYYJLUf3KkuO/oc9e0QCVhIYVtUI2SfBCF2NNdlyhBDQEEMygsCedIg==
dependencies:
fs-extra "^10.1.0"
tslib "^2.4.0"
@ -1636,30 +1650,44 @@
webpack "^5.73.0"
webpack-merge "^5.8.0"
"@docusaurus/utils-common@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/utils-common/-/utils-common-2.4.1.tgz"
integrity sha512-bCVGdZU+z/qVcIiEQdyx0K13OC5mYwxhSuDUR95oFbKVuXYRrTVrwZIqQljuo1fyJvFTKHiL9L9skQOPokuFNQ==
"@docusaurus/types@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/types/-/types-2.4.3.tgz#4aead281ca09f721b3c0a9b926818450cfa3db31"
integrity sha512-W6zNLGQqfrp/EoPD0bhb9n7OobP+RHpmvVzpA+Z/IuU3Q63njJM24hmT0GYboovWcDtFmnIJC9wcyx4RVPQscw==
dependencies:
"@types/history" "^4.7.11"
"@types/react" "*"
commander "^5.1.0"
joi "^17.6.0"
react-helmet-async "^1.3.0"
utility-types "^3.10.0"
webpack "^5.73.0"
webpack-merge "^5.8.0"
"@docusaurus/utils-common@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/utils-common/-/utils-common-2.4.3.tgz#30656c39ef1ce7e002af7ba39ea08330f58efcfb"
integrity sha512-/jascp4GbLQCPVmcGkPzEQjNaAk3ADVfMtudk49Ggb+131B1WDD6HqlSmDf8MxGdy7Dja2gc+StHf01kiWoTDQ==
dependencies:
tslib "^2.4.0"
"@docusaurus/utils-validation@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/utils-validation/-/utils-validation-2.4.1.tgz"
integrity sha512-unII3hlJlDwZ3w8U+pMO3Lx3RhI4YEbY3YNsQj4yzrkZzlpqZOLuAiZK2JyULnD+TKbceKU0WyWkQXtYbLNDFA==
"@docusaurus/utils-validation@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/utils-validation/-/utils-validation-2.4.3.tgz#8122c394feef3e96c73f6433987837ec206a63fb"
integrity sha512-G2+Vt3WR5E/9drAobP+hhZQMaswRwDlp6qOMi7o7ZypB+VO7N//DZWhZEwhcRGepMDJGQEwtPv7UxtYwPL9PBw==
dependencies:
"@docusaurus/logger" "2.4.1"
"@docusaurus/utils" "2.4.1"
"@docusaurus/logger" "2.4.3"
"@docusaurus/utils" "2.4.3"
joi "^17.6.0"
js-yaml "^4.1.0"
tslib "^2.4.0"
"@docusaurus/utils@2.4.1":
version "2.4.1"
resolved "https://registry.npmjs.org/@docusaurus/utils/-/utils-2.4.1.tgz"
integrity sha512-1lvEZdAQhKNht9aPXPoh69eeKnV0/62ROhQeFKKxmzd0zkcuE/Oc5Gpnt00y/f5bIsmOsYMY7Pqfm/5rteT5GA==
"@docusaurus/utils@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/utils/-/utils-2.4.3.tgz#52b000d989380a2125831b84e3a7327bef471e89"
integrity sha512-fKcXsjrD86Smxv8Pt0TBFqYieZZCPh4cbf9oszUq/AMhZn3ujwpKaVYZACPX8mmjtYx0JOgNx52CREBfiGQB4A==
dependencies:
"@docusaurus/logger" "2.4.1"
"@docusaurus/logger" "2.4.3"
"@svgr/webpack" "^6.2.1"
escape-string-regexp "^4.0.0"
file-loader "^6.2.0"
@ -8677,4 +8705,4 @@ yocto-queue@^0.1.0:
zwitch@^1.0.0:
version "1.0.5"
resolved "https://registry.npmjs.org/zwitch/-/zwitch-1.0.5.tgz"
integrity sha512-V50KMwwzqJV0NpZIZFwfOD5/lyny3WlSzRiXgA0G7VUnRlqttta1L6UQIHzd6EuBY/cHGfwTIck7w1yH6Q5zUw==
integrity sha512-V50KMwwzqJV0NpZIZFwfOD5/lyny3WlSzRiXgA0G7VUnRlqttta1L6UQIHzd6EuBY/cHGfwTIck7w1yH6Q5zUw==