Merge pull request #240 from janhq/Hardware

docs: initial hardware content
This commit is contained in:
0xSage 2023-10-10 14:56:44 +08:00 committed by GitHub
commit be05dc2a85
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
20 changed files with 733 additions and 211 deletions


View File

@ -2,7 +2,131 @@
title: GPUs and VRAM
---
## What Is a GPU?
A Graphics Card, or GPU (Graphics Processing Unit), is a fundamental component in modern computing. Think of it as the powerhouse behind rendering the stunning visuals you see on your screen. Like the motherboard, the graphics card is a printed circuit board. However, it is not just a passive piece of hardware; it is a sophisticated device equipped with essential components such as fans, onboard RAM, a dedicated memory controller, a BIOS, and various other features. To learn more, read [Understanding the Architecture of a GPU](https://medium.com/codex/understanding-the-architecture-of-a-gpu-d5d2d2e8978b).
![GPU Image](concepts-images/GPU_Image.png)
## What Are GPUs Used For?
Two decades ago, GPUs primarily enhanced real-time 3D graphics in gaming. But as the 21st century dawned, a revelation occurred among computer scientists. They recognized that GPUs held untapped potential to solve some of the world's most intricate computing tasks.
This revelation marked the dawn of the general-purpose GPU era. Today's GPUs have evolved into versatile tools, more adaptable than ever before. They now have the capability to accelerate a diverse range of applications that stretch well beyond their original graphics-focused purpose.
### **Here are some example use cases:**
1. **Gaming**: They make games look good and run smoothly.
2. **Content Creation**: Help with video editing, 3D design, and graphics work.
3. **AI and Machine Learning**: Used for training smart machines.
4. **Science**: Speed up scientific calculations and simulations.
5. **Cryptocurrency Mining**: Mine digital currencies like Bitcoin.
6. **Medical Imaging**: Aid in analyzing medical images.
7. **Self-Driving Cars**: Help cars navigate autonomously.
8. **Simulations**: Create realistic virtual experiences.
9. **Data Analysis**: Speed up data processing and visualization.
10. **Video Streaming**: Improve video quality and streaming efficiency.
## What Is VRAM?
VRAM, or video random-access memory, is a type of high-speed memory designed specifically for use with graphics processing units (GPUs). VRAM stores the textures, images, and other data that the GPU needs to render graphics. It allows the GPU to access that data quickly and efficiently, which is essential for rendering complex graphics at high frame rates.
VRAM is different from other types of memory, such as the system RAM used by the CPU. VRAM is optimized for high bandwidth and low latency, meaning it can read and write data very quickly. The amount of VRAM a GPU has is one of the factors that determines its performance: more VRAM allows the GPU to store more data and render more complex graphics. However, VRAM is also one of the most expensive components of a GPU, so when choosing a graphics card it is important to consider how much VRAM it has. If you plan to run demanding LLMs, video games, or 3D graphics software, you will need a graphics card with more VRAM.
![VRAM](concepts-images/VRAM-Image.png)
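If you already have an NVIDIA card, you can query how much VRAM it has rather than guessing. A minimal sketch, assuming PyTorch was installed with CUDA support:

```python
import torch

# Minimal sketch: list each visible CUDA device and its total VRAM.
# Assumes a PyTorch build with CUDA support.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA-capable GPU detected.")
```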
## What makes VRAM and RAM different from each other?
RAM (Random Access Memory) and VRAM (Video Random Access Memory) are both types of memory used in computers, but they have different functions and characteristics. Here are the differences between RAM and VRAM.
### RAM (Random Access Memory):
- RAM is a general-purpose memory that stores data and instructions that the CPU needs to access quickly.
- RAM is used for short-term data storage and is volatile, meaning that it loses its contents when the computer is turned off.
- RAM is connected to the motherboard and is accessed by the CPU.
- RAM typically has a larger capacity compared to VRAM, which is designed to store smaller amounts of data with faster access times.
- RAM stores data related to the operating system and the various programs that are running, including code, program files, and user data.
### VRAM (Video Random Access Memory):
- VRAM is a type of RAM that is specifically used to store image data for a computer display.
- VRAM is a graphics card component that is connected to the GPU (Graphics Processing Unit).
- VRAM is used exclusively by the GPU and doesn't need to store as much data as the system RAM used by the CPU.
- VRAM is similar to RAM in that it is volatile and loses its contents when the computer is turned off.
- VRAM stores data related specifically to graphics, such as textures, frames, and other graphical data.
- VRAM is designed to store smaller amounts of data with faster access times than RAM.
In summary, RAM is used for general-purpose memory, while VRAM is used for graphics-related tasks. RAM has a larger capacity and is accessed by the CPU, while VRAM has a smaller capacity and is accessed by the GPU.
**Key differences between VRAM and RAM:**
| Characteristic | VRAM | RAM |
| -------------- | --------------------- | --------------------- |
| Purpose | Graphics processing | General processing |
| Speed | Faster | Slower |
| Latency | Lower | Higher |
| Bandwidth | Higher | Lower |
| Cost | More expensive | Less expensive |
| Availability | Less widely available | More widely available |
![RAM-VRAM](concepts-images/RAM-VRAM.png)
## How to Connect GPU to the Motherboard via PCIe
Connecting hardware components to a motherboard is often likened to assembling LEGO pieces. If the parts fit together seamlessly, you're on the right track. Experienced PC builders find this process straightforward. However, for first-time builders, identifying where each hardware component belongs on the motherboard can be a bit perplexing.
**Follow these five steps to connect your GPU to the motherboard:**
1. First, make sure your computer is powered off and unplugged from the electrical outlet to ensure safety.
2. Open your computer case if necessary to access the motherboard. Locate the PCIe x16 slot where you'll install the GPU. These slots are typically longer than other expansion slots and are used for graphics cards. Some PCIe slots may have protective covers or brackets over them; remove these by unscrewing them from the case with a Phillips-head screwdriver. The PCIe x16 slot has a plastic lock on one side only, and there may be more than one x16 slot depending on the motherboard. You can use any of the slots.
![PCIe x16](concepts-images/PCIex16.png)
3. Now Insert the Graphics Card slowly:
- Unlock the plastic lock on one side of the PCIe x16 slot by pulling it outwards.
![slot](concepts-images/slot.png)
- Align your graphics card with the PCIe slot, making sure that the HDMI port side of the GPU faces the rear of the PC case.
- Gently press on the card until you hear it securely snap in place.
![GPU](concepts-images/GPU.png)
4. Insert the Power Connector: If your GPU requires additional power (most modern GPUs do), connect the necessary power cables from your power supply to the GPU's power connectors. These connectors are usually located on the top or side of the GPU.
![Power](concepts-images/Power.png)
5. Power on the System: After turning on the PC, check whether the fans on your graphics card spin. If they do not, remove the power cable from the GPU, reconnect it, and power on the PC again.
> :memo: Note: For a visual walkthrough, you can also watch YouTube tutorials on connecting a GPU to the motherboard via PCIe.
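Once the card is seated and powered, you can confirm the system actually sees it. A small sketch that shells out to `nvidia-smi` (the utility that ships with the NVIDIA driver); this assumes an NVIDIA card with the driver installed:

```python
import subprocess

# Small sketch: ask the NVIDIA driver which GPUs it can see.
# Assumes an NVIDIA card with the driver (and nvidia-smi) installed.
try:
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip() or "Driver loaded, but no GPU reported.")
except (FileNotFoundError, subprocess.CalledProcessError):
    print("nvidia-smi failed; re-check the driver install and PCIe seating.")
```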
## How to Choose a Graphics Card for Your AI Work
Selecting the optimal GPU for running Large Language Models (LLMs) on your home PC is a decision influenced by your budget and the specific LLMs you intend to work with. Your choice should strike a balance between performance, efficiency, and cost-effectiveness.
In general, the following GPU features are important for running LLMs:
- **High VRAM:** LLMs are typically very large and complex models, so they require a GPU with a high amount of VRAM. This will allow the model to be loaded into memory and processed efficiently.
- **CUDA Compatibility:** When running LLMs on a GPU, CUDA compatibility is paramount. CUDA is NVIDIA's parallel computing platform, and it plays a vital role in accelerating deep learning tasks. LLMs, with their extensive matrix calculations, heavily rely on parallel processing. Ensuring your GPU supports CUDA is like having the right tool for the job. It allows the LLM to leverage the GPU's parallel processing capabilities, significantly speeding up model training and inference.
- **Number of CUDA, Tensor, and RT Cores:** High-performance NVIDIA GPUs have both CUDA and Tensor cores. These cores are responsible for executing the neural network computations that underpin LLMs' language understanding and generation. The more CUDA cores your GPU has, the better equipped it is to handle the massive computational load that LLMs impose. Tensor cores further enhance LLM performance by accelerating the critical matrix operations integral to language modeling tasks.
- **Generation (Series)**: When selecting a GPU for LLMs, consider its generation or series (e.g., RTX 30 series). Newer GPU generations often come with improved architectures and features. For LLM tasks, opting for the latest generation can mean better performance, energy efficiency, and support for emerging AI technologies. Avoid purchasing RTX 2000-series GPUs, which are outdated by now. A quick way to confirm CUDA support and your card's generation is sketched below.
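As referenced above, here is a minimal sketch of that check, assuming PyTorch with CUDA support; the compute capability maps to the GPU generation (for example, 8.6 corresponds to Ampere, the RTX 30 series):

```python
import torch

# Minimal sketch: confirm CUDA works and report compute capability,
# which indicates the GPU generation (e.g., 8.6 = Ampere / RTX 30 series).
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"CUDA available, compute capability {major}.{minor}")
else:
    print("CUDA not available; inference would fall back to the CPU.")
```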
### Here are some of the best GPU options for this purpose:
1. **NVIDIA RTX 3090**: The NVIDIA RTX 3090 is a high-end GPU with a substantial 24GB of VRAM. This copious VRAM capacity makes it exceptionally well-suited for handling large LLMs. Moreover, it's known for its relative efficiency, meaning it won't overheat or strain your home PC's cooling system excessively. The RTX 3090's robust capabilities are a boon for those who need to work with hefty language models.
2. **NVIDIA RTX 4090**: If you're looking for peak performance and can afford the investment, the NVIDIA RTX 4090 represents the pinnacle of GPU power. Boasting 24GB of VRAM and featuring a cutting-edge Tensor Core architecture tailored for AI workloads, it outshines the RTX 3090 in terms of sheer capability. However, it's important to note that the RTX 4090 is also pricier and more power-hungry than its predecessor, the RTX 3090.
3. **AMD Radeon RX 6900 XT**: On the AMD side, the Radeon RX 6900 XT stands out as a high-end GPU with 16GB of VRAM. While it may not quite match the raw power of the RTX 3090 or RTX 4090, it strikes a balance between performance and affordability. Additionally, it tends to be more power-efficient, which could translate to a more sustainable and quieter setup in your home PC.
If budget constraints are a consideration, there are more cost-effective GPU options available:
- **NVIDIA RTX 3070**: The RTX 3070 is a solid mid-range GPU that can handle LLMs effectively. While it may not excel with the most massive or complex language models, it's a reliable choice for users looking for a balance between price and performance.
- **AMD Radeon RX 6800 XT**: Similarly, the RX 6800 XT from AMD offers commendable performance without breaking the bank. It's well-suited for running mid-sized LLMs and provides a competitive option in terms of both power and cost.
When selecting a GPU for LLMs, remember that it's not just about the GPU itself. Consider the synergy with other components in your PC:
- **CPU**: To ensure efficient processing, pair your GPU with a powerful CPU. LLMs benefit from fast processors, so having a capable CPU is essential.
- **RAM**: Sufficient RAM is crucial for LLMs. They can be memory-intensive, and having enough RAM ensures smooth operation.
- **Cooling System**: LLMs can push your PC's hardware to the limit. A robust cooling system helps maintain optimal temperatures, preventing overheating and performance throttling.
By taking all of these factors into account, you can build a home PC setup that's well-equipped to handle the demands of running LLMs effectively and efficiently.

View File

@ -1,3 +1,19 @@
---
title: "@janhq: 2x4090 Workstation"
---
![Jan-Workstation](https://media.discordapp.net/attachments/964896173401976932/1158437407675387964/Jan-workstation_812x520_via_10015_io.png?ex=651c3e68&is=651aece8&hm=e2548dd8ee20f9ecbc5d13bec7040d00b6e91cb055e5d0fad33a1e232d275caf&=&width=668&height=428)
## Jan's 2x4090 workstation component list:
| Type | Item | Price |
| :------------------- | :----------------------------------------------- | :------ |
| **CPU** | [RYZEN THREADRIPPER PRO 5965WX 280W SP3 WOF](#) | $2,229 |
| **Motherboard** | [ASUS PRO WS WRX80E SAGE SE WIFI](#) | $933 |
| **GPU** | [ASUS STRIX RTX 4090 24GB OC](#) | $4,345 |
| **RAM** | [G.SKILL RIPJAW S5 2x32 6000C32](#) | $92.99 |
| **Storage PCIe-SSD** | [SAMSUNG 990 PRO 2TB NVME 2.0](#) | $134.99 |
| **Cooler** | [BEQUIET DARK ROCK 4 PRO TR4](#) | $89.90 |
| **Power Supply** | [FSP CANNON 2000W PRO 92+ FULL MODULAR PSU](#) | $449.99 |
| **Case** | [VEDDHA 6GPUS FRAME BLACK](#) | $59.99 |
| **Total cost** | | $8,334.86 |

View File

@ -1,6 +1,4 @@
---
sidebar_position: 1
title: Introduction
---

View File

@ -1,3 +0,0 @@
---
title: Cloud vs. Buy
---

View File

@ -0,0 +1,62 @@
---
title: Cloud vs. Self-hosting Your AI
---
The choice of how to run your AI - on GPU cloud services, on-prem, or just using an API provider - involves various trade-offs. The following is a naive exploration of the pros and cons of renting vs self-hosting.
## Cost Comparison
The following estimations use these general assumptions:
| | Self-Hosted | GPT 4.0 | GPU Rental |
| ---------- | ---------------------------------------- | -------------- | ------------------ |
| Unit Costs | $10k upfront for 2x4090s (5 year amort.) | $0.00012/token | $4.42 for 1xH100/h |
- 800 average tokens (input & output) in a single request
- Inference speed is at 24 tokens per second
### Low Usage
When operating at low capacity:
| | Self-Hosted | GPT 4.0 | GPU Rental |
| ---------------- | ----------- | ------- | ---------- |
| Cost per Request | $2.33 | $0.10 | $0.04 |
### High Usage
When operating at high capacity, i.e. 24 hours in a day, ~77.8k requests per month:
| | Self-Hosted | GPT 4.0 | GPU Rental |
| -------------- | ------------ | ------- | ---------- |
| Cost per Month | $166 (fixed) | $7465 | $3182 |
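These monthly figures follow directly from the stated unit costs. A back-of-the-envelope sketch that reproduces them under the assumptions above (800 tokens per request, 24 tokens/s, $0.00012 per token, $4.42 per GPU-hour, $10k amortized over 5 years):

```python
# Back-of-the-envelope reproduction of the high-usage monthly costs.
TOKENS_PER_REQUEST = 800
TOKENS_PER_SECOND = 24
HOURS_PER_MONTH = 24 * 30

requests = HOURS_PER_MONTH * 3600 * TOKENS_PER_SECOND / TOKENS_PER_REQUEST  # ~77.8k
self_hosted = 10_000 / (5 * 12)                         # ~$166/month, fixed
gpt4 = requests * TOKENS_PER_REQUEST * 0.00012          # ~$7,465/month
gpu_rental = HOURS_PER_MONTH * 4.42                     # ~$3,182/month

print(f"{requests:,.0f} requests/month")
print(f"Self-hosted ${self_hosted:,.0f} | GPT 4.0 ${gpt4:,.0f} | Rental ${gpu_rental:,.0f}")
```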
### Incremental Costs
Large context use cases are also interesting to evaluate. For example, if you had to write a 500 word essay summarizing Tolstoy's "War and Peace":
| | Self-Hosted | GPT 4.0 | GPU Rental |
| ----------------------- | -------------------- | ------- | ---------- |
| Cost of "War and Peace" | (upfront fixed cost) | $94 | $40 |
> **Takeaway**: Renting on cloud or using an API is great for initially scaling. However, it can quickly become expensive when dealing with large datasets and context windows. For predictable costs, self-hosting is an attractive option.
## Business Considerations
Other business level considerations may include:
| | Self-Hosted | GPT 4.0 | GPU Rental |
| ----------------------- | ----------- | ------- | ---------- |
| Data Privacy | ✅ | ❌ | ❌ |
| Offline Mode | ✅ | ❌ | ❌ |
| Customization & Control | ✅ | ❌ | ✅ |
| Auditing | ✅ | ❌ | ✅ |
| Setup Complexity | ❌ | ✅ | ✅ |
| Setup Cost | ❌ | ✅ | ✅ |
| Maintenance | ❌ | ✅ | ❌ |
## Conclusion
The decision to run LLMs in the cloud or on in-house servers is not one-size-fits-all. It depends on your business's specific needs, budget, and security considerations. Cloud-based LLMs offer scalability and cost-efficiency but come with potential security concerns, while in-house servers provide greater control, customization, and cost predictability.
In some situations, using a mix of cloud and in-house resources can be the best way to go. Businesses need to assess their needs and assets carefully to pick the right method for using LLMs in the ever-changing world of AI technology.

View File

@ -1,3 +1,14 @@
---
title: "GPU vs CPU: What's the Difference?"
---
## CPU vs. GPU
| | CPU | GPU |
| ------------------- | ------------------------------------------------------------------------ | ------------------------------------------------------- |
| **Function** | Generalized component that handles main processing functions of a server | Specialized component that excels at parallel computing |
| **Processing** | Designed for serial instruction processing | Designed for parallel instruction processing |
| **Design** | Fewer, more powerful cores | More cores than CPUs, but less powerful than CPU cores |
| **Best suited for** | General-purpose computing applications | High-performance computing applications |
![CPU VS GPU](https://media.discordapp.net/attachments/964896173401976932/1157998193741660222/CPU-vs-GPU-rendering.png?ex=651aa55b&is=651953db&hm=a22c80ed108a0d25106a20aa25236f7d0fa74167a50788194470f57ce7f4a6ca&=&width=807&height=426)

View File

@ -2,12 +2,61 @@
title: Recommended AI Hardware by Budget
---
> :warning: **Warning:** Do your own research before any purchase. Jan is not liable for compatibility, performance, or other issues. Products can become outdated quickly.
## Entry-level PC Build at $1,000
| Type | Item | Price |
| :------------------- | :--------------------------------------------------------- | :------- |
| **CPU** | [Intel Core i5 12400 2.5GHz 6-Core Processor](#) | $170.99 |
| **CPU Cooler** | [Intel Boxed Cooler (Included with CPU)](#) | Included |
| **Motherboard** | [ASUS Prime B660-PLUS DDR4 ATX LGA1700](#) | $169.95 |
| **GPU** | [Nvidia RTX 3050 8GB - ZOTAC Gaming Twin Edge](#) | $250 |
| **Memory** | [16GB (2 x 8GB) G.Skill Ripjaws V DDR4-3200 C16](#) | $49.99 |
| **Storage PCIe-SSD** | [ADATA XPG SX8200 Pro 512GB NVMe M.2 Solid State Drive](#) | $46.50 |
| **Power Supply** | [Corsair CX-M Series CX450M 450W ATX 2.4 Power Supply](#) | $89.99 |
| **Case** | [be quiet! Pure Base 600 Black ATX Mid Tower Case](#) | $97.00 |
| **Total cost** | | $874.42 |
## Entry-level PC Build at $1,500
| Type | Item | Price |
| :------------------- | :------------------------------------------------------- | :------ |
| **CPU** | [Intel Core i5 12600K 3.7GHz 6-Core Processor](#) | $269.99 |
| **CPU Cooler** | [be quiet! Dark Rock Pro 4](#) | $99.99 |
| **Motherboard** | [ASUS ProArt B660-Creator DDR4 ATX LGA1700](#) | $229.99 |
| **GPU** | [Nvidia RTX 3050 8GB - ZOTAC Gaming Twin Edge](#) | $349.99 |
| **Memory** | [32GB (2 x 16GB) G.Skill Ripjaws V DDR4-3200 C16](#) | $129.99 |
| **Storage PCIe-SSD** | [ADATA XPG SX8200 Pro 1TB NVMe M.2 Solid State Drive](#) | $109.99 |
| **Power Supply** | [Corsair RMx Series RM650x 650W ATX 2.4 Power Supply](#) | $119.99 |
| **Case** | [Corsair Carbide Series 200R ATX Mid Tower Case](#) | $59.99 |
| **Total cost** | | $1,369.92 |
## Mid-range PC Build at $3000
| Type | Item | Price |
| :--------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------- |
| **CPU** | [AMD Ryzen 9 7950X 4.5 GHz 16-Core Processor](https://de.pcpartpicker.com/product/22XJ7P/amd-ryzen-9-7950x-45-ghz-16-core-processor-100-100000514wof) | $556 |
| **CPU Cooler** | [Thermalright Peerless Assassin 120 White 66.17 CFM CPU Cooler](https://de.pcpartpicker.com/product/476p99/thermalright-peerless-assassin-120-white-6617-cfm-cpu-cooler-pa120-white) | $59.99 |
| **Motherboard** | [Gigabyte B650 GAMING X AX ATX AM5 Motherboard](https://de.pcpartpicker.com/product/YZgFf7/gigabyte-b650-gaming-x-ax-atx-am5-motherboard-b650-gaming-x-ax) | $199.99 |
| **Memory** | [G.Skill Ripjaws S5 64 GB (2 x 32 GB) DDR5-6000 CL32 Memory](https://de.pcpartpicker.com/product/BJcG3C/gskill-ripjaws-s5-64-gb-2-x-32-gb-ddr5-6000-cl32-memory-f5-6000j3238g32gx2-rs5k) | $194 |
| **Storage** | [Crucial P5 Plus 2 TB M.2-2280 PCIe 4.0 X4 NVME Solid ](https://de.pcpartpicker.com/product/VZWzK8/crucial-p5-plus-2-tb-m2-2280-pcie-40-x4-nvme-solid-state-drive-ct2000p5pssd8) | $165.99 |
| **GPU** | [PNY XLR8 Gaming VERTO EPIC-X RGB OC GeForce RTX 4090 24 GB](https://de.pcpartpicker.com/product/TvpzK8/pny-xlr8-gaming-verto-epic-x-rgb-oc-geforce-rtx-4090-24-gb-video-card-vcg409024tfxxpb1-o) | $1,599.99 |
| **Case** | [Fractal Design Pop Air ATX Mid Tower Case](https://de.pcpartpicker.com/product/QnD7YJ/fractal-design-pop-air-atx-mid-tower-case-fd-c-poa1a-02) | $89.99 |
| **Power Supply** | [Thermaltake Toughpower GF A3 - TT Premium Edition 1050 W 80+ Gold](https://de.pcpartpicker.com/product/4v3NnQ/thermaltake-toughpower-gf-a3-1050-w-80-gold-certified-fully-modular-atx-power-supply-ps-tpd-1050fnfagu-l) | $139.99 |
| **Total cost** | | **$3,005.94** |
## High-End PC Build at $6,000
| Type | Item | Price |
| :--------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------- |
| **CPU** | [AMD Ryzen 9 3900X 3.8 GHz 12-Core Processor](https://pcpartpicker.com/product/tLCD4D/amd-ryzen-9-3900x-36-ghz-12-core-processor-100-100000023box) | $365.00 |
| **CPU Cooler** | [Noctua NH-U12S chromax.black 55 CFM CPU Cooler](https://pcpartpicker.com/product/dMVG3C/noctua-nh-u12s-chromaxblack-55-cfm-cpu-cooler-nh-u12s-chromaxblack) | $89.95 |
| **Motherboard** | [Asus ProArt X570-CREATOR WIFI ATX AM4 Motherboard](https://pcpartpicker.com/product/8y8bt6/asus-proart-x570-creator-wifi-atx-am4-motherboard-proart-x570-creator-wifi) | $599.99 |
| **Memory** | [Corsair Vengeance LPX 128 GB (4 x 32 GB) DDR4-3200 CL16 Memory](https://pcpartpicker.com/product/tRH8TW/corsair-vengeance-lpx-128-gb-4-x-32-gb-ddr4-3200-memory-cmk128gx4m4e3200c16) | $249.99 |
| **Storage** | [Sabrent Rocket 4 Plus 2 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive](https://pcpartpicker.com/product/PMBhP6/sabrent-rocket-4-plus-2-tb-m2-2280-nvme-solid-state-drive-sb-rkt4p-2tb) | $129.99 |
| **GPU** | [PNY RTX A-Series RTX A6000 48 GB Video Card](https://pcpartpicker.com/product/HWt9TW/pny-rtx-a-series-rtx-a6000-48-gb-video-card-vcnrtxa6000-pb) | $4269.00 |
| **Power Supply** | [EVGA SuperNOVA 850 G2 850 W 80+ Gold ](https://pcpartpicker.com/product/LCfp99/evga-supernova-850-g2-850-w-80-gold-certified-fully-modular-atx-power-supply-220-g2-0850-xr) | $322.42 |
| **Total cost** | | **$6,026.34** |

View File

@ -1,7 +1,182 @@
---
title: Selecting AI Hardware
---
When selecting a GPU for LLMs, remember that it's not just about the GPU itself. Consider the synergy with other components in your PC:
- **CPU**: To ensure efficient processing, pair your GPU with a powerful CPU. LLMs benefit from fast processors, so having a capable CPU is essential.
- **RAM**: Sufficient RAM is crucial for LLMs. They can be memory-intensive, and having enough RAM ensures smooth operation.
- **Cooling System**: LLMs can push your PC's hardware to the limit. A robust cooling system helps maintain optimal temperatures, preventing overheating and performance throttling.
By taking all of these factors into account, you can build a home PC setup that's well-equipped to handle the demands of running LLMs effectively and efficiently.
## GPU Selection
Selecting the optimal GPU for running Large Language Models (LLMs) on your home PC is a decision influenced by your budget and the specific LLMs you intend to work with. Your choice should strike a balance between performance, efficiency, and cost-effectiveness.
### GPU Comparison
| GPU | Price ($) | Cores | VRAM (GB) | Bandwidth (TB/s) | Power |
| --------------------- | --------- | ----- | --------- | ---------------- | ----- |
| Nvidia H100 | 40000 | 18432 | 80 | 2 | |
| Nvidia A100 | 15000 | 6912 | 80 | | |
| Nvidia A100 | 7015 | 6912 | 40 | | |
| Nvidia A10 | 2799 | 9216 | 24 | | |
| Nvidia RTX A6000 | 4100 | 10752 | 48 | 0.768 | |
| Nvidia RTX 6000 | 6800 | 4608 | 46 | | |
| Nvidia RTX 4090 Ti | 2000 | 18176 | 24 | | |
| Nvidia RTX 4090 | 1800 | 16384 | 24 | 1.008 | |
| Nvidia RTX 3090 | 1450 | 10496 | 24 | | |
| Nvidia RTX 3080 | 700 | 8704 | 12 | | |
| Nvidia RTX 3070 | 900 | 6144 | 8 | | |
| Nvidia L4 | 2711 | 7424 | 24 | | |
| Nvidia T4 | 2299 | 2560 | 16 | | |
| AMD Radeon RX 6900 XT | 1000 | 5120 | 16 | | |
| AMD Radeon RX 6800 XT | 420 | 4608 | 16 | | |
\*Market prices as of Oct 2023 via Amazon/PCMag
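The bandwidth column matters more than it may look: for single-user generation, each new token requires streaming roughly the entire set of model weights from VRAM, so memory bandwidth caps throughput. A rough sketch of that rule of thumb (an upper bound, not a benchmark):

```python
# Rough upper bound: tokens/s <= memory bandwidth / bytes of model weights,
# since each generated token reads (roughly) all weights once.
def max_tokens_per_second(bandwidth_tb_s, params_billions, bytes_per_param):
    model_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / model_bytes

# Example: RTX 4090 (~1.008 TB/s) with a 13B model at 4-bit (0.5 bytes/param).
print(f"~{max_tokens_per_second(1.008, 13, 0.5):.0f} tokens/s upper bound")  # ~155
```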
### Other Considerations
In general, the following GPU features are important for running LLMs:
- **High VRAM:** LLMs are typically very large and complex models, so they require a GPU with a high amount of VRAM. This will allow the model to be loaded into memory and processed efficiently.
- **CUDA Compatibility:** When running LLMs on a GPU, CUDA compatibility is paramount. CUDA is NVIDIA's parallel computing platform, and it plays a vital role in accelerating deep learning tasks. LLMs, with their extensive matrix calculations, heavily rely on parallel processing. Ensuring your GPU supports CUDA is like having the right tool for the job. It allows the LLM to leverage the GPU's parallel processing capabilities, significantly speeding up model training and inference.
- **Number of CUDA, Tensor, and RT Cores:** High-performance NVIDIA GPUs have both CUDA and Tensor cores. These cores are responsible for executing the neural network computations that underpin LLMs' language understanding and generation. The more CUDA cores your GPU has, the better equipped it is to handle the massive computational load that LLMs impose. Tensor cores further enhance LLM performance by accelerating the critical matrix operations integral to language modeling tasks.
- **Generation (Series)**: When selecting a GPU for LLMs, consider its generation or series (e.g., RTX 30 series). Newer GPU generations often come with improved architectures and features. For LLM tasks, opting for the latest generation can mean better performance, energy efficiency, and support for emerging AI technologies. Avoid purchasing RTX 2000-series GPUs, which are outdated by now.
## CPU Selection
Selecting the right CPU for running Large Language Models (LLMs) on your home PC is contingent on your budget and the specific LLMs you intend to work with. It's a decision that warrants careful consideration, as the CPU plays a pivotal role in determining the overall performance of your system.
In general, the following CPU features are important for running LLMs:
- **Number of Cores and Threads:** the number of CPU cores and threads influences parallel processing. More cores and threads help handle the complex computations involved in language models. For tasks like training and inference, a higher core/thread count can significantly improve processing speed and efficiency, enabling quicker results.
- **High clock speed:** The base clock speed, or base frequency, represents the CPU's default operating speed. A CPU with a high clock speed can process instructions more quickly, which further improves performance.
- **Base Power (TDP):** LLMs often involve long training sessions and demanding computations. Therefore, a lower Thermal Design Power (TDP) is desirable. A CPU with a lower TDP consumes less power and generates less heat during prolonged LLM operations. This not only contributes to energy efficiency but also helps maintain stable temperatures in your system, preventing overheating and potential performance throttling.
- **Generation (Series):** Consider its generation or series (e.g., 9th Gen, 11th Gen Intel Core). Newer CPU generations often come with architectural improvements that enhance performance and efficiency. For LLM tasks, opting for a more recent generation can lead to faster and more efficient language model training and inference.
- **Support for AVX-512:** AVX-512 is a set of vector instruction extensions that can be used to accelerate machine learning workloads. Many LLM runtimes are optimized to take advantage of AVX-512, so it is important to make sure that your CPU supports this instruction set; a quick check is sketched below.
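As referenced in the last bullet, here is a quick check for AVX-512. This sketch is Linux-only (it reads `/proc/cpuinfo`); on other platforms, consult the CPU's spec sheet instead:

```python
# Linux-only sketch: look for the AVX-512 Foundation flag in /proc/cpuinfo.
def cpu_supports_avx512() -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            return "avx512f" in f.read()  # avx512f = AVX-512 Foundation
    except FileNotFoundError:  # not Linux
        return False

print("AVX-512 supported" if cpu_supports_avx512() else "AVX-512 not detected")
```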
### Here are some CPU options for running LLMs:
1. **Intel Core i9-12900K**: Positioned as a high-end CPU, the Intel Core i9-12900K packs a formidable punch with its 16 cores and 24 threads. It's one of the fastest CPUs available, making it an excellent choice for handling large and intricate LLMs. The abundance of cores and threads translates to exceptional parallel processing capabilities, which is crucial for tasks involving massive language models.
2. **Intel Core i7-12700K**: Slightly less potent than the Core i9-12900K, the Intel Core i7-12700K is still a powerful CPU. With 12 cores and 20 threads, it strikes a balance between performance and cost-effectiveness. This CPU is well-suited for running mid-sized and large LLMs, making it a compelling option.
3. **AMD Ryzen 9 5950X**: Representing AMD's high-end CPU offering, the Ryzen 9 5950X boasts 16 cores and 32 threads. While it may not quite match the speed of the Core i9-12900K, it remains a robust and cost-effective choice. Its multicore prowess enables smooth handling of LLM workloads, and its affordability makes it an attractive alternative.
4. **AMD Ryzen 7 5800X**: Slightly less potent than the Ryzen 9 5950X, the Ryzen 7 5800X is still a formidable CPU with 8 cores and 16 threads. It's well-suited for running mid-sized and smaller LLMs, providing a compelling blend of performance and value.
For those operating within budget constraints, there are more budget-friendly CPU options:
- **Intel Core i5-12600K**: The Core i5-12600K is a capable mid-range CPU that can still handle LLMs effectively, though it may not be optimized for the largest or most complex models.
- **AMD Ryzen 5 5600X**: The Ryzen 5 5600X offers a balance of performance and affordability. It's suitable for running smaller to mid-sized LLMs without breaking the bank.
**When selecting a CPU for LLMs, consider the synergy with other components in your PC:**
- **GPU**: Pair your CPU with a powerful GPU to ensure smooth processing of LLMs. Some language models, particularly those used for AI, rely on GPU acceleration for optimal performance.
- **RAM**: Adequate RAM is essential for LLMs, as these models can be memory-intensive. Having enough RAM ensures that your CPU can operate efficiently without bottlenecks.
- **Cooling System**: Given the resource-intensive nature of LLMs, a robust cooling system is crucial to maintain optimal temperatures and prevent performance throttling.
By carefully weighing your budget and performance requirements and considering the interplay of components in your PC, you can assemble a well-rounded system that's up to the task of running LLMs efficiently.
> :memo: **Note:** It is important to note that these are just general recommendations. The specific CPU requirements for your LLM will vary depending on the specific model you are using and the tasks that you want to perform with it. If you are unsure what CPU to get, it is best to consult with an expert.
## RAM Selection
The amount of RAM you need to run an LLM depends on the size and complexity of the model, as well as the tasks you want to perform with it. For example, if you are simply running inference on a pre-trained LLM, you may be able to get away with using a relatively modest amount of RAM. However, if you are training a new LLM from scratch, or if you are running complex tasks like fine-tuning or code generation, you will need more RAM.
### Here is a general guide to RAM selection for running LLMs (a quick capacity check is sketched after the list):
- **Capacity:** The amount of RAM you need will depend on the size and complexity of the LLM model you want to run. For inference, you will need at least 16GB of RAM, but 32GB or more is ideal for larger models and more complex tasks. For training, you will need at least 64GB of RAM, but 128GB or more is ideal for larger models and more complex tasks.
- **Speed:** LLMs can benefit from having fast RAM, so it is recommended to use DDR4 or DDR5 RAM with a speed of at least 3200MHz.
- **Latency:** RAM latency is the amount of time it takes for the CPU to access data in memory. Lower latency is better for performance, so it is recommended to look for RAM with a low latency rating.
- **Timing:** RAM timing is a set of parameters that control how the RAM operates. It is important to make sure that the RAM timing is compatible with your motherboard and CPU.
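To see what you are starting from, a small sketch that reports installed RAM and configured swap, assuming the third-party `psutil` package (`pip install psutil`):

```python
import psutil

# Small sketch: report installed system RAM and configured swap space.
ram_gb = psutil.virtual_memory().total / 1024**3
swap_gb = psutil.swap_memory().total / 1024**3
print(f"RAM: {ram_gb:.1f} GB, swap: {swap_gb:.1f} GB")
```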
## Motherboard Selection
When picking a motherboard to run advanced language models, you need to think about a few things. First, consider the specific language model you want to use, the type of CPU and GPU in your computer, and your budget. Here are some suggestions:
1. **ASUS ROG Maximus Z790 Hero:** This is a top-notch motherboard with lots of great features. It works well with Intel's latest CPUs, fast DDR5 memory, and PCIe 5.0 devices. It's also good at keeping things cool, which is important for running demanding language models.
2. **MSI MEG Z790 Ace:** Similar to the ASUS ROG Maximus, this motherboard is high-end and has similar features. It's good for running language models too.
3. **Gigabyte Z790 Aorus Master:** This one is more budget-friendly but still works great with Intel's latest CPUs, DDR5 memory, and fast PCIe 5.0 devices. It's got a strong power system, which helps with running language models.
If you're on a tighter budget, you might want to check out mid-range options like the **ASUS TUF Gaming Z790-Plus WiFi** or the **MSI MPG Z790 Edge WiFi DDR5**. They offer good performance without breaking the bank.
No matter which motherboard you pick, make sure it works with your CPU and GPU. Also, check that it has the features you need, like enough slots for your GPU and storage drives.
Other things to think about when choosing a motherboard for language models:
- **Cooling:** Language models can make your CPU work hard, so a motherboard with good cooling is a must. This keeps your CPU from getting too hot.
- **Memory:** Language models need lots of memory, so make sure your motherboard supports a good amount of it. Check if it works with the type of memory you want to use, like DDR5 or DDR4.
- **Storage:** Language models can create and store a ton of data. So, look for a motherboard with enough slots for your storage drives.
- **BIOS:** The BIOS controls your motherboard. Make sure it's up-to-date and has the latest features, especially if you plan to overclock or undervolt your system.
## Cooling System Selection
Modern computers have two critical components, the CPU and GPU, which can heat up during high-performance tasks. To prevent overheating, they come with built-in temperature controls that automatically reduce performance when temperatures rise. To keep them cool and maintain optimal performance, you need a reliable cooling system.
For laptops, the only choice is a fan-based cooling system. Laptops have built-in fans and copper pipes to dissipate heat. Many gaming laptops even have two separate fans: one for the CPU and another for the GPU.
For desktop computers, you have the option to install more efficient water cooling systems. These are highly effective but can be expensive. Alternatively, you can install more cooling fans to keep your components cool.
Keep in mind that dust can accumulate in fan-based cooling systems, leading to malfunctions. So periodically clean the dust to keep your cooling system running smoothly.
## Use a MacBook to Run LLMs
An Apple MacBook equipped with either the M1 or the newer M2 Pro/Max processor is a strong option for running LLMs. These cutting-edge chips leverage Apple's innovative Unified Memory Architecture (UMA), which revolutionizes the way the CPU and GPU interact with memory resources. This advancement plays a pivotal role in enhancing the performance and capabilities of LLMs.
Unified Memory Architecture, as implemented in Apple's M1 and M2 series processors, facilitates seamless and efficient data access for both the CPU and GPU. Unlike traditional systems where data needs to be shuttled between various memory pools, UMA offers a unified and expansive memory pool that can be accessed by both processing units without unnecessary data transfers. This transformative approach significantly minimizes latency while concurrently boosting data access bandwidth, resulting in substantial improvements in both the speed and quality of outputs.
![UMA](https://media.discordapp.net/attachments/1148534242104574012/1156600109967089714/IMG_3722.webp?ex=6516380a&is=6514e68a&hm=ebe3b6ecb1edb44cde58bd8d3fdd46cef66b60aa41ea6c03b51325fa65f8517e&=&width=807&height=426)
The M1 and M2 Pro/Max chips offer varying levels of unified memory bandwidth, further underscoring their prowess in handling data-intensive tasks like AI processing. The M1/M2 Pro chip boasts an impressive capacity of up to 200 GB/s of unified memory bandwidth, while the M1/M2 Max takes it a step further, supporting up to a staggering 400 GB/s of unified memory bandwidth. This means that regardless of the complexity and demands of the AI tasks at hand, these Apple laptops armed with M1 or M2 processors are well-equipped to handle them with unparalleled efficiency and speed.
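On Apple Silicon, frameworks reach the GPU through Apple's Metal backend, which operates directly on that unified memory pool. A minimal sketch to confirm it is usable, assuming a recent PyTorch build on macOS:

```python
import torch

# Minimal sketch: check for Apple's MPS (Metal) backend on Apple Silicon.
if torch.backends.mps.is_available():
    x = torch.ones(3, device="mps")  # allocated in unified memory
    print("MPS available:", x.device)
else:
    print("MPS not available; falling back to CPU.")
```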
## Calculating VRAM Requirements for an LLM
**For example:** Calculating the VRAM required to run a 13-billion-parameter Large Language Model (LLM) involves considering the model size, the precision (bytes per parameter), the batch size and sequence length you plan to use, and any additional overhead. Here's how you can estimate the VRAM required for a 13B LLM:
1. **Model Size**: Find out the size of the LLM in terms of the number of parameters. This information is typically provided in the model's documentation. A 13-billion-parameter model has 13,000,000,000 parameters.
2. **Bytes per Parameter**: The memory each parameter occupies depends on precision: 4 bytes for fp32, 2 bytes for fp16, 1 byte for int8, and 0.5 bytes for 4-bit quantization.
3. **Batch Size**: Decide on the batch size you want to use during inference. The batch size represents how many input samples you process simultaneously. Smaller batch sizes require less VRAM.
4. **Sequence Length**: Determine the average length of the input text sequences you'll be working with. Longer sequences enlarge the KV cache and activations, and therefore need more memory.
5. **Overhead**: Consider additional memory for the KV cache, intermediate computations, and framework requirements. Overhead grows with batch size and sequence length; a rough allowance of 1-3 GB is common for modest settings.
Use the following formula to estimate the VRAM required:
**VRAM Required** = `Model Parameters x Bytes per Parameter + Overhead`
Here's an example. Suppose you want to run a 13B LLM with the following parameters:
- **Precision**: 4-bit quantization (0.5 bytes per parameter)
- **Batch Size**: 4
- **Sequence Length**: 512 tokens
- **Estimated Overhead**: 2 GB
Weights = `13,000,000,000 x 0.5 = 6,500,000,000 bytes`
To convert this to gigabytes, divide by `1,073,741,824 (1 GB)`:
`6,500,000,000 / 1,073,741,824 ≈ 6.1 GB`
Adding the 2 GB overhead gives `6.1 + 2 ≈ 8.1 GB`.
So, to run a 13-billion-parameter LLM at 4-bit quantization with the specified parameters and overhead, you would need approximately 8 GB of VRAM on your GPU. Make sure to have some additional VRAM headroom for stable operation, and consider testing the setup in practice to monitor VRAM usage accurately.
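The same estimate as a small helper; the defaults mirror the example above, and the 2 GB overhead is a rough allowance rather than a measured value:

```python
# Estimate VRAM (GB) to hold an LLM's weights plus a fixed overhead allowance.
def estimate_vram_gb(params_billions, bytes_per_param, overhead_gb=2.0):
    weight_bytes = params_billions * 1e9 * bytes_per_param
    return weight_bytes / 1024**3 + overhead_gb

print(f"{estimate_vram_gb(13, 0.5):.1f} GB")  # 13B at 4-bit:  ~8.1 GB
print(f"{estimate_vram_gb(13, 2.0):.1f} GB")  # 13B at fp16:  ~26.2 GB
```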
<!--
## Macbook 8GB RAM
## Macbook 16GB RAM
-->

View File

@ -4,5 +4,65 @@ title: Recommended AI Hardware by Model
## Codellama 34b
### System Requirements:
**For example**: If you want to use [Codellama 7B](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ/tree/main) models on your own computer, you can take advantage of your GPU and run them as GPTQ model files.
GPTQ is a format that compresses the model parameters to 4-bit, which significantly reduces VRAM requirements. You can use the [oobabooga webui](https://github.com/oobabooga/text-generation-webui) or [JanAI](https://jan.ai/), which are simple interfaces that let you interact with different LLMs in your browser. They are pretty easy to set up and run, and can be installed on Windows or Linux.
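As an illustration of the GGUF (CPU inference) route covered in the tables below, here is a minimal sketch using the third-party `llama-cpp-python` package; the model path is a placeholder for whichever quantized file you download:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Minimal sketch: run a 4-bit GGUF model on the CPU.
# The model path below is a placeholder, not a file this page provides.
llm = Llama(model_path="./codellama-7b-instruct.Q4_K_M.gguf", n_ctx=2048)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```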
**For 7B Parameter Models (4-bit Quantization)**
| Format | RAM Requirements | VRAM Requirements | Minimum recommended GPU |
| ------------------------------------------------ | -------------------- | ----------------- | ----------------------------------------- |
| GPTQ (GPU inference) | 6GB (Swap to Load\*) | 6GB | GTX 1660, 2060, RTX 3050, 3060, AMD 5700 XT |
| GGML / GGUF (CPU inference) | 4GB | 300MB | |
| Combination of GPTQ and GGML / GGUF (offloading) | 2GB | 2GB | |
**For 13B Parameter Models (4-bit Quantization)**
| Format | RAM Requirements | VRAM Requirements | Minimum recommended GPU |
| ------------------------------------------------ | --------------------- | ----------------- | -------------------------------------------------- |
| GPTQ (GPU inference) | 12GB (Swap to Load\*) | 10GB | |
| GGML / GGUF (CPU inference) | 8GB | 500MB | AMD 6900 XT, RTX 2060 12GB, 3060 12GB, 3080, A2000 |
| Combination of GPTQ and GGML / GGUF (offloading) | 10GB | 10GB | |
**For 34B Parameter Models (4-bit Quantization)**
| Format | RAM Requirements | VRAM Requirements | Minimum recommended GPU |
| ------------------------------------------------ | --------------------- | ----------------- | -------------------------------------------------------------------- |
| GPTQ (GPU inference) | 32GB (Swap to Load\*) | 20GB | |
| GGML / GGUF (CPU inference) | 20GB | 500MB | RTX 3080 20GB, A4500, A5000, 3090, 4090, 6000, Tesla V100, Tesla P40 |
| Combination of GPTQ and GGML / GGUF (offloading) | 10GB | 4GB | |
**For 7B Parameter Models (8-bit Quantization)**
| Format | RAM Requirements | VRAM Requirements | Minimum recommended GPU |
| ------------------------------------------------ | --------------------- | ----------------- | -------------------------------------- |
| GPTQ (GPU inference) | 24GB (Swap to Load\*) | 12GB | RTX 3080, RTX 3080 Ti, RTX 3090, A5000 |
| GGML / GGUF (CPU inference) | 16GB | 1GB | RTX 3060 12GB, RTX 3070, A2000 |
| Combination of GPTQ and GGML / GGUF (offloading) | 12GB | 4GB | RTX 3060, RTX 3060 Ti, A2000 |
**For 13B Parameter Models (8-bit Quantization)**
| Format | RAM Requirements | VRAM Requirements | Minimum recommended GPU |
| ------------------------------------------------ | --------------------- | ----------------- | --------------------------------- |
| GPTQ (GPU inference) | 36GB (Swap to Load\*) | 20GB | RTX 4090, A6000, A6000 Ti, A8000 |
| GGML / GGUF (CPU inference) | 24GB | 2GB | RTX 3080 20GB, RTX 3080 Ti, A5000 |
| Combination of GPTQ and GGML / GGUF (offloading) | 20GB | 8GB | RTX 3080, RTX 3080 Ti, A5000 |
**For 34B Parameter Models (8-bit Quantization)**
| Format | RAM Requirements | VRAM Requirements | Minimum recommended GPU |
| ------------------------------------------------ | --------------------- | ----------------- | -------------------------------- |
| GPTQ (GPU inference) | 64GB (Swap to Load\*) | 40GB | A8000, A8000 Ti, A9000 |
| GGML / GGUF (CPU inference) | 40GB | 2GB | RTX 4090, A6000, A6000 Ti, A8000 |
| Combination of GPTQ and GGML / GGUF (offloading) | 48GB | 20GB | RTX 4090, A6000, A6000 Ti, A8000 |
> :memo: **Note**: The RAM figures are the system RAM (not VRAM) needed to load the model, in addition to having enough VRAM; that RAM is not needed while the model is running. You can use swap space if you do not have enough RAM.
### Performance Recommendations:
1. **Optimal Performance**: To achieve the best performance when working with CodeLlama models, consider investing in a high-end GPU such as NVIDIA's latest RTX 3090 or RTX 4090. For the largest models like the 65B and 70B, a dual GPU setup is recommended. Additionally, ensure your system boasts sufficient RAM, with a minimum of 16 GB, although 64 GB is ideal for seamless operation.
2. **Budget-Friendly Approach**: If budget constraints are a concern, focus on utilizing CodeLlama GGML/GGUF models that can comfortably fit within your system's available RAM. Keep in mind that while you can allocate some model weights to the system RAM to save GPU memory, this may result in a performance trade-off.
> :memo: **Note**: It's essential to note that these recommendations are guidelines, and the actual performance you experience will be influenced by various factors. These factors include the specific task you're performing, the implementation of the model, and the concurrent system processes. To optimize your setup, consider these recommendations as a starting point and adapt them to your unique requirements and constraints.

View File

@ -2,19 +2,21 @@
title: Recommended AI Hardware by Use Case
---
## Which AI Hardware to Choose Based on Your Use Case
Artificial intelligence (AI) is rapidly changing the world, and AI hardware is becoming increasingly important for businesses and individuals alike. Choosing the right hardware for your AI needs is crucial to get the best performance and results. Here are some tips for selecting AI hardware based on your specific use case and requirements.
### Entry-level Experimentation:
**Personal Use:**
When venturing into the world of AI as an individual, your choice of hardware can significantly impact your experience. Here's a more detailed breakdown:
- **Macbook (16GB):** A Macbook equipped with 16GB of RAM and either the M1 or the newer M2 Pro/Max processor is an excellent starting point for AI enthusiasts. These cutting-edge chips leverage Apple's innovative Unified Memory Architecture (UMA), which revolutionizes the way the CPU and GPU interact with memory resources. This advancement plays a pivotal role in enhancing the performance and capabilities of LLMs.
- **Nvidia GeForce RTX 3090:** This powerful graphics card is a solid alternative for AI beginners, offering exceptional performance for basic experiments.
### Serious AI Work:
- **2 x RTX 3090 (48GB total VRAM):** For those committed to more advanced AI projects, this configuration provides the necessary muscle. Its dual Nvidia GeForce RTX 3090 GPUs and ample VRAM make it suitable for complex AI tasks and model training.
## Business Use

View File

@ -14,9 +14,9 @@
"write-heading-ids": "docusaurus write-heading-ids" "write-heading-ids": "docusaurus write-heading-ids"
}, },
"dependencies": { "dependencies": {
"@docusaurus/core": "2.4.1", "@docusaurus/core": "^2.4.3",
"@docusaurus/preset-classic": "2.4.1", "@docusaurus/preset-classic": "^2.4.3",
"@docusaurus/theme-live-codeblock": "^2.4.1", "@docusaurus/theme-live-codeblock": "^2.4.3",
"@headlessui/react": "^1.7.17", "@headlessui/react": "^1.7.17",
"@heroicons/react": "^2.0.18", "@heroicons/react": "^2.0.18",
"@mdx-js/react": "^1.6.22", "@mdx-js/react": "^1.6.22",

View File

@ -92,8 +92,8 @@ const sidebars = {
      items: [
        {
          type: "doc",
          label: "Cloud vs. Self-Hosting",
          id: "hardware/overview/cloud-vs-self-hosting",
        },
        {
          type: "doc",

View File

@ -1250,10 +1250,10 @@
"@docsearch/css" "3.5.2" "@docsearch/css" "3.5.2"
algoliasearch "^4.19.1" algoliasearch "^4.19.1"
"@docusaurus/core@2.4.1": "@docusaurus/core@2.4.3", "@docusaurus/core@^2.4.3":
version "2.4.1" version "2.4.3"
resolved "https://registry.npmjs.org/@docusaurus/core/-/core-2.4.1.tgz" resolved "https://registry.yarnpkg.com/@docusaurus/core/-/core-2.4.3.tgz#d86624901386fd8164ce4bff9cc7f16fde57f523"
integrity sha512-SNsY7PshK3Ri7vtsLXVeAJGS50nJN3RgF836zkyUfAD01Fq+sAk5EwWgLw+nnm5KVNGDu7PRR2kRGDsWvqpo0g== integrity sha512-dWH5P7cgeNSIg9ufReX6gaCl/TmrGKD38Orbwuz05WPhAQtFXHd5B8Qym1TiXfvUNvwoYKkAJOJuGe8ou0Z7PA==
dependencies: dependencies:
"@babel/core" "^7.18.6" "@babel/core" "^7.18.6"
"@babel/generator" "^7.18.7" "@babel/generator" "^7.18.7"
@ -1265,13 +1265,13 @@
"@babel/runtime" "^7.18.6" "@babel/runtime" "^7.18.6"
"@babel/runtime-corejs3" "^7.18.6" "@babel/runtime-corejs3" "^7.18.6"
"@babel/traverse" "^7.18.8" "@babel/traverse" "^7.18.8"
"@docusaurus/cssnano-preset" "2.4.1" "@docusaurus/cssnano-preset" "2.4.3"
"@docusaurus/logger" "2.4.1" "@docusaurus/logger" "2.4.3"
"@docusaurus/mdx-loader" "2.4.1" "@docusaurus/mdx-loader" "2.4.3"
"@docusaurus/react-loadable" "5.5.2" "@docusaurus/react-loadable" "5.5.2"
"@docusaurus/utils" "2.4.1" "@docusaurus/utils" "2.4.3"
"@docusaurus/utils-common" "2.4.1" "@docusaurus/utils-common" "2.4.3"
"@docusaurus/utils-validation" "2.4.1" "@docusaurus/utils-validation" "2.4.3"
"@slorber/static-site-generator-webpack-plugin" "^4.0.7" "@slorber/static-site-generator-webpack-plugin" "^4.0.7"
"@svgr/webpack" "^6.2.1" "@svgr/webpack" "^6.2.1"
autoprefixer "^10.4.7" autoprefixer "^10.4.7"
@ -1327,33 +1327,33 @@
webpack-merge "^5.8.0" webpack-merge "^5.8.0"
webpackbar "^5.0.2" webpackbar "^5.0.2"
"@docusaurus/cssnano-preset@2.4.1": "@docusaurus/cssnano-preset@2.4.3":
version "2.4.1" version "2.4.3"
resolved "https://registry.npmjs.org/@docusaurus/cssnano-preset/-/cssnano-preset-2.4.1.tgz" resolved "https://registry.yarnpkg.com/@docusaurus/cssnano-preset/-/cssnano-preset-2.4.3.tgz#1d7e833c41ce240fcc2812a2ac27f7b862f32de0"
integrity sha512-ka+vqXwtcW1NbXxWsh6yA1Ckii1klY9E53cJ4O9J09nkMBgrNX3iEFED1fWdv8wf4mJjvGi5RLZ2p9hJNjsLyQ== integrity sha512-ZvGSRCi7z9wLnZrXNPG6DmVPHdKGd8dIn9pYbEOFiYihfv4uDR3UtxogmKf+rT8ZlKFf5Lqne8E8nt08zNM8CA==
dependencies: dependencies:
cssnano-preset-advanced "^5.3.8" cssnano-preset-advanced "^5.3.8"
postcss "^8.4.14" postcss "^8.4.14"
postcss-sort-media-queries "^4.2.1" postcss-sort-media-queries "^4.2.1"
tslib "^2.4.0" tslib "^2.4.0"
"@docusaurus/logger@2.4.1": "@docusaurus/logger@2.4.3":
version "2.4.1" version "2.4.3"
resolved "https://registry.npmjs.org/@docusaurus/logger/-/logger-2.4.1.tgz" resolved "https://registry.yarnpkg.com/@docusaurus/logger/-/logger-2.4.3.tgz#518bbc965fb4ebe8f1d0b14e5f4161607552d34c"
integrity sha512-5h5ysIIWYIDHyTVd8BjheZmQZmEgWDR54aQ1BX9pjFfpyzFo5puKXKYrYJXbjEHGyVhEzmB9UXwbxGfaZhOjcg== integrity sha512-Zxws7r3yLufk9xM1zq9ged0YHs65mlRmtsobnFkdZTxWXdTYlWWLWdKyNKAsVC+D7zg+pv2fGbyabdOnyZOM3w==
dependencies: dependencies:
chalk "^4.1.2" chalk "^4.1.2"
tslib "^2.4.0" tslib "^2.4.0"
"@docusaurus/mdx-loader@2.4.1": "@docusaurus/mdx-loader@2.4.3":
version "2.4.1" version "2.4.3"
resolved "https://registry.npmjs.org/@docusaurus/mdx-loader/-/mdx-loader-2.4.1.tgz" resolved "https://registry.yarnpkg.com/@docusaurus/mdx-loader/-/mdx-loader-2.4.3.tgz#e8ff37f30a060eaa97b8121c135f74cb531a4a3e"
integrity sha512-4KhUhEavteIAmbBj7LVFnrVYDiU51H5YWW1zY6SmBSte/YLhDutztLTBE0PQl1Grux1jzUJeaSvAzHpTn6JJDQ== integrity sha512-b1+fDnWtl3GiqkL0BRjYtc94FZrcDDBV1j8446+4tptB9BAOlePwG2p/pK6vGvfL53lkOsszXMghr2g67M0vCw==
dependencies: dependencies:
"@babel/parser" "^7.18.8" "@babel/parser" "^7.18.8"
"@babel/traverse" "^7.18.8" "@babel/traverse" "^7.18.8"
"@docusaurus/logger" "2.4.1" "@docusaurus/logger" "2.4.3"
"@docusaurus/utils" "2.4.1" "@docusaurus/utils" "2.4.3"
"@mdx-js/mdx" "^1.6.22" "@mdx-js/mdx" "^1.6.22"
escape-html "^1.0.3" escape-html "^1.0.3"
file-loader "^6.2.0" file-loader "^6.2.0"
@ -1382,18 +1382,32 @@
react-helmet-async "*" react-helmet-async "*"
react-loadable "npm:@docusaurus/react-loadable@5.5.2" react-loadable "npm:@docusaurus/react-loadable@5.5.2"
"@docusaurus/plugin-content-blog@2.4.1": "@docusaurus/module-type-aliases@2.4.3":
version "2.4.1" version "2.4.3"
resolved "https://registry.npmjs.org/@docusaurus/plugin-content-blog/-/plugin-content-blog-2.4.1.tgz" resolved "https://registry.yarnpkg.com/@docusaurus/module-type-aliases/-/module-type-aliases-2.4.3.tgz#d08ef67e4151e02f352a2836bcf9ecde3b9c56ac"
integrity sha512-E2i7Knz5YIbE1XELI6RlTnZnGgS52cUO4BlCiCUCvQHbR+s1xeIWz4C6BtaVnlug0Ccz7nFSksfwDpVlkujg5Q== integrity sha512-cwkBkt1UCiduuvEAo7XZY01dJfRn7UR/75mBgOdb1hKknhrabJZ8YH+7savd/y9kLExPyrhe0QwdS9GuzsRRIA==
dependencies: dependencies:
"@docusaurus/core" "2.4.1" "@docusaurus/react-loadable" "5.5.2"
"@docusaurus/logger" "2.4.1" "@docusaurus/types" "2.4.3"
"@docusaurus/mdx-loader" "2.4.1" "@types/history" "^4.7.11"
"@docusaurus/types" "2.4.1" "@types/react" "*"
"@docusaurus/utils" "2.4.1" "@types/react-router-config" "*"
"@docusaurus/utils-common" "2.4.1" "@types/react-router-dom" "*"
"@docusaurus/utils-validation" "2.4.1" react-helmet-async "*"
react-loadable "npm:@docusaurus/react-loadable@5.5.2"
"@docusaurus/plugin-content-blog@2.4.3":
version "2.4.3"
resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-blog/-/plugin-content-blog-2.4.3.tgz#6473b974acab98e967414d8bbb0d37e0cedcea14"
integrity sha512-PVhypqaA0t98zVDpOeTqWUTvRqCEjJubtfFUQ7zJNYdbYTbS/E/ytq6zbLVsN/dImvemtO/5JQgjLxsh8XLo8Q==
dependencies:
"@docusaurus/core" "2.4.3"
"@docusaurus/logger" "2.4.3"
"@docusaurus/mdx-loader" "2.4.3"
"@docusaurus/types" "2.4.3"
"@docusaurus/utils" "2.4.3"
"@docusaurus/utils-common" "2.4.3"
"@docusaurus/utils-validation" "2.4.3"
cheerio "^1.0.0-rc.12" cheerio "^1.0.0-rc.12"
feed "^4.2.2" feed "^4.2.2"
fs-extra "^10.1.0" fs-extra "^10.1.0"
@ -1404,18 +1418,18 @@
utility-types "^3.10.0" utility-types "^3.10.0"
webpack "^5.73.0" webpack "^5.73.0"
"@docusaurus/plugin-content-docs@2.4.1": "@docusaurus/plugin-content-docs@2.4.3":
version "2.4.1" version "2.4.3"
resolved "https://registry.npmjs.org/@docusaurus/plugin-content-docs/-/plugin-content-docs-2.4.1.tgz" resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-docs/-/plugin-content-docs-2.4.3.tgz#aa224c0512351e81807adf778ca59fd9cd136973"
integrity sha512-Lo7lSIcpswa2Kv4HEeUcGYqaasMUQNpjTXpV0N8G6jXgZaQurqp7E8NGYeGbDXnb48czmHWbzDL4S3+BbK0VzA== integrity sha512-N7Po2LSH6UejQhzTCsvuX5NOzlC+HiXOVvofnEPj0WhMu1etpLEXE6a4aTxrtg95lQ5kf0xUIdjX9sh3d3G76A==
dependencies: dependencies:
"@docusaurus/core" "2.4.1" "@docusaurus/core" "2.4.3"
"@docusaurus/logger" "2.4.1" "@docusaurus/logger" "2.4.3"
"@docusaurus/mdx-loader" "2.4.1" "@docusaurus/mdx-loader" "2.4.3"
"@docusaurus/module-type-aliases" "2.4.1" "@docusaurus/module-type-aliases" "2.4.3"
"@docusaurus/types" "2.4.1" "@docusaurus/types" "2.4.3"
"@docusaurus/utils" "2.4.1" "@docusaurus/utils" "2.4.3"
"@docusaurus/utils-validation" "2.4.1" "@docusaurus/utils-validation" "2.4.3"
"@types/react-router-config" "^5.0.6" "@types/react-router-config" "^5.0.6"
combine-promises "^1.1.0" combine-promises "^1.1.0"
fs-extra "^10.1.0" fs-extra "^10.1.0"
@@ -1426,95 +1440,95 @@
     utility-types "^3.10.0"
     webpack "^5.73.0"

-"@docusaurus/plugin-content-pages@2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/plugin-content-pages/-/plugin-content-pages-2.4.1.tgz"
-  integrity sha512-/UjuH/76KLaUlL+o1OvyORynv6FURzjurSjvn2lbWTFc4tpYY2qLYTlKpTCBVPhlLUQsfyFnshEJDLmPneq2oA==
+"@docusaurus/plugin-content-pages@2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-pages/-/plugin-content-pages-2.4.3.tgz#7f285e718b53da8c8d0101e70840c75b9c0a1ac0"
+  integrity sha512-txtDVz7y3zGk67q0HjG0gRttVPodkHqE0bpJ+7dOaTH40CQFLSh7+aBeGnPOTl+oCPG+hxkim4SndqPqXjQ8Bg==
   dependencies:
-    "@docusaurus/core" "2.4.1"
-    "@docusaurus/mdx-loader" "2.4.1"
-    "@docusaurus/types" "2.4.1"
-    "@docusaurus/utils" "2.4.1"
-    "@docusaurus/utils-validation" "2.4.1"
+    "@docusaurus/core" "2.4.3"
+    "@docusaurus/mdx-loader" "2.4.3"
+    "@docusaurus/types" "2.4.3"
+    "@docusaurus/utils" "2.4.3"
+    "@docusaurus/utils-validation" "2.4.3"
     fs-extra "^10.1.0"
     tslib "^2.4.0"
     webpack "^5.73.0"

-"@docusaurus/plugin-debug@2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/plugin-debug/-/plugin-debug-2.4.1.tgz"
-  integrity sha512-7Yu9UPzRShlrH/G8btOpR0e6INFZr0EegWplMjOqelIwAcx3PKyR8mgPTxGTxcqiYj6hxSCRN0D8R7YrzImwNA==
+"@docusaurus/plugin-debug@2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-debug/-/plugin-debug-2.4.3.tgz#2f90eb0c9286a9f225444e3a88315676fe02c245"
+  integrity sha512-LkUbuq3zCmINlFb+gAd4ZvYr+bPAzMC0hwND4F7V9bZ852dCX8YoWyovVUBKq4er1XsOwSQaHmNGtObtn8Av8Q==
   dependencies:
-    "@docusaurus/core" "2.4.1"
-    "@docusaurus/types" "2.4.1"
-    "@docusaurus/utils" "2.4.1"
+    "@docusaurus/core" "2.4.3"
+    "@docusaurus/types" "2.4.3"
+    "@docusaurus/utils" "2.4.3"
     fs-extra "^10.1.0"
     react-json-view "^1.21.3"
     tslib "^2.4.0"

-"@docusaurus/plugin-google-analytics@2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/plugin-google-analytics/-/plugin-google-analytics-2.4.1.tgz"
-  integrity sha512-dyZJdJiCoL+rcfnm0RPkLt/o732HvLiEwmtoNzOoz9MSZz117UH2J6U2vUDtzUzwtFLIf32KkeyzisbwUCgcaQ==
+"@docusaurus/plugin-google-analytics@2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-google-analytics/-/plugin-google-analytics-2.4.3.tgz#0d19993136ade6f7a7741251b4f617400d92ab45"
+  integrity sha512-KzBV3k8lDkWOhg/oYGxlK5o9bOwX7KpPc/FTWoB+SfKhlHfhq7qcQdMi1elAaVEIop8tgK6gD1E58Q+XC6otSQ==
   dependencies:
-    "@docusaurus/core" "2.4.1"
-    "@docusaurus/types" "2.4.1"
-    "@docusaurus/utils-validation" "2.4.1"
+    "@docusaurus/core" "2.4.3"
+    "@docusaurus/types" "2.4.3"
+    "@docusaurus/utils-validation" "2.4.3"
     tslib "^2.4.0"

-"@docusaurus/plugin-google-gtag@2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/plugin-google-gtag/-/plugin-google-gtag-2.4.1.tgz"
-  integrity sha512-mKIefK+2kGTQBYvloNEKtDmnRD7bxHLsBcxgnbt4oZwzi2nxCGjPX6+9SQO2KCN5HZbNrYmGo5GJfMgoRvy6uA==
+"@docusaurus/plugin-google-gtag@2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-google-gtag/-/plugin-google-gtag-2.4.3.tgz#e1a80b0696771b488562e5b60eff21c9932d9e1c"
+  integrity sha512-5FMg0rT7sDy4i9AGsvJC71MQrqQZwgLNdDetLEGDHLfSHLvJhQbTCUGbGXknUgWXQJckcV/AILYeJy+HhxeIFA==
   dependencies:
-    "@docusaurus/core" "2.4.1"
-    "@docusaurus/types" "2.4.1"
-    "@docusaurus/utils-validation" "2.4.1"
+    "@docusaurus/core" "2.4.3"
+    "@docusaurus/types" "2.4.3"
+    "@docusaurus/utils-validation" "2.4.3"
     tslib "^2.4.0"

-"@docusaurus/plugin-google-tag-manager@2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/plugin-google-tag-manager/-/plugin-google-tag-manager-2.4.1.tgz"
-  integrity sha512-Zg4Ii9CMOLfpeV2nG74lVTWNtisFaH9QNtEw48R5QE1KIwDBdTVaiSA18G1EujZjrzJJzXN79VhINSbOJO/r3g==
+"@docusaurus/plugin-google-tag-manager@2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-google-tag-manager/-/plugin-google-tag-manager-2.4.3.tgz#e41fbf79b0ffc2de1cc4013eb77798cff0ad98e3"
+  integrity sha512-1jTzp71yDGuQiX9Bi0pVp3alArV0LSnHXempvQTxwCGAEzUWWaBg4d8pocAlTpbP9aULQQqhgzrs8hgTRPOM0A==
   dependencies:
-    "@docusaurus/core" "2.4.1"
-    "@docusaurus/types" "2.4.1"
-    "@docusaurus/utils-validation" "2.4.1"
+    "@docusaurus/core" "2.4.3"
+    "@docusaurus/types" "2.4.3"
+    "@docusaurus/utils-validation" "2.4.3"
     tslib "^2.4.0"

-"@docusaurus/plugin-sitemap@2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/plugin-sitemap/-/plugin-sitemap-2.4.1.tgz"
-  integrity sha512-lZx+ijt/+atQ3FVE8FOHV/+X3kuok688OydDXrqKRJyXBJZKgGjA2Qa8RjQ4f27V2woaXhtnyrdPop/+OjVMRg==
+"@docusaurus/plugin-sitemap@2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-sitemap/-/plugin-sitemap-2.4.3.tgz#1b3930900a8f89670ce7e8f83fb4730cd3298c32"
+  integrity sha512-LRQYrK1oH1rNfr4YvWBmRzTL0LN9UAPxBbghgeFRBm5yloF6P+zv1tm2pe2hQTX/QP5bSKdnajCvfnScgKXMZQ==
   dependencies:
-    "@docusaurus/core" "2.4.1"
-    "@docusaurus/logger" "2.4.1"
-    "@docusaurus/types" "2.4.1"
-    "@docusaurus/utils" "2.4.1"
-    "@docusaurus/utils-common" "2.4.1"
-    "@docusaurus/utils-validation" "2.4.1"
+    "@docusaurus/core" "2.4.3"
+    "@docusaurus/logger" "2.4.3"
+    "@docusaurus/types" "2.4.3"
+    "@docusaurus/utils" "2.4.3"
+    "@docusaurus/utils-common" "2.4.3"
+    "@docusaurus/utils-validation" "2.4.3"
     fs-extra "^10.1.0"
     sitemap "^7.1.1"
     tslib "^2.4.0"

-"@docusaurus/preset-classic@2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/preset-classic/-/preset-classic-2.4.1.tgz"
-  integrity sha512-P4//+I4zDqQJ+UDgoFrjIFaQ1MeS9UD1cvxVQaI6O7iBmiHQm0MGROP1TbE7HlxlDPXFJjZUK3x3cAoK63smGQ==
+"@docusaurus/preset-classic@^2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/preset-classic/-/preset-classic-2.4.3.tgz#074c57ebf29fa43d23bd1c8ce691226f542bc262"
+  integrity sha512-tRyMliepY11Ym6hB1rAFSNGwQDpmszvWYJvlK1E+md4SW8i6ylNHtpZjaYFff9Mdk3i/Pg8ItQq9P0daOJAvQw==
   dependencies:
-    "@docusaurus/core" "2.4.1"
-    "@docusaurus/plugin-content-blog" "2.4.1"
-    "@docusaurus/plugin-content-docs" "2.4.1"
-    "@docusaurus/plugin-content-pages" "2.4.1"
-    "@docusaurus/plugin-debug" "2.4.1"
-    "@docusaurus/plugin-google-analytics" "2.4.1"
-    "@docusaurus/plugin-google-gtag" "2.4.1"
-    "@docusaurus/plugin-google-tag-manager" "2.4.1"
-    "@docusaurus/plugin-sitemap" "2.4.1"
-    "@docusaurus/theme-classic" "2.4.1"
-    "@docusaurus/theme-common" "2.4.1"
-    "@docusaurus/theme-search-algolia" "2.4.1"
-    "@docusaurus/types" "2.4.1"
+    "@docusaurus/core" "2.4.3"
+    "@docusaurus/plugin-content-blog" "2.4.3"
+    "@docusaurus/plugin-content-docs" "2.4.3"
+    "@docusaurus/plugin-content-pages" "2.4.3"
+    "@docusaurus/plugin-debug" "2.4.3"
+    "@docusaurus/plugin-google-analytics" "2.4.3"
+    "@docusaurus/plugin-google-gtag" "2.4.3"
+    "@docusaurus/plugin-google-tag-manager" "2.4.3"
+    "@docusaurus/plugin-sitemap" "2.4.3"
+    "@docusaurus/theme-classic" "2.4.3"
+    "@docusaurus/theme-common" "2.4.3"
+    "@docusaurus/theme-search-algolia" "2.4.3"
+    "@docusaurus/types" "2.4.3"

 "@docusaurus/react-loadable@5.5.2", "react-loadable@npm:@docusaurus/react-loadable@5.5.2":
   version "5.5.2"
@@ -1524,23 +1538,23 @@
     "@types/react" "*"
     prop-types "^15.6.2"

-"@docusaurus/theme-classic@2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/theme-classic/-/theme-classic-2.4.1.tgz"
-  integrity sha512-Rz0wKUa+LTW1PLXmwnf8mn85EBzaGSt6qamqtmnh9Hflkc+EqiYMhtUJeLdV+wsgYq4aG0ANc+bpUDpsUhdnwg==
+"@docusaurus/theme-classic@2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/theme-classic/-/theme-classic-2.4.3.tgz#29360f2eb03a0e1686eb19668633ef313970ee8f"
+  integrity sha512-QKRAJPSGPfDY2yCiPMIVyr+MqwZCIV2lxNzqbyUW0YkrlmdzzP3WuQJPMGLCjWgQp/5c9kpWMvMxjhpZx1R32Q==
   dependencies:
-    "@docusaurus/core" "2.4.1"
-    "@docusaurus/mdx-loader" "2.4.1"
-    "@docusaurus/module-type-aliases" "2.4.1"
-    "@docusaurus/plugin-content-blog" "2.4.1"
-    "@docusaurus/plugin-content-docs" "2.4.1"
-    "@docusaurus/plugin-content-pages" "2.4.1"
-    "@docusaurus/theme-common" "2.4.1"
-    "@docusaurus/theme-translations" "2.4.1"
-    "@docusaurus/types" "2.4.1"
-    "@docusaurus/utils" "2.4.1"
-    "@docusaurus/utils-common" "2.4.1"
-    "@docusaurus/utils-validation" "2.4.1"
+    "@docusaurus/core" "2.4.3"
+    "@docusaurus/mdx-loader" "2.4.3"
+    "@docusaurus/module-type-aliases" "2.4.3"
+    "@docusaurus/plugin-content-blog" "2.4.3"
+    "@docusaurus/plugin-content-docs" "2.4.3"
+    "@docusaurus/plugin-content-pages" "2.4.3"
+    "@docusaurus/theme-common" "2.4.3"
+    "@docusaurus/theme-translations" "2.4.3"
+    "@docusaurus/types" "2.4.3"
+    "@docusaurus/utils" "2.4.3"
+    "@docusaurus/utils-common" "2.4.3"
+    "@docusaurus/utils-validation" "2.4.3"
     "@mdx-js/react" "^1.6.22"
     clsx "^1.2.1"
     copy-text-to-clipboard "^3.0.1"
@ -1555,18 +1569,18 @@
tslib "^2.4.0" tslib "^2.4.0"
utility-types "^3.10.0" utility-types "^3.10.0"
"@docusaurus/theme-common@2.4.1": "@docusaurus/theme-common@2.4.3":
version "2.4.1" version "2.4.3"
resolved "https://registry.npmjs.org/@docusaurus/theme-common/-/theme-common-2.4.1.tgz" resolved "https://registry.yarnpkg.com/@docusaurus/theme-common/-/theme-common-2.4.3.tgz#bb31d70b6b67d0bdef9baa343192dcec49946a2e"
integrity sha512-G7Zau1W5rQTaFFB3x3soQoZpkgMbl/SYNG8PfMFIjKa3M3q8n0m/GRf5/H/e5BqOvt8c+ZWIXGCiz+kUCSHovA== integrity sha512-7KaDJBXKBVGXw5WOVt84FtN8czGWhM0lbyWEZXGp8AFfL6sZQfRTluFp4QriR97qwzSyOfQb+nzcDZZU4tezUw==
dependencies: dependencies:
"@docusaurus/mdx-loader" "2.4.1" "@docusaurus/mdx-loader" "2.4.3"
"@docusaurus/module-type-aliases" "2.4.1" "@docusaurus/module-type-aliases" "2.4.3"
"@docusaurus/plugin-content-blog" "2.4.1" "@docusaurus/plugin-content-blog" "2.4.3"
"@docusaurus/plugin-content-docs" "2.4.1" "@docusaurus/plugin-content-docs" "2.4.3"
"@docusaurus/plugin-content-pages" "2.4.1" "@docusaurus/plugin-content-pages" "2.4.3"
"@docusaurus/utils" "2.4.1" "@docusaurus/utils" "2.4.3"
"@docusaurus/utils-common" "2.4.1" "@docusaurus/utils-common" "2.4.3"
"@types/history" "^4.7.11" "@types/history" "^4.7.11"
"@types/react" "*" "@types/react" "*"
"@types/react-router-config" "*" "@types/react-router-config" "*"
@@ -1577,34 +1591,34 @@
     use-sync-external-store "^1.2.0"
     utility-types "^3.10.0"

-"@docusaurus/theme-live-codeblock@^2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/theme-live-codeblock/-/theme-live-codeblock-2.4.1.tgz"
-  integrity sha512-KBKrm34kcdNbSeEm6RujN5GWWg4F2dmAYZyHMMQM8FXokx8mNShRx6uq17WXi23JNm7niyMhNOBRfZWay+5Hkg==
+"@docusaurus/theme-live-codeblock@^2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/theme-live-codeblock/-/theme-live-codeblock-2.4.3.tgz#889eb4e740d2e9f2dc5516f9407f1bc147887387"
+  integrity sha512-wx+iJCCoSewUkMzFy7pnbhDBCRcJRTLkpx1/zwnHhfiNWVvJ2XjtBKIviRyMhynZYyvO4sLTpCclzK8JOctkxw==
   dependencies:
-    "@docusaurus/core" "2.4.1"
-    "@docusaurus/theme-common" "2.4.1"
-    "@docusaurus/theme-translations" "2.4.1"
-    "@docusaurus/utils-validation" "2.4.1"
+    "@docusaurus/core" "2.4.3"
+    "@docusaurus/theme-common" "2.4.3"
+    "@docusaurus/theme-translations" "2.4.3"
+    "@docusaurus/utils-validation" "2.4.3"
     "@philpl/buble" "^0.19.7"
     clsx "^1.2.1"
     fs-extra "^10.1.0"
     react-live "2.2.3"
     tslib "^2.4.0"

-"@docusaurus/theme-search-algolia@2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/theme-search-algolia/-/theme-search-algolia-2.4.1.tgz"
-  integrity sha512-6BcqW2lnLhZCXuMAvPRezFs1DpmEKzXFKlYjruuas+Xy3AQeFzDJKTJFIm49N77WFCTyxff8d3E4Q9pi/+5McQ==
+"@docusaurus/theme-search-algolia@2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/theme-search-algolia/-/theme-search-algolia-2.4.3.tgz#32d4cbefc3deba4112068fbdb0bde11ac51ece53"
+  integrity sha512-jziq4f6YVUB5hZOB85ELATwnxBz/RmSLD3ksGQOLDPKVzat4pmI8tddNWtriPpxR04BNT+ZfpPUMFkNFetSW1Q==
   dependencies:
     "@docsearch/react" "^3.1.1"
-    "@docusaurus/core" "2.4.1"
-    "@docusaurus/logger" "2.4.1"
-    "@docusaurus/plugin-content-docs" "2.4.1"
-    "@docusaurus/theme-common" "2.4.1"
-    "@docusaurus/theme-translations" "2.4.1"
-    "@docusaurus/utils" "2.4.1"
-    "@docusaurus/utils-validation" "2.4.1"
+    "@docusaurus/core" "2.4.3"
+    "@docusaurus/logger" "2.4.3"
+    "@docusaurus/plugin-content-docs" "2.4.3"
+    "@docusaurus/theme-common" "2.4.3"
+    "@docusaurus/theme-translations" "2.4.3"
+    "@docusaurus/utils" "2.4.3"
+    "@docusaurus/utils-validation" "2.4.3"
     algoliasearch "^4.13.1"
     algoliasearch-helper "^3.10.0"
     clsx "^1.2.1"
@@ -1614,10 +1628,10 @@
     tslib "^2.4.0"
     utility-types "^3.10.0"

-"@docusaurus/theme-translations@2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/theme-translations/-/theme-translations-2.4.1.tgz"
-  integrity sha512-T1RAGP+f86CA1kfE8ejZ3T3pUU3XcyvrGMfC/zxCtc2BsnoexuNI9Vk2CmuKCb+Tacvhxjv5unhxXce0+NKyvA==
+"@docusaurus/theme-translations@2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/theme-translations/-/theme-translations-2.4.3.tgz#91ac73fc49b8c652b7a54e88b679af57d6ac6102"
+  integrity sha512-H4D+lbZbjbKNS/Zw1Lel64PioUAIT3cLYYJLUf3KkuO/oc9e0QCVhIYVtUI2SfBCF2NNdlyhBDQEEMygsCedIg==
   dependencies:
     fs-extra "^10.1.0"
     tslib "^2.4.0"
@@ -1636,30 +1650,44 @@
     webpack "^5.73.0"
     webpack-merge "^5.8.0"

-"@docusaurus/utils-common@2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/utils-common/-/utils-common-2.4.1.tgz"
-  integrity sha512-bCVGdZU+z/qVcIiEQdyx0K13OC5mYwxhSuDUR95oFbKVuXYRrTVrwZIqQljuo1fyJvFTKHiL9L9skQOPokuFNQ==
+"@docusaurus/types@2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/types/-/types-2.4.3.tgz#4aead281ca09f721b3c0a9b926818450cfa3db31"
+  integrity sha512-W6zNLGQqfrp/EoPD0bhb9n7OobP+RHpmvVzpA+Z/IuU3Q63njJM24hmT0GYboovWcDtFmnIJC9wcyx4RVPQscw==
+  dependencies:
+    "@types/history" "^4.7.11"
+    "@types/react" "*"
+    commander "^5.1.0"
+    joi "^17.6.0"
+    react-helmet-async "^1.3.0"
+    utility-types "^3.10.0"
+    webpack "^5.73.0"
+    webpack-merge "^5.8.0"
+
+"@docusaurus/utils-common@2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/utils-common/-/utils-common-2.4.3.tgz#30656c39ef1ce7e002af7ba39ea08330f58efcfb"
+  integrity sha512-/jascp4GbLQCPVmcGkPzEQjNaAk3ADVfMtudk49Ggb+131B1WDD6HqlSmDf8MxGdy7Dja2gc+StHf01kiWoTDQ==
   dependencies:
     tslib "^2.4.0"

-"@docusaurus/utils-validation@2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/utils-validation/-/utils-validation-2.4.1.tgz"
-  integrity sha512-unII3hlJlDwZ3w8U+pMO3Lx3RhI4YEbY3YNsQj4yzrkZzlpqZOLuAiZK2JyULnD+TKbceKU0WyWkQXtYbLNDFA==
+"@docusaurus/utils-validation@2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/utils-validation/-/utils-validation-2.4.3.tgz#8122c394feef3e96c73f6433987837ec206a63fb"
+  integrity sha512-G2+Vt3WR5E/9drAobP+hhZQMaswRwDlp6qOMi7o7ZypB+VO7N//DZWhZEwhcRGepMDJGQEwtPv7UxtYwPL9PBw==
   dependencies:
-    "@docusaurus/logger" "2.4.1"
-    "@docusaurus/utils" "2.4.1"
+    "@docusaurus/logger" "2.4.3"
+    "@docusaurus/utils" "2.4.3"
     joi "^17.6.0"
     js-yaml "^4.1.0"
     tslib "^2.4.0"

-"@docusaurus/utils@2.4.1":
-  version "2.4.1"
-  resolved "https://registry.npmjs.org/@docusaurus/utils/-/utils-2.4.1.tgz"
-  integrity sha512-1lvEZdAQhKNht9aPXPoh69eeKnV0/62ROhQeFKKxmzd0zkcuE/Oc5Gpnt00y/f5bIsmOsYMY7Pqfm/5rteT5GA==
+"@docusaurus/utils@2.4.3":
+  version "2.4.3"
+  resolved "https://registry.yarnpkg.com/@docusaurus/utils/-/utils-2.4.3.tgz#52b000d989380a2125831b84e3a7327bef471e89"
+  integrity sha512-fKcXsjrD86Smxv8Pt0TBFqYieZZCPh4cbf9oszUq/AMhZn3ujwpKaVYZACPX8mmjtYx0JOgNx52CREBfiGQB4A==
   dependencies:
-    "@docusaurus/logger" "2.4.1"
+    "@docusaurus/logger" "2.4.3"
     "@svgr/webpack" "^6.2.1"
     escape-string-regexp "^4.0.0"
     file-loader "^6.2.0"