AI For Dev

Integrating LLMs into Workflows

This section looks at ways to integrate locally hosted LLMs into day-to-day development workflows, for example by calling a model from shell scripts, editor integrations, and other automation.
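One common integration point is the local HTTP server that LM Studio can run, which speaks an OpenAI-compatible chat completions API (by default at `http://localhost:1234/v1`). The sketch below shows how a script might call such a server from Python's standard library; the endpoint, model name, and helper functions are illustrative assumptions, not part of this guide, so adjust them to your own setup.

```python
import json
import urllib.request

# Assumed endpoint: LM Studio's local server defaults to port 1234
# with an OpenAI-compatible API; change this to match your configuration.
API_URL = "http://localhost:1234/v1/chat/completions"


def build_request(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-style chat completion payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }


def extract_reply(response: dict) -> str:
    """Pull the assistant's text out of an OpenAI-style response body."""
    return response["choices"][0]["message"]["content"]


def ask(prompt: str) -> str:
    """Send a prompt to the locally running model and return its reply."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return extract_reply(json.load(resp))


if __name__ == "__main__":
    # Example: ask the local model to explain a snippet of code.
    print(ask("Explain what this regex matches: ^\\d{3}-\\d{4}$"))
```

Because the server mimics the OpenAI API shape, the same pattern works from curl, editor plugins, or CI jobs, and you can swap models in LM Studio without changing the calling code.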
