General

  • Simple API Wrappers
    • {tidychatmodels} - Communicates with different chatbot vendors (OpenAI, Mistral AI, etc.) using the same interface.
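      • Example: a minimal sketch of the pipe-based workflow, assuming an OPENAI_API_KEY environment variable (verbs per the package README)

        library(tidychatmodels)

        # Build a chat pipeline: vendor -> model -> message -> send -> extract
        create_chat("openai", Sys.getenv("OPENAI_API_KEY")) |>
          add_model("gpt-4o-mini") |>
          add_message("What is the capital of France?") |>
          perform_chat() |>
          extract_chat()
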
    • {gemini.R} - Wrapper around the Google Gemini API
    • {rollama} - Wrapper around the Ollama API
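      • Example: a minimal sketch, assuming a local Ollama server is running on the default port

        library(rollama)

        pull_model("llama3.1")  # download the model once
        query("Why is the sky blue?")  # one-off prompt, no history
        chat("And how does that relate to sunsets?")  # keeps conversation history
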
    • {chatAI4R} - Wrapper around the OpenAI API
    • {TheOpenAIR} - Wrapper around OpenAI models
  • Code Assistants
    • {chattr} - Code assistant for RStudio
    • {gander} - A higher-performance and lower-friction chat experience for data scientists in RStudio and Positron. Sort of like completions with Copilot, but it knows how to talk to the objects in your R environment.
      • Brings {ellmer} chats into your project sessions, automatically incorporating relevant context and streaming their responses directly into your documents.
    • {btw} - Helps you describe your computational environment to LLMs
      • Assembles context on your R environment, package documentation, and working directory, copying the results to your clipboard for easy pasting into chat interfaces.
      • Wraps methods that can be easily incorporated into {ellmer} tool calls for describing various kinds of objects in R
      • Support for {mcptools}
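      • Example: a minimal sketch; btw() gathers descriptions of the objects/docs you pass it and copies an LLM-ready Markdown summary to the clipboard

        library(btw)

        # Describe a data frame and a help topic in one go
        btw(mtcars, "?dplyr::across")
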
  • {ellmer} - Supports a wide variety of LLM providers and implements a rich set of features including streaming outputs, tool/function calling, structured data extraction, and more.
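    • Example: a minimal sketch, assuming an OPENAI_API_KEY environment variable; other supported providers work the same way

      library(ellmer)

      chat <- chat_openai(model = "gpt-4o-mini", system_prompt = "Be terse.")
      chat$chat("What does anova() report?")  # streams the reply to the console
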
  • {ollamar} - R version of the ollama Python and JavaScript libraries
    • Makes it easy to work with data structures (e.g., conversational/chat histories) that are standard for different LLMs (such as those provided by OpenAI and Anthropic).
    • Lets you specify different output formats (e.g., dataframes, text/vector, lists) that best suit your needs, allowing easy integration with other libraries/tools and parallelization via the {httr2} library.
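    • Example: a minimal sketch, assuming a local Ollama server

      library(ollamar)

      test_connection()  # check that the Ollama server is reachable

      # Chat-message format shared with OpenAI/Anthropic-style APIs
      messages <- list(
        list(role = "user", content = "Why is the sky blue?")
      )
      chat("llama3.1", messages, output = "text")  # or "df", "jsonlist", "resp"
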
  • {mall} (Intro) - Text analysis on the rows of a dataframe using a predetermined one-shot prompt (which prompt depends on the function). The prompt plus the row's text is sent to an Ollama LLM for the prediction.
    • Also available in Python
    • Features
      • Analyze sentiment
      • Summarize text
      • Classify text
      • Extract one, or several, specific pieces of information from the text
      • Translate text
      • Verify that something is true about the text (binary)
      • Custom prompt
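    • Example: a minimal sketch, assuming a local Ollama server and the llama3.2 model

      library(mall)

      llm_use("ollama", "llama3.2")  # pick the backend and model

      reviews <- data.frame(review = c(
        "This has been the best TV I've ever used.",
        "I regret buying this laptop."
      ))

      reviews |> llm_sentiment(review)  # adds a .sentiment prediction column
      reviews |> llm_classify(review, c("electronics", "other"))
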
  • {batchLLM} - Process prompts through multiple LLMs at the same time.
    • Takes a data frame column as LLM input and adds a new column with the text completions as the output.
    • Supports OpenAI, Claude & Gemini.
  • {llmR} - Interface to OpenAI’s GPT models, Azure’s language models, Google’s Gemini models, or custom local servers
    • Unified API: Setup and easily switch between different LLM providers and models using a consistent set of functions.
    • Prompt Processing: Convert chat messages into a standard format suitable for LLMs.
    • Output Processing: Can request JSON output from the LLMs and tries to sanitize the response if the parsing fails.
    • Error Handling: Automatically handle errors and retry requests when rate limits are exceeded. If a response is cut due to token limits, the package will ask the LLM to complete the response.
    • Custom Providers: Query custom endpoints (local and online) and implement ad-hoc LLM connection functions.
    • Mock Calls: Allows simulation of LLM interactions for testing purposes.
    • Logging: Option to log the LLM response details for performance and cost monitoring.
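    • Example: a sketch of the unified-API idea only; the function names below are assumptions from memory of the README, so verify against the package docs

      library(llmR)

      # NOTE: names are assumptions, not verified against the current API
      record_llmr_model(label = "openai", provider = "openai",
                        model = "gpt-4o-mini",
                        api_key = Sys.getenv("OPENAI_API_KEY"))
      set_llmr_model("openai")  # switch providers by recording another label

      prompt_llm("Summarize the iris dataset in one sentence.")
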
  • {aigenflow} - Create intelligent agents and orchestrate multi-step workflows in a few lines of code.
  • {samesies} - A reliability tool for comparing the similarity of texts, factors, or numbers across two or more lists. The motivating use case is to evaluate the reliability of Large Language Model (LLM) responses across models, providers, or prompts.
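    • Example: a minimal sketch, assuming the same_text() interface from the package README

      library(samesies)

      # The same three questions answered by two models (hypothetical data)
      ans_model_a <- c("Paris", "4", "blue")
      ans_model_b <- c("Paris", "four", "blue")

      res <- same_text(ans_model_a, ans_model_b)  # pairwise similarity scores
      summary(res)
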
  • {hellmer} - Enables sequential or parallel batch processing for chat models from {ellmer}.
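    • Example: a minimal sketch; method names follow the package README as I recall them, so check the current docs

      library(hellmer)

      # Batch a list of prompts sequentially; chat_future() runs them in parallel
      chat <- chat_sequential(ellmer::chat_openai, model = "gpt-4o-mini")
      result <- chat$batch(list(
        "What is 2 + 2?",
        "Name one use of anova()."
      ))
      result$texts()  # completed responses as a character vector
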
  • {querychat} (also in Python) - A drop-in component for Shiny that allows users to query a data frame using natural language. The results are available as a reactive data frame, so they can be easily used from Shiny outputs, reactive expressions, downloads, etc.
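    • Example: a minimal sketch of the Shiny module wiring, assuming an OPENAI_API_KEY environment variable for the default ellmer backend

      library(shiny)
      library(querychat)

      config <- querychat_init(mtcars)  # declare which data frame can be queried

      ui <- bslib::page_sidebar(
        sidebar = querychat_sidebar("chat"),
        tableOutput("tbl")
      )

      server <- function(input, output, session) {
        chat <- querychat_server("chat", config)
        output$tbl <- renderTable(chat$df())  # reactive, LLM-filtered data frame
      }

      shinyApp(ui, server)
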
  • {shinychat} - Shiny UI component for LLM apps
    • Example: Basic (source)

      library(shiny)
      library(shinychat)
      
      ui <- bslib::page_fluid(
        chat_ui("chat")
      )
      
      server <- function(input, output, session) {
        # One chat object per session, backed by a local Ollama model
        chat <- ellmer::chat_ollama(system_prompt = "You are a helpful assistant", model = "phi4")

        # Stream each reply into the chat UI as it is generated
        observeEvent(input$chat_user_input, {
          stream <- chat$stream_async(input$chat_user_input)
          chat_append("chat", stream)
        })
      }
      
      shinyApp(ui, server)