Machine Learning · Beginner

LLM Bootcamp

This project-based bootcamp is designed for beginners to dive practically into the world of Large Language Models (LLMs). Through hands-on building, you will learn how to interact with top-tier AI APIs, master prompt engineering, orchestrate complex workflows using LangChain, and implement Retrieval-Augmented Generation (RAG) to query your own documents. By the end of this course, you will have the skills to build, test, and deploy a fully functional, custom AI web application.

7 Weeks
Project-Based Learning

About this Course

Across eight modules, you will progress from beginner concepts to production-ready skills through hands-on practice and expert guidance. Every module ends with a mini project, and the course culminates in a capstone application you can showcase in your portfolio.

Course Syllabus

Module 1: Introduction to LLMs & Prompt Engineering Fundamentals

  • What are Large Language Models? (Basic intuition, tokens, and context windows).
  • Setting up and interacting with LLM APIs (OpenAI / Gemini API).
  • Prompt Engineering 101: Zero-shot and Few-shot prompting.
  • Controlling the output: Understanding Temperature, Top-P, and Max Tokens.
  • Mini Project: Building a CLI-based Language Translator and Tone Analyzer.
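The ideas in this module can be sketched in a few lines. Below is a minimal example of few-shot prompting against the OpenAI chat API, with `temperature` and `max_tokens` exposed as parameters; the model name and the translation pairs are illustrative assumptions, not part of the course material.

```python
# Few-shot example pairs (English -> Indonesian); purely illustrative.
FEW_SHOT = [
    ("Good morning", "Selamat pagi"),
    ("Thank you very much", "Terima kasih banyak"),
]

def build_messages(text: str) -> list[dict]:
    """Build a chat message list: a system prompt plus few-shot example turns."""
    messages = [{"role": "system",
                 "content": "You are a translator. Translate English to Indonesian."}]
    for english, indonesian in FEW_SHOT:
        messages.append({"role": "user", "content": english})
        messages.append({"role": "assistant", "content": indonesian})
    messages.append({"role": "user", "content": text})
    return messages

def translate(text: str, temperature: float = 0.3, max_tokens: int = 100) -> str:
    """Call the API. Requires `pip install openai` and the OPENAI_API_KEY env var."""
    from openai import OpenAI  # imported here so the sketch loads without the SDK
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",            # assumed model; any chat model works
        messages=build_messages(text),
        temperature=temperature,        # lower = more deterministic output
        max_tokens=max_tokens,          # hard cap on the reply length
    )
    return response.choices[0].message.content
```

The few-shot turns show the model the desired behavior by example, which usually beats describing it in prose alone.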

Module 2: Advanced Prompting Strategies

  • Chain-of-Thought (CoT) prompting for complex reasoning.
  • System prompts and persona assignments.
  • Handling hallucinations and setting guardrails in prompts.
  • Output formatting (forcing JSON outputs).
  • Mini Project: Creating a structured Data Extraction tool (extracting entities from unstructured text into JSON).
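Forcing JSON output is only half the job; models sometimes wrap the JSON in markdown fences, so the parsing side needs to tolerate that. A small sketch (the prompt wording is an assumption):

```python
import json
import re

# Prompt template that pins the output to an exact JSON shape.
EXTRACTION_PROMPT = """Extract every person and organization from the text below.
Respond with ONLY valid JSON in this exact shape, no extra prose:
{{"people": ["..."], "organizations": ["..."]}}

Text: {text}"""

def parse_llm_json(raw: str) -> dict:
    """Parse a model reply that should be JSON but may arrive wrapped in ```json fences."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    payload = match.group(1) if match else raw
    return json.loads(payload)
```

Pairing a strict prompt with a lenient parser like this keeps the extraction tool robust when the model drifts from the exact format.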

Module 3: Orchestrating LLMs with LangChain

  • Introduction to LangChain: Why use an LLM framework?
  • Core components: Models, Prompts, and Output Parsers.
  • Building simple LLM Chains.
  • Implementing Memory: Giving your LLM conversation history.
  • Mini Project: Building a Terminal Chatbot that remembers user context and previous conversations.
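Conceptually, the memory component boils down to storing past turns and trimming them so the prompt fits the context window. This plain-Python sketch mimics what LangChain's buffer-style memory provides, without the framework:

```python
class ConversationMemory:
    """A sliding-window chat memory: stores turns and keeps only the most
    recent `max_turns`, so the prompt stays inside the model's context window.
    (A simplified stand-in for LangChain's buffer memory, not its actual API.)"""

    def __init__(self, max_turns: int = 10):
        self.max_turns = max_turns
        self.turns: list[dict] = []

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})
        # Drop the oldest turns once the window is full.
        self.turns = self.turns[-self.max_turns:]

    def as_messages(self, system_prompt: str) -> list[dict]:
        """History formatted for a chat-completions style API call."""
        return [{"role": "system", "content": system_prompt}] + self.turns
```

Each user/assistant exchange goes through `add`, and `as_messages` produces the full prompt for the next model call.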

Module 4: Text Embeddings & Vector Databases

  • Understanding Text Embeddings: How AI reads and measures semantic similarity.
  • Document Loading and Text Splitting strategies (Chunking).
  • Introduction to Vector Databases.
  • Storing and querying vectors using ChromaDB (local).
  • Mini Project: Building a semantic search engine to find relevant paragraphs within a large text file.

Module 5: Retrieval-Augmented Generation (RAG) Basics

  • The architecture of a RAG system.
  • Connecting the Vector Store Retriever to an LLM Chain.
  • Crafting the perfect RAG prompt to synthesize retrieved data.
  • Mini Project: "Chat with a Document" (A script that answers questions strictly based on a single uploaded PDF).

Module 6: Building LLM Agents & Tool Integration

  • What is an LLM Agent? (Reasoning and acting).
  • Giving LLMs access to the outside world (Tools).
  • Integrating external APIs (e.g., Wikipedia, Web Search, Calculators).
  • Mini Project: Creating a "Research Assistant Agent" that can search the internet to answer current-event questions and summarize the findings.

Module 7: Exploring Open-Source LLMs & Local Execution

  • Navigating the Hugging Face ecosystem.
  • Introduction to Ollama for local LLM inference.
  • Running models (like Llama 3 or Mistral) locally on your machine.
  • When to use Cloud APIs vs. Local Open-Source models.
  • Mini Project: Modifying the previous RAG pipeline to run 100% locally and offline.
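Talking to a local model via Ollama is a plain HTTP call to its default local endpoint. This sketch targets Ollama's `/api/generate` endpoint and assumes the server is running with a pulled model (e.g. `ollama serve` plus `ollama pull llama3`):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate; stream=False asks for the
    whole completion in a single JSON response."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate_local(model: str, prompt: str) -> str:
    """Send the prompt to a locally running Ollama server and return the reply."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because no API key or internet connection is involved, swapping this in for the cloud client is the main change needed to make the RAG pipeline fully offline.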

Module 8: Building User Interfaces & Deployment

  • Introduction to Streamlit for rapid web app development.
  • Connecting LangChain logic and session state to a Streamlit UI.
  • Designing an intuitive chat interface.
  • Basic deployment concepts (e.g., Streamlit Community Cloud).
  • Mini Project: Wrapping the terminal chatbot into a responsive web application.
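The key Streamlit concept here is that the script reruns on every user input, so chat history must live in `st.session_state`. A minimal sketch (the echo reply is a placeholder for your LangChain chain; in a real app you would call `main()` at the bottom of the file and launch it with `streamlit run app.py`):

```python
def init_history(session_state) -> list:
    """Ensure a message list exists in Streamlit's session_state (a dict-like
    object), so history survives the rerun Streamlit triggers on each input."""
    if "messages" not in session_state:
        session_state["messages"] = []
    return session_state["messages"]

def main():
    # Requires `pip install streamlit`; run with: streamlit run app.py
    import streamlit as st
    st.title("LLM Chatbot")
    messages = init_history(st.session_state)
    # Replay the stored conversation on every rerun.
    for msg in messages:
        st.chat_message(msg["role"]).write(msg["content"])
    if prompt := st.chat_input("Ask something..."):
        messages.append({"role": "user", "content": prompt})
        st.chat_message("user").write(prompt)
        reply = f"(echo) {prompt}"  # placeholder: call your LangChain chain here
        messages.append({"role": "assistant", "content": reply})
        st.chat_message("assistant").write(reply)
```

Keeping the session-state handling in a small helper like `init_history` makes the chat logic testable independently of the UI.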

Capstone Project

Featured Project

Domain-Specific AI Knowledge Assistant

For the final project, students will build an interactive, end-to-end web application that serves as a specialized virtual assistant for a specific domain of their choice (e.g., coding documentation, HR manuals, cooking recipes, or academic papers). Using a complete RAG architecture, the application will bypass general LLM knowledge to answer user queries strictly based on the custom documents uploaded to its database.

Core Project Goal

Apply all the skills you've learned to build a production-ready application from scratch. This project serves as your portfolio piece.

Key Features:

  • Dynamic Document Processing: A sidebar interface allowing users to upload new PDF or TXT files, which the app automatically chunks, embeds, and stores in the vector database.
  • Context-Aware Chat UI: A modern chat interface built with Streamlit that maintains conversation history, allowing users to ask follow-up questions naturally.
  • Strict Guardrails (Anti-Hallucination): System instructions designed so the AI politely declines to answer questions that fall outside the context of the uploaded documents.
  • Source Citation: The assistant will display the exact chunks of text or document names it used to generate its answer, ensuring transparency and trustworthiness.
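The source-citation feature can be as simple as collecting the document names attached to the retrieved chunks and appending them to the answer. A sketch, assuming each retrieved item is a dict carrying the chunk text and its source document name (that shape is an assumption of this sketch, not a fixed format):

```python
def format_answer_with_sources(answer: str, retrieved: list[dict]) -> str:
    """Append a deduplicated source list to the assistant's answer so users
    can verify where it came from."""
    sources = {item["source"] for item in retrieved}
    lines = [answer, "", "Sources:"]
    lines += [f"  • {name}" for name in sorted(sources)]
    return "\n".join(lines)
```

Showing the raw chunks themselves (e.g. in an expandable panel) is a natural extension of the same idea.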
Total Investment

  • Rp 7,000,000 (one-time payment)
  • Duration: 7 Weeks
  • Mode: Online
  • Certificate of Completion