Posts

Building Production-Ready, Full Stack AI Agents With LangGraph, FastAPI, and Next.js 15

Live Demo Here · GitHub Repo Here. Shipping an AI agent from notebook to production-ready app usually means wiring up observability, auth, streaming UI, and persistence across two very different stacks. This template cuts straight through that friction by pairing a LangGraph-and-FastAPI microservice with a Next.js 15 frontend that already speaks the Vercel AI SDK stream protocol. You start with a pipeline that handles retrieval-augmented generation, tool calls, and real-time user experience out of the box. The backend is a modernized fork of Joshua Carroll’s agent-service-toolkit: Python 3.13+, LangGraph graphs, Pydantic models, Uvicorn, and an AI SDK-compatible SSE layer. It already ships with history endpoints, multi-tenant settings through environment files, and optional Mongo or Postgres checkpoints. Because it is built around LangGraph, you can drop in single agents or orchestrate multi-agent workflows without fighting the HTTP layer. On the frontend, we remix Vercel’s AI Chatbot tem...
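At the wire level, streaming like this comes down to framing events over Server-Sent Events. A minimal sketch of generic SSE framing (the `format_sse` helper is hypothetical, and this is the standard SSE format, not the Vercel AI SDK stream protocol's exact part types):

```python
import json


def format_sse(data: dict, event=None) -> str:
    """Frame a JSON payload as a Server-Sent Events message.

    Generic SSE framing: an optional 'event:' field, a 'data:' field,
    and a blank line terminating the message.
    """
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.append(f"data: {json.dumps(data)}")
    return "\n".join(lines) + "\n\n"


# Example: frame a token delta before writing it to the response stream
chunk = format_sse({"type": "text-delta", "delta": "Hello"})
```

In FastAPI, a generator yielding strings like these can be wrapped in a streaming response with the `text/event-stream` media type; the AI SDK protocol layers its own message-part vocabulary on top of this framing.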

Ai SDK ui - 1 - Learning AI SDK UI with Next.js

I recently started learning AI SDK UI. I’ve been working with Next.js already, but this library adds something special: it's framework agnostic! It works with Angular, Vue, Svelte, and React. Of course, I’m doing it with Next.js (React) because that’s what I know best. One of the first things I tried was the useChat hook. It makes building a chatbot UI feel way simpler. Normally, handling messages, updating state, and keeping track of what the AI is saying can be messy. With useChat, a lot of that heavy lifting is done for you. Here are some cool things I noticed about it: Message streaming out of the box, so you don't need to implement streaming manually! The hook manages state like input, messages, status, and errors, so you don't have to write all that boilerplate yourself. I also liked that it handles more than just the happy path. For example, you can stop a response midway if you want; that comes out of the box too! There are even helpers for modifying message...

NMLS - 4. Recommender Systems - Content Based Filtering - Introductory Machine Learning

In 2022 I started the new Machine Learning Specialization by Andrew Ng. I completed its first two courses in 2022. I'll share the Jupyter notebooks from this specialization soon. Now, in 2023, when I am finally free from my M1, I am going to complete the specialization. Here is the fourth assignment of course 3 (the last course), on Recommender Systems.  Procedure: In content-based filtering, for a movie streaming website, we have a set of users and the ratings they gave to the movies they watched. We can also create engineered features, e.g., the average rating that a user gave to each genre. We then have another dataset that contains all of the movies with their corresponding average ratings and one-hot-encoded genres. The user data is fed into one neural network, and the movie data is fed into another. The two networks can have different architectures, but the final layer of each must be identical in size to the other's. The final layers give...

NMLS - 2. Anomaly Detection - Unsupervised Learning - Introductory Machine Learning

In 2022 I started the new Machine Learning Specialization by Andrew Ng. I completed its first two courses in 2022. I'll share the Jupyter notebooks from this specialization soon. Now, in 2023, when I am finally free from my M1, I am going to complete the specialization. Here is the second assignment of course 3 (the last course), on Anomaly Detection.  Problem Statement: We were given 307 measurements of two features, throughput (mb/s) and latency (ms), of several servers, and were tasked with finding anomalous behavior of a server.  Procedure: Since, in addition to unlabelled data, we also had some labelled data, we made a training dataset from the unlabelled data and a cross-validation set from the labelled data. First we used NumPy to find the mean and variance of the features. Then, using the mean and variance, we created a probability distribution function for each feature. Assuming that all features are statistically independent, the total probability for a samp...
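The fit-and-score steps above can be sketched in plain Python (the assignment uses NumPy; these function names are my own). Per-feature Gaussians are fit from the training data, the densities are multiplied under the independence assumption, and a sample is flagged when its total probability falls below a threshold epsilon:

```python
import math


def fit_gaussian(samples):
    # Per-feature mean and variance over the (unlabelled) training set
    n, d = len(samples), len(samples[0])
    mu = [sum(s[j] for s in samples) / n for j in range(d)]
    var = [sum((s[j] - mu[j]) ** 2 for s in samples) / n for j in range(d)]
    return mu, var


def density(x, mu, var):
    # Product of independent univariate Gaussian densities
    p = 1.0
    for xj, m, v in zip(x, mu, var):
        p *= math.exp(-((xj - m) ** 2) / (2 * v)) / math.sqrt(2 * math.pi * v)
    return p


def is_anomaly(x, mu, var, epsilon):
    return density(x, mu, var) < epsilon
```

In the assignment, epsilon is then chosen by sweeping thresholds on the labelled cross-validation set and keeping the one with the best F1 score.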

NMLS - 3. Recommender Systems - Collaborative Filtering - Introductory Machine Learning

In 2022 I started the new Machine Learning Specialization by Andrew Ng. I completed its first two courses in 2022. I'll share the Jupyter notebooks from this specialization soon. Now, in 2023, when I am finally free from my M1, I am going to complete the specialization. Here is the third assignment of course 3 (the last course), on Recommender Systems.  We were given a dataset containing 'nu' users and 'nm' movies, with the ratings that each user gave to various movies. The task: we have a new user who has not yet rated a lot of movies, and we have to recommend movies to him based on his previous ratings.  Procedure: The cost function was implemented using a for loop and then without any loop (vectorized implementation). A list of movie ratings for a new user was created and added to the training data. The rows (movies) were normalized by taking the average of each row over non-zero values and then subtracting each value from the...
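The row-normalization step can be sketched as follows (a plain-Python sketch; the assignment does this with NumPy, and `normalize_ratings` is a hypothetical name). The mean of each movie's row is computed over rated entries only, and is subtracted only where a rating exists:

```python
def normalize_ratings(Y, R):
    # Y[i][j]: rating of movie i by user j; R[i][j]: 1 if user j rated movie i
    Ymean, Ynorm = [], []
    for yi, ri in zip(Y, R):
        rated = [y for y, r in zip(yi, ri) if r]
        mean = sum(rated) / len(rated) if rated else 0.0
        Ymean.append(mean)
        # Unrated entries stay at zero so they don't contribute to the cost
        Ynorm.append([y - mean if r else 0.0 for y, r in zip(yi, ri)])
    return Ynorm, Ymean
```

Adding the per-movie mean back at prediction time means a brand-new user with no ratings gets sensible defaults: each movie's average rating.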

NMLS - 1. Clustering - Kmeans - Unsupervised Learning - Introductory Machine Learning

In 2022 I started the new Machine Learning Specialization by Andrew Ng. I completed its first two courses in 2022. I'll share the Jupyter notebooks from this specialization soon. Now, in 2023, when I am finally free from my M1, I am going to complete the specialization. Here is the first assignment of course 3 (the last course), on Clustering. Procedure: Distances between each data point and all of the cluster centroids were computed. Each data point was assigned to its closest cluster centroid. The average positions of all of the data points in each cluster were computed, and the corresponding centroids were moved to these new average positions. The procedure was repeated for a specific number of iterations. Relevant GitHub link. Relevant video explanation (in Urdu).
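The assign-then-update loop described above can be sketched in plain Python (the assignment implements these steps with NumPy; this is a compact stand-in using squared Euclidean distance):

```python
def kmeans(points, centroids, iters=10):
    assignments = []
    for _ in range(iters):
        # Step 1: assign each point to its nearest centroid
        assignments = []
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            assignments.append(dists.index(min(dists)))
        # Step 2: move each centroid to the mean of its assigned points
        for k in range(len(centroids)):
            members = [p for p, a in zip(points, assignments) if a == k]
            if members:
                centroids[k] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, assignments
```

Fixing the number of iterations, as the assignment does, is the simplest stopping rule; a common alternative is to stop once the assignments no longer change.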

CONVOLUTIONAL NETWORK from Scratch - Binary Classification

I used the world-renowned dogs-vs-cats dataset from Kaggle to train a convolutional neural network. My procedure was as follows: -  I downloaded the dataset from Kaggle. -  Created directories and transferred the relevant data. I created an 'original_dataset_dir' and transferred all of the data (data of all classes) to this directory. I then created a base directory, 'base_dir', with the folders 'train', 'validation', and 'test' inside it, and then the folders 'cats' and 'dogs' inside each of those. Then I transferred the data according to the following ratio: (Train, Validation, Test) = (50, 25, 25). -  Initiated a small convolutional network. In the conv base, I created 4 successive 'Conv2D' layers, each followed by a 'MaxPooling2D' layer, without any padding. In the classifier base, I added a 'Flatten' layer followed by two Dense layers. -  Compiled the mod...
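The directory-setup step can be sketched like this (a hypothetical `plan_split` helper that builds the folder tree and returns which file goes where, rather than moving files with shutil as the original procedure does; it assumes Kaggle's 'cat.0.jpg' / 'dog.0.jpg' naming):

```python
import random
from pathlib import Path


def plan_split(original_dir, base_dir, classes=("cats", "dogs"),
               ratios=(0.50, 0.25, 0.25), seed=0):
    """Create base_dir/{train,validation,test}/{cats,dogs} and plan a
    (50, 25, 25) split of the files found in original_dir."""
    original_dir, base_dir = Path(original_dir), Path(base_dir)
    rng = random.Random(seed)
    plan = {}
    for split in ("train", "validation", "test"):
        for cls in classes:
            (base_dir / split / cls).mkdir(parents=True, exist_ok=True)
    for cls in classes:
        # Kaggle's files use the singular class name as a prefix
        prefix = cls.rstrip("s")
        files = sorted(p.name for p in original_dir.glob(f"{prefix}.*"))
        rng.shuffle(files)
        n_train = int(len(files) * ratios[0])
        n_val = int(len(files) * ratios[1])
        plan[cls] = {
            "train": files[:n_train],
            "validation": files[n_train:n_train + n_val],
            "test": files[n_train + n_val:],
        }
    return plan
```

Shuffling before splitting matters here: the raw Kaggle listing is sorted by class and index, and a sequential split could otherwise bias the validation and test sets.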