
  • About Quinbay Publications

    Quinbay Publication

We follow our passion for digital innovation. Our high-performing team of talented and committed engineers is building the future of business tech.

Thursday, August 28, 2025

Your AI Sidekick: How Claude Took Over Pritee’s Repetitive Tasks

 


It was a classic Wednesday morning in our Bengaluru office. Pritee, one of our sharpest Project Managers, had just stepped out of a stakeholder call. She was juggling three high-priority projects, and you could see the mental load—keeping track of deadlines, risks, and RAG statuses scattered across different Excel sheets and tools.

During our coffee break, she vented, “Yaar, I spend half my day just gathering information. If I could just ask Claude, ‘What’s the health of Project Phoenix, and are we on track?’ and get a real answer from my data... that would be a dream.”

We've all been there, right? That’s the classic "last mile" problem with even the most brilliant AI models. They know everything about the world, but they know nothing about your world—your project tracker, your team’s progress, your specific ground reality.

What if you could build a bridge for that? A way to give Claude a special key to unlock your custom tools, so it can do the needful for you. This bridge is called an MCP (Model Context Protocol) server, and today, I'll show you how we can build one for Pritee using Python.

What exactly is this MCP Server?

Imagine Claude is a super-smart new intern. This intern can write, analyse and reason better than anyone. But if you ask them to check the RAG status of your project from an internal dashboard, they'll just stare back blankly. They don’t have access.

An MCP Server is like you giving that intern a set of keys and a clear instruction manual. Each "key" is a tool (a function your server exposes), and the "manual" is a specification that tells the intern (Claude) exactly what each key does and how to use it.

So, when Pritee asks, "Hey, what's the status of the phoenix project?", Claude consults its manual, finds the right key (get_project_status), uses it to unlock her server, gets the data, and then gives her a perfectly framed answer. All sorted.

Let's Get Our Hands Dirty: Building a PM's Best Friend in Python

We're going to build a server with two powerful tools for Pritee:

  1. Get Project Status: A tool to fetch a project's health (RAG status) and any overdue tasks.

  2. Log a New Risk: A tool to quickly log a new risk against a project.

We'll use FastMCP, a modern and super-fast Python framework. If you haven't installed it, just run a quick command to install all required modules:

pip install fastmcp pydantic requests


Now, let's create our server file. Call it pm_mcp_server.py

# pm_mcp_server.py

from fastmcp import FastMCP
from pydantic import BaseModel, Field
from datetime import date

mcp = FastMCP("Pritee's PM Tool Server for Claude")

# --- Mock Database for our Projects ---
# In a real-world scenario, this data would come from Jira, a database, or a project management tool.
mock_project_data = {
    "ProjectPhoenix": {
        "status": "Amber",
        "tasks": [
            {"name": "Finalize UI/UX designs", "due_date": "2025-08-20", "owner": "Hari Prasad"},
            {"name": "API integration", "due_date": "2025-08-30", "owner": "Rishabh Kochar"}
        ],
        "risks": []
    },
    "ProjectTitan": {
        "status": "Green",
        "tasks": [
            {"name": "Complete user testing", "due_date": "2025-09-05", "owner": "Evangeline"},
            {"name": "Deploy to staging", "due_date": "2025-09-15", "owner": "Shashank"}
        ],
        "risks": []
    }
}

# --- Tool 1: Get Project Status ---

class ProjectStatusInput(BaseModel):
    project_name: str = Field(..., description="The name of the project, e.g., 'ProjectPhoenix'.")

@mcp.tool()
def get_project_status(payload: ProjectStatusInput):
    """Fetch a project's RAG status and any overdue tasks."""
    project = mock_project_data.get(payload.project_name)
    if not project:
        return {"status": "error", "message": f"Project '{payload.project_name}' not found."}

    today = date.today()
    overdue_tasks = [
        task for task in project["tasks"] 
        if date.fromisoformat(task["due_date"]) < today
    ]    

    return {
        "project_name": payload.project_name,
        "rag_status": project["status"],
        "overdue_tasks": overdue_tasks
    }

# --- Tool 2: Log a New Risk ---

class LogRiskInput(BaseModel):
    project_name: str = Field(..., description="The project to log the risk against.")
    risk_description: str = Field(..., description="A clear description of the new risk.")
    priority: str = Field(..., description="The priority of the risk (e.g., 'High', 'Medium', 'Low').")

@mcp.tool()
def log_new_risk(payload: LogRiskInput):
    """Log a new risk against a project."""
    project = mock_project_data.get(payload.project_name)
    if not project:
        return {"status": "error", "message": f"Project '{payload.project_name}' not found."}

    new_risk = {"description": payload.risk_description, "priority": payload.priority, "logged_on": str(date.today())}
    project["risks"].append(new_risk) # Adding to our mock data        

    return {"status": "success", "message": f"New risk logged for {payload.project_name}."}

if __name__ == "__main__":
    # Claude Desktop launches this script and talks to it over stdio
    mcp.run()

This code defines a small set of MCP tools built specifically for a PM's workflow. It checks for overdue tasks by comparing dates and lets Pritee log new risks on the fly.
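The overdue check above boils down to a simple date comparison. Here is a self-contained sketch of just that logic, using the same task shape as the mock data:

```python
from datetime import date

# Same shape as the tasks in mock_project_data above
tasks = [
    {"name": "Finalize UI/UX designs", "due_date": "2025-08-20"},
    {"name": "API integration", "due_date": "2025-08-30"},
]

def find_overdue(tasks, today):
    # A task is overdue when its due date falls strictly before "today"
    return [t for t in tasks if date.fromisoformat(t["due_date"]) < today]

overdue = find_overdue(tasks, today=date(2025, 8, 26))
print([t["name"] for t in overdue])  # → ['Finalize UI/UX designs']
```

Because `date.fromisoformat` parses the ISO strings into real `date` objects, the comparison works correctly across month and year boundaries, not just lexically.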

Putting It All Together: A PM's Dream Workflow

Okay, so our FastMCP server is humming along nicely on our local machine and is ready for action. Now for the most important step: the actual introduction. How do we make Claude aware of these fantastic new powers we've built for it?

The Handshake: Introducing your Tools to Claude

Think of this part like giving a briefing to a new, super-intelligent team member. You need to give them their tools, tell them where the office is (your server address), and make sure they understand their tasks.

Claude Desktop looks for a JSON configuration file (on macOS under ~/Library/Application Support/Claude/, on Windows under %APPDATA%\Claude\ — the exact path depends on your OS). If it doesn’t exist, you can create one manually.

The file must be named exactly claude_desktop_config.json

Inside this JSON, you’ll declare one or more MCP servers under the "mcpServers" key. Each entry includes:

  • A unique server name (e.g., "pm-tool-server")

  • The "type" of connection ("stdio" in most cases)

  • The "command" to launch the server (here, the Python binary).

  • Any "args" needed (the path to your MCP server script).

Here’s an example configuration:

{
  "mcpServers": {
    "pm-tool-server": {
      "type": "stdio",
      "command": "/usr/local/opt/python@3.11/libexec/bin/python",
      "args": ["/Users/nsrikantaiah/Projects/Python/pm-tool-server/pm_mcp_server.py"]
    }
  }
}

This tells Claude Desktop to spin up your pm-tool-server by running the script pm_mcp_server.py using Python. The app will then communicate with it through standard input/output streams.

Once you’ve saved your configuration file, restart Claude Desktop. On startup, it will read the JSON configuration, launch your MCP server and automatically establish the connection.

If everything is set up correctly, Claude will now be able to call your server whenever needed, extending its capabilities seamlessly.

Start Delegating!

Now that Claude is briefed and ready, Pritee can issue commands in plain English in the chat session.

Watch this:

Review the status of the Phoenix project. If you find any overdue tasks, create a new medium-priority risk with the description Potential timeline slippage due to design delays.

Behind the scenes, Claude will execute the command flawlessly:

  • It invokes your server's get_project_status tool for "Phoenix".

  • Your server sees that the "Finalize UI/UX designs" task is overdue (its due date of August 20, 2025 has passed) and returns this information.

  • It then calls the log_new_risk tool to record the medium-priority risk, and finally confirms to Pritee that the status was checked, the overdue task was flagged, and the risk was logged.
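Under the hood, each tool call travels as a JSON-RPC 2.0 message over the stdio connection. An illustrative (slightly simplified) request from Claude to the server might look like this — the exact payload shape depends on how the tool's parameters are declared:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_project_status",
    "arguments": { "payload": { "project_name": "ProjectPhoenix" } }
  }
}
```

The server replies with a matching JSON-RPC response whose result Claude then turns into a natural-language answer for Pritee.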

[Screenshot: chat between Pritee and Claude]

From Overwhelmed to Empowered

This isn’t just a flashy tech demo. For Pritee, it’s a shift from being a data collector to becoming a decision-maker. The routine work gets automated, freeing her up to focus on strategy, problem-solving and helping her team without burning out.

When you build a simple MCP server, you’re not merely wiring up another API—you’re creating a personalized extension of your AI. One that adapts to your workflow and truly acts as an assistant.

Now, look at your own daily grind. Which repetitive tasks could you hand over to automation? And what’s the first tool you’d build to make your AI genuinely your own?

Wednesday, July 2, 2025

How to Build an AI Stock Analyst Agent with Python and CrewAI

For Python developers looking to leverage the power of Large Language Models (LLMs) for complex, multi-step tasks, crewAI offers a robust framework for orchestrating autonomous AI agents. This guide provides a strictly technical, step-by-step walkthrough for building a financial analysis "crew" that researches a stock and provides a recommendation.

This system will use two specialized agents:

  1. Fundamental Analyst Agent: Gathers the latest news and essential financial data for a given stock.
  2. Technical Analyst Agent: Consumes the fundamental data, performs a technical analysis, and delivers a final buy, sell, or hold recommendation.

We will use a free, high-speed LLM from Groq and create a custom tool for the technical analysis, providing a practical, real-world example of crewAI's capabilities.

Core CrewAI Concepts

Before writing the code, it's essential to understand the primary components of the framework:

  • Agents: These are the AI workers. Each agent is configured with a role, a goal, and a backstory to define its area of expertise and operational context. They are also equipped with an llm and a set of tools to perform their functions.

  • Tools: These extend an agent's abilities beyond the LLM's inherent knowledge. A tool can be anything from a web search function to a custom Python function that interacts with a database or API.

  • Tasks: A task is a single, well-defined unit of work assigned to an agent. It includes a description of what needs to be done and an expected_output format. Crucially, tasks can be chained together using the context parameter, which passes the output of one or more preceding tasks to the current one.

  • Crew: A crew is the collaborative unit that brings together agents and tasks. It defines the process by which tasks will be executed, such as Process.sequential, where tasks are completed one after another in a defined order.
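To make the context-passing idea concrete before we meet the real API, here is a plain-Python sketch (deliberately not crewAI's actual classes) of a sequential process in which the first task's output becomes the second task's context:

```python
# Conceptual sketch of Process.sequential plus context chaining, in plain
# Python: each "task" is a function, and the first task's output is handed
# to the second as context.

def fundamental_research(stock: str) -> str:
    # Stand-in for the Fundamental Analyst agent's report
    return f"Fundamental report for {stock}: stable earnings, positive news flow."

def technical_analysis(stock: str, context: str) -> str:
    # Stand-in for the Technical Analyst agent, which sees the first report
    return f"Context received: {context} Technicals neutral. Verdict: **HOLD**"

stock = "RPOWER"
report = fundamental_research(stock)          # task 1 runs first
verdict = technical_analysis(stock, context=report)  # task 2 consumes task 1's output
print(verdict)
```

crewAI automates exactly this hand-off: `Process.sequential` fixes the order, and the `context` parameter wires one task's output into the next.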

1. Prerequisites and Environment Setup

First, ensure you have Python installed. Then, install the necessary libraries for this project.

pip install crewai crewai-tools langchain-groq yfinance pandas pandas-ta

  • crewai & crewai-tools: The core framework and its standard tools.
  • langchain-groq: Allows integration with the fast, free-tier LLMs provided by Groq.
  • yfinance: A popular library for fetching historical stock market data from Yahoo Finance.
  • pandas & pandas-ta: For data manipulation and applying technical analysis indicators.

Next, you need to acquire API keys from Groq and Serper for LLM access and web search capabilities, respectively. Create a .env file in your project's root directory to store these keys securely.

GROQ_API_KEY="your-groq-api-key"
SERPER_API_KEY="your-serper-api-key"

2. Defining a Custom Technical Analysis Tool

While crewAI provides built-in tools like web search, its real power is unlocked with custom tools. We will create a tool that fetches historical stock data, calculates key technical indicators, and returns an analysis.

Create a file named stock_tools.py

# stock_tools.py
from crewai_tools import BaseTool
import yfinance as yf
import pandas_ta as ta

class StockTechnicalAnalysisTool(BaseTool):

    name: str = "Stock Technical Analysis Tool"
    description: str = (
        "This tool performs technical analysis on a stock's historical data. "
        "It fetches price data, calculates RSI, MACD, and moving averages, "
        "and provides a summary of these technical indicators."
    )

    def _run(self, ticker: str) -> str:
        try:
            # Fetch historical data for the last 6 months
            stock_data = yf.Ticker(ticker).history(period="6mo")

            if stock_data.empty:
                return f"Error: No data found for ticker {ticker}."

            # Calculate Technical Indicators using pandas_ta
            stock_data.ta.rsi(append=True)
            stock_data.ta.macd(append=True)
            stock_data.ta.sma(length=20, append=True)
            stock_data.ta.sma(length=50, append=True)

            # Get the most recent data
            latest_data = stock_data.iloc[-1]            

            # Create a summary string
            analysis_summary = (
                f"Technical Analysis for {ticker}:\n"
                f"Latest Close Price: {latest_data['Close']:.2f}\n"
                f"RSI (14): {latest_data['RSI_14']:.2f}\n"
                f"SMA (20): {latest_data['SMA_20']:.2f}\n"
                f"SMA (50): {latest_data['SMA_50']:.2f}\n"
                f"MACD: {latest_data['MACD_12_26_9']:.2f} | Signal: {latest_data['MACDs_12_26_9']:.2f}"
            )

            return analysis_summary

        except Exception as e:
            return f"An error occurred: {str(e)}"

This class inherits from BaseTool and implements the _run method, which contains the logic for fetching data and performing calculations.
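If you want to see what the moving-average part of `_run` computes without hitting Yahoo Finance, here is a self-contained illustration using plain pandas on synthetic prices (pandas_ta's `sma` is essentially this rolling mean):

```python
import pandas as pd

# Synthetic closing prices: 60 "trading days" of a steadily rising stock
closes = pd.Series(range(1, 61), dtype=float)

# The simple moving averages the tool reports
sma_20 = closes.rolling(window=20).mean()
sma_50 = closes.rolling(window=50).mean()

print(f"Close: {closes.iloc[-1]:.2f}")  # 60.00
print(f"SMA20: {sma_20.iloc[-1]:.2f}")  # mean of days 41..60 = 50.50
print(f"SMA50: {sma_50.iloc[-1]:.2f}")  # mean of days 11..60 = 35.50
# Price sitting above both averages is the classic "uptrend" read
# a technical analyst would make from these numbers.
```

The RSI and MACD lines in the tool follow the same pattern: rolling computations over the price series, with only the latest row summarised for the agent.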

3. Assembling the crewAI Script

Now, create your main Python file (e.g., main.py) to define and run the crew.

Step 3.1: Imports and Initialization

Load the environment variables and initialize the LLM and tools.

# main.py
import os
from dotenv import load_dotenv
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool
from langchain_groq import ChatGroq

# Import our custom tool
from stock_tools import StockTechnicalAnalysisTool

# Load environment variables from .env file
load_dotenv()

# Initialize the LLM (Groq's Llama3)
# Keep temperature low (0.2) for consistent, fact-based outputs
llm = ChatGroq(
    api_key=os.getenv("GROQ_API_KEY"),
    model="llama3-8b-8192",
    temperature=0.2
)

# Initialize the tools
search_tool = SerperDevTool()
technical_analysis_tool = StockTechnicalAnalysisTool()

Step 3.2: Defining the Agents

Create the two agents, assigning them roles, goals, tools, and the LLM. Setting verbose=True is highly recommended during development to see the agent's reasoning process.

# Agent 1: Fundamental Analyst
fundamental_analyst = Agent(
    role="Fundamental Stock Analyst",
    goal="Gather, analyze, and summarize the latest news and fundamental financial data for a given stock.",
    backstory=(
        "You are an expert in financial markets, skilled at sifting through news articles, "
        "earnings reports, and market announcements to find key information that impacts a stock's value. "
        "Your analysis is purely factual and data-driven."
    ),
    verbose=True,
    allow_delegation=False,
    tools=[search_tool],
    llm=llm
)

# Agent 2: Technical Analyst
technical_analyst = Agent(
    role="Senior Technical Stock Analyst",
    goal="Perform a detailed technical analysis using stock data and indicators, then synthesize all information to provide a clear investment recommendation.",
    backstory=(
        "You are a master of technical analysis, interpreting charts and indicators to predict market movements. "
        "You take fundamental context and combine it with your technical findings to form a holistic view. "
        "Your final output is always a direct and actionable recommendation."
    ),
    verbose=True,
    allow_delegation=False,
    tools=[technical_analysis_tool],
    llm=llm
)

Step 3.3: Defining the Tasks

Create the tasks for each agent. The context in analysis_task is the key to chaining them; it ensures the technical_analyst receives the fundamental_analyst's report.

# Task 1: Fundamental Research

fundamental_research_task = Task(
    description=(
        "For the stock ticker {stock}, conduct a thorough fundamental analysis. "
        "Search for the latest news, recent earnings reports, and any major announcements. "
        "Summarize the key findings in a structured, easy-to-read format."
    ),
    expected_output="A summarized report of the latest news and fundamental data for the stock.",
    agent=fundamental_analyst
)

# Task 2: Technical Analysis and Recommendation

technical_analysis_task = Task(
    description=(
        "Using the provided fundamental analysis report for {stock}, perform a technical analysis. "
        "Use your tool to get the latest technical indicators (RSI, MACD, SMAs). "
        "Synthesize both the fundamental and technical data to provide a final investment recommendation."
    ),
    expected_output=(
        "A one-paragraph summary of the technical analysis, followed by a final, "
        "bolded verdict: **BUY**, **SELL**, or **HOLD**."
    ),
    agent=technical_analyst,
    context=[fundamental_research_task]  # Pass the output of the first task
)

Step 3.4: Creating and Running the Crew

Finally, assemble the Crew and kick off the process. The process is set to sequential to ensure the research happens before the analysis.

# Assemble the crew

stock_analysis_crew = Crew(
    agents=[fundamental_analyst, technical_analyst],
    tasks=[fundamental_research_task, technical_analysis_task],
    process=Process.sequential,
    verbose=True  # detailed logs of the crew's execution (older crewAI versions accepted verbose=2)
)

# Execute the crew for a specific stock
inputs = {'stock': 'RPOWER.NS'}  # Reliance Power on the NSE (yfinance expects the '.NS' suffix)
result = stock_analysis_crew.kickoff(inputs=inputs)

print("\n\n########################")
print("## Final Stock Analysis Report")
print("########################\n")
print(result)


Conclusion

This guide demonstrates how to construct a multi-agent system using crewAI for a practical, technical task. By defining specialized agents, creating custom tools for specific functionalities (StockTechnicalAnalysisTool), and chaining tasks sequentially, you can automate complex workflows that require both data gathering and analytical reasoning. The modularity of this framework allows for easy extension—you could add a portfolio management agent, a risk assessment agent, or even integrate with trading APIs to create a fully autonomous financial analysis and execution system.

⚠️ Disclaimer:

This AI agent is intended for educational and informational purposes only. Do not use this system to make real-world trading decisions or investments. Always consult with a certified financial professional before making any trades. Use at your own risk.

Wednesday, June 18, 2025

Let's Build a Storyteller with Spring AI

Photo Courtesy DevDocsMaster

Remember those childhood summer holidays? After a long day of cricket in the neighbourhood lane, we’d all gather around, and there was always someone—a grandparent, an uncle, or an older cousin—who was the master storyteller.

I still remember my Mom, she could spin up the most fascinating tales out of thin air. Stories of a clever fox who outsmarted a lion, a tiny sparrow on a big adventure, or a king who learned a lesson from a poor farmer. We’d listen, completely captivated, our imaginations painting vivid pictures. Those simple stories were a magical part of growing up.

Now, as developers, what if we could bring a slice of that magic into our digital world? How about we build our very own storyteller? An application where you give it a tiny spark of an idea—say, "a curious robot who discovers desi chai"—and it instantly writes a wonderful short story for you.

Sounds like a fun project, right? But my mind immediately jumps to the challenges. Figuring out complex AI libraries, handling API calls, all that backend hassle… it seems like it would take all the fun out of it. How can we build something so creative using our solid, reliable Java and Spring Boot?

Well, this is where the story gets really interesting for us. It turns out, the brilliant minds at Spring have already thought about this. And their answer is Spring AI.

So, What’s All This Hungama About Spring AI?

Think of Spring AI as a friendly bridge. On one side, you have your solid, dependable Spring Boot application. On the other, you have the incredible power of AI models like OpenAI's GPT, Google's Gemini, and others. Spring AI connects these two worlds so seamlessly that you'll wonder why you ever thought AI was difficult.

In simple terms, it takes away all the boilerplate code and complex configurations. You don't have to manually handle HTTP requests to AI services or parse messy JSON responses. Spring AI gives you a clean, straightforward way to talk to AI, just like you would talk to any other service in your Spring application.

Let's Build Something! Your First AI-Powered Spring Boot App

Enough talk, let’s get our hands dirty. Let's build our little "Story Generator." You give it a simple idea, and it cooks up a short story for you.

We'll be building this faster than it takes to get your food delivery on a Friday night.

Step 1: The Foundation - Setting Up Your Project

First things first, we need a basic Spring Boot project. The easiest way is to use the Spring Initializr. It’s our go-to starting point for any new Spring project.

  1. Head over to start.spring.io.
  2. Choose Maven as the project type and Java as the language.
  3. Select a recent stable version of Spring Boot (3.2.x or higher is good).
  4. Give your project a name, something like ai-story-generator.
  5. Now, for the important part – the dependencies. Add the following:
    • Spring Web: Because we want to create a REST endpoint.
    • Spring Boot Actuator: Good practice to monitor our app.
    • OpenAI: This is the Spring AI magic wand we need. Just type "OpenAI" and add the dependency.

Once you’re done, click "Generate". A zip file will be downloaded. Unzip it and open the project in your favourite IDE (IntelliJ or VS Code, your choice!).

Step 2: The Secret Ingredient - Your API Key

To talk to an AI model like OpenAI's, you need an API key. It's like a secret password.

  1. Go to the OpenAI Platform and create an account.
  2. Navigate to the API Keys section and create a new secret key.
  3. Important: Copy this key immediately and save it somewhere safe. You won’t be able to see it again!

Now, open the src/main/resources/application.properties file in your project and add this line:

spring.ai.openai.api-key=YOUR_OPENAI_API_KEY_HERE
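Hard-coding the key in application.properties works, but committing it to version control is risky. Thanks to Spring Boot's relaxed binding, the same property can instead be supplied as an environment variable (the property name maps to SPRING_AI_OPENAI_API_KEY):

```shell
# Equivalent to setting spring.ai.openai.api-key in application.properties
export SPRING_AI_OPENAI_API_KEY="your-openai-api-key"
```

Spring Boot picks the variable up at startup, so the key never needs to live in your source tree.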

Step 3: Writing the Code - Where the Magic Happens

This is the best part. You'll be surprised at how little code we need to write.

Let's create a simple REST controller. Create a new Java class called StoryController.java.

package com.bhargav.ai.storygenerator;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StoryController {

    private final ChatClient chatClient;

    public StoryController(ChatClient.Builder chatClientBuilder) {
        this.chatClient = chatClientBuilder.build();
    }

    @GetMapping("/story")
    public String generateStory(@RequestParam(value = "topic",
            defaultValue = "a curious robot who discovered desi chai") String topic) {
        return this.chatClient.prompt()
                .user("Tell me a short story about " + topic)
                .call()
                .content();
    }
}

Let's break down this simple code, shall we?

  • @RestController: This tells Spring that this class will handle web requests.
  • private final ChatClient chatClient;: This is the hero of our story! The ChatClient is a part of Spring AI that makes talking to the AI model incredibly easy. We inject it using the constructor. Spring Boot automatically configures it for us because we added the OpenAI dependency and the API key. No manual setup needed. Kitna aasan hai! (How easy is that!)
  • @GetMapping("/story"): This creates a web endpoint. You can access it at http://localhost:8080/story.
  • The generateStory method is where the action is.
  • chatClient.prompt(): We start building our request to the AI.
  • .user("Tell me a short story about " + topic): We are telling the AI what to do. This is our "prompt." We take a topic from the user's request.
  • .call(): This sends our request to the AI model.
  • .content(): This gets the text response back from the AI.

And that’s it! We’re done. Seriously.

Step 4: Run the Application!

Now, just run your Spring Boot application from your IDE. Once it starts up, open your web browser and go to:

http://localhost:8080/story

You should see a short story about a curious robot discovering chai.

Want to try another topic? Just add a topic parameter to the URL (your browser will URL-encode the spaces for you):

http://localhost:8080/story?topic=a cat who wanted to be a software engineer in Bengaluru

And watch as the AI instantly generates a new story for you.

What Did We Just Do?

Think about it. In just a few minutes, with a handful of dependencies and less than 20 lines of Java code, we built an AI-powered application. We didn't have to wrestle with HTTP clients, authentication headers, or complex JSON.

We just told Spring AI what we wanted, and it did the needful.

This is just the tip of the iceberg. Spring AI allows you to get structured output (like JSON objects), connect to your own data, and much more. It makes AI a first-class citizen in the Spring ecosystem.

So, the next time you feel that spark of a creative idea, don't think it's out of reach for a Java developer. With Spring AI in your toolkit, you're more than ready to build your own magic. Happy coding!

