
Thursday, August 28, 2025

Your AI Sidekick: How Claude Took Over Pritee’s Repetitive Tasks

It was a classic Wednesday morning in our Bengaluru office. Pritee, one of our sharpest Project Managers, had just stepped out of a stakeholder call. She was juggling three high-priority projects, and you could see the mental load—keeping track of deadlines, risks, and RAG statuses scattered across different Excel sheets and tools.

During our coffee break, she vented, “Yaar, I spend half my day just gathering information. If I could just ask Claude, ‘What’s the health of the Phoenix project, and are we on track?’ and get a real answer from my data... that would be a dream.”

We've all been there, right? That’s the classic "last mile" problem with even the most brilliant AI models. They know everything about the world, but they know nothing about your world—your project tracker, your team’s progress, your specific ground reality.

What if you could build a bridge for that? A way to give Claude a special key to unlock your custom tools, so it can do the needful for you. This bridge is called an MCP Server, and today I'll show you how to build one for Pritee using Python.

What exactly is an MCP Server?

Imagine Claude is a super-smart new intern. This intern can write, analyse and reason better than anyone. But if you ask them to check the RAG status of your project from an internal dashboard, they'll just stare back blankly. They don’t have access.

An MCP Server is like you giving that intern a set of keys and a clear instruction manual. Each "key" is a tool (an API endpoint), and the "manual" is a specification that tells the intern (Claude) exactly what each key does and how to use it.

So, when Pritee asks, "Hey, what's the status of the phoenix project?", Claude consults its manual, finds the right key (get_project_status), uses it to unlock her server, gets the data, and then gives her a perfectly framed answer. All sorted.
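Under the hood, this handshake is plain JSON-RPC: when Claude picks a key, the desktop app sends a tools/call request to your server. The payload below is illustrative (the id and argument values are made up for this example), but the shape follows the MCP specification:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_project_status",
    "arguments": { "project_name": "ProjectPhoenix" }
  }
}
```

You never have to construct these messages yourself; the framework we use below handles the protocol for you.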

Let's Get Our Hands Dirty: Building a PM's Best Friend in Python

We're going to build a server with two powerful tools for Pritee:

  1. Get Project Status: A tool to fetch a project's health (RAG status) and any overdue tasks.

  2. Log a New Risk: A tool to quickly log a new risk against a project.

We'll use FastMCP, a fast, Pythonic framework for building MCP servers. If you haven't installed the dependencies yet, just run a quick command:

pip install fastmcp pydantic requests


Now, let's create our server file. Call it pm_mcp_server.py

# pm_mcp_server.py

from fastmcp import FastMCP
from pydantic import BaseModel, Field
from datetime import date

mcp = FastMCP("Pritee's PM Tool Server for Claude")

# --- Mock Database for our Projects ---
# In a real-world scenario, this data would come from Jira, a database, or a project management tool.
mock_project_data = {
    "ProjectPhoenix": {
        "status": "Amber",
        "tasks": [
            {"name": "Finalize UI/UX designs", "due_date": "2025-08-20", "owner": "Hari Prasad"},
            {"name": "API integration", "due_date": "2025-08-30", "owner": "Rishabh Kochar"}
        ],
        "risks": []
    },
    "ProjectTitan": {
        "status": "Green",
        "tasks": [
            {"name": "Complete user testing", "due_date": "2025-09-05", "owner": "Evangeline"},
            {"name": "Deploy to staging", "due_date": "2025-09-15", "owner": "Shashank"}
        ],
        "risks": []
    }
}

# --- Tool 1: Get Project Status ---

class ProjectStatusInput(BaseModel):
    project_name: str = Field(..., description="The name of the project, e.g., 'ProjectPhoenix'.")

@mcp.tool()
def get_project_status(payload: ProjectStatusInput):
    """Fetch a project's RAG status and any overdue tasks."""
    project = mock_project_data.get(payload.project_name)
    if not project:
        return {"status": "error", "message": f"Project '{payload.project_name}' not found."}

    today = date.today()
    overdue_tasks = [
        task for task in project["tasks"] 
        if date.fromisoformat(task["due_date"]) < today
    ]    

    return {
        "project_name": payload.project_name,
        "rag_status": project["status"],
        "overdue_tasks": overdue_tasks
    }

# --- Tool 2: Log a New Risk ---

class LogRiskInput(BaseModel):
    project_name: str = Field(..., description="The project to log the risk against.")
    risk_description: str = Field(..., description="A clear description of the new risk.")
    priority: str = Field(..., description="The priority of the risk (e.g., 'High', 'Medium', 'Low').")

@mcp.tool()
def log_new_risk(payload: LogRiskInput):
    """Log a new risk against a project."""
    project = mock_project_data.get(payload.project_name)
    if not project:
        return {"status": "error", "message": f"Project '{payload.project_name}' not found."}

    new_risk = {"description": payload.risk_description, "priority": payload.priority, "logged_on": str(date.today())}
    project["risks"].append(new_risk) # Adding to our mock data        

    return {"status": "success", "message": f"New risk logged for {payload.project_name}."}


if __name__ == "__main__":
    # Start the server; FastMCP defaults to the stdio transport that Claude Desktop expects.
    mcp.run()


This code sets up a small tool server tailored to a PM's workflow. It flags overdue tasks by comparing due dates against today's date, and it allows for logging new risks on the fly.
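The overdue check is the heart of the first tool, and it is just a date comparison. Here is a tiny, self-contained sketch of the same logic (find_overdue is a hypothetical helper written for illustration, not part of the server above):

```python
from datetime import date

def find_overdue(tasks, today=None):
    """Return tasks whose ISO-format due_date falls before today."""
    today = today or date.today()
    return [t for t in tasks if date.fromisoformat(t["due_date"]) < today]

sample_tasks = [
    {"name": "Finalize UI/UX designs", "due_date": "2025-08-20"},
    {"name": "API integration", "due_date": "2025-08-30"},
]

# With a fixed 'today' of 2025-08-26, only the first task is overdue.
overdue = find_overdue(sample_tasks, today=date(2025, 8, 26))
print([t["name"] for t in overdue])  # ['Finalize UI/UX designs']
```

Pinning today in the example keeps the result reproducible; the server itself uses date.today().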

Putting It All Together: A PM's Dream Workflow

Okay, so our FastMCP server is humming along nicely on our local machine and is ready for action. Now for the most important step: the actual introduction. How do we make Claude aware of these fantastic new powers we've built for it?

The Handshake: Introducing your Tools to Claude

Think of this part like giving a briefing to a new, super-intelligent team member. You need to give them their tools, tell them where the office is (your server address), and make sure they understand their tasks.

Claude Desktop looks for a JSON configuration file. On macOS it lives under ~/Library/Application Support/Claude/, and on Windows under %APPDATA%\Claude\ (check the official docs for your OS). If the file doesn’t exist, you can create it manually.

The file must be named exactly claude_desktop_config.json

Inside this JSON, you’ll declare one or more MCP servers under the "mcpServers" key. Each entry includes:

  • A unique server name (e.g., "pm-tool-server")

  • The "type" of connection ("stdio" in most cases)

  • The "command" to launch the server (here, the Python binary).

  • Any "args" needed (the path to your MCP server script).

Here’s an example configuration:

{
  "mcpServers": {
    "pm-tool-server": {
      "type": "stdio",
      "command": "/usr/local/opt/python@3.11/libexec/bin/python",
      "args": ["/Users/nsrikantaiah/Projects/Python/pm-tool-server/pm_mcp_server.py"]
    }
  }
}

This tells Claude Desktop to spin up your pm-tool-server by running the script pm_mcp_server.py using Python. The app will then communicate with it through standard input/output streams.

Once you’ve saved your configuration file, restart Claude Desktop. On startup, it will read the JSON configuration, launch your MCP server and automatically establish the connection.

If everything is set up correctly, Claude will now be able to call your server whenever needed, extending its capabilities seamlessly.

Start Delegating!

Now that Claude is briefed and ready, Pritee can issue commands in plain English in the chat session.

Watch this:

Review the status of Phoenix project. If you find any overdue tasks, create a new medium-priority risk with the description as Potential timeline slippage due to design delays.


Behind the scenes, Claude will execute the command flawlessly:

  • It invokes the get_project_status tool on your server (over the stdio connection) for the Phoenix project.

  • Your server sees that the "Finalize UI/UX designs" task is overdue (its due date, August 20, 2025, has already passed) and returns this information.

  • Finally, it confirms to Pritee that the status was checked and highlights the risks with overdue tasks.

Chat script between Pritee and Claude

From Overwhelmed to Empowered

This isn’t just a flashy tech demo. For Pritee, it’s a shift from being a data collector to becoming a decision-maker. The routine work gets automated, freeing her up to focus on strategy, problem-solving and helping her team without burning out.

When you build a simple MCP server, you’re not merely wiring up another API—you’re creating a personalized extension of your AI. One that adapts to your workflow and truly acts as an assistant.

Now, look at your own daily grind. Which repetitive tasks could you hand over to automation? And what’s the first tool you’d build to make your AI genuinely your own?

Wednesday, July 2, 2025

How to Build an AI Stock Analyst Agent with Python and CrewAI

For Python developers looking to leverage the power of Large Language Models (LLMs) for complex, multi-step tasks, crewAI offers a robust framework for orchestrating autonomous AI agents. This guide provides a strictly technical, step-by-step walkthrough for building a financial analysis "crew" that researches a stock and provides a recommendation.

This system will use two specialized agents:

  1. Fundamental Analyst Agent: Gathers the latest news and essential financial data for a given stock.
  2. Technical Analyst Agent: Consumes the fundamental data, performs a technical analysis, and delivers a final buy, sell, or hold recommendation.

We will use a free, high-speed LLM from Groq and create a custom tool for the technical analysis, providing a practical, real-world example of crewAI's capabilities.

Core CrewAI Concepts

Before writing the code, it's essential to understand the primary components of the framework:

  • Agents: These are the AI workers. Each agent is configured with a role, a goal, and a backstory to define its area of expertise and operational context. They are also equipped with an llm and a set of tools to perform their functions.

  • Tools: These extend an agent's abilities beyond the LLM's inherent knowledge. A tool can be anything from a web search function to a custom Python function that interacts with a database or API.

  • Tasks: A task is a single, well-defined unit of work assigned to an agent. It includes a description of what needs to be done and an expected_output format. Crucially, tasks can be chained together using the context parameter, which passes the output of one or more preceding tasks to the current one.

  • Crew: A crew is the collaborative unit that brings together agents and tasks. It defines the process by which tasks will be executed, such as Process.sequential, where tasks are completed one after another in a defined order.

1. Prerequisites and Environment Setup

First, ensure you have Python installed. Then, install the necessary libraries for this project.

pip install crewai crewai-tools langchain-groq yfinance pandas pandas-ta

  • crewai & crewai-tools: The core framework and its standard tools.
  • langchain-groq: Allows integration with the fast, free-tier LLMs provided by Groq.
  • yfinance: A popular library for fetching historical stock market data from Yahoo Finance.
  • pandas & pandas-ta: For data manipulation and applying technical analysis indicators.

Next, you need to acquire API keys from Groq and Serper for LLM access and web search capabilities, respectively. Create a .env file in your project's root directory to store these keys securely.

GROQ_API_KEY="your-groq-api-key"
SERPER_API_KEY="your-serper-api-key"

2. Defining a Custom Technical Analysis Tool

While crewAI provides built-in tools like web search, its real power is unlocked with custom tools. We will create a tool that fetches historical stock data, calculates key technical indicators, and returns an analysis.

Create a file named stock_tools.py

# stock_tools.py
from crewai_tools import BaseTool
import yfinance as yf
import pandas_ta as ta

class StockTechnicalAnalysisTool(BaseTool):

    name: str = "Stock Technical Analysis Tool"
    description: str = (
        "This tool performs technical analysis on a stock's historical data. "
        "It fetches price data, calculates RSI, MACD, and moving averages, "
        "and provides a summary of these technical indicators."
    )

    def _run(self, ticker: str) -> str:
        try:
            # Fetch historical data for the last 6 months
            stock_data = yf.Ticker(ticker).history(period="6mo")

            if stock_data.empty:
                return f"Error: No data found for ticker {ticker}."

            # Calculate Technical Indicators using pandas_ta
            stock_data.ta.rsi(append=True)
            stock_data.ta.macd(append=True)
            stock_data.ta.sma(length=20, append=True)
            stock_data.ta.sma(length=50, append=True)

            # Get the most recent data
            latest_data = stock_data.iloc[-1]            

            # Create a summary string
            analysis_summary = (
                f"Technical Analysis for {ticker}:\n"
                f"Latest Close Price: {latest_data['Close']:.2f}\n"
                f"RSI (14): {latest_data['RSI_14']:.2f}\n"
                f"SMA (20): {latest_data['SMA_20']:.2f}\n"
                f"SMA (50): {latest_data['SMA_50']:.2f}\n"
                f"MACD: {latest_data['MACD_12_26_9']:.2f} | Signal: {latest_data['MACDs_12_26_9']:.2f}"
            )

            return analysis_summary

        except Exception as e:
            return f"An error occurred: {str(e)}"

This class inherits from BaseTool and implements the _run method, which contains the logic for fetching the data and performing the calculations.
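pandas_ta does the heavy lifting above, but the indicators themselves are simple arithmetic. As a rough illustration of what SMA(20) and SMA(50) compute, here is a pure-Python sketch (the sma helper is ours, written for this example; it is not part of pandas_ta):

```python
def sma(prices, length):
    """Simple moving average: mean of the last `length` closing prices."""
    if len(prices) < length:
        return None  # not enough data points yet
    return sum(prices[-length:]) / length

closes = [100.0, 102.0, 101.0, 105.0, 107.0, 106.0]

print(sma(closes, 3))   # mean of 105, 107, 106 -> 106.0
print(sma(closes, 10))  # None: fewer than 10 data points
```

In the real tool, pandas_ta applies the same idea over a rolling window of the whole price history, alongside the more involved RSI and MACD formulas.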

3. Assembling the crewAI Script

Now, create your main Python file (e.g., main.py) to define and run the crew.

Step 3.1: Imports and Initialization

Load the environment variables and initialize the LLM and tools.

# main.py
import os
from dotenv import load_dotenv
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool
from langchain_groq import ChatGroq

# Import our custom tool
from stock_tools import StockTechnicalAnalysisTool

# Load environment variables from .env file
load_dotenv()

# Initialize the LLM (Groq's Llama3)
# Keep the temperature low for mostly deterministic, fact-based outputs
llm = ChatGroq(
    api_key=os.getenv("GROQ_API_KEY"),
    model="llama3-8b-8192",
    temperature=0.2
)

# Initialize the tools
search_tool = SerperDevTool()
technical_analysis_tool = StockTechnicalAnalysisTool()

Step 3.2: Defining the Agents

Create the two agents, assigning them roles, goals, tools, and the LLM. Setting verbose=True is highly recommended during development to see the agent's reasoning process.

# Agent 1: Fundamental Analyst
fundamental_analyst = Agent(
    role="Fundamental Stock Analyst",
    goal="Gather, analyze, and summarize the latest news and fundamental financial data for a given stock.",
    backstory=(
        "You are an expert in financial markets, skilled at sifting through news articles, "
        "earnings reports, and market announcements to find key information that impacts a stock's value. "
        "Your analysis is purely factual and data-driven."
    ),
    verbose=True,
    allow_delegation=False,
    tools=[search_tool],
    llm=llm
)

# Agent 2: Technical Analyst
technical_analyst = Agent(
    role="Senior Technical Stock Analyst",
    goal="Perform a detailed technical analysis using stock data and indicators, then synthesize all information to provide a clear investment recommendation.",
    backstory=(
        "You are a master of technical analysis, interpreting charts and indicators to predict market movements. "
        "You take fundamental context and combine it with your technical findings to form a holistic view. "
        "Your final output is always a direct and actionable recommendation."
    ),
    verbose=True,
    allow_delegation=False,
    tools=[technical_analysis_tool],
    llm=llm
)

Step 3.3: Defining the Tasks

Create the tasks for each agent. The context in analysis_task is the key to chaining them; it ensures the technical_analyst receives the fundamental_analyst's report.

# Task 1: Fundamental Research

fundamental_research_task = Task(
    description=(
        "For the stock ticker {stock}, conduct a thorough fundamental analysis. "
        "Search for the latest news, recent earnings reports, and any major announcements. "
        "Summarize the key findings in a structured, easy-to-read format."
    ),
    expected_output="A summarized report of the latest news and fundamental data for the stock.",
    agent=fundamental_analyst
)

# Task 2: Technical Analysis and Recommendation

technical_analysis_task = Task(
    description=(
        "Using the provided fundamental analysis report for {stock}, perform a technical analysis. "
        "Use your tool to get the latest technical indicators (RSI, MACD, SMAs). "
        "Synthesize both the fundamental and technical data to provide a final investment recommendation."
    ),
    expected_output=(
        "A one-paragraph summary of the technical analysis, followed by a final, "
        "bolded verdict: **BUY**, **SELL**, or **HOLD**."
    ),
    agent=technical_analyst,
    context=[fundamental_research_task]  # Pass the output of the first task
)

Step 3.4: Creating and Running the Crew

Finally, assemble the Crew and kick off the process. The process is set to sequential to ensure the research happens before the analysis.

# Assemble the crew

stock_analysis_crew = Crew(
    agents=[fundamental_analyst, technical_analyst],
    tasks=[fundamental_research_task, technical_analysis_task],
    process=Process.sequential,
    verbose=2 # 'verbose=2' provides detailed logs of the crew's execution
)

# Execute the crew for a specific stock
inputs = {'stock': 'RPOWER'} # Example: Reliance Power
result = stock_analysis_crew.kickoff(inputs=inputs)

print("\n\n########################")
print("## Final Stock Analysis Report")
print("########################\n")
print(result)


Conclusion

This guide demonstrates how to construct a multi-agent system using crewAI for a practical, technical task. By defining specialized agents, creating custom tools for specific functionalities (StockTechnicalAnalysisTool), and chaining tasks sequentially, you can automate complex workflows that require both data gathering and analytical reasoning. The modularity of this framework allows for easy extension—you could add a portfolio management agent, a risk assessment agent, or even integrate with trading APIs to create a fully autonomous financial analysis and execution system.

⚠️ Disclaimer:

This AI agent is intended for educational and informational purposes only. Do not use this system to make real-world trading decisions or investments. Always consult with a certified financial professional before making any trades. Use at your own risk.

Wednesday, June 18, 2025

Let's Build a Storyteller with Spring AI

Photo Courtesy DevDocsMaster

Remember those childhood summer holidays? After a long day of cricket in the neighbourhood lane, we’d all gather around, and there was always someone—a grandparent, an uncle, or an older cousin—who was the master storyteller.

I still remember my Mom, she could spin up the most fascinating tales out of thin air. Stories of a clever fox who outsmarted a lion, a tiny sparrow on a big adventure, or a king who learned a lesson from a poor farmer. We’d listen, completely captivated, our imaginations painting vivid pictures. Those simple stories were a magical part of growing up.

Now, as developers, what if we could bring a slice of that magic into our digital world? How about we build our very own storyteller? An application where you give it a tiny spark of an idea—say, "a curious robot who discovers desi chai"—and it instantly writes a wonderful short story for you.

Sounds like a fun project, right? But my mind immediately jumps to the challenges. Figuring out complex AI libraries, handling API calls, all that backend hassle… it seems like it would take all the fun out of it. How can we build something so creative using our solid, reliable Java and Spring Boot?

Well, this is where the story gets really interesting for us. It turns out, the brilliant minds at Spring have already thought about this. And their answer is Spring AI.

So, What’s All This Hungama About Spring AI?

Think of Spring AI as a friendly bridge. On one side, you have your solid, dependable Spring Boot application. On the other, you have the incredible power of AI models like OpenAI's GPT, Google's Gemini, and others. Spring AI connects these two worlds so seamlessly that you'll wonder why you ever thought AI was difficult.

In simple terms, it takes away all the boilerplate code and complex configurations. You don't have to manually handle HTTP requests to AI services or parse messy JSON responses. Spring AI gives you a clean, straightforward way to talk to AI, just like you would talk to any other service in your Spring application.

Let's Build Something! Your First AI-Powered Spring Boot App

Enough talk, let’s get our hands dirty. Let's build our little "Story Generator." You give it a simple idea, and it cooks up a short story for you.

We'll be building this faster than it takes to get your food delivery on a Friday night.

Step 1: The Foundation - Setting Up Your Project

First things first, we need a basic Spring Boot project. The easiest way is to use the Spring Initializr. It’s our go-to starting point for any new Spring project.

  1. Head over to start.spring.io.
  2. Choose Maven as the project type and Java as the language.
  3. Select a recent stable version of Spring Boot (3.2.x or higher is good).
  4. Give your project a name, something like ai-story-generator.
  5. Now, for the important part – the dependencies. Add the following:
    • Spring Web: Because we want to create a REST endpoint.
    • Spring Boot Actuator: Good practice to monitor our app.
    • OpenAI: This is the Spring AI magic wand we need. Just type "OpenAI" and add the dependency.

Once you’re done, click "Generate". A zip file will be downloaded. Unzip it and open the project in your favourite IDE (IntelliJ or VS Code, your choice!).

Step 2: The Secret Ingredient - Your API Key

To talk to an AI model like OpenAI's, you need an API key. It's like a secret password.

  1. Go to the OpenAI Platform and create an account.
  2. Navigate to the API Keys section and create a new secret key.
  3. Important: Copy this key immediately and save it somewhere safe. You won’t be able to see it again!

Now, open the src/main/resources/application.properties file in your project and add this line:

spring.ai.openai.api-key=YOUR_OPENAI_API_KEY_HERE
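A word of caution: pasting the raw key into application.properties makes it easy to commit by accident. A common alternative is Spring's standard property placeholder, which reads the key from an environment variable at startup (the variable name OPENAI_API_KEY here is our choice):

```
spring.ai.openai.api-key=${OPENAI_API_KEY}
```

With this in place, the key stays out of your repository and can differ per environment.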

Step 3: Writing the Code - Where the Magic Happens

This is the best part. You'll be surprised at how little code we need to write.

Let's create a simple REST controller. Create a new Java class called StoryController.java.

package com.bhargav.ai.storygenerator;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StoryController {

    private final ChatClient chatClient;

    public StoryController(ChatClient.Builder chatClientBuilder) {
        this.chatClient = chatClientBuilder.build();
    }

    @GetMapping("/story")
    public String generateStory(@RequestParam(value = "topic",
            defaultValue = "a curious robot who discovered desi chai") String topic) {
        return this.chatClient.prompt()
                .user("Tell me a short story about " + topic)
                .call()
                .content();
    }
}

Let's break down this simple code, shall we?

  • @RestController: This tells Spring that this class will handle web requests.
  • private final ChatClient chatClient;: This is the hero of our story! The ChatClient is a part of Spring AI that makes talking to the AI model incredibly easy. We inject it using the constructor. Spring Boot automatically configures it for us because we added the OpenAI dependency and the API key. No manual setup needed. Kitna aasan hai! (How easy is that!)
  • @GetMapping("/story"): This creates a web endpoint. You can access it at http://localhost:8080/story.
  • The generateStory method is where the action is.
  • chatClient.prompt(): We start building our request to the AI.
  • .user("Tell me a short story about " + topic): We are telling the AI what to do. This is our "prompt." We take a topic from the user's request.
  • .call(): This sends our request to the AI model.
  • .content(): This gets the text response back from the AI.

And that’s it! We’re done. Seriously.

Step 4: Run the Application!

Now, just run your Spring Boot application from your IDE. Once it starts up, open your web browser and go to:

http://localhost:8080/story

You should see a short story about a curious robot discovering chai.

Want to try another topic? Just add a topic parameter to the URL:

http://localhost:8080/story?topic=a cat who wanted to be a software engineer in Bengaluru

And watch as the AI instantly generates a new story for you.

What Did We Just Do?

Think about it. In just a few minutes, with a handful of dependencies and less than 20 lines of Java code, we built an AI-powered application. We didn't have to wrestle with HTTP clients, authentication headers, or complex JSON.

We just told Spring AI what we wanted, and it did the needful.

This is just the tip of the iceberg. Spring AI allows you to get structured output (like JSON objects), connect to your own data, and much more. It makes AI a first-class citizen in the Spring ecosystem.

So, the next time you feel that spark of a creative idea, don't think it's out of reach for a Java developer. With Spring AI in your toolkit, you're more than ready to build your own magic. Happy coding!


Wednesday, April 26, 2023

Benefits & Best Practices of Code Review

Photo by Bochelly

Code reviews are methodical assessments of code designed to identify bugs, increase code quality, and help developers learn the source code. Developing a strong code review process on top of version control sets a foundation for continuous improvement and prevents unstable code from shipping to customers.

Software developers should be encouraged to have their code reviewed as soon as they’ve completed coding, to get a second opinion on the solution and implementation. The reviewer also acts as a second line of defence in identifying bugs, logic problems, or uncovered edge cases. Reviewers can be from any team or group as long as they’re a domain expert; if the code touches more than one domain, bring in an expert from each.


Benefits of Code Review

Knowledge Sharing: 

When software developers review code as soon as a team member makes changes, they can learn new techniques and solutions. Code reviews help junior developers learn from more senior team members, similar to how peer programming effectively helps developers share skills and ideas. By spreading knowledge across the organization, code reviews ensure that no person is a single point of failure. Everyone has the ability to review and offer feedback. Shared knowledge also helps team members take vacation, because everyone on the team has background knowledge on a topic.

Discover Bugs: 

Rather than discovering bugs after a feature has shipped and scrambling to release a patch, developers can find and fix problems before customers ever see them. Moving the review process earlier in the software development lifecycle also lets developers work on fixes while the code is still fresh in their minds; when a review waits until the end of the lifecycle, they often struggle to remember the code, the solution, and the reasoning behind it. Combined with unit tests and static analysis, early review is a cheap, efficient way to protect business and customer value.

Maintain Compliance: 

Developers have various backgrounds and training that influence their coding styles. If teams want to have a standard coding style, code reviews help everyone adhere to the same standards. This is especially important for open source projects that have multiple individuals contributing code. Peer reviews bring in maintainers to assess the code before pushing changes.

Enhance Security: 

Application security is an integral part of software development, and code reviews help enforce it. Security team members can review code for vulnerabilities and alert developers to threats, or set up quality gates in static code analysis so issues are identified well ahead of release. If your application deals with sensitive information, the team should also be trained on secure coding practices.

Increase Collaboration: 

When team members work together to create a solution, they feel more ownership of their work and a stronger sense of belonging. Authors and reviewers can work together to find the most effective solutions to meet customer needs. It’s important to strengthen collaboration across the software development lifecycle to prevent information silos and maintain a seamless workflow between teams. To successfully conduct code reviews, it’s important that developers build a code review mindset that has a strong foundation in collaborative development.


Best Practices

What to look for during the code review

It’s important to go into reviews knowing what to look for. Look for key things like code structure, style, logic, performance, test coverage, readability and maintainability.

You can do automated checks (e.g., static analysis of the code) for some of these, like structure, style, standards and certain logic issues. But other areas, such as design and functionality, require a human reviewer to evaluate, since there are no tools for those.

Reviewing code with certain questions in mind can help you focus on the right things. For instance, you might evaluate code to answer:

  • Do I understand what the code does? 
  • Does the code function as per the requirements? 
  • Has this code been written as per the company's standards?

Build and Test — Before Code Review

These days, Continuous Integration is part of most teams' processes, and it’s key to build and test before doing a manual review. Ideally, the code review should happen only after the tests have passed. Running the automated checks first ensures stability, cuts down on errors, and saves time in the review process.

Limit Review Time to 45-60 Minutes

Never review for longer than 45 - 60 minutes at a time. Performance and attention-to-detail tend to drop off after that point. It’s best to conduct code reviews often (and in short sessions). Taking a break will give your brain a chance to reset. So, you can review it again with fresh eyes.

Review 300 Lines at a Time

If you try to review too many lines of code at once, you’re less likely to find defects. Try to keep each code review session to 300 lines or fewer. Setting a line-of-code (LOC) limit is important for the same reason as setting a time limit: it ensures you are at your best when reviewing the code.

Give Feedback that Helps

Try to be constructive in your feedback rather than critical. Be kind, explain your reasoning, and balance giving explicit directions with simply pointing out problems and letting the developer decide. Encourage developers to simplify the code or add comments, rather than just explaining the complexity to you.

For new team members, give feedback in person; it helps you communicate with the right tone while they are still learning the process.

Communicate Goals and Expectations

Be clear about the goals of the review, as well as the expectations of reviewers. Giving your reviewers a checklist ensures that reviews are consistent: engineers will evaluate each other’s code with the same criteria in mind.

By communicating goals and expectations, everyone saves time. Reviewers will know what to look for — and they’ll be able to use their time wisely in the review process.

Include Everyone in the Code Review Process

No matter how senior the engineer is, everyone needs to review and be reviewed. After all, everyone performs better when they know someone else will be looking at their work. When you’re running reviews, it’s best to include both engineers and leads/architects. They’ll spot different issues in the code, in relation to both the broader codebase and the overall design of the product.

Including everyone in the review process improves collaboration and relationships between programmers.

Automate to Save Time

There are some things that reviewers will need to check in manual reviews. But there are some things that can be checked automatically using the right tools. Static code analyzers, for instance, find potential issues in code by checking it against coding rules. Running static analyzers over the code minimizes the number of issues that reach the peer review phase. Using tools for lightweight reviews can help, too.

By using automated tools, you can save time in the peer review process. This frees up reviewers to focus on the issues that tools can’t find, like usability.

Conclusion

Code review is a critical process in software development that helps ensure the quality, reliability, and maintainability of the codebase. By following these best practices, your code review process can be an effective tool for ensuring that your codebase is high-quality and maintainable, while also promoting a positive and productive development culture.

I would like to thank my wonderful team members who bought into the idea of why code review is important and helped me build automation to save time for everyone. We built automation that counts the lines of code updated in a pull request; if the count is above X, it rejects the pull request with an appropriate message for the author.
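The line-count gate described above can be sketched in a few lines. This is a minimal illustration, not our production tooling: the class and method names are hypothetical, and it assumes the pull request diff is available as unified-diff text, where added or updated lines start with "+" (excluding the "+++" file header).

```java
import java.util.List;

public class PrLocGate {
    // Count lines added or updated in a unified diff:
    // lines starting with "+", excluding "+++" file headers.
    static int changedLines(List<String> diffLines) {
        int count = 0;
        for (String line : diffLines) {
            if (line.startsWith("+") && !line.startsWith("+++")) {
                count++;
            }
        }
        return count;
    }

    // Reject the pull request when the change exceeds the limit.
    static String review(List<String> diffLines, int maxLines) {
        int changed = changedLines(diffLines);
        if (changed > maxLines) {
            return "Rejected: " + changed + " changed lines exceed the limit of "
                    + maxLines + ". Please split this pull request.";
        }
        return "Accepted: " + changed + " changed lines.";
    }

    public static void main(String[] args) {
        List<String> diff = List.of(
                "+++ b/Service.java",
                "+public void save() {}",
                "+public void load() {}");
        System.out.println(review(diff, 300));
    }
}
```

In practice this check would run as a CI step or a webhook handler that comments on the pull request before any human review begins.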

Thanks to the latest version of Sonar, which supports pull-request-based analysis that considers only the code that was added or updated. The first round of code review is automated by a bot that pulls data from the static analysis tool and highlights blocker, critical and major technical debt within the newly added code.


Saturday, November 5, 2022

Understanding Spring AOP


What is Spring AOP?

Spring AOP enables Aspect-Oriented Programming in Spring applications. It provides a way to dynamically add cross-cutting concerns (logging, security, transaction management, auditing, i18n, etc.) before, after or around the actual logic using simple, pluggable configurations. This makes the code easier to maintain, now and in the future. You can add or remove concerns without recompiling the source code, simply by changing configuration files (if you apply aspects using XML configuration).


What is advice, joinpoint or pointcut?

  • An important term in AOP is advice. It is the action taken by an aspect at a particular join point. 
  • Joinpoint is a point of execution of the program, such as the execution of a method or the handling of an exception. In Spring AOP, a joinpoint always represents a method execution.
  • Pointcut is a predicate or expression that matches join points.
  • Advice is associated with a pointcut expression and runs at any join point matched by the pointcut.
  • Spring uses the AspectJ pointcut expression language by default.


Pointcut

A pointcut determines the join points of interest, and in code it appears as a pointcut expression. It works similarly to regular expressions: using a special syntax, it matches methods to advices. Please note that Spring AOP supports only classes that are defined as Spring beans/components; other classes won't be advised. Here is the general pointcut expression syntax (the return-type pattern, name pattern and parameter pattern are mandatory), followed by some examples:

execution(modifiers-pattern? ret-type-pattern declaring-type-pattern?name-pattern(param-pattern) throws-pattern?)

The following examples show some common pointcut expressions:

The execution of any public method. The first * matches any return type, and *(..) matches any method name with any number of arguments:
    execution(public * *(..))

The execution of any method with a name that begins with set:
    execution(* set*(..))

The execution of any method defined by the AccountService interface/class:
    execution(* com.bhargav.service.AccountService.*(..))

The execution of any method defined in the service package or one of its sub-packages:
    execution(* com.bhargav.service..*.*(..))

This matches only public methods of DemoClass that take an int as the first parameter and return int:
    execution(public int DemoClass.*(int, ..))

Any join point (method execution only in Spring AOP) within the service package:
    within(com.bhargav.service.*)
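To get an intuition for name-pattern matching such as execution(* set*(..)), here is a toy analogue in plain Java using reflection. This is not Spring's matcher (Spring parses AspectJ expressions internally); the class and method names below are purely illustrative.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class NamePatternDemo {
    // A toy analogue of the pointcut execution(* set*(..)):
    // collect the public methods of a class whose names begin with a prefix.
    static List<String> methodsMatching(Class<?> type, String prefix) {
        List<String> matched = new ArrayList<>();
        for (Method m : type.getMethods()) {
            if (m.getName().startsWith(prefix)) {
                matched.add(m.getName());
            }
        }
        return matched;
    }

    // A hypothetical bean with one setter and one getter.
    static class Account {
        private String owner;
        public void setOwner(String owner) { this.owner = owner; }
        public String getOwner() { return owner; }
    }

    public static void main(String[] args) {
        // Only setOwner matches the "set" prefix; getOwner and the
        // inherited Object methods do not.
        System.out.println(methodsMatching(Account.class, "set"));
    }
}
```

Spring does essentially this kind of matching (with a far richer expression language) when it decides which bean methods an advice applies to.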

Advices

Spring AOP includes the following types of advice:

@Before: 
Advice that runs before a join point but that does not have the ability to prevent execution flow proceeding to the join point (unless it throws an exception).

@After: 
Advice to be run regardless of the means by which a join point exits (normal or exceptional return).

@AfterReturning: 
Advice to be run after a join point completes normally (for example, if a method returns without throwing an exception).

@AfterThrowing: 
Advice to be run if a method exits by throwing an exception.

@Around:
Advice that surrounds a join point such as a method invocation. This is the most powerful kind of advice. Around advice can perform custom behaviour before and after the method invocation. It is also responsible for choosing whether to proceed to the join point or to shortcut the advised method execution by returning its own return value or throwing an exception.
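Under the hood, Spring AOP applies advice by wrapping your bean in a proxy (a JDK dynamic proxy for interfaces, or a CGLIB subclass otherwise). The sketch below hand-rolls an around-style advice with java.lang.reflect.Proxy to show the mechanism; the AccountService names are illustrative, and a real Spring aspect would use @Around with a ProceedingJoinPoint instead.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class AroundAdviceDemo {
    interface AccountService {
        String findAccount(String id);
    }

    static class AccountServiceImpl implements AccountService {
        public String findAccount(String id) { return "account-" + id; }
    }

    // Hand-rolled "around advice": runs before the target method,
    // chooses to proceed, and can inspect or replace the return value.
    static AccountService withLogging(AccountService target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("before " + method.getName()); // before the join point
            Object result = method.invoke(target, args);      // proceed to the target
            System.out.println("after " + method.getName());  // after the join point
            return result;
        };
        return (AccountService) Proxy.newProxyInstance(
                AccountService.class.getClassLoader(),
                new Class<?>[] { AccountService.class },
                handler);
    }

    public static void main(String[] args) {
        AccountService service = withLogging(new AccountServiceImpl());
        System.out.println(service.findAccount("42"));
    }
}
```

Skipping the method.invoke call (or returning a different value) is exactly how around advice can shortcut the advised method, as described above.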


