Posts

Building a Smart Holiday Booking System with Agent-to-Agent Communication

Building a Multi-Agent Holiday Booking System with the A2A Protocol (An MVP Approach)

The world of AI is rapidly moving towards "agentic systems": autonomous AI agents that perform complex, multi-step tasks by collaborating with each other. The challenge, however, has always been standardization: how do you get agents built on different frameworks, by different teams, to communicate effectively? This is the problem the Agent2Agent (A2A) protocol, an open standard, aims to solve. It provides a common language for agents to discover, communicate, and collaborate securely. In this blog post, we'll walk through a Minimum Viable Product (MVP) approach to a real-world scenario: building a holiday booking system using the A2A protocol in Python.

Design Architecture (MVP)

The Problem: A Siloed Booking Experience
Imagine a traditional holiday booking website. It might have separate sections for flights, hotels, and cabs. Each of these services is handled by a different int...
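To make the discovery idea concrete, here is a minimal sketch (not from the original post) of fetching another agent's "Agent Card", the JSON self-description that A2A agents publish at a well-known path; the localhost address and port are hypothetical placeholders for a flight-booking agent running locally:

Bash
# Fetch the Agent Card of a (hypothetical) flight-booking agent.
# A2A agents advertise their identity, endpoint, and skills in this JSON document.
curl -s http://localhost:8000/.well-known/agent.json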

Scaling Up Your Kafka Cluster: A Step-by-Step Guide

Apache Kafka is a powerful distributed streaming platform, but for high availability and increased throughput, running a single Kafka server might not be enough. This blog post will guide you through setting up a multi-node Kafka cluster using the KRaft protocol.

What You'll Need:
Multiple servers with Kafka installed
SSH access to each server

Step 1: Configure Server IDs
1. Navigate to the config/kraft directory within your Kafka installation on each server.
2. Grant write permissions for the current user:
Bash
sudo chmod -R u+w /opt/kafka/config/kraft
3. Copy the existing server.properties file and rename it for each server:
Bash
sudo cp -f server.properties server1.properties
sudo cp -f server.properties server2.properties
sudo cp -f server.properties server3.properties
4. Edit each server's configuration file and update the node.id property with a unique value:
server1.properties: node.id=1
server2.properties: node.id=2
server3.properties: node.id=3

Step 2: D...
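Beyond unique node IDs, every node also needs to agree on the controller quorum. A minimal sketch of that step, assuming three hosts named kafka1, kafka2, and kafka3 (hypothetical hostnames) with controllers on the default port 9093:

Bash
# Point every node at the same controller quorum (one id@host:port entry per node).
for f in server1.properties server2.properties server3.properties; do
  sudo sed -i 's|^controller.quorum.voters=.*|controller.quorum.voters=1@kafka1:9093,2@kafka2:9093,3@kafka3:9093|' "$f"
done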

Apache Kafka using KRaft

Getting Started with Kafka in KRaft Mode: A Step-by-Step Guide

Kafka is a powerful platform for real-time data processing. Traditionally, it relied on ZooKeeper for controller election and state management. However, KRaft mode, introduced in Kafka 2.8 and production-ready since 3.3, offers significant improvements in reliability, performance, and manageability. This blog post provides a step-by-step guide to running Kafka in KRaft mode, helping you unlock its benefits. Let's dive in!

Understanding Kafka Configuration Files:

Navigating the Configuration Directory:
Bash
cd /opt/kafka
ls config/kraft
This command navigates to the Kafka configuration directory and lists files specific to KRaft mode.

Configuration File Breakdown:
broker.properties: This file manages topic partitioning and data storage/retrieval.
controller.properties: Here lies the configuration for KRaft-based leader election.
server.properties: This file combines the settings of both broker.properties and controller.properties for a strea...
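Before a KRaft node can start, its storage directory must be formatted with a cluster ID. A minimal sketch using Kafka's bundled tools, assuming Kafka is installed under /opt/kafka as above:

Bash
cd /opt/kafka
# Generate a cluster ID and format the log directories declared in server.properties.
KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
bin/kafka-storage.sh format -t "$KAFKA_CLUSTER_ID" -c config/kraft/server.properties
# Start the node in KRaft mode (combined broker and controller).
bin/kafka-server-start.sh config/kraft/server.properties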

Apache Kafka Setup in Google Cloud

This blog post guides you through setting up a basic Kafka environment on Google Cloud Platform for learning purposes. Kafka is a distributed event streaming platform that lets you read, write, store, and process events (also called records or messages in the documentation) across many machines in real time. We'll walk through launching a Kafka cluster, creating a topic, and sending and consuming messages.

Prerequisites:
A Google Cloud Platform account

Steps:
Deploying Kafka:
Head over to the Google Cloud Marketplace: https://console.cloud.google.com/marketplace/product/google/kafka
Click on "LAUNCH" and proceed with the deployment configuration.
Important: For the service account, you can choose an existing one or create a new one with appropriate permissions.
Select a deployment region closest to you for optimal performance.
Keep the disk space settings at default for this learning exe...
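Once the deployment is up, the topic and messaging steps look roughly like this. A minimal sketch, assuming you have SSH'd into the Kafka VM, are in the Kafka installation directory, and the broker listens on localhost:9092 (the topic name "test" is illustrative):

Bash
# Create a topic named "test" with a single partition.
bin/kafka-topics.sh --create --topic test --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092
# Produce messages (type lines, Ctrl+C to exit).
bin/kafka-console-producer.sh --topic test --bootstrap-server localhost:9092
# Consume messages from the beginning, in another terminal.
bin/kafka-console-consumer.sh --topic test --from-beginning --bootstrap-server localhost:9092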

Kubernetes and Helm Packaging

SETUP for Kubernetes

Install Docker and check the version (visit the Docker website to install):
docker version

Install Minikube:
wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo cp minikube-linux-amd64 /usr/local/bin/minikube
sudo chmod 755 /usr/local/bin/minikube
minikube version
minikube start
The start command brings up Kubernetes and will take a couple of minutes to complete.

Install kubectl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
Give execute permission to the binary:
chmod u+x kubectl
Move the binary to the user bin:
sudo mv kubectl /usr/local/bin/
Execute kubectl version to check the client and server versions; both should report compatible GitVersions (e.g. v1.25.2).
Check whether the cluster is running using:
minikube status

Helm Installation
Visit https:/...
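To round off the Helm section that is cut short above, here is a minimal sketch of installing Helm 3 via its official installer script and packaging a first chart; the chart name "mychart" and release name "demo" are hypothetical:

Bash
# Install Helm 3 using the official install script.
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version
# Scaffold a chart, package it, and install it into the running cluster.
helm create mychart
helm package mychart
helm install demo ./mychart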

How to Test Application Context in Spring Boot

Usually we don't bother writing JUnit tests for the application context and bean instantiation; we blindly trust the stability of the Spring Boot framework. Spring Boot automatically generates a test class with a contextLoads() method:

@Test
void contextLoads() {
}

To check whether the beans are loading properly, we can frame the Spring Boot application like this:

@SpringBootApplication
public class PracticeApplication {
    public static void main(String[] args) {
        ConfigurableApplicationContext ctx = SpringApplication.run(PracticeApplication.class, args);
        printBeanNames(ctx);
    }

Let's deal with the context object:

    private static void printBeanNames(ApplicationContext ctx) {
        String[] beanDefinitionNames = ctx.getBeanDefinitionNames();
        for (String bean : beanDefinitionNames)
            System.out.println("Bean name is " + bean);
        int beanDefinitionCount = ctx.getBeanDefinitionCount();
        System.out.println("beanDefinitionCount is " + beanDefinitionCount);
    }
}
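To run the generated context test from the command line, a minimal sketch assuming a standard Maven-wrapper project (PracticeApplicationTests is the test class name Spring Boot would generate for this project; adjust if yours differs):

Bash
# Run only the auto-generated application context test.
./mvnw -Dtest=PracticeApplicationTests test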

Design Patterns

GOF (Gang of Four) came up with the concept of design patterns. We can classify design patterns into three categories:
Creational Design Patterns
Structural Design Patterns
Behavioural Design Patterns

SOLID Principle (Quick Read)

The famous design principle comprises five design strategies that a developer should follow for successful application development:
Single Responsibility Principle
Open/Closed Principle
Liskov Substitution Principle
Interface Segregation Principle
Dependency Inversion Principle

Single Responsibility Principle (SRP)
No code unit (function/class/package) should have more than one responsibility.

Bad Ex:

class Bird {
    String type;
    String result;
    public void fly(Bird bird) {
        if (bird.type.equals("pigeon"))
            result = "flies 20 meters high";
        else if (bird.type.equals("hen"))
            result = "flies 5 meters";
        else
            result = "not measured yet";
    }
}

This class should be designed in such a way that it works well for all types of birds. The current design is hectic to maintain for the following reasons:
difficult to test
difficult in parallel programming
understanding the code...