[{"content":"CodeBhejo is an open-source code sharing and collaboration tool for developers. Instantly create and share code snippets with your team through a web editor, or transfer files peer-to-peer directly from the terminal.\nFeatures Monaco Editor — The same editor that powers VS Code, with syntax highlighting across all major languages Passwordless auth — Email-based magic link login via AWS SES, no password required P2P file transfer — A Go CLI tool (codebhejo) that transfers files directly between machines over WebRTC; the backend acts only as a signaling server via Socket.IO, so file data never touches the server S3-backed storage — File content stored on S3 (or any S3-compatible provider like Hetzner), metadata in MySQL Tech Stack Layer Technology Frontend Vue 3, Vite, Pinia, Monaco Editor Backend Node.js, Express, Knex.js Database MySQL Storage AWS S3 / Hetzner Object Storage Email AWS SES (passwordless login) CLI Go (Cobra, WebRTC) P2P Signaling Socket.IO + WebRTC CLI Usage Install the CLI and send files peer-to-peer:\n# Send a file codebhejo send ./myfile.zip # Receive a file codebhejo receive \u0026lt;code\u0026gt; The CLI uses WebRTC for the actual transfer — the backend acts only as a signaling server, so file data never touches the server.\nLinks Live App GitHub Repo ","permalink":"https://9ovind.in/projects/codebhejo/","summary":"\u003cp\u003e\u003ca href=\"https://codebhejo.in\"\u003eCodeBhejo\u003c/a\u003e is an open-source code sharing and collaboration tool for developers. 
Instantly create and share code snippets with your team through a web editor, or transfer files peer-to-peer directly from the terminal.\u003c/p\u003e\n\u003ch2 id=\"features\"\u003eFeatures\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eMonaco Editor\u003c/strong\u003e — The same editor that powers VS Code, with syntax highlighting across all major languages\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003ePasswordless auth\u003c/strong\u003e — Email-based magic link login via AWS SES, no password required\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eP2P file transfer\u003c/strong\u003e — A Go CLI tool (\u003ccode\u003ecodebhejo\u003c/code\u003e) that transfers files directly between machines over WebRTC; the backend acts only as a signaling server via Socket.IO, so file data never touches the server\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eS3-backed storage\u003c/strong\u003e — File content stored on S3 (or any S3-compatible provider like Hetzner), metadata in MySQL\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"tech-stack\"\u003eTech Stack\u003c/h2\u003e\n\u003ctable\u003e\n  \u003cthead\u003e\n      \u003ctr\u003e\n          \u003cth\u003eLayer\u003c/th\u003e\n          \u003cth\u003eTechnology\u003c/th\u003e\n      \u003c/tr\u003e\n  \u003c/thead\u003e\n  \u003ctbody\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eFrontend\u003c/td\u003e\n          \u003ctd\u003eVue 3, Vite, Pinia, Monaco Editor\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eBackend\u003c/td\u003e\n          \u003ctd\u003eNode.js, Express, Knex.js\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eDatabase\u003c/td\u003e\n          \u003ctd\u003eMySQL\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eStorage\u003c/td\u003e\n          \u003ctd\u003eAWS S3 / Hetzner Object Storage\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          
\u003ctd\u003eEmail\u003c/td\u003e\n          \u003ctd\u003eAWS SES (passwordless login)\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eCLI\u003c/td\u003e\n          \u003ctd\u003eGo (Cobra, WebRTC)\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd\u003eP2P Signaling\u003c/td\u003e\n          \u003ctd\u003eSocket.IO + WebRTC\u003c/td\u003e\n      \u003c/tr\u003e\n  \u003c/tbody\u003e\n\u003c/table\u003e\n\u003ch2 id=\"cli-usage\"\u003eCLI Usage\u003c/h2\u003e\n\u003cp\u003eInstall the CLI and send files peer-to-peer:\u003c/p\u003e","title":"CodeBhejo — Code Sharing \u0026 P2P File Transfer"},{"content":"The Domain Name System (DNS) is one of the most critical pieces of internet infrastructure. Every website visit, API call, or email delivery begins with a DNS query.\nWhile most organizations rely on managed DNS providers, self-hosting a DNS nameserver can be a powerful learning experience and, in some cases, a practical solution for greater control, privacy, or experimentation.\nIn this blog, we will:\nUnderstand how DNS works at a global scale Explore real-world DNS architecture and infrastructure Walk through self-hosting an authoritative DNS server using PowerDNS This article is aimed at system administrators, DevOps engineers, and curious learners who want to go deeper than “just use a managed DNS provider.”\nWhat Problem Does DNS Solve? At its core, DNS exists to solve a human problem, not a technical one.\nComputers are extremely good at working with numbers. Humans are not.\nThe internet, however, must work for both.\nWhy Humans Cannot Use IP Addresses 
Every device connected to the internet is identified by an IP address, such as:\nIPv4: 37.27.249.121 IPv6: 2001:db8:85a3::8a2e:370:7334 While these numbers are efficient for machines, they are fundamentally unsuitable for humans:\nThey are hard to remember They have no semantic meaning They change frequently (cloud, load balancers, failovers) IPv6 addresses are even longer and more complex Humans think in names, not numbers.\nWe remember:\ngoogle.com github.com example.com Not:\n142.250.195.78 The core problem DNS solves is decoupling identity from location:\nA name represents a service or organization An IP address represents where that service currently lives This separation allows infrastructure to evolve without breaking human access.\nWhy Centralized Name Mapping Doesn’t Scale In the very early days of the internet, name resolution was centralized.\nThere was a single file: HOSTS.TXT (or /etc/hosts)\nEvery hostname and IP mapping lived in this file, maintained by a central authority.\nEach computer periodically downloaded the latest version.\nThis approach worked — until it didn’t.\nThe Problems with Centralization:\nSingle Point of Failure If the central server went down, name resolution stopped Administrative Bottleneck Every new host required manual approval and distribution Update Latency Changes propagated slowly Stale data caused outages No Delegation One organization controlled all names No ownership model Security Risk Anyone who compromised the central file could hijack the entire internet As the number of connected machines grew from hundreds to millions, and now billions, this model became impossible.\nThe internet needed a distributed naming system.\nWhy DNS Must Be Distributed 
DNS is distributed by design because no single system can know everything.\nInstead:\nResponsibility is split Authority is delegated Queries are answered locally whenever possible Each domain owner controls their own namespace:\nGoogle controls google.com GitHub controls github.com You control your domain There is no “master DNS database” for the internet, and that is intentional.\nWhy DNS Is Hierarchical Distribution alone is not enough. Without structure, discovery would still be inefficient.\nDNS solves this with hierarchy.\n. └── com └── example └── www DNS Hierarchy in Detail DNS is organized as an inverted tree structure, starting from the root and branching down to individual domains. This hierarchy ensures efficient delegation and scalability.\nRoot Servers At the top of the DNS hierarchy are the root servers. There are 13 logical root server clusters (labeled A through M), operated by different organizations worldwide. These servers maintain the master list of all top-level domains (TLDs).\nRoot servers don\u0026rsquo;t contain actual domain records but point resolvers to the authoritative servers for each TLD. They respond with NS records directing queries to the appropriate TLD servers.\nTop-Level Domains (TLDs) TLDs are the highest level of domain names, appearing after the last dot. 
They are divided into categories:\ngTLDs (Generic TLDs): .com, .org, .net, .info, .biz ccTLDs (Country Code TLDs): .us, .uk, .de, .in, .jp New gTLDs: .app, .dev, .tech, .blog Each TLD has its own set of authoritative name servers managed by registries (like Verisign for .com or individual country registries).\nAuthoritative Name Servers For each domain, there are authoritative name servers that hold the actual DNS records. These are typically provided by:\nDomain registrars (when you register a domain) DNS hosting providers (like Cloudflare, Route 53, DigitalOcean) Self-hosted servers (for organizations with their own infrastructure) Authoritative servers are the source of truth for a domain\u0026rsquo;s records and are responsible for responding to queries about that domain.\nDNS Resolution Process: Step-by-Step When you type \u0026ldquo;example.com\u0026rdquo; into your browser, a complex resolution process occurs behind the scenes. Let\u0026rsquo;s trace through a typical DNS lookup:\nStep 1: Local Cache Check Your operating system and browser maintain a DNS cache. The resolver first checks these local caches for the requested domain. If found and not expired, it returns the cached IP address immediately.\nStep 2: Recursive Resolver Query If not cached locally, your device queries a recursive DNS resolver (usually provided by your ISP, or public ones like 8.8.8.8 or 1.1.1.1). This resolver acts on your behalf to find the answer.\nStep 3: Root Server Query The recursive resolver starts at the root. 
It queries one of the 13 root servers asking \u0026ldquo;Who handles .com domains?\u0026rdquo;\nThe root server responds with NS records pointing to the TLD servers for .com.\nStep 4: TLD Server Query The resolver queries the .com TLD servers: \u0026ldquo;Who handles example.com?\u0026rdquo;\nThe TLD server responds with NS records pointing to example.com\u0026rsquo;s authoritative servers.\nStep 5: Authoritative Server Query Finally, the resolver queries example.com\u0026rsquo;s authoritative servers: \u0026ldquo;What is the IP address for example.com?\u0026rdquo;\nThe authoritative server returns the A or AAAA record with the IP address.\nStep 6: Response and Caching The recursive resolver returns the IP to your device, which caches it. Future requests for the same domain will be served from cache until expiration.\nThis process typically takes 20-100ms and involves multiple network round trips across the globe.\nDNS Record Types DNS stores different types of records, each serving a specific purpose. Here are the most common ones:\nA Record (Address Record) Maps a domain name to an IPv4 address.\nexample.com. IN A 93.184.216.34 AAAA Record (IPv6 Address Record) Maps a domain name to an IPv6 address.\nexample.com. IN AAAA 2606:2800:220:1:248:1893:25c8:1946 CNAME Record (Canonical Name) Creates an alias from one domain to another.\nwww.example.com. IN CNAME example.com. MX Record (Mail Exchange) Specifies mail servers for the domain.\nexample.com. IN MX 10 mail.example.com. The number (10) indicates priority; lower numbers have higher priority.\nTXT Record (Text Record) Stores arbitrary text data, commonly used for SPF, DKIM, and verification.\nexample.com. IN TXT \u0026#34;v=spf1 include:_spf.google.com ~all\u0026#34; NS Record (Name Server) Delegates a subdomain to specific name servers.\nexample.com. IN NS ns1.example.com. SOA Record (Start of Authority) Contains administrative information about the zone.\nexample.com. IN SOA ns1.example.com. admin.example.com. 
( 2023010101 ; Serial 3600 ; Refresh 1800 ; Retry 604800 ; Expire 86400 ; Minimum TTL ) PTR Record (Pointer Record) Maps an IP address back to a domain name (reverse DNS).\n34.216.184.93.in-addr.arpa. IN PTR example.com. These records work together to provide complete domain information and enable various internet services.\nDNS Infrastructure: The Global Network DNS operates through a distributed network of servers working in harmony:\nRecursive Resolvers These are the \u0026ldquo;middlemen\u0026rdquo; of DNS. They accept queries from clients and perform the full resolution process on their behalf. Popular public recursive resolvers include:\nGoogle Public DNS (8.8.8.8, 8.8.4.4) Cloudflare (1.1.1.1, 1.0.0.1) Quad9 (9.9.9.9) Recursive resolvers implement aggressive caching to improve performance and reduce load on authoritative servers.\nAuthoritative Servers These hold the actual zone files and are the definitive source for domain information. They come in two types:\nPrimary (Master): Contains the original zone file Secondary (Slave): Replicates data from the primary for redundancy Authoritative servers are organized hierarchically and delegate subdomains to other servers.\nCaching: The Performance Booster DNS heavily relies on caching at multiple levels:\nBrowser Cache: Short-term storage in the browser OS Cache: System-level DNS cache Resolver Cache: Recursive resolvers cache responses Authoritative Cache: Some authoritative servers implement caching TTL (Time To Live) values control how long records are cached. Shorter TTLs provide more current data but increase query volume.\nAnycast: Global Distribution Many DNS servers use anycast routing, where the same IP address is announced from multiple physical locations worldwide. 
This ensures queries are routed to the nearest server, reducing latency.\nFor example, the root servers are anycasted across hundreds of locations globally, ensuring fast responses from anywhere on earth.\nDNS Security: Protecting the Foundation DNS was designed without security in mind, making it vulnerable to various attacks. Modern solutions address these issues:\nDNSSEC (DNS Security Extensions) DNSSEC adds cryptographic signatures to DNS records, ensuring:\nData Integrity: Records haven\u0026rsquo;t been tampered with Authentication: Responses come from legitimate servers Non-existence Proofs: Proves when domains don\u0026rsquo;t exist DNSSEC uses a chain of trust from the root down to individual domains. Keys are managed hierarchically, with the root KSK (Key Signing Key) being the ultimate trust anchor.\nEncrypted DNS Protocols Traditional DNS sends queries in plaintext, allowing interception and manipulation.\nDoT (DNS over TLS): Encrypts DNS queries over port 853 DoH (DNS over HTTPS): Sends DNS queries over HTTPS (port 443), blending with normal web traffic These prevent eavesdropping and make blocking DNS traffic more difficult.\nCommon DNS Attacks DNS Spoofing/Cache Poisoning Attackers send fake responses to recursive resolvers, polluting their cache with malicious IP addresses.\nDDoS Amplification DNS servers can be used to amplify DDoS attacks since UDP responses can be much larger than queries.\nDNS Tunneling Malware uses DNS queries to exfiltrate data by encoding information in domain names.\nNXDOMAIN Attacks Overloading resolvers with queries for non-existent domains to exhaust resources.\nMitigation Strategies Implement DNSSEC for your domains Use encrypted DNS (DoT/DoH) clients Deploy DNS firewalls (like Response Policy Zones) Monitor for anomalous query patterns Use rate limiting on authoritative servers Self-Hosting an Authoritative DNS Server with PowerDNS While managed DNS providers are convenient, self-hosting gives you complete control. 
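Before setting one up, it helps to see how simple the protocol\u0026rsquo;s wire format is. The short Python sketch below (standard library only, illustrative rather than production-grade) builds the same kind of query packet that a resolver sends when asking for an A record:

```python
import struct

def build_query(name: str, qtype: int = 1, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet (RFC 1035 wire format)."""
    # 12-byte header: ID, flags (RD=1 requests recursion), QDCOUNT=1, rest 0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; a zero byte terminates the name
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    # QTYPE (1 = A record) and QCLASS (1 = IN) close the question section
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

pkt = build_query("example.com")
print(len(pkt))  # 29 bytes: 12 header + 13 encoded name + 4 type/class
```

Sending these 29 bytes over UDP to port 53 of any resolver yields an answer in the same format; an authoritative server like the one we set up next spends its life parsing and generating exactly such packets.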
PowerDNS is an excellent open-source authoritative DNS server. Let\u0026rsquo;s set it up:\nInstallation with Docker Use Docker Compose for easy setup with MySQL backend. Create a docker-compose.yml file:\nversion: \u0026#39;3.8\u0026#39; services: mysql: image: mysql:8.0 environment: MYSQL_ROOT_PASSWORD: your_mysql_root_password MYSQL_DATABASE: pdns MYSQL_USER: pdns MYSQL_PASSWORD: your_pdns_password volumes: - mysql_data:/var/lib/mysql networks: - pdns powerdns: image: pschiffe/pdns-mysql:latest environment: MYSQL_HOST: mysql MYSQL_PORT: 3306 MYSQL_USER: pdns MYSQL_PASS: your_pdns_password MYSQL_DB: pdns ports: - \u0026#34;53:53/tcp\u0026#34; - \u0026#34;53:53/udp\u0026#34; - \u0026#34;8081:8081\u0026#34; depends_on: - mysql networks: - pdns volumes: mysql_data: networks: pdns: Run the services:\ndocker-compose up -d Configuration PowerDNS auto-configures with the MySQL backend. Access the web interface at http://localhost:8081.\nDatabase Setup The MySQL database initializes automatically. 
Create zones via the API or command line:\ndocker-compose exec powerdns pdnsutil create-zone example.com Add some records (running pdnsutil inside the container, since that is where PowerDNS lives):\ndocker-compose exec powerdns pdnsutil add-record example.com @ A 192.168.1.10 docker-compose exec powerdns pdnsutil add-record example.com www CNAME example.com docker-compose exec powerdns pdnsutil add-record example.com @ MX \u0026#34;10 mail.example.com\u0026#34; DNSSEC Setup Enable DNSSEC for the zone:\ndocker-compose exec powerdns pdnsutil secure-zone example.com docker-compose exec powerdns pdnsutil set-nsec3 example.com This generates keys and signs the zone automatically.\nTesting Test your setup:\ndig @localhost example.com A nslookup example.com localhost Production Considerations Use MySQL clustering or replication for high availability Implement monitoring and alerting for both PowerDNS and MySQL Set up secondary PowerDNS servers for redundancy Configure proper firewall rules (UDP/TCP port 53) Implement rate limiting and DDoS protection Regularly update Docker images and patch PowerDNS Self-hosting DNS requires careful maintenance but provides maximum control and privacy.\nDNS Troubleshooting and Best Practices DNS issues can be frustrating but are usually solvable with systematic debugging.\nCommon Issues and Solutions Slow Resolution Check resolver performance Clear the local DNS cache: sudo systemd-resolve --flush-caches Switch to faster public resolvers Check for network congestion NXDOMAIN Errors Verify domain registration status Check for typos in domain names Confirm authoritative servers are responding Test with different resolvers Propagation Delays DNS changes can take 24-48 hours to propagate globally due to caching. Use tools like:\ndig @8.8.8.8 example.com A dig @1.1.1.1 example.com A DNS Leaks When using VPNs, DNS queries might bypass the tunnel. 
Test with:\nnslookup example.com Should show VPN resolver IPs, not local ISP.\nBest Practices For Domain Owners Use multiple authoritative servers for redundancy Implement DNSSEC Set appropriate TTL values (lower for dynamic content) Monitor DNS performance and availability Use CDN-integrated DNS for global performance For Developers Understand DNS in your application architecture Implement proper error handling for DNS failures Use connection pooling to reduce DNS lookups Consider DNS prefetching in web applications For System Administrators Monitor DNS server logs for anomalies Implement rate limiting to prevent abuse Keep DNS software updated Have backup DNS providers ready Document your DNS configuration thoroughly DNS reliability is crucial for internet availability. Regular monitoring and proactive maintenance prevent most issues.\nLearning Resources RFC 1035 - Domain Names - Implementation and Specification DNSSEC Practice Statement PowerDNS Documentation Cloudflare Learning Center - DNS DNS Made Easy - DNS Glossary Practical DNSSEC DNS over HTTPS (DoH) RFC 8484 Conclusion DNS is the unsung hero of the internet, silently translating human-readable names into machine-routable addresses billions of times per day. Its distributed, hierarchical design has proven remarkably resilient, scaling from a handful of hosts to billions of connected devices.\nUnderstanding DNS goes beyond technical curiosity—it\u0026rsquo;s essential for anyone working with internet technologies. Whether you\u0026rsquo;re a developer troubleshooting connectivity issues, a system administrator designing resilient infrastructure, or simply a user wanting to understand how the web works, DNS knowledge is fundamental.\nAs the internet evolves with new protocols like DNS over HTTPS and DNSSEC adoption grows, DNS continues to adapt while maintaining its core principles of decentralization and reliability. 
The next time you type a URL into your browser, remember the complex, global infrastructure that makes it all possible.\nDNS truly is the backbone of the World Wide Web.\n","permalink":"https://9ovind.in/blogs/dns_explained_the_backbone_of_the_world_wide_web/","summary":"\u003cp\u003eDomain Name System ( DNS ) is one of the most critical infrastructure pieces of the internet. Every website visit, API call, or email delivery begins with a DNS query.\u003c/p\u003e\n\u003cp\u003eWhile most organizations rely on managed DNS providers, self-hosting a DNS nameserver can be a powerful learning experience and, in some cases, a practical solution for greater control, privacy, or experimentation.\u003c/p\u003e\n\u003cp\u003eIn this blog, we will:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eUnderstand how DNS works at a global scale\u003c/li\u003e\n\u003cli\u003eExplore real-world DNS architecture and infrastructure\u003c/li\u003e\n\u003cli\u003eWalk through self-hosting an authoritative DNS server using PowerDNS\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eThis article is aimed at system administrators, DevOps engineers, and curious learners who want to go deeper than “just use a managed DNS provider.”\u003c/p\u003e","title":"DNS Explained: The Backbone of the World Wide Web"},{"content":"Introduction The World Wide Web (WWW) was invented in 1989 by Tim Berners-Lee, but it wasn’t until 1991 that it became publicly available. Since then, the internet has evolved dramatically, with billions of users accessing websites daily. At the core of this communication lies the web server a foundational component that delivers content from a server to a user\u0026rsquo;s browser.\nAmong the most popular web servers today is Nginx (pronounced engine-x), known for its high performance, scalability, and efficiency.\nThe problem it solves Imagine writing a custom web server from scratch every time you build an application. 
You’d need to handle:\nNetworking (managing TCP connections, sockets, etc.) HTTP protocol compliance (handling GET, POST, headers, caching) Concurrency (serving thousands of clients at once) Security (TLS/SSL, request limits, filtering) That’s a lot of work for something every developer needs.\nNginx solves this by providing a reusable web server that works across programming languages and frameworks. Instead of reinventing the wheel, developers can let Nginx handle the web traffic while their application focuses on business logic.\nArchitecture At its core, Nginx follows a client-server model:\nA client (browser, API consumer, mobile app) sends an HTTP request. The server (Nginx) processes that request and sends a response.\nDefault Ports By convention, web servers listen on:\nPort 80 → HTTP Port 443 → HTTPS (with SSL/TLS) NGINX acts as an intermediary between clients and web services. It handles client requests and routes them to the backend web services. It is also known as a reverse proxy and can load balance requests among multiple backend servers.\nWhile NGINX solves most of the problems of traditional web service architecture, it still needs to handle:\nConcurrent connections - A large number of concurrent connections from clients. Performance - No performance degradation as users grow. Efficient resource utilization - Low memory usage and optimal CPU utilization. Before diving into the solution, let’s revisit connection management basics and understand the scalability bottlenecks.\nHow are connections handled? When a web server starts, it asks the operating system to listen on a port. For example, a web server would pass port 80 (HTTP) or 443 (HTTPS).\nWhen the client connects, the OS’s kernel stack performs a TCP handshake and establishes a connection. 
The OS assigns a file descriptor or a socket for each connection.\nThe below diagram shows the connection establishment between the client and the server:\nNote: NIC stands for Network Interface Card\nBy default, sending and receiving data over a network (network I/O) is blocking. A thread or a process goes into a waiting state while writing or reading data to/from the network.\nAlso, network I/O depends on the client’s bandwidth. Data transfer may take a long time for slow clients.\nThe following diagram shows how a process waits until the data transfer completes:\nAs a result, the server can’t accept new connections if it’s already processing a request from a client. This hinders both the system’s scalability and its performance.\nThere are several ways to tackle this problem and handle more connections. Let’s understand the different approaches and their limitations.\nProcess-Per-Request Approach To overcome the network I/O bottleneck, the process can fork a new child process. The child process would then handle a new client connection.\nEvery connection would correspond to a new child process. Once the request/response cycle is completed, the child process would be killed.\nThe below diagram illustrates this process: Do you think this approach would scale to millions of users/connections? Take a moment to think, then continue reading.\nLet’s assume the server RAM size is 32 GB and each process takes 100 MB. It can then handle only 320 (32 GB / 100 MB) connections in the best case.\nHere are some downsides of this approach:\nScalability Issues - The number of connections depends on the hardware (RAM size). More connections would lead to out-of-memory issues. Performance Issues - Forking a child process is slow and would impact performance. Can we do better? What if instead of forking a process, we launch a thread? 
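For concreteness, the process-per-request model above can be sketched in a few lines of Python (an illustrative toy echo server, not how production servers are written; the port number is arbitrary):

```python
import os
import socket

def serve_forever(port: int = 8080) -> None:
    """Process-per-request: fork one child per accepted connection."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(128)
    while True:
        conn, _ = srv.accept()      # parent blocks until a client connects
        if os.fork() == 0:          # child: handles exactly one connection
            srv.close()
            data = conn.recv(4096)  # blocking network I/O only stalls this child
            conn.sendall(data)      # echo the data back, then exit
            conn.close()
            os._exit(0)
        conn.close()                # parent: drop its copy, accept the next client
```

Every accepted connection costs a full fork here, which is exactly why this model tops out at a few hundred concurrent clients in the 32 GB example above.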
Let’s explore this approach in the next section.\nThread-Per-Request Approach In this approach, a thread is launched every time a client connection is established. Each request is handled independently by a different thread.\nThe below diagram shows how this model works:\nThreads are lightweight, roughly 1/10th the size of a process. As a result, this is a significant improvement over the Process-Per-Request approach.\nWhile this approach can handle more connections, it would still run into the issues highlighted in the previous section.\nA process can’t launch an infinite number of threads. The benefits of multi-threading diminish with a large number of threads due to frequent CPU context switching.\nWe can still improve by using a thread pool and launching a fixed number of threads, for example, 500 threads in the process.\nThis improvement would result in efficient memory usage. However, if all the threads are busy, new connections would wait in the request queue, resulting in slowness.\nHence, this approach also doesn’t solve scalability and performance. We can’t scale since the primary bottleneck is the time-consuming network I/O.\nIs there a way to unblock the process or thread during network I/O? Yes, and NGINX employs an intelligent tactic using its event-driven non-blocking I/O.\nLet’s understand NGINX’s architecture in detail in the next section.\nNGINX Architecture NGINX uses a modular architecture and consists of several components such as:\nMaster process - Acts as the central controller and is responsible for starting, stopping, and launching the worker processes. Worker processes - Run the core NGINX logic and are responsible for connection handling, request forwarding, load balancing, etc. Let’s now dive into the details of how NGINX can scale to a million concurrent connections.\nEvent-driven Non-blocking I/O With non-blocking I/O, the web server or application doesn’t wait for the client’s data. 
Instead, the OS informs the application once the data is available.\nThis makes the process event-driven. Whenever the client’s data is available, the application is interrupted and processes the data. Otherwise, it continues to do something else.\nFurther, the application doesn’t go into a waiting state. It can execute other tasks and efficiently utilize the CPU.\nInternally, the application uses a system call such as epoll (Linux) or kqueue (BSD/macOS) to register its sockets. The operating system uses a kernel data structure (an epoll instance) to keep track of the sockets that an application is interested in.\nOnce data is available on a subset of sockets, those sockets are moved into a ready list. The OS then informs the application about those sockets, and the application processes the data.\nThe below diagram illustrates this flow:\nAs seen in the above diagram, once data becomes available on fd3 and fd4, the process is notified by the OS.\nLet’s now understand this in the context of an NGINX worker.\nNginx worker Each NGINX worker is single-threaded and runs an event loop. The event loop works like a while loop, checking for activity on existing sockets and for new connections.\nWith non-blocking sockets, the worker doesn’t need to wait until the data is completely sent to the client. It can quickly move on to the next connection and process that request.\nSince network I/O is non-blocking, the process doesn’t wait for the data transfer. The worker uses the CPU only for request parsing, filtering, and other compute operations.\nCompute operations are fast (on the order of microseconds). As a result, a single worker can process 100K requests every second concurrently.\nAssuming that a single worker can handle 100K connections, on a 10-core CPU the server can handle 1 million concurrent connections. 
(This example is for illustration only; real-world numbers will differ.)\nNote: A server must have sufficient memory to serve 1 million connections, since each connection needs 100 KB-1 MB of memory. The OS kernel can be tuned to reduce per-connection memory, though there are trade-offs to this approach.\nEvent-driven non-blocking I/O efficiently utilizes the CPU and doesn’t consume memory the way the Process-Per-Request or Thread-Per-Request approaches do.\nInstallation Prerequisite: Docker\nOne of the simplest ways to install and run Nginx today is via Docker:\ndocker run --rm --name web_server -p 80:80 nginx This pulls the latest Nginx image and starts a container listening on port 80.\nVisit localhost in your browser and you will see the Nginx welcome page.\nConfiguration Nginx has one master process and several worker processes. The main purpose of the master process is to read and evaluate configuration and maintain worker processes. Worker processes do the actual processing of requests.\nThe way nginx and its modules work is determined by the configuration file. By default, the configuration file is named nginx.conf and placed in the directory /etc/nginx.\nTo view the default configuration file nginx.conf, you first need to exec into the web_server container:\ndocker exec -it web_server bash cat /etc/nginx/nginx.conf Changes made in the configuration file will not be applied until the command to reload configuration is sent to nginx or it is restarted. To reload configuration, execute:\nnginx -s reload Once the master process receives the signal to reload configuration, it checks the syntax validity of the new configuration file and tries to apply it. If this succeeds, the master process starts new worker processes and sends messages to the old worker processes, requesting them to shut down. Otherwise, the master process rolls back the changes and continues to work with the old configuration. 
Old worker processes, upon receiving the command to shut down, stop accepting new connections and continue to service current requests until all such requests are serviced. After that, the old worker processes exit.\nConfiguration file structure nginx consists of modules which are controlled by directives specified in the configuration file. Directives are divided into:\nSimple directives consist of a name and parameters separated by spaces and end with a semicolon (;) Block directives have the same structure as a simple directive, but instead of the semicolon they end with a set of additional instructions surrounded by braces ({ and }). If a block directive can have other directives inside its braces, it is called a context, e.g. events, http, server and location.\nDirectives placed in the configuration file outside of any context are considered to be in the main context.\n# nginx.conf is the main context # simple directives user nginx; worker_processes auto; # block directives events { } http { server { location { } } } The rest of a line after the # sign is considered a comment.\nServing static content An important web server task is serving out files (such as images or static HTML pages).\nWe will implement an example where files are served from the local directory /var/www (which may contain HTML files and images). 
This will require editing the configuration file nginx.conf.\nCreate this file structure\nnginx_example/ ├── nginx.conf └── web_pages └── index.html └── image.png Put the below content in index.html\n\u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html lang=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;UTF-8\u0026#34;\u0026gt; \u0026lt;meta name=\u0026#34;viewport\u0026#34; content=\u0026#34;width=device-width, initial-scale=1.0\u0026#34;\u0026gt; \u0026lt;title\u0026gt;Document\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;h1\u0026gt;Hello world from Nginx container\u0026lt;/h1\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; Put the below content in nginx.conf\nuser nginx; worker_processes auto; events { } http { server { listen 80; server_name _; root /var/www; index index.html index.htm; location / { try_files $uri $uri/ =404; } } } Now let\u0026rsquo;s stop our previous nginx container and run a new one with the new configuration.\nBut before running the below command please make sure you are in the nginx_example folder.\ncd nginx_example docker run --rm --name web_server -p 80:80 -v ./web_pages:/var/www -v ./nginx.conf:/etc/nginx/nginx.conf nginx Visit http://localhost in your browser and you will see this page You can also view the image file you put in the web_pages folder by visiting http://localhost/image.png\nReverse proxy One of the frequent uses of nginx is setting it up as a proxy server, which means a server that receives requests, passes them to the proxied servers, retrieves responses from them, and sends them to the clients.\nWe will configure a basic proxy server, which serves pages of other websites under our custom URLs:\nhttp://localhost/example will serve the https://example.com page http://localhost/wiki will serve the https://www.wikipedia.org page Put the below content in the nginx.conf file\nuser nginx; worker_processes auto; events { } http { server { listen 80; server_name _; location 
/example/ { proxy_pass https://example.com/; } location /wiki/ { proxy_pass https://www.wikipedia.org/; } } } Now let\u0026rsquo;s stop our previous nginx container and run a new one with the new configuration.\nBut before running the below command please make sure you are in the nginx_example folder.\ncd nginx_example docker run --rm --name web_server -p 80:80 -v ./nginx.conf:/etc/nginx/nginx.conf nginx In your browser visit:\nhttp://localhost/example http://localhost/wiki Conclusion For me, learning about Nginx was more than just understanding another tool; it gave me clarity on how the internet really works behind the scenes.\nAt first, I thought a web server was just something that shows HTML files, but now I realize it’s the backbone that keeps websites fast, reliable, and secure.\nWhile experimenting, I personally liked:\nHow easy it was to run Nginx inside Docker with just one command. The simplicity of serving my own Hello World page in a container. Seeing the reverse proxy in action, which felt powerful because it showed how requests can be routed seamlessly. The event handling architecture of nginx is just pure engineering. Overall, I find Nginx not only useful for production systems but also a great learning tool to understand networking, load balancing, and scalability.\nWriting this blog was part of my journey to simplify these concepts, and I hope it helps others get started with Nginx the way I did.\nResources Nginx Documentation NGINX Explained - What is Nginx NGINX Internal Architecture - Workers ","permalink":"https://9ovind.in/blogs/what_is_web_server_and_nginx_role_in_it/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eThe World Wide Web (WWW) was invented in 1989 by Tim Berners-Lee, but it wasn’t until 1991 that it became publicly available. Since then, the internet has evolved dramatically, with billions of users accessing websites daily. 
At the core of this communication lies the web server, a foundational component that delivers content from a server to a user\u0026rsquo;s browser.\u003c/p\u003e\n\u003cp\u003eAmong the most popular web servers today is Nginx (pronounced engine-x), known for its high performance, scalability, and efficiency.\u003c/p\u003e","title":"What is a web server and the role of nginx in it?"},{"content":"Why is CI/CD essential? CI/CD (Continuous Integration and Continuous Deployment/Delivery) is a cornerstone of modern software development, enabling teams to deliver high-quality applications faster and more reliably.\nBy automating the build, test and deployment processes, CI/CD pipelines eliminate manual errors, provide rapid feedback on code changes, and ensure that every new feature or bug fix is thoroughly vetted before reaching users.\nThis not only accelerates the development lifecycle but also fosters a culture of collaboration and continuous improvement, allowing developers to focus on writing code while the pipeline handles the rest.\nFor example, think about mobile apps like WhatsApp or Instagram. Developers update them regularly with new features or bug fixes. CI/CD ensures these updates are built, tested, and safely deployed automatically, so you get smooth updates without the app breaking.\nTech Stack Overview Laravel Docker Gitlab (For Repository) Gitlab CI/CD Gitlab Container registry Kubernetes (K3S distribution) ArgoCD Hetzner Server (4GB RAM / 2 CPU cores) (Price $5) Debian OS Cloudflare and 1 domain Portainer (For container management) Our Goal Containerize the Laravel application for consistent environments across development and production. Deploy and manage the application on a lightweight Kubernetes cluster (K3s) running on a low-cost Hetzner VPS (Virtual Private Server). Automate the entire container image build and test process on every code push using Gitlab CI. Automate deployment via ArgoCD. 
Custom domain with HTTPS using Cloudflare DNS and SSL. Infrastructure visibility using Portainer and the ArgoCD dashboard. Repository git clone https://gitlab.com/9ovindyadav/laravel-cicd.git Project structure The laravel-app folder contains the application code which we are going to deploy.\nlaravel-app/ ├── app ├── artisan ├── bootstrap ├── components.json ├── composer.json ├── composer.lock ├── config ├── database ├── eslint.config.js ├── node_modules ├── package.json ├── package-lock.json ├── phpunit.xml ├── public ├── resources ├── routes ├── storage ├── tests ├── tsconfig.json ├── vendor └── vite.config.ts The docker folder contains all the configs for building the container image.\ndocker/ ├── compose.yaml # docker compose file ├── Dockerfile # file to build laravel app container images ├── entrypoint.sh # bash script which runs during container boot ├── initdb # sql script which runs during mysql boot │ └── default.sql ├── mysql.cnf # mysql config ├── nginx.conf # nginx config └── services ├── mysql.yaml ├── networks.yaml ├── phpmyadmin.yaml ├── production-app.yaml # laravel in production environment ├── local-app.yaml # laravel local development ├── redis.yaml ├── selenium.yaml └── volumes.yaml The kubernetes folder contains all the kubernetes configuration needed for deployment.\nkubernetes/ ├── apps # all running apps in k8s ├── argocd # argocd configs ├── proxy # gateway api config └── README.md The .gitlab-ci.yml file contains our entire CI pipeline code for build, test and deploy.\nAbout our application This web application is built using Laravel 12 It has Login, Register and Dashboard pages It needs the following dependencies to run PHP and Composer Nginx Node and npm Mysql Redis Prerequisite for running the application locally Docker (Installation script for Linux) To run the application cd into laravel-cicd Copy the .env file cp laravel-app/.env.example laravel-app/.env Start the app docker compose -f docker/compose.yaml --env-file laravel-app/.env up In your browser open 
localhost:9000 Sign up with a new user in the locally running application and check the health status App containerization To containerize the application let\u0026rsquo;s first see what we need to run it.\nServices needed for our application\nnginx container app container job container scheduler container redis container mysql container For the nginx, redis and mysql containers we are going to pull the images from hub.docker.com and pass the config via volumes\nFor app, job and scheduler we are going to build an image using a Dockerfile. We have also set up an entrypoint.sh to run the container dynamically as app, job or scheduler by passing the APP_ROLE environment variable\nBuilding the image using the Dockerfile\ndocker build -f ./docker/Dockerfile --target prod -t \u0026lt;image-name\u0026gt;:\u0026lt;image-tag\u0026gt; ./ View the built image by running docker images in the terminal\nContainer registry Now our image is built and ready to use. Let\u0026rsquo;s push it to a container registry so that we can pull it in our kubernetes cluster and run it.\nThe registry we are going to use is provided by Gitlab per project.\nOur project Gitlab Container registry\nNow to build and push the image follow these steps:\nGet the values for the following variables and store them in a file; we will need them later.\nCI_REGISTRY=registry.gitlab.com CI_REGISTRY_USERNAME=\u0026lt;username\u0026gt; // Gitlab username CI_REGISTRY_TOKEN=\u0026lt;access-token\u0026gt; // Create a new access token with read_registry and write_registry permission For the access token go to Profile \u0026gt; Edit profile \u0026gt; Access tokens and create a new access token with read_registry and write_registry permissions.\nDocker login to the gitlab registry\ndocker login registry.gitlab.com Build the image with the registry name and tag\ndocker build -f ./docker/Dockerfile --target prod -t registry.gitlab.com/\u0026lt;username\u0026gt;/laravel-cicd:v0.0.1 ./ Push the image to the registry\ndocker push 
registry.gitlab.com/\u0026lt;username\u0026gt;/laravel-cicd:v0.0.1 Gitlab CICD Setup What we are setting up in this CICD pipeline are the build and deploy stages.\nBuild Build a laravel-cicd docker image Push the built image to the Gitlab container registry Deploy Write the new image tag into the kubernetes/apps/lci/values.yaml file, make a commit and push it to the gitlab repository ArgoCD will monitor the repository for configuration changes and apply them to the Kubernetes cluster. The pipeline configuration is written in the .gitlab-ci.yml file placed at the root of the repository.\nWhenever you make a new commit and push it to the main branch, the pipeline gets triggered and starts the build and deploy jobs.\nSince these jobs run in a docker container they need access to the Gitlab container registry to push built images and to the Gitlab repository to commit new changes.\nThese variables need to be created in gitlab so that the pipeline can access the credentials and run the jobs successfully.\nGo to: Gitlab Repository \u0026gt; Settings \u0026gt; CI/CD \u0026gt; Variables \u0026gt; Project variables\nCreate these project variables with visibility masked and flags expanded.\nDOCKER_REGISTRY: registry.gitlab.com DOCKER_REGISTRY_USER: \u0026lt;gitlab username\u0026gt; DOCKER_REGISTRY_TOKEN: \u0026lt;personal-access-token\u0026gt; # token with read and write registry access GIT_REPO_USER: \u0026lt;gitlab username\u0026gt; GIT_REPO_TOKEN: \u0026lt;personal-access-token\u0026gt; # token with read and write repository access Now whenever you make new commits and push to gitlab, the pipeline gets triggered, builds the image and pushes it to the container registry in your project at the Gitlab Repository \u0026gt; Deploy \u0026gt; Container registry \u0026gt; laravel-cicd location.\nServer setup Requirement:\n2GB RAM and a 2-core amd64 CPU Available cloud providers - AWS, GCP, Azure, Hetzner AWS has a good free tier, but its billing is too complex, which may get us in trouble. 
So for simplicity and fixed charges per month I am going to use Hetzner.\nCreate a Hetzner Cloud account and log in to the dashboard.\nCreate a new project named laravel-cicd\nGo to the project dashboard\nGo to the servers section and click add server\nChoose\nLocation: Helsinki Image: Debian 12 Type: Shared vCPU (x86) CX22 SSH Keys: Add one, or else you will get credentials by email and have to update the server later Firewalls: Create and allow ports 22, 80, 443, 6443 Name: name your server and create Log in to the server and update the OS\nGet the public IP of the server Check your email and get the credentials for the root user Open your terminal and enter the following to ssh into the created server ssh root@\u0026lt;server public ip\u0026gt; Check the OS details cat /etc/os-release Update the OS apt update \u0026amp;\u0026amp; apt upgrade Set up passwordless access using ssh keys\nOpen a new terminal on your local system Create an ssh private and public key pair ssh-keygen -t ed25519 -C \u0026#34;Hetzner server access\u0026#34; -f ~/.ssh/hetzner -N \u0026#34;\u0026#34; Copy the content of the public key file ~/.ssh/hetzner.pub cat ~/.ssh/hetzner.pub Open a new terminal and SSH into your server with the password Create a file ~/.ssh/authorized_keys and paste the content copied from the ~/.ssh/hetzner.pub file located on your local system Now exit out of your server and try to ssh without a password ssh -i ~/.ssh/hetzner root@\u0026lt;server public ip\u0026gt; Set up an ssh alias for this long command in ~/.ssh/config to make your life easier Host server-238 HostName \u0026lt;server public ip\u0026gt; User root Port 22 AddKeysToAgent yes IdentityFile ~/.ssh/hetzner Now you can log in to your server like this ssh server-238 If all this works out, you can disable password authentication to enhance the security of your server. 
# Edit file sudo vim /etc/ssh/sshd_config # Enable PubkeyAuthentication yes # Disable PasswordAuthentication no # Restart sshd sudo systemctl restart sshd DNS setup To make accessing our applications easier and more secure, we\u0026rsquo;ll configure a domain name with a wildcard DNS record and set up TLS encryption using Let\u0026rsquo;s Encrypt.\nPurchase a Domain\nBuy a domain from any provider like Hostinger or GoDaddy. Let\u0026rsquo;s assume you purchased 9ovind.in We\u0026rsquo;ll be using wildcard DNS to allow subdomains like app1.9ovind.in and admin.9ovind.in. Configure DNS Records\nLog in to your DNS provider\u0026rsquo;s dashboard and add a record like this pointing to your server\u0026rsquo;s public IP. Get a TLS certificate from Let's Encrypt\nInstall Certbot on your server. We\u0026rsquo;ll use DNS-based validation since we\u0026rsquo;re generating a wildcard certificate.\n# Install certbot sudo apt install certbot # Request Wildcard Certificate (Manual Challenge) sudo certbot certonly --manual \\ --preferred-challenges=dns \\ -d \u0026#34;*.9ovind.in\u0026#34; -d \u0026#34;9ovind.in\u0026#34; \\ --agree-tos --no-eff-email --email you@example.com Follow the On-Screen Instructions\nCertbot will ask you to create a TXT record, something like this:\nPlease deploy a DNS TXT record under the name _acme-challenge.mylaravelblog.com with the following value: AbC123xYzSuperSecretChallengeString Go to your DNS provider’s panel and add the TXT record. 
Wait a few minutes for DNS to propagate and check with this.\ndig TXT _acme-challenge.9ovind.in +short Once propagated, Certbot will complete the process and the certificate files will be saved at /etc/letsencrypt/live/9ovind.in/\nCopy the TLS certificate and private key and save them in the repo at kubernetes/proxy/certs/9ovind.in/.\nCertificate in file tls.crt Private key in file tls.key We will need these files for setting up TLS in our kubernetes cluster at the Gateway API.\nKubernetes Cluster Local system setup Install these tools on your local system for accessing and managing the cluster.\nKubectl curl -LO \u0026#34;https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/$(uname -s | tr \u0026#39;[:upper:]\u0026#39; \u0026#39;[:lower:]\u0026#39;)/$(uname -m)/kubectl\u0026#34; chmod +x kubectl sudo mv kubectl /usr/local/bin/ Helm curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash Common steps on all VMs Turn off swap sudo swapoff -a sudo vim /etc/fstab # look for /swapfile none swap sw 0 0 and comment it sudo rm -f /swapfile df -kh # look for space Change hostname sudo vim /etc/hostname // cp-\u0026lt;last ip digit\u0026gt; or worker-\u0026lt;last ip digit\u0026gt; sudo vim /etc/hosts sudo reboot Control plane First Control plane node setup curl -sfL https://get.k3s.io | sh -s - server \\ --disable traefik \\ --disable servicelb \\ --cluster-init \\ --tls-san=Public-IP Copy /etc/rancher/k3s/k3s.yaml to the local ~/.kube/config and update the server\u0026rsquo;s public IP in it for local kubectl. Get the worker node registration token sudo cat /var/lib/rancher/k3s/server/node-token Join nodes If you have multiple servers available you can join them all using the below commands as worker or Control plane nodes.\nJoining Worker Node curl -sfL https://get.k3s.io | K3S_TOKEN=\u0026lt;first-cp-token\u0026gt; sh -s - agent --server https://\u0026lt;first-cp-ip\u0026gt;:6443 Joining Another Control plane Node curl -sfL https://get.k3s.io | 
K3S_TOKEN=\u0026lt;first-cp-token\u0026gt; sh -s - server \\ --server https://\u0026lt;first-cp-ip\u0026gt;:6443 \\ --tls-san=Public-IP Taint Control plane To restrict the scheduling of pods on the Control plane we can taint the nodes.\n# Apply taint kubectl taint nodes \u0026lt;node-name\u0026gt; node-role.kubernetes.io/control-plane=:NoSchedule # Remove taint kubectl taint nodes \u0026lt;node-name\u0026gt; node-role.kubernetes.io/control-plane- Gateway API setup Label the node on which you want to schedule the Gateway API\nkubectl label node \u0026lt;node-name\u0026gt; gateway=true Add the helm repo and Gateway CRDs\nhelm repo add traefik https://traefik.github.io/charts helm repo update # Standard CRDs for GatewayClass, Gateway, HTTPRoute, GRPCRoute, ReferenceGrant kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml # Experimental CRDs for TCPRoute, UDPRoute, TLSRoute kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/experimental-install.yaml Create namespace\nkubectl create namespace traefik Create TLS for your domain\n# Change directory cd kubernetes/proxy/certs/\u0026lt;domain name\u0026gt;/ # Create TLS secret kubectl create secret tls \u0026lt;domain name\u0026gt;-tls --cert=tls.crt --key=tls.key -n traefik Update kubernetes/proxy/traefik/values.yaml. Replace example.com with your \u0026lt;domain-name\u0026gt;\nUpgrade or install Traefik\n# Change directory cd kubernetes/proxy/traefik # Install Traefik gateway helm upgrade --install traefik traefik/traefik \\ --namespace traefik \\ -f values.yaml demo-nginx app for testing the cluster setup Change your directory to kubernetes/apps/demo-nginx and apply all these files.\nUpdate the route.yaml file with your \u0026lt;domain name\u0026gt;\nDeployment kubectl apply -f deployment.yaml Service kubectl apply -f service.yaml HTTPRoute kubectl apply -f route.yaml If your DNS setup is working, 
when you visit nginx-demo.example.com you will get an HTML response like this.\nArgoCD setup Note: first make sure you are in the kubernetes/argocd folder\nInstallation\nkubectl create namespace argocd kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml Edit the configmap to disable HTTPS in argocd, since TLS is terminated at the gateway\nkubectl edit configmap argocd-cmd-params-cm -n argocd Place the following line\ndata: server.insecure: \u0026#34;true\u0026#34; Roll out the deployment\nkubectl rollout restart deployment argocd-server -n argocd Create a route\ncd kubernetes/argocd kubectl apply -f route.yaml Login info\nGo to the browser and open the argocd url\nUser - admin Password - \u0026lt;we have to get it from secrets\u0026gt; Get the password from secrets\nkubectl get secrets/argocd-initial-admin-secret -n argocd -o yaml Copy the password; it\u0026rsquo;s a base64-encoded string, so we have to decode it first\necho \u0026#34;\u0026lt;password\u0026gt;\u0026#34; | base64 --decode Update the admin password in ArgoCD for more security.\nSetup repository\n# Repositories kubectl apply -f repositories.yaml Create a root-app for the App of Apps pattern.\n# Root app kubectl apply -f root-app.yaml This root-app will track the kubernetes/argocd/applications folder and create all apps in argocd automatically. 
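As an illustration, an Application file for that folder might look like the following sketch. The app name, branch and namespace here are hypothetical placeholders; adapt the repoURL and path to your own repository layout:

```yaml
# Hypothetical example: kubernetes/argocd/applications/demo-nginx.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-nginx              # assumed app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.com/9ovindyadav/laravel-cicd.git
    targetRevision: main        # assumed branch
    path: kubernetes/apps/demo-nginx
  destination:
    server: https://kubernetes.default.svc
    namespace: demo-nginx
  syncPolicy:
    automated:
      prune: true               # delete resources removed from Git
      selfHeal: true            # revert manual drift in the cluster
    syncOptions:
      - CreateNamespace=true
```

With automated sync enabled like this, ArgoCD reconciles the cluster to whatever is committed under the tracked path.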
You just have to add an application file and commit the changes to git.\n","permalink":"https://9ovind.in/projects/laravel_full_ci_cd_setup_with_kubernetes_and_docker/","summary":"\u003ch2 id=\"why-cicd-is-essential-\"\u003eWhy is CI/CD essential?\u003c/h2\u003e\n\u003cp\u003eCI/CD (Continuous Integration and Continuous Deployment/Delivery) is a cornerstone of modern software development, enabling teams to deliver high-quality applications faster and more reliably.\u003c/p\u003e\n\u003cp\u003eBy automating the build, test and deployment processes, CI/CD pipelines eliminate manual errors, provide rapid feedback on code changes, and ensure that every new feature or bug fix is thoroughly vetted before reaching users.\u003c/p\u003e\n\u003cp\u003eThis not only accelerates the development lifecycle but also fosters a culture of collaboration and continuous improvement, allowing developers to focus on writing code while the pipeline handles the rest.\u003c/p\u003e","title":"Full CI/CD pipeline of Laravel application with GitOps"},{"content":"SSL (Secure Sockets Layer) and its successor, TLS (Transport Layer Security), are the cryptographic protocols that secure data transmitted over the Internet. They ensure:\nConfidentiality: Your data is only accessible to the client and server. Integrity: Your data is not altered in transit. Authentication: You are communicating with the real server. When you see the padlock icon in your browser, that’s TLS protecting your connection.\nSSL was the original protocol developed at Netscape in 1994, but it is now obsolete due to security flaws. TLS is its modern, secure successor. Today, when people say \u0026ldquo;SSL,\u0026rdquo; they almost always mean \u0026ldquo;TLS.\u0026rdquo; The current version, TLS 1.3, is faster and more secure than its predecessors.\nThe Internet: A Network of Networks At its core, the Internet is not one big \u0026ldquo;thing\u0026rdquo; owned by a single company. 
It’s thousands of independent networks connected together. These networks belong to:\nInternet Service Providers (ISPs) Telecom companies Cloud providers (e.g., AWS, Azure, GCP) Enterprises Governments Universities Each network is called an Autonomous System (AS), and they peer with each other using protocols like the Border Gateway Protocol (BGP) to share routes.\nWhat is an Autonomous System (AS)? An Autonomous System is a collection of IP networks (prefixes) that are managed by a single organization and that share a common routing policy. Each AS has a globally unique number called an Autonomous System Number (ASN). Routers inside an AS speak internal routing protocols (like OSPF, IS-IS, or iBGP), and communicate with other ASes using BGP (Border Gateway Protocol). Example: Google (AS15169) Cloudflare (AS13335) Jio (AS55836) Airtel (AS9498) Why do ASNs matter and who assigns them? There is a hierarchy of organizations that manage Internet number resources (IP addresses and ASNs):\nIANA (Internet Assigned Numbers Authority)\nIANA is at the top level. It manages the global IP address space and ASN pools. It is operated by ICANN (Internet Corporation for Assigned Names and Numbers). RIRs (Regional Internet Registries)\nIANA delegates blocks of IP addresses and ASN ranges to 5 RIRs, each covering a specific region: ARIN (North America) LACNIC (Latin America \u0026amp; Caribbean) RIPE NCC (Europe, Middle East, parts of Central Asia) APNIC (Asia Pacific) AFRINIC (Africa) NIRs (National Internet Registries)\nIn some regions, there are National Internet Registries (NIRs) that further manage allocation within a country. For example, these are some NIRs operating under APNIC: IRINN, the Indian Registry for Internet Names and Numbers CNNIC, the China Internet Network Information Center JPNIC, the Japan Network Information Center Local ISPs, Enterprises, Organizations\nThese entities apply to their RIR or NIR to request an ASN. 
They must justify why they need an ASN (usually because they plan to run BGP with other networks). Once assigned, ASNs are globally registered. Every BGP router announces IP prefixes with its ASN, which helps establish routing policies, peering agreements, and routing decisions.\nWhen you visit google.com, your data may flow through several of these ASes. Each router uses BGP to decide how to route traffic across AS boundaries.\nSince your packets cross multiple ASes you don’t control, any compromised router in any AS could inspect or modify unencrypted traffic.\nTLS ensures that even though your data flows through all these ASes, only you and the server can read it.\nHow does SSL/TLS protect your data? When you connect to a website over HTTPS, TLS works behind the scenes to secure your connection.\nSince your data flows through someone else\u0026rsquo;s networks and routers on its way to the destination server, anyone in the middle could read or modify it.\nTLS protects your data in three key ways:\nConfidentiality\nData is only accessible by the client and server. TLS achieves confidentiality through a cryptographic technique called Encryption. TLS encrypts all the data you send and receive. Even if someone intercepts your traffic (e.g. an ISP, hacker, or rogue router), they only see scrambled, unreadable data. Only your browser and the server have the keys to decrypt the data. Integrity\nData hasn\u0026rsquo;t been modified between client and server. TLS can\u0026rsquo;t prevent someone from modifying data in transit, but it guarantees that tampering is detected: TLS achieves integrity through a cryptographic technique called Hashing, and if any data is altered, it will be detected instantly and the connection will fail. Authentication\nClient and server are indeed who they say they are. TLS achieves authentication through a system called PKI (Public Key Infrastructure). TLS verifies the identity of the server using digital certificates issued by trusted Certificate Authorities (CAs). 
This prevents attackers from impersonating a real website (e.g., phishing, man-in-the-middle attacks). Key players of SSL \u0026amp; TLS Understanding TLS requires knowing the three main actors involved in the process:\nThe Client: This is the application that initiates the secure connection. Most commonly, this is your web browser (like Chrome, Firefox, or Safari) when you visit a website. However, it can be any application that needs to communicate securely, such as an email client or a mobile app.\nThe Server: This is the machine that the client wants to communicate with. It hosts the website or service (e.g., google.com or your-bank.com). The server holds the digital certificate and the private key necessary to prove its identity and establish a secure connection.\nThe Certificate Authority (CA): A Certificate Authority is a trusted third-party organization that issues digital certificates. Its job is to verify the identity of the server\u0026rsquo;s owner before issuing a certificate. This is crucial for the \u0026ldquo;Authentication\u0026rdquo; part of TLS. When your browser sees a certificate from a trusted CA (like Let\u0026rsquo;s Encrypt, DigiCert, or GlobalSign), it knows it can trust that the server is who it claims to be. Your browser and operating system come with a pre-installed list of trusted CAs.\nServer Verification and Certificate Types Before a Certificate Authority (CA) issues a certificate, it must verify that the entity requesting it is who they say they are. The rigor of this verification process determines the type of certificate issued. This is crucial for establishing trust. There are three main levels of validation:\nDomain Validation (DV) Certificates:\nVerification Level: This is the most basic level of validation. The CA only verifies that the applicant controls the domain name. This is usually done by email verification, adding a DNS record, or uploading a specific file to the website. 
Trust Level: It confirms that your connection is encrypted and that you are connected to the correct domain, but it doesn\u0026rsquo;t verify who owns the domain. Best for: Blogs, personal websites, and small projects where identity assurance is not a high priority. Let\u0026rsquo;s Encrypt is a popular provider of free DV certificates. Organization Validation (OV) Certificates:\nVerification Level: This involves a more substantial vetting process. The CA verifies not only domain control but also the legal existence and details of the organization (e.g., name, city, country). This requires submitting business registration documents. Trust Level: Provides a higher level of trust by confirming the identity of the legal entity behind the website. Users can view these details in the certificate information in their browser. Best for: E-commerce sites, public-facing corporate websites, and services that handle sensitive user information. Extended Validation (EV) Certificates:\nVerification Level: This is the highest level of validation and involves a strict, globally standardized vetting process defined by the CA/Browser Forum. The CA performs a thorough background check on the organization, verifying its legal, physical, and operational existence. Trust Level: Offers the highest level of trust and assurance. In the past, browsers would display the company\u0026rsquo;s name in a green address bar, though this UI has been mostly phased out. Still, the verified company name is visible in the certificate details. Best for: Financial institutions (banks), government agencies, and large enterprises where proving identity and building user trust is paramount. Choosing the right certificate type depends on the website\u0026rsquo;s purpose and the level of trust it needs to establish with its users.\nEncryption and Confidentiality Encryption is the process of converting plaintext data into a scrambled, unreadable format called ciphertext. 
This is the core mechanism TLS uses to ensure confidentiality. Only parties with the correct key can decrypt the ciphertext back into its original, readable form.\nTLS cleverly uses two types of encryption:\nAsymmetric Encryption (Public-Key Cryptography):\nThis involves a pair of keys: a public key and a private key. The public key can be shared with anyone. The private key must be kept secret by the owner (in this case, the server). Data encrypted with the public key can only be decrypted with the corresponding private key. TLS uses asymmetric encryption during the initial handshake to securely exchange a key for symmetric encryption. This is crucial for establishing a secure channel without having to pre-share a secret. Symmetric Encryption:\nThis uses a single, shared secret key for both encryption and decryption. It is much faster and more efficient for encrypting large amounts of data than asymmetric encryption. Once the initial handshake is complete, the client and server use the securely exchanged symmetric key to encrypt all the actual application data (like your HTTP requests and the website\u0026rsquo;s responses). By combining these two methods, TLS gets the best of both worlds: the secure key exchange capability of asymmetric encryption and the high performance of symmetric encryption for the bulk of the data transfer.\nA Closer Look at Asymmetric Encryption: How RSA Works The RSA (Rivest-Shamir-Adleman) algorithm is the most well-known asymmetric encryption algorithm. Its security is based on the practical difficulty of factoring the product of two large prime numbers.\nThe Core Concept: A Mailbox Analogy\nImagine you have a special mailbox with two keys:\nA public key (the mailbox slot): You can make copies of this key and give it to anyone. Anyone with this key can open the slot and drop a message in. A private key (the mailbox door key): This key is yours alone. It is the only key that can open the mailbox door to retrieve the messages. 
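The mailbox analogy can be tried concretely with the openssl CLI. This is a minimal sketch, not part of any real TLS handshake; the file names and the sample message are made up for illustration:

```shell
# Generate the server's key pair: the private key stays secret,
# the public key (the "mailbox slot") can be shared with anyone
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out server.key
openssl rsa -in server.key -pubout -out server.pub

# Anyone holding the public key can drop a message in the slot
printf 'pre-master-secret' > secret.txt
openssl pkeyutl -encrypt -pubin -inkey server.pub -in secret.txt -out secret.enc

# Only the private key can open the mailbox and read it back
openssl pkeyutl -decrypt -inkey server.key -in secret.enc
# prints: pre-master-secret
```

Anyone who intercepts secret.enc sees only ciphertext; without server.key it cannot be recovered.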
This is the essence of RSA. The server\u0026rsquo;s public key is like the mailbox slot, and its private key is the only thing that can unlock the messages sent to it.\nA Demo: RSA in Action\nLet\u0026rsquo;s see how this is used for both Confidentiality (encryption) and Authentication (digital signatures).\n1. For Confidentiality (Encrypting the Session Key):\nThis is the primary role of RSA in the TLS handshake.\nStep 1: The Server sends its certificate to the Client. This certificate contains the Server\u0026rsquo;s public key. Step 2: The Client generates a small, random piece of data (the \u0026ldquo;pre-master secret\u0026rdquo;) that will be used to create the final session key. Step 3: The Client uses the Server\u0026rsquo;s public key to encrypt this pre-master secret. Now, it\u0026rsquo;s just scrambled, unreadable data. Step 4: The Client sends this encrypted data to the Server. Step 5: The Server uses its private key to decrypt the data and retrieve the pre-master secret. No one else could have done this, because no one else has the private key. Result: Both the Client and Server now share a secret, which they use to independently generate the same symmetric session key. Confidentiality is achieved. 2. For Authentication (How a CA Signs a Certificate):\nRSA also works in reverse for digital signatures, which is how a CA guarantees a certificate is authentic.\nStep 1: A server owner gives their certificate information (domain name, public key, etc.) to a Certificate Authority (CA). Step 2: The CA creates a hash of all that information. Step 3: The CA uses its own private key to encrypt that hash. This encrypted hash is the digital signature. Step 4: The CA attaches this signature to the certificate and sends it back to the server owner. Step 5 (Verification): When your browser receives the certificate, it sees it was signed by the CA. Your browser already trusts this CA and has its public key. 
It uses the CA\u0026rsquo;s public key to decrypt the signature, revealing the original hash. It then computes its own hash of the certificate. If the two hashes match, the signature is valid. Result: Your browser has just mathematically proven that the certificate is authentic and has not been tampered with. Authentication is achieved. Hashing, Hashing algorithms and Collisions Hashing is a fundamental concept in cryptography that ensures data integrity. It\u0026rsquo;s the process of taking an input (of any size) and running it through a mathematical function to produce a fixed-size output string, known as a hash.\nA good hashing algorithm is:\nDeterministic: The same input will always produce the same hash. Efficient: The hash is quick to compute. Pre-image Resistant: It\u0026rsquo;s computationally impossible to determine the original input from its hash. This makes it a one-way function. Collision Resistant: It should be extremely difficult to find two different inputs that produce the same hash. Common hashing algorithms include SHA-256 (Secure Hash Algorithm 256-bit), which is widely used today. Older algorithms like MD5 and SHA-1 are now considered insecure because \u0026ldquo;collisions\u0026rdquo; have been found. A collision means two different inputs produce the same hash, which can allow an attacker to pass off a malicious file as a legitimate one.\nData integrity - How TLS uses Hashing So how does TLS use hashing to ensure data hasn\u0026rsquo;t been tampered with? It uses a clever mechanism called an HMAC (Hash-based Message Authentication Code).\nHere’s how it works:\nDuring the initial TLS handshake, the client and server securely negotiate a shared secret key. For every message they exchange, the sender combines the message content with the secret key and then hashes the result. This creates the HMAC, which is attached to the message. The receiver gets the message and the HMAC. 
It independently computes its own HMAC using the message content and the shared secret key it already has. If the received HMAC matches the one it just computed, the data is considered authentic and unaltered. If they don\u0026rsquo;t match, the connection is terminated immediately, as this indicates tampering. This process guarantees both integrity (the data wasn\u0026rsquo;t changed) and authentication (we know who sent it, because only they have the secret key).\nWhat is a Cipher Suite? A cipher suite is a bundle of algorithms that, together, provide all the security guarantees of TLS. Think of it as a single name that defines the exact tools for the job. During the handshake, the client sends a list of cipher suites it supports (its \u0026ldquo;menu\u0026rdquo; of security options), and the server chooses the one it prefers, usually the strongest one they both support.\nA typical cipher suite name, like TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, looks complex, but it\u0026rsquo;s just a combination of four distinct types of algorithms, each mapping directly to a core TLS goal:\nKey Exchange Algorithm (ECDHE): This algorithm\u0026rsquo;s job is to let the client and server securely agree on a shared secret key for symmetric encryption, even if someone is listening in. This is the foundation for confidentiality. Using an \u0026ldquo;ephemeral\u0026rdquo; method like ECDHE provides Perfect Forward Secrecy, meaning past conversations remain secure even if the server\u0026rsquo;s long-term key is compromised later.\nAuthentication Algorithm (RSA): This determines how the server proves its identity to the client. The server uses this algorithm (e.g., RSA) to sign its certificate, proving it owns the corresponding private key. This directly provides Authentication.\nBulk Encryption Algorithm (AES_128_GCM): This is the high-speed symmetric cipher that will encrypt all the actual data (your requests, the website\u0026rsquo;s content, etc.) after the handshake is complete. 
This provides Confidentiality.\nHashing Algorithm (SHA256): This algorithm is used to create a Message Authentication Code (MAC), which is like a tamper-proof seal on every message. This provides Integrity.\nSo, the cipher suite is the complete package. By agreeing on one, the client and server are explicitly defining how they will achieve Authentication, Confidentiality, and Integrity for the entire session.\nThe TLS Handshake Now that we understand the key players and the types of encryption, let\u0026rsquo;s walk through how they all come together in the TLS handshake. This is the negotiation process that happens in milliseconds before any actual data is sent. The goal is simple: for the client and server to verify each other and securely agree on a session key for symmetric encryption.\nHere’s a simplified look at the steps (for TLS 1.2, as it\u0026rsquo;s a bit more explicit for learning, though TLS 1.3 is faster and more common today):\nClient Hello: The client sends a \u0026ldquo;hello\u0026rdquo; message to the server. This message includes:\nThe TLS versions it supports. A list of cipher suites it can use (the combination of encryption, authentication, and hashing algorithms). A random string of bytes, known as the \u0026ldquo;Client Random.\u0026rdquo; Server Hello: The server receives the client\u0026rsquo;s hello and responds with its own \u0026ldquo;hello\u0026rdquo; message. This includes:\nThe TLS version and cipher suite it has chosen from the client\u0026rsquo;s list. Its digital certificate, which contains its public key. Another random string of bytes, the \u0026ldquo;Server Random.\u0026rdquo; Certificate Verification: The client examines the server\u0026rsquo;s certificate. It checks:\nIs the certificate expired? Is it for the correct domain (google.com, etc.)? Is it signed by a Certificate Authority (CA) that the client trusts? (The client checks this against its built-in list of trusted CAs). 
If any of these checks fail, the browser will show a security warning, and the connection is terminated. Key Exchange: This is the clever part. The client generates another random string of bytes called the \u0026ldquo;Pre-Master Secret.\u0026rdquo;\nThe client encrypts this Pre-Master Secret using the server\u0026rsquo;s public key (which it got from the certificate). The client sends this encrypted Pre-Master Secret to the server. Because it was encrypted with the public key, only the server, with its corresponding private key, can decrypt it. Now, both the client and server use the Client Random, Server Random, and the Pre-Master Secret to independently calculate the same session key. Handshake Complete \u0026amp; Secure Communication: The handshake is now finished.\nBoth client and server send a \u0026ldquo;Finished\u0026rdquo; message, which is encrypted with the newly created session key. From this point on, all communication between the client and server is encrypted using this symmetric session key, ensuring confidentiality and integrity for the rest of the session. How to Inspect a Website\u0026rsquo;s Certificate Everything we\u0026rsquo;ve discussed is visible right in your browser. This is the best way to see how these concepts apply in the real world. Try this on any https:// site:\nClick the Padlock: In your browser\u0026rsquo;s address bar, click the padlock icon to the left of the website\u0026rsquo;s URL. View the Certificate: Look for an option that says \u0026ldquo;Connection is secure,\u0026rdquo; which will lead to a \u0026ldquo;Certificate is valid\u0026rdquo; button. Clicking this will open the certificate viewer. What to Look For: Issued To: You\u0026rsquo;ll see the \u0026ldquo;Common Name\u0026rdquo; (the domain the certificate belongs to) and often the organization\u0026rsquo;s name, city, and country (for OV and EV certificates). Issued By: This shows you which Certificate Authority (CA) verified and signed the certificate. 
Validity Period: You can see the \u0026ldquo;Not Before\u0026rdquo; and \u0026ldquo;Not After\u0026rdquo; dates. By doing this, you can directly see the results of the TLS handshake and verify the identity and security of the websites you visit.\nLearning Resources Practical TLS Playlist Conclusion SSL/TLS is a cornerstone of modern internet security. It works silently in the background, providing the essential guarantees of confidentiality, integrity, and authentication. While the process is complex, the result is a secure channel that protects our sensitive information from prying eyes and tampering.\n","permalink":"https://9ovind.in/blogs/ssl_and_tls/","summary":"\u003cp\u003eSSL (Secure Sockets Layer) and its successor, TLS (Transport Layer Security), are the cryptographic protocols that secure data transmitted over the Internet. They ensure:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eConfidentiality:\u003c/strong\u003e Your data is only accessible to the client and server.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eIntegrity:\u003c/strong\u003e Your data is not altered in transit.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eAuthentication:\u003c/strong\u003e You are communicating with the real server.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eWhen you see the padlock icon in your browser, that’s TLS protecting your connection.\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"TLS\" loading=\"lazy\" src=\"/blogs/ssl_and_tls/connection_secure.jpg\"\u003e\u003c/p\u003e\n\u003cp\u003eSSL was the original protocol developed at Netscape in 1994, but it is now obsolete due to security flaws. TLS is its modern, secure successor. 
Today, when people say \u0026ldquo;SSL,\u0026rdquo; they almost always mean \u0026ldquo;TLS.\u0026rdquo; The current version, TLS 1.3, is faster and more secure than its predecessors.\u003c/p\u003e","title":"A Deep Dive into SSL/TLS: The Protocols That Secure the Internet"},{"content":"Kubernetes also known as K8s is an open-source container orchestration system for automating software deployment, scaling, and management.\nOriginally designed by Google, the project is now maintained by a worldwide community of contributors and the trademark is held by the CNCF(Cloud Native Computing Foundation).\nKubernetes assembles one or more computers, either virtual machines or bare metal, into a cluster which can run workloads in containers.\nWhat features does it provide ? Container Orchestration\nAutomatically schedules and runs containers across a cluster of machines. Abstracts away the infrastructure — developers just define what the app needs (YAML), Kubernetes handles the rest. Declarative Infrastructure\nEverything in K8s is defined as YAML manifests. Developers describe the desired state, and K8s ensures the system matches that. Self-Healing\nAutomatically restarts failed containers Reschedules Pods if a Node dies Replaces containers if the health check fails Health checks\nreadinessProbe: Tells K8s when the app is ready to serve traffic livenessProbe: Restarts app if it hangs or crashes Rolling Updates and Rollbacks\nUpdates can be applied gradually with zero downtime Easy rollback to previous versions if the new one fails Services and Networking\nInternal DNS service discovery (e.g: my-service.default.svc.cluster.local) Supports: ClusterIP (default, internal-only) NodePort, LoadBalancer, Ingress, GatewayAPI for external access Secrets and ConfigMaps\nConfigMap: Inject environment variables or config files into containers Secret: Securely store API keys, passwords, etc. 
Developers should never hardcode secrets — use these instead.\nVolumes and Persistent Storage\nStore data outside containers via PersistentVolumeClaims (PVCs) Good for databases or any app that needs persistent state Namespaces\nIsolate environments (e.g., dev, staging, prod) within a single cluster Developers can test without affecting production RBAC (Role-Based Access Control)\nControls who can deploy, read logs, access resources Essential for teams — especially on shared clusters Observability\nNative support for: Logs (kubectl logs) Metrics (via Prometheus) Great for debugging and monitoring performance. What is a Kubernetes Cluster ? A Kubernetes cluster is a group of computers (called nodes) that work together to run your containerized applications. These nodes can be real machines or virtual ones.\nThere are two types of nodes in a Kubernetes cluster:\n1. Master node (Control Plane): Think of it as the brain of the cluster. It makes decisions, like where to run applications, handles scheduling, and keeps track of everything. 2. Worker nodes: These are the machines that actually run your apps inside containers. Each worker node has a Kubelet (agent), a container runtime (like Docker or containerd), and tools for networking and monitoring. Key Components ( Architecture ) A Kubernetes cluster has many parts working together behind the scenes. Let’s break down the core components we should know:\n1. API Server Location: Control Plane Acts as the front door of the Kubernetes cluster. All communication (from users, CLI tools like kubectl, or even internal components) goes through the API Server. It processes REST requests, validates them, and updates the cluster state in etcd. Think of it as the receptionist of a company — every request passes through it first.\n2. Scheduler Location: Control Plane Responsible for assigning Pods to Nodes. 
It looks at: Resource requirements (CPU, memory) Node availability Taints/tolerations and affinities It chooses the best node for each pod and tells the API server its decision. Like a delivery manager assigning packages to the nearest delivery person.\n3. Controller manager Location: Control Plane Watches the cluster state and makes sure the current state matches the desired state (defined in YAML files). Contains multiple controllers: Node Controller – watches node health ReplicaSet Controller – ensures the right number of pods are running Job Controller, DaemonSet Controller, etc. Imagine a robot checking every 5 seconds if your to-do list is being followed, and fixing anything that\u0026rsquo;s off.\n4. etcd Location: Control Plane A fast, distributed key-value store used as Kubernetes’ backbone database. Stores all cluster data — deployments, state of pods, secrets, config maps, etc. Highly consistent and supports snapshots/backup. If the API server is the receptionist, etcd is the filing cabinet where everything is saved.\n5. Kubelet Location: Each Worker Node An agent that runs on every worker node. It takes instructions from the API server and: Ensures containers are running Monitors pod health Reports back to the control plane Like a local manager on each node making sure everything is working as planned.\n6. Kube-proxy Location: Each Worker Node Manages networking and communication in the cluster. Handles: Routing traffic to the correct pod/service Load balancing NAT rules for service access Think of it as the node’s network engineer — setting up all the traffic rules so things run smoothly.\n7. Container runtime Location: Each Worker Node Software that actually runs containers on a system. Kubernetes supports several runtimes: Docker (deprecated) containerd CRI-O Kubelet communicates with this runtime to start/stop containers. 
It\u0026rsquo;s the engine that powers and runs your containers, like Docker or containerd.\nLocal K8s cluster setup We are going to use Docker containers as nodes and kind to set up the cluster.\nTo create a local k8s cluster, we need the following tools installed on our system.\n1. Docker Installation curl -fsSL https://get.docker.com | sh Allow running docker without sudo sudo groupadd docker sudo usermod -aG docker $USER newgrp docker Run a hello-world image docker run hello-world 2. Kubectl Installation curl -LO \u0026#34;https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl\u0026#34; curl -LO \u0026#34;https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256\u0026#34; echo \u0026#34;$(cat kubectl.sha256) kubectl\u0026#34; | sha256sum --check sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl Check version kubectl version 3. Helm Installation curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash Check version helm version 4. 
Kind Installation # For AMD64 / x86_64 [ $(uname -m) = x86_64 ] \u0026amp;\u0026amp; curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.29.0/kind-linux-amd64 # For ARM64 [ $(uname -m) = aarch64 ] \u0026amp;\u0026amp; curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.29.0/kind-linux-arm64 chmod +x ./kind sudo mv ./kind /usr/local/bin/kind Check version kind version Now we have everything installed on our system for cluster setup.\nLet\u0026rsquo;s create a k8s cluster which will have 1 Master and 2 Worker nodes\nCreate a kind-config.yaml file and copy-paste the below content kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 nodes: - role: control-plane extraPortMappings: - containerPort: 30080 hostPort: 30080 # Map container\u0026#39;s port 30080 to host\u0026#39;s port 30080 protocol: TCP - containerPort: 30443 hostPort: 30443 # Map container\u0026#39;s port 30443 to host\u0026#39;s port 30443 protocol: TCP - role: worker - role: worker Run the below command to create the cluster kind create cluster --config kind-config.yaml Check nodes kubectl get nodes Output:\nNAME STATUS ROLES AGE VERSION kind-control-plane Ready control-plane 21m v1.33.1 kind-worker Ready \u0026lt;none\u0026gt; 21m v1.33.1 kind-worker2 Ready \u0026lt;none\u0026gt; 21m v1.33.1 Important K8s Concepts 1. Pods and Deployments A Pod is the smallest unit in Kubernetes. It runs one or more containers with shared storage and network resources. A Deployment ensures that the desired number of pod replicas are running and manages rolling updates. We can list the running pods via the following command kubectl get pods Let\u0026rsquo;s create our first pod using the deployment file nginx-deployment.yaml. 
# nginx-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment spec: replicas: 2 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:latest ports: - containerPort: 80 Apply the deployment file kubectl apply -f nginx-deployment.yaml Now list the running pods kubectl get pods Output: NAME READY STATUS RESTARTS AGE nginx-deployment-96b9d695-9h49b 1/1 Running 0 20m nginx-deployment-96b9d695-rhd8x 1/1 Running 0 20m We can exec into the pods using the below command to view the files of our container kubectl exec -it \u0026lt;pod-name\u0026gt; -- sh We can also view the logs that the container\u0026rsquo;s process prints to stdout kubectl logs -f \u0026lt;pod-name\u0026gt; We can describe the pod to view every detail of it; if a pod crashes or fails to start, this is where we can debug the reason. kubectl describe pod/\u0026lt;pod-name\u0026gt; If we delete a pod with the below command, the existing pod will be deleted, but a new pod will be created by the controller manager using the deployment resource we applied previously. kubectl delete pod/\u0026lt;pod-name\u0026gt; If we want to delete the pods permanently, we should delete the deployment, and the pods will be removed by k8s kubectl delete deployment nginx-deployment We can also delete the K8s deployment resource using our existing deployment file kubectl delete -f nginx-deployment.yaml To read more about pods, check out the K8s Pod documentation 2. Services Each pod in a cluster gets its own unique cluster-wide IP address. A pod has its own private network namespace which is shared by all of the containers within the pod. Processes running in different containers in the same pod can communicate with each other over localhost. The pod network (also called a cluster network) handles communication between pods. It ensures that all pods can communicate with all other pods, whether they are on the same node or on different nodes. 
If we use a Deployment to run our app, that Deployment can create and destroy Pods dynamically. From one moment to the next, we don\u0026rsquo;t know how many of those Pods are working and healthy. We might not even know what those healthy Pods are named. The Service API lets us provide a stable (long-lived) IP address or hostname for a service implemented by one or more backend pods, where the individual pods making up the service can change over time. A Service exposes pods to internal or external traffic. So we have several types of services: ClusterIP : This is the default service type. It exposes the service on the cluster\u0026rsquo;s internal IP and makes it reachable only from within the cluster. NodePort : Exposes the service on each Node\u0026rsquo;s IP at a static port (the NodePort). The default range of NodePort is 30000–32767. LoadBalancer : Exposes the Service externally using an external load balancer. Let\u0026rsquo;s access our nginx pod from outside of the cluster using a NodePort service. Create an nginx-service.yaml file and copy-paste the below content. # nginx-service.yaml apiVersion: v1 kind: Service metadata: name: nginx-service spec: type: NodePort selector: app: nginx ports: - port: 80 targetPort: 80 nodePort: 30080 Let\u0026rsquo;s apply the service manifest using the below command. kubectl apply -f nginx-service.yaml To list the services run the below command kubectl get services Output: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 \u0026lt;none\u0026gt; 443/TCP 49s nginx-service NodePort 10.96.165.229 \u0026lt;none\u0026gt; 80:30080/TCP 8s Now if we open localhost:30080 in our browser, we will see a page served by our nginx pod. To read more about the k8s network model, check out the K8s services documentation 3. 
Namespaces Namespaces provide a mechanism for isolating groups of resources within a single cluster.\nNames of resources need to be unique within a namespace, but not across namespaces.\nNamespaces cannot be nested inside one another and each Kubernetes resource can only be in one namespace.\nKubernetes starts with four initial namespaces. We can list the current namespaces in a cluster using the below command.\nkubectl get namespaces Output:\nNAME STATUS AGE default Active 78m kube-node-lease Active 78m kube-public Active 78m kube-system Active 78m Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are in some namespaces. However, namespace resources are not themselves in a namespace. And low-level resources, such as nodes and persistentVolumes, are not in any namespace.\nTo see which Kubernetes resources are and aren\u0026rsquo;t in a namespace:\n# In a namespace kubectl api-resources --namespaced=true # Not in a namespace kubectl api-resources --namespaced=false To see all the resources of a particular namespace, flag every command with -n \u0026lt;namespace\u0026gt;; for example, to view running pods in kube-system it would be.\nkubectl get pods -n kube-system Create a new namespace\nkubectl create namespace mars 4. Configmaps and Secrets A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume. A ConfigMap allows us to decouple environment-specific configuration from our container images, so that our applications are easily portable. List all configmaps kubectl get configmaps To read more about configmaps and their usage, please refer to this k8s Configmap documentation A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Secrets are similar to ConfigMaps but are specifically intended to hold confidential data. 
List all secrets kubectl get secrets To read more about secrets and their usage, please refer to this k8s secrets documentation Demo of kind cluster and nginx application Learning Resources Kubernetes Crash Course for Absolute Beginners Kubernetes Crash Course: Learn the Basics and Build a Microservice Application What is Kubernetes? | Kubernetes Explained Kubernetes Documentation Virtualization, Containers and role of docker in it Below Kubernetes: Demystifying container runtimes Certified Kubernetes Administrator Understanding Kubernetes Networking in 30 Minutes Conclusion So far we have covered just the tip of the iceberg. We have a lot to learn about Kubernetes, but the topics we have covered are enough to get you started using k8s in a local setup and exploring its architecture.\nWe will talk about production-grade cluster setup and k8s distributions in another blog. So stay tuned.\nThanks for sticking around and practicing!\nUntil next time, happy coding!\n","permalink":"https://9ovind.in/blogs/kubernetes/","summary":"\u003cp\u003e\u003cstrong\u003eKubernetes\u003c/strong\u003e also known as K8s is an open-source container orchestration system for automating software deployment, scaling, and management.\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"k8s logo\" loading=\"lazy\" src=\"/blogs/kubernetes/k8s-logo.png\"\u003e\u003c/p\u003e\n\u003cp\u003eOriginally designed by Google, the project is now maintained by a worldwide community of contributors and the trademark is held by the \u003ca href=\"https://www.cncf.io\"\u003eCNCF\u003c/a\u003e(Cloud Native Computing Foundation).\u003c/p\u003e\n\u003cp\u003eKubernetes assembles one or more computers, either virtual machines or bare metal, into a cluster which can run workloads in containers.\u003c/p\u003e\n\u003ch2 id=\"what-features-does-it-provide-\"\u003eWhat features does it provide ?\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003e\n\u003cp\u003eContainer Orchestration\u003c/p\u003e","title":"Getting Started with Kubernetes 
for Developers"},{"content":"Go is a simple, fast, and concurrent programming language. Its simplicity in design makes it an amazing programming language to work with. Go is currently gaining a lot of popularity, and a lot of organizations now prefer to write their backend in Go.\nGo was designed at Google in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson to improve programming productivity in an era of multicore, networked machines and large codebases. The designers wanted to address criticisms of other languages in use at Google, but keep their useful characteristics:\nStatic typing and run-time efficiency (like C) Readability and usability (like Python) High-performance networking and multiprocessing Its designers were primarily motivated by their shared dislike of C++.\nGo was publicly announced in November 2009, and version 1.0 was released in March 2012. Go is widely used in production at Google and in many other organizations and open-source projects like Kubernetes, Prometheus, and Docker.\nIn retrospect, the Go authors judged Go to be successful due to the overall engineering work around the language, including the runtime support for the language\u0026rsquo;s concurrency feature.\nAlthough the design of most languages concentrates on innovations in syntax, semantics, or typing, Go is focused on the software development process itself. The principal unusual property of the language itself—concurrency—addressed problems that arose with the proliferation of multicore CPUs in the 2010s. 
But more significant was the early work that established fundamentals for packaging, dependencies, build, test, deployment, and other workaday tasks of the software development world, aspects that are not usually foremost in language design.\nGo Installation for Linux Download Binary Remove old go version (if any) sudo rm -rf /usr/local/go Install go binary sudo tar -C /usr/local -xzf go1.24.2.linux-amd64.tar.gz Add /usr/local/go/bin to the PATH environment variable echo \u0026#34;export PATH=$PATH:/usr/local/go/bin\u0026#34; \u0026gt;\u0026gt; ~/.bashrc source ~/.bashrc Verify installation go version Outputs: go version go1.24.2 linux/amd64 Let\u0026rsquo;s write some code Make a directory mkdir go-example cd go-example Enable dependency tracking go mod init example/hello Create a file hello.go in which to write our code. package main import \u0026#34;fmt\u0026#34; func main() { fmt.Println(\u0026#34;Hello, World!\u0026#34;) } In this code, we: Declare a main package (a package is a way to group functions, and it\u0026rsquo;s made up of all the files in the same directory). Import the popular fmt package, which contains functions for formatting text, including printing to the console. This package is one of the standard library packages we got when we installed Go. Implement a main function to print a message to the console. A main function executes by default when we run the main package. Run the code go run hello.go Output: Hello, World! The go run command is one of many go commands we\u0026rsquo;ll use to get things done with Go. Use the go help command to get a list of the others:\ngovind@debian:~$ go help Go is a tool for managing Go source code. 
Usage: go \u0026lt;command\u0026gt; [arguments] The commands are: bug start a bug report build compile packages and dependencies clean remove object files and cached files doc show documentation for package or symbol env print Go environment information fix update packages to use new APIs fmt gofmt (reformat) package sources generate generate Go files by processing source get add dependencies to current module and install them install compile and install packages and dependencies list list packages or modules mod module maintenance work workspace maintenance run compile and run Go program telemetry manage telemetry data and settings test test packages tool run specified go tool version print Go version vet report likely mistakes in packages Use \u0026#34;go help \u0026lt;command\u0026gt;\u0026#34; for more information about a command. Packages and Imports Go does not support classes like OOP programming languages such as Java do; it uses a package system instead. Each package is a directory in your workspace, and each Go file must belong to some package. Hence, each file should start with the keyword package followed by the package name. A Go executable must contain package main.\nA package can be imported by using the import keyword followed by the list of packages inside parentheses.\nThe standard library comes preinstalled with Go, and contains the most essential and useful packages. \u0026ldquo;fmt\u0026rdquo; is used to export Println which prints to the console.\nGo does not allow unused imports.\npackage main import ( \u0026#34;fmt\u0026#34; \u0026#34;log\u0026#34; \u0026#34;time\u0026#34; ) Using an external package When we need our code to do something that might have been implemented by someone else, we can look for a package that has functions we can use in our code.\nLet\u0026rsquo;s make our printed message a little more interesting with a function from an external module.\nVisit pkg.go.dev and search for a \u0026ldquo;quote\u0026rdquo; package. 
Locate and click the rsc.io/quote package in search results (if we see rsc.io/quote/v4, ignore it for now). In the Documentation section, under Index, note the list of functions we can call from our code. We\u0026rsquo;ll use the Go function. At the top of this page, note that package quote is included in the rsc.io/quote module. We can use the pkg.go.dev site to find published modules whose packages have functions we can use in our own code. Packages are published in modules like rsc.io/quote where others can use them. Modules are improved with new versions over time, and we can upgrade our code to use the improved versions.\nCreate a file quote.go and add the following code\npackage main import ( \u0026#34;fmt\u0026#34; \u0026#34;rsc.io/quote\u0026#34; ) func main() { fmt.Println(quote.Go()) } Add new module requirements and sums.\nGo will add the quote module as a requirement, as well as a go.sum file for use in authenticating the module.\ngo mod tidy Run the code to see the message generated by the function we\u0026rsquo;re calling.\ngo run quote.go Output:\nDon\u0026#39;t communicate by sharing memory, share memory by communicating. Notice that our code calls the Go function, printing a clever message about communication.\nWhen we ran go mod tidy, it located and downloaded the rsc.io/quote module that contains the package we imported. By default, it downloaded the latest version.\nLanguage concepts 1. Variables Go’s basic primitive types are bool, string, int, uint, float, and complex. For the numeric types, a size can be specified as part of the type name, e.g. uint32 (float and complex always require one: float32/float64, complex64/complex128).
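As a quick illustration of sized types and zero values, here's a minimal sketch (the variable names are arbitrary):

```go
package main

import "fmt"

func main() {
	var small uint8 = 255    // 8-bit unsigned integer: holds 0 through 255
	var big int64 = 1 << 40  // 64-bit signed integer
	var ratio float64 = 3.14 // float always carries a size: float32 or float64
	var done bool            // declared without a value: gets the zero value, false
	fmt.Println(small, big, ratio, done)
	// Output: 255 1099511627776 3.14 false
}
```

When you don't need a specific width, plain int is the idiomatic choice; it matches the machine word size (usually 64 bits) on the target platform.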
A variable is declared by the var keyword followed by the variable name and the type.\nVariables can also be initialized with the shorthand notation := as Go can infer the type (note that := only works inside a function body).\nJust like imports, unused variables are not allowed.\npackage main var x int = 5 var y int = 6 sum := x + y Also, Go does not require semicolons to end a statement.\nAn important point to note is how Go scopes variables within a package: a variable is public if its first letter is capitalized, else private, and the same goes for functions.\npackage main var X = 5 // public var y = 6 // private func Add() { // public } func add() { // private } 2. Functions Functions are an essential part of Go, and of course, the above won’t work as execution has to happen in a function body. Functions are declared with the keyword func followed by the function name, arguments, and return type. A Go application must contain the main function, which is the entry point to the application. It does not take any arguments or return anything. The opening brace of the function must start on the same line as the function declaration and cannot move to a new line.\nFunction parameters are declared with their name followed by the type and separated by a comma. The return type must be provided if the function returns a value; as a shortcut, a named return variable can be declared to avoid declaring another variable inside the function. Here’s an example.\npackage main func sum(x int, y int) int { sum := x + y return sum } // OR func sum(x int, y int) (sum int) { sum = x + y return } 3. Arrays, Slices, and Maps Arrays can be declared by simply specifying the datatype next to brackets with an integer denoting the size of the array. Then, array elements can be assigned by their index; a more convenient way to initialize is to use the shorthand syntax along with the data in braces.\npackage main var arr [4]int arr[0] = 1 arr[1] = 2 // OR arr := [4]int{1,2,3,4} But there’s a problem here.
You cannot modify the length of the array; wouldn\u0026rsquo;t it be more convenient when you don’t know the size yet? That’s where Slices come in: slices are simply dynamic arrays. You can declare a slice just like an array, without specifying the size.\nSlices can be really useful in performing a lot of operations. The copy or append function can be used to manipulate the slice. Slices can also be concatenated using append with the spread operator (…). A slice can be sliced using its indices within the brackets. Below are some examples.\npackage main import \u0026#34;fmt\u0026#34; func main() { slice := []int{1,2,3,4,5} slice1 := slice[2:] slice2 := slice[:2] slice = append(slice, 4) slicecat := append(slice, slice...) fmt.Println(slice, slice1, slice2, slicecat) } The append function does not modify the slice you pass in but returns a new slice built from it (it may reuse the same backing array if there is spare capacity). Here’s the output.\n# output [1 2 3 4 5 4] [3 4 5] [1 2] [1 2 3 4 5 4 1 2 3 4 5 4] Maps are equivalent to a HashMap in Java or a Dictionary in Python. They store key-value pairs. A map can be created using the built-in make function, passing map with the datatype of the key in brackets and the value type next to it.\nMaps are simple to operate on: they can be assigned values by using the [ ] operator specifying the key and value, and a key can be removed by using the delete function.\npackage main import \u0026#34;fmt\u0026#34; func main() { elements := make(map[string]int) elements[\u0026#34;first\u0026#34;] = 1 elements[\u0026#34;second\u0026#34;] = 2 fmt.Println(elements) delete(elements, \u0026#34;first\u0026#34;) fmt.Println(elements) } Output:\nmap[first:1 second:2] map[second:2] 4. Loops Loops in Go exist in the simplest form: there is only one looping syntax, the for loop. The for loop can be written in multiple ways to meet your looping needs. The first syntax is a familiar one, starting with an index variable i followed by a condition and an increment.
The below example will print 1 to 5.\npackage main import \u0026#34;fmt\u0026#34; func main() { arr := []int{1,2,3,4,5} for i := 0; i \u0026lt; len(arr); i++ { fmt.Println(arr[i]) } } Oh! You badly miss the while loop? Don’t worry, Go has you covered: all you have to do is write the for loop with just a condition and a counter declared outside the loop, just how you would with a while loop.\npackage main import \u0026#34;fmt\u0026#34; func main() { arr := []int{1,2,3,4,5} i := 0 for i \u0026lt; len(arr) { fmt.Println(arr[i]) i++ } } The range clause provides an easy way to access the index as well as the value.\npackage main import \u0026#34;fmt\u0026#34; func main() { arr := []int{1,2,3,4,5} for i, v := range arr { fmt.Println(i, v) } } 5. Struct The struct keyword is used to define a shape for your data. Since Go does not support classes, data with a certain shape requirement can be stored in variables of that struct type. A struct is created using the keyword type, and its properties can be accessed with the . operator.\npackage main import \u0026#34;fmt\u0026#34; func main() { type Animal struct { Name string animalType string } giraffe := Animal{\u0026#34;Giraffe\u0026#34;, \u0026#34;Mammal\u0026#34;} fmt.Println(giraffe.Name, giraffe.animalType) } 6. nil, error, and multiple return values Go provides some smooth ways to handle error and nil values. error is a built-in interface type and nil is a predeclared identifier; together they can be used to perform validation before performing some operation. Go also supports returning multiple values from a function; this is done by specifying the types within parentheses in place of a single return type.\npackage main import \u0026#34;fmt\u0026#34; func main() { a, b := sum(5,10) fmt.Println(a, b) } func sum(x int, y int) (sum int, diff int) { sum = x + y diff = x - y return } An error or nil can be returned depending on the operation performed, using an if check.
Here’s an example showing how you can handle errors by checking the input of a square root function.\npackage main import ( \u0026#34;fmt\u0026#34; \u0026#34;math\u0026#34; \u0026#34;errors\u0026#34; ) func main() { result, err := sqrt(25) if err != nil { fmt.Println(err) } else { fmt.Println(result) } } func sqrt(x float64) (float64, error) { if x \u0026lt; 0 { return 0, errors.New(\u0026#34;x must be a non-negative number\u0026#34;) } return math.Sqrt(x), nil } 7. Pointers Pointers in Go are similar to pointers in other languages: you can refer to the memory address of a variable by prefixing it with an ampersand (\u0026amp;) symbol and dereference it using an asterisk (*). By default, Go passes arguments by value, not by reference; to pass a reference, prefix the type of the argument in the function signature with an asterisk (*). Here’s an example.\npackage main import \u0026#34;fmt\u0026#34; func main() { i := 5 increment(\u0026amp;i) fmt.Println(i) } func increment(i *int) { *i++ } Without the \u0026amp; it’d print 5, as a copy of the variable would have been passed; and once we have the reference, we need to dereference the memory to get the value by using * on the variable again.\nThat’s it! You should now be ready to write your first program in Go. Try practicing the code snippets, and if you feel like challenging yourself more, try writing a basic HTTP server using the net/http package.
This should give you enough practice to write some awesome packages or contribute to your favorite Go repository.\nWe\u0026rsquo;ll explore concurrency and other powerful features of Go in another blog—so stay tuned.\nThanks for sticking around and coding along!\nHere are some additional resources if you wish to dive deeper.\ngo.dev Golang Tutorial By Nana How To Golang Playlist by Anthony GG A Journey With Go Until next time happy coding, and go Go!\n","permalink":"https://9ovind.in/blogs/golang-getting-started/","summary":"\u003cp\u003eGo is a simple, fast, and concurrent programming language. Its simplicity in design makes it an amazing programming language to work with. Go is currently gaining a lot of popularity, and a lot of organizations now prefer to write their backend in Go.\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"go logo\" loading=\"lazy\" src=\"/blogs/golang-getting-started/go-logo.png\"\u003e\u003c/p\u003e\n\u003cp\u003eGo was designed at Google in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson to improve programming productivity in an era of multicore, networked machines and large codebases. The designers wanted to address criticisms of other languages in use at Google, but keep their useful characteristics:\u003c/p\u003e","title":"Getting started with Golang"},{"content":"Docker is just a tool in your toolbox which can help you in your development lifecycle and make you a better software developer.\nYou can still do everything without it, just the hard way.\nWhat to expect from this blog post? In this blog post, I will discuss a little bit about Virtualization, Containers and where they are used before diving deep into detailed instructions on using Docker.\nI will explain Docker images, running Docker containers, Docker networking, Docker volumes and how it all works.\nWhy Docker and containers? If you go back 20–30 years, you had hardware with an operating system (kernel and UI) installed on top of it.
To run an application, we had to compile the code and sort out all application dependencies. If we needed another application or more capacity to accommodate application workload hikes, we had to purchase new hardware and perform installation and configuration.\nVirtualization added an additional layer between the hardware and the operating system called a hypervisor. It allowed users to run multiple isolated applications on virtual machines, each with its own OS.\nWhile virtualization improved resource utilization by allowing multiple virtual machines (VMs) to run on a single physical server, it still had some inefficiencies. Each VM required its own full operating system, consuming significant resources (CPU, memory, and storage). Boot times were slow, and managing multiple VMs became complex.\nContainers addressed these challenges by introducing a lightweight and efficient way to package and run applications. Unlike VMs, containers share the same operating system kernel while keeping applications isolated from one another. This eliminates the overhead of running multiple OS instances and results in faster startup times, better performance, and improved scalability.\nContainers encapsulate everything an application needs—code, runtime, libraries, and dependencies—ensuring that it runs consistently across different environments, whether on a developer’s laptop, a test server, or in production.\nFigure 2: Hypervisor and Containers\nSo what is Docker? It is a container technology that helps bundle software and its dependencies together so they run consistently across different environments.\nIn simple words, Docker is a way to package software so it can run on any machine (Windows, macOS, and Linux).\nDocker revolutionized the way we build software by making microservice-based application development possible.\nWhere is Docker Used Today? Since its launch, Docker has become an industry-standard technology for containerization.
It is widely used across various domains, including:\nSoftware Development \u0026amp; DevOps: Enables faster development cycles, CI/CD pipelines, and efficient testing environments.\nCloud Computing: Powers containerized applications in cloud platforms like AWS, Google Cloud, and Azure.\nMicroservices Architecture: Helps developers break applications into smaller, manageable services that can scale independently.\nEdge Computing \u0026amp; IoT: Facilitates lightweight deployments on edge devices and embedded systems.\nBig Data \u0026amp; AI/ML: Used for containerizing machine learning models and data pipelines for scalability.\nEnterprise Applications: Modernizes legacy applications by running them in isolated, portable containers.\nFigure 3: Netflix Microservice Architecture\nDocker at Scale – Real World Use Cases Google – Running Billions of Containers Per Week Google runs all its services including Search, Gmail, YouTube, and Maps on containers. They deploy over 2 billion containers per week, making Google one of the largest container users globally.\nTheir internal system, Borg, inspired Kubernetes, which now runs Docker containers worldwide.\nNetflix – Streaming to millions of Users with Docker Netflix has a microservices architecture where thousands of services run in containers. Using Docker, they can:\nDeploy updates thousands of times per day with zero downtime.\nScale instantly during peak traffic (e.g., Stranger Things premieres).\nEnsure a seamless experience for 250M+ users worldwide.\nPayPal – Cutting Deployment Time by 90% PayPal migrated from VMs to Docker containers and reduced software deployment time from hours to minutes. By using Docker, PayPal improved:\nResource utilization, saving on infrastructure costs.\nDeveloper agility, allowing teams to ship features 3x faster.\nSpaceX – Docker in Rocket Launch Simulations SpaceX uses Docker to simulate rocket launches and run AI-powered navigation systems. 
Containers help:\nTest rocket software in isolated, reproducible environments.\nEnsure mission-critical software runs identically across all systems.\nScale computing power as needed for complex calculations.\nThese examples show that Docker isn’t just a tool; it’s a critical infrastructure component powering the world’s largest applications.\nConcepts in Docker Image\nA Docker Image is a blueprint for a container. It includes: The application code All dependencies (libraries, runtime, configurations) Instructions to run the app (like a Dockerfile) Example: An image can be Ubuntu, Nginx, or a custom Node.js app.\nContainer\nA container is a running instance of an image. It is lightweight, isolated, and can be created, started, stopped, or deleted.\nThink of it like this:\nImage = Recipe Container = Cooked dish Dockerfile\nA Dockerfile is a text file with a set of instructions to create a Docker image. It defines:\nBase image (e.g., FROM python:3.10) Dependencies (e.g., RUN apt-get install) Application code (e.g., COPY . /app) Start command (e.g., CMD [\u0026ldquo;python\u0026rdquo;, \u0026ldquo;app.py\u0026rdquo;]) This ensures consistent builds across different environments.\nDocker Hub\nDocker Hub is a public registry where you can find and share Docker images. Think of it as GitHub for Docker images. Example: You can pull a ready-made Nginx image by running: docker pull nginx Volume\nA Docker Volume is a persistent storage mechanism for containers. It ensures that data remains even if the container stops or restarts.
Example: Running a MySQL database container with a volume: docker run -d -v mysql-data:/var/lib/mysql mysql:latest Network\nDocker provides different networking options for containers to communicate with each other and the outside world: Bridge (default, for isolated containers) Host (shares the host’s network) Overlay (for multi-host networking in Swarm) Example: Running a container on a specific network: docker network create my_network docker run -d --network=my_network nginx Docker Compose\nDocker Compose allows you to define multi-container applications in a single docker-compose.yml file. It simplifies the deployment of complex applications with multiple services. Docker installation on Linux Installation curl -fsSL https://get.docker.com | sh Allow running docker without sudo sudo groupadd docker sudo usermod -aG docker $USER newgrp docker Run a hello-world image docker run hello-world Running Node.js in Docker from the terminal Create an index.js file with the following code console.log(\u0026#34;Hello world\u0026#34;) Run the node image docker run -it --rm -v ./:/app -w /app node:alpine3.21 sh run: create and run a container from a Docker image -it: starts an interactive shell --rm: remove the container after the user exits the shell -v: mounts a volume in the container for sharing host files inside it -w: sets the current working directory Check node and npm version node -v npm -v Run the JavaScript file node index.js Running a MySQL and phpMyAdmin setup in Docker using Docker Compose Create a docker-compose.yaml file and paste the below content services: mysql: image: mysql:9.2 container_name: mysql restart: always environment: MYSQL_ROOT_PASSWORD: RootPassword MYSQL_DATABASE: my_database MYSQL_USER: govind MYSQL_PASSWORD: GovindPassword ports: - \u0026#34;3306:3306\u0026#34; volumes: - mysql_data:/var/lib/mysql phpmyadmin: image: phpmyadmin:5.2 container_name: phpmyadmin restart: always environment: PMA_HOST: mysql ports: - \u0026#34;8000:80\u0026#34; depends_on: -
mysql volumes: mysql_data: Run the below command docker compose up -d Visit localhost:8000 in your browser How Docker Works Internally Docker Architecture\nDocker follows a client-server architecture with three main components:\nDocker Client ( CLI or API that sends commands (docker run, docker build) to the Docker daemon. ) Docker Daemon ( A background service (dockerd) that manages containers, images, volumes, and networks. ) Docker Registry ( A repository like Docker Hub where images are stored and pulled from. ) Docker Image \u0026amp; Container\nDocker Image\nA read-only template containing everything needed to run an application (OS, dependencies, app code). Built using a Dockerfile. Can be stored in a registry and shared. Docker Container\nA running instance of an image. Uses UnionFS (OverlayFS, AUFS, etc.) for efficient layered storage. Isolated from the host using namespaces and cgroups. How Docker Runs a Container\nWhen you run docker run node, Docker performs these steps:\nPulls the Image\nChecks local storage; if not found, pulls from Docker Hub.\nCreates a Container\nAssigns a unique container ID. Sets up a filesystem using UnionFS (copy-on-write layers). Allocates namespaces for isolation (PID, NET, MNT, IPC, UTS). Applies cgroups to limit CPU \u0026amp; memory usage. Creates a virtual network interface (bridge mode by default). Executes the Process\nRuns the specified command (e.g., node app.js).\nManages Lifecycle\nWhen stopped, the container remains. When removed (docker rm), the container’s writable layer is deleted. 
Namespaces are a feature of the Linux kernel that partition kernel resources such that one set of processes sees one set of resources, while another set of processes sees a different set of resources.\nIn Linux, cgroups (control groups) are a kernel feature that allows administrators to limit, account for, and isolate the resource usage (CPU, memory, disk I/O, network bandwidth) of a collection of processes.\nAlternatives Podman Resources Public container registry hub.docker.com A simple terminal UI for both docker and docker-compose Lazydocker Docker Playlist YouTube Channels TechWorld with Nana KodeKloud That DevOps Guy Recommendation If you really want to understand in depth how Docker containers work, then building one is the best way to learn.\nLiz Rice, a software engineer and a member of the CNCF\u0026rsquo;s Governing Board, shows Building a container from scratch in Go in a 40-minute video.\n","permalink":"https://9ovind.in/blogs/virtualization_containers_and_role_of_docker_in_it/","summary":"\u003cp\u003eDocker is just a tool in your toolbox which can help you in your development lifecycle and make you a better software developer.\u003c/p\u003e\n\u003cp\u003eYou can still do everything without it but in the hard way.\u003c/p\u003e\n\u003ch3 id=\"what-to-expect-from-this-blog-post\"\u003eWhat to expect from this blog post?\u003c/h3\u003e\n\u003cp\u003eIn this blog post, I will discuss a little bit about Virtualization, Containers and where it used before diving deep into detailed instructions on using Docker.\u003c/p\u003e\n\u003cp\u003eI will explain about Docker images, running Docker containers, Docker networking, Docker volumes and how it all works.\u003c/p\u003e","title":"Virtualization , Containers and role of docker in it"},{"content":"Hi there! 👋\nI\u0026rsquo;m Govind Yadav, a software engineer with over a year of experience, focused on Cloud \u0026amp; DevOps.
I enjoy building the infrastructure that keeps software running reliably — CI/CD pipelines, containerised deployments, Kubernetes clusters, and everything in between.\nWhat I Do My day-to-day sits at the intersection of development and operations:\nInfrastructure \u0026amp; Cloud — provisioning and managing cloud environments on AWS and Hetzner, working with Linux servers, networking, and storage. CI/CD \u0026amp; GitOps — designing automated pipelines with GitLab CI, GitHub Actions, ArgoCD, and Kubernetes to ship software safely and fast. Containerisation — Docker, Docker Compose, and Kubernetes (K3S) for packaging and orchestrating applications at scale. Backend Engineering — building server-side systems in PHP (Laravel), Go, and Node.js that the infrastructure actually runs. Skills \u0026amp; Technologies Languages: PHP, Go, JavaScript, Bash Frameworks: Laravel, Node.js, React Databases: MySQL, PostgreSQL, MongoDB DevOps: Docker, Kubernetes (K3S), ArgoCD, Ansible, Jenkins, Nginx CI/CD: GitLab CI, GitHub Actions, GitOps workflows Cloud: AWS (EC2, S3), Hetzner OS: Linux — Debian, Ubuntu, Arch, Fedora Why I Write I write about the things I work through — DNS, TLS, web servers, containers, networking. If something took me time to figure out, it\u0026rsquo;s probably worth a post so someone else doesn\u0026rsquo;t have to.\nLet\u0026rsquo;s Connect Open to cloud/DevOps roles, infrastructure projects, and interesting engineering conversations.\nEmail: govindsvyadav@gmail.com LinkedIn: linkedin.com/in/9ovindyadav GitHub: github.com/9ovindyadav Telegram: t.me/s/govindsvyadav ","permalink":"https://9ovind.in/about/","summary":"\u003cp\u003eHi there! 👋\u003c/p\u003e\n\u003cp\u003eI\u0026rsquo;m \u003cstrong\u003eGovind Yadav\u003c/strong\u003e, a software engineer with \u003cspan id=\"exp-duration\"\u003eover a year\u003c/span\u003e of experience, focused on \u003cstrong\u003eCloud \u0026amp; DevOps\u003c/strong\u003e. 
I enjoy building the infrastructure that keeps software running reliably — CI/CD pipelines, containerised deployments, Kubernetes clusters, and everything in between.\u003c/p\u003e\n\u003cscript\u003e\n(function () {\n  const start = new Date('2023-10-01');\n  const now = new Date();\n  let years = now.getFullYear() - start.getFullYear();\n  let months = now.getMonth() - start.getMonth();\n  if (months \u003c 0) { years--; months += 12; }\n  const parts = [];\n  if (years \u003e 0) parts.push(years + (years === 1 ? ' year' : ' years'));\n  if (months \u003e 0) parts.push(months + (months === 1 ? ' month' : ' months'));\n  const el = document.getElementById('exp-duration');\n  if (el \u0026\u0026 parts.length) el.textContent = parts.join(' ');\n})();\n\u003c/script\u003e\n\u003ch2 id=\"what-i-do\"\u003eWhat I Do\u003c/h2\u003e\n\u003cp\u003eMy day-to-day sits at the intersection of development and operations:\u003c/p\u003e","title":"About me"},{"content":"When it comes to network communication, two key protocols come into play: TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). Both serve different purposes, and understanding their differences is crucial for building efficient communication systems.\nTCP: Reliable, Connection-Oriented Communication TCP is all about reliability. It establishes a connection between client and server, ensuring every packet of data arrives safely and in order. This makes it ideal for use cases like:\nWeb Browsing: Ensures data (like HTML, CSS, images) is loaded correctly. File Transfers: Guarantees the complete and accurate delivery of files. Emails: Makes sure all parts of the email reach the recipient. UDP: Fast, Connection-Less Communication UDP, on the other hand, is all about speed. It sends data without establishing a connection, making it faster but less reliable than TCP. 
This is perfect for scenarios where real-time data is more important than complete accuracy:\nVideo Streaming: Minimal delay is critical, even if some data is lost. Online Gaming: Quick updates matter more than occasional packet loss. Voice-over-IP (VoIP): Prioritizes continuous voice flow over packet delivery accuracy. Project setup Requirements: Install Node.js (v20.17.0 recommended). Clone the Project: git clone https://github.com/9ovindyadav/l4_servers.git \u0026amp;\u0026amp; cd l4_servers TCP Server About This is a connection-oriented server, ensuring reliable communication between the client and server. Run the TCP Server: node tcp_server.js Connect to the TCP Server: From another terminal or system, use: telnet localhost 8000 After the connection is established: Write a message in the terminal. The message will be displayed in the server terminal. Note: If the server crashes, the connection will be lost. UDP Server About\nThis is a connection-less server, offering fast, but potentially unreliable communication. Run the UDP Server:\nnode udp_server.js Connect to the UDP Server: From another terminal or system, use: echo \u0026#34;Hii\u0026#34; | nc -w1 -u localhost 8001 After sending a message: The message will appear in the server terminal. Learning Resource For a deeper dive into building these servers with Node.js, I found this tutorial really helpful:\nYouTube video: Building TCP and UDP servers with NodeJS Blog Post : Understanding TCP and UDP ","permalink":"https://9ovind.in/blogs/tcp_udp/","summary":"\u003cp\u003eWhen it comes to network communication, two key protocols come into play: TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). 
Both serve different purposes, and understanding their differences is crucial for building efficient communication systems.\u003c/p\u003e\n\u003ch3 id=\"tcp-reliable-connection-oriented-communication\"\u003eTCP: Reliable, Connection-Oriented Communication\u003c/h3\u003e\n\u003cp\u003eTCP is all about reliability. It establishes a connection between client and server, ensuring every packet of data arrives safely and in order. This makes it ideal for use cases like:\u003c/p\u003e","title":"Understanding TCP and UDP: Key Networking Protocols"},{"content":"Project Features Get Data as a JSON response from Google sheets Generate new PDFs from response data such as invoices or anything you want with your personal template Send Email to each user with their generated PDF via SMTP Server Requirements NodeJs must be installed Google sheet setup as described below in the readme.md to get data Project setup Clone the repository from GitHub via git on your laptop git clone https://github.com/9ovindyadav/invoiceGenerator.git Rename the .env.example file to .env and enter the credentials Before running the script make sure you have entered all the credentials in .env and the Google sheet is all set To run the app enter the below command node app.js Google sheet as an API Getting JSON response First row as a key and others as a value Follow the steps below to use Google Sheets as an API steps Create a new Google worksheet with your Google account Rename Sheet1 as you want Fill the data in the sheet with the first row as headings and the rest as values In Extension open App Script Name the App Script with the same name as your worksheet Copy-paste the below code in the Code.gs file function doGet(req){ var sheetName = \u0026#39;Your sheet name\u0026#39;; var doc = SpreadsheetApp.getActiveSpreadsheet(); var sheet = doc.getSheetByName(sheetName); var values = sheet.getDataRange().getValues(); var output = []; var keys = values[0]; var data = values.slice(1); data.forEach((item) =\u0026gt;
{ var row = {}; keys.forEach((key, index) =\u0026gt; { row[key] = item[index]; }) output.push(row); }) console.log(output); return ContentService.createTextOutput(JSON.stringify({data: output})).setMimeType(ContentService.MimeType.JSON); } Deploy the following code as a New Deployment as a web app Allow Who can access as Anyone Copy the script link and run it in the browser's address bar Deployment page Script link page Example Google sheet Contributing Contributions are welcome if you\u0026rsquo;d like to contribute to the project.\nGithub Repo\n","permalink":"https://9ovind.in/projects/invoice_generator/","summary":"\u003ch2 id=\"project-features\"\u003eProject Features\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003eGet Data as a JSON response from Google sheets\u003c/li\u003e\n\u003cli\u003eGenerate new PDF\u0026rsquo;s from response data such as invoices or anything you want with your personal template\u003c/li\u003e\n\u003cli\u003eSend Email to each user with there generated PDF via SMTP Server\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"requirements\"\u003eRequirements\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ccode\u003eNodeJs\u003c/code\u003e must be installed\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003eGoogle sheet\u003c/code\u003e setup as described below in the \u003ccode\u003ereadme.md\u003c/code\u003e to get data\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"project-setup\"\u003eProject setup\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003eClone the repository from github via git on your laptop\u003c/li\u003e\n\u003c/ol\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003egit clone https://github.com/9ovindyadav/invoiceGenerator.git\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003col start=\"2\"\u003e\n\u003cli\u003eRename the
\u003ccode\u003e.env.example\u003c/code\u003e file to \u003ccode\u003e.env\u003c/code\u003e and enter the credentials\u003c/li\u003e\n\u003cli\u003eBefore running the script make sure you have entered all the credentials in \u003ccode\u003e.env\u003c/code\u003e and \u003ccode\u003eGoogle sheet\u003c/code\u003e is all set\u003c/li\u003e\n\u003cli\u003eTo run the app enter the below command\u003c/li\u003e\n\u003c/ol\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003enode app.js\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003ch2 id=\"google-sheet-as-a-api\"\u003eGoogle sheet as a API\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eGetting JSON response\u003c/li\u003e\n\u003cli\u003eFirst row as a key and others as a value\u003c/li\u003e\n\u003cli\u003eFollow below steps to make google sheets as a API\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3 id=\"steps\"\u003esteps\u003c/h3\u003e\n\u003col\u003e\n\u003cli\u003eCreate a New Google worksheet with your google accoount\u003c/li\u003e\n\u003cli\u003eRename Sheet1 as you want\u003c/li\u003e\n\u003cli\u003eFill the data in the sheet as first row as a heading and others as a values\u003c/li\u003e\n\u003cli\u003eIn \u003ccode\u003eExtension\u003c/code\u003e open \u003ccode\u003eApp Script\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003eName the \u003ccode\u003eApp Script\u003c/code\u003e as the same name as your \u003ccode\u003eworksheet\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003eCopy paste the below code in \u003ccode\u003eCode.gs\u003c/code\u003e file\u003c/li\u003e\n\u003c/ol\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-javascript\" data-lang=\"javascript\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan 
class=\"kd\"\u003efunction\u003c/span\u003e \u003cspan class=\"nx\"\u003edoGet\u003c/span\u003e\u003cspan class=\"p\"\u003e(\u003c/span\u003e\u003cspan class=\"nx\"\u003ereq\u003c/span\u003e\u003cspan class=\"p\"\u003e){\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"kd\"\u003evar\u003c/span\u003e \u003cspan class=\"nx\"\u003esheetName\u003c/span\u003e \u003cspan class=\"o\"\u003e=\u003c/span\u003e \u003cspan class=\"s1\"\u003e\u0026#39;Your sheet name\u0026#39;\u003c/span\u003e\u003cspan class=\"p\"\u003e;\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"kd\"\u003evar\u003c/span\u003e \u003cspan class=\"nx\"\u003edoc\u003c/span\u003e \u003cspan class=\"o\"\u003e=\u003c/span\u003e \u003cspan class=\"nx\"\u003eSpreadsheetApp\u003c/span\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan class=\"nx\"\u003egetActiveSpreadsheet\u003c/span\u003e\u003cspan class=\"p\"\u003e();\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"kd\"\u003evar\u003c/span\u003e \u003cspan class=\"nx\"\u003esheet\u003c/span\u003e \u003cspan class=\"o\"\u003e=\u003c/span\u003e \u003cspan class=\"nx\"\u003edoc\u003c/span\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan class=\"nx\"\u003egetSheetByName\u003c/span\u003e\u003cspan class=\"p\"\u003e(\u003c/span\u003e\u003cspan class=\"nx\"\u003esheetName\u003c/span\u003e\u003cspan class=\"p\"\u003e);\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"kd\"\u003evar\u003c/span\u003e \u003cspan class=\"nx\"\u003evalues\u003c/span\u003e \u003cspan class=\"o\"\u003e=\u003c/span\u003e \u003cspan class=\"nx\"\u003esheet\u003c/span\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan 
class=\"nx\"\u003egetDataRange\u003c/span\u003e\u003cspan class=\"p\"\u003e().\u003c/span\u003e\u003cspan class=\"nx\"\u003egetValues\u003c/span\u003e\u003cspan class=\"p\"\u003e();\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"kd\"\u003evar\u003c/span\u003e \u003cspan class=\"nx\"\u003eoutput\u003c/span\u003e \u003cspan class=\"o\"\u003e=\u003c/span\u003e \u003cspan class=\"p\"\u003e[];\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"kd\"\u003evar\u003c/span\u003e \u003cspan class=\"nx\"\u003ekeys\u003c/span\u003e \u003cspan class=\"o\"\u003e=\u003c/span\u003e \u003cspan class=\"nx\"\u003evalues\u003c/span\u003e\u003cspan class=\"p\"\u003e[\u003c/span\u003e\u003cspan class=\"mi\"\u003e0\u003c/span\u003e\u003cspan class=\"p\"\u003e];\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"kd\"\u003evar\u003c/span\u003e \u003cspan class=\"nx\"\u003edata\u003c/span\u003e \u003cspan class=\"o\"\u003e=\u003c/span\u003e \u003cspan class=\"nx\"\u003evalues\u003c/span\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan class=\"nx\"\u003eslice\u003c/span\u003e\u003cspan class=\"p\"\u003e(\u003c/span\u003e\u003cspan class=\"mi\"\u003e1\u003c/span\u003e\u003cspan class=\"p\"\u003e);\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"nx\"\u003edata\u003c/span\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan class=\"nx\"\u003eforEach\u003c/span\u003e\u003cspan class=\"p\"\u003e((\u003c/span\u003e\u003cspan 
class=\"nx\"\u003eitem\u003c/span\u003e\u003cspan class=\"p\"\u003e)\u003c/span\u003e \u003cspan class=\"p\"\u003e=\u0026gt;\u003c/span\u003e \u003cspan class=\"p\"\u003e{\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    \u003cspan class=\"kd\"\u003evar\u003c/span\u003e \u003cspan class=\"nx\"\u003erow\u003c/span\u003e \u003cspan class=\"o\"\u003e=\u003c/span\u003e \u003cspan class=\"p\"\u003e{};\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    \u003cspan class=\"nx\"\u003ekeys\u003c/span\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan class=\"nx\"\u003eforEach\u003c/span\u003e\u003cspan class=\"p\"\u003e((\u003c/span\u003e\u003cspan class=\"nx\"\u003ekey\u003c/span\u003e\u003cspan class=\"p\"\u003e,\u003c/span\u003e \u003cspan class=\"nx\"\u003eindex\u003c/span\u003e\u003cspan class=\"p\"\u003e)\u003c/span\u003e \u003cspan class=\"p\"\u003e=\u0026gt;\u003c/span\u003e \u003cspan class=\"p\"\u003e{\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e      \u003cspan class=\"nx\"\u003erow\u003c/span\u003e\u003cspan class=\"p\"\u003e[\u003c/span\u003e\u003cspan class=\"nx\"\u003ekey\u003c/span\u003e\u003cspan class=\"p\"\u003e]\u003c/span\u003e \u003cspan class=\"o\"\u003e=\u003c/span\u003e \u003cspan class=\"nx\"\u003eitem\u003c/span\u003e\u003cspan class=\"p\"\u003e[\u003c/span\u003e\u003cspan class=\"nx\"\u003eindex\u003c/span\u003e\u003cspan class=\"p\"\u003e];\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    \u003cspan class=\"p\"\u003e})\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e    \u003cspan class=\"nx\"\u003eoutput\u003c/span\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan 
class=\"nx\"\u003epush\u003c/span\u003e\u003cspan class=\"p\"\u003e(\u003c/span\u003e\u003cspan class=\"nx\"\u003erow\u003c/span\u003e\u003cspan class=\"p\"\u003e);\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"p\"\u003e})\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"nx\"\u003econsole\u003c/span\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan class=\"nx\"\u003elog\u003c/span\u003e\u003cspan class=\"p\"\u003e(\u003c/span\u003e\u003cspan class=\"nx\"\u003eoutput\u003c/span\u003e\u003cspan class=\"p\"\u003e);\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e  \u003cspan class=\"k\"\u003ereturn\u003c/span\u003e \u003cspan class=\"nx\"\u003eContentService\u003c/span\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan class=\"nx\"\u003ecreateTextOutput\u003c/span\u003e\u003cspan class=\"p\"\u003e(\u003c/span\u003e\u003cspan class=\"nx\"\u003eJSON\u003c/span\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan class=\"nx\"\u003estringify\u003c/span\u003e\u003cspan class=\"p\"\u003e({\u003c/span\u003e\u003cspan class=\"nx\"\u003edata\u003c/span\u003e\u003cspan class=\"o\"\u003e:\u003c/span\u003e \u003cspan class=\"nx\"\u003eoutput\u003c/span\u003e\u003cspan class=\"p\"\u003e})).\u003c/span\u003e\u003cspan class=\"nx\"\u003esetMimeType\u003c/span\u003e\u003cspan class=\"p\"\u003e(\u003c/span\u003e\u003cspan class=\"nx\"\u003eContentService\u003c/span\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan class=\"nx\"\u003eMimeType\u003c/span\u003e\u003cspan class=\"p\"\u003e.\u003c/span\u003e\u003cspan class=\"nx\"\u003eJSON\u003c/span\u003e\u003cspan class=\"p\"\u003e);\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan 
class=\"p\"\u003e}\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003col start=\"7\"\u003e\n\u003cli\u003eDeploy the following code as a \u003ccode\u003eNew Deployment\u003c/code\u003e as a \u003ccode\u003eweb app\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003eAllow \u003ccode\u003eWho can access\u003c/code\u003e as \u003ccode\u003eAnyone\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003eCopy the \u003ccode\u003escript link\u003c/code\u003e and \u003ccode\u003erun\u003c/code\u003e it in the \u003ccode\u003ebrowser's address bar\u003c/code\u003e\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch4 id=\"deployment-page\"\u003eDeployment page\u003c/h4\u003e\n\u003cp\u003e\u003cimg alt=\"Deployment page\" loading=\"lazy\" src=\"/projects/invoice_generator/deployment.png\"\u003e\u003c/p\u003e","title":"Invoice Generator"},{"content":"The Food order system for Restaurant is a web-based application designed for staff usage in a restaurant. It streamlines order processing, menu management, and administrative tasks for staff working at the counter, in the kitchen, and for administrators.\nTech Stack PHP : Server side scripting language for backend development MySQL : RDBMS for managing and storing data Nginx : Web server for serving the PHP application jQuery : JavaScript library for client side scripting Docker : Containerization tech for easy deployment and scaling Deployment The application is deployed and accessible at the URL below\nRestaurant Food order system\nTest Users Counter staff Email - counter@gmail.com Password - admin Kitchen staff Email - kitchen@gmail.com Password - admin Admin Email - admin@gmail.com Password - admin Youtube video Setup Prerequisites Docker Docker compose Installation Clone the repository git clone https://github.com/9ovindyadav/food-order-php Navigate to the project directory cd food-order-php Create a .env file cp .env.example .env Update the .env file with your MySQL config\nNavigate to the docker folder cd 
docker Build and start the docker container docker compose up -d --build Access the application Open your web browser and navigate to http://localhost:8000 Usage Counter staff Take and create customer orders Manage payment status View orders Kitchen staff View incoming orders Update order status to preparing or prepared Update menu status as available or not Admin Log in with admin credentials Manage menus (create, update, delete) Manage users (create, update, delete) View orders View statistics of overall data Contributing Contributions are welcome! If you\u0026rsquo;d like to contribute to the project, check out the repository below.\nGithub Repo\nSupport If you encounter any issue or have questions, please open an issue in the Issues section.\n","permalink":"https://9ovind.in/projects/food_order_php/","summary":"\u003cp\u003eThe Food order system for Restaurant is a web-based application designed for staff usage in a restaurant. It streamlines order processing, menu management, and administrative tasks for staff working at the counter, in the kitchen, and for administrators.\u003c/p\u003e\n\u003ch2 id=\"tech-stack\"\u003eTech Stack\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003ePHP : Server side scripting language for backend development\u003c/li\u003e\n\u003cli\u003eMySQL : RDBMS for managing and storing data\u003c/li\u003e\n\u003cli\u003eNginx : Web server for serving the PHP application\u003c/li\u003e\n\u003cli\u003ejQuery : JavaScript library for client side scripting\u003c/li\u003e\n\u003cli\u003eDocker : Containerization tech for easy deployment and scaling\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"deployment\"\u003eDeployment\u003c/h2\u003e\n\u003cp\u003eThe application is deployed and accessible at the URL below\u003c/p\u003e","title":"Food order system for Restaurant"},{"content":"Time at its simplest is just a series of increasing seconds.\nHowever, keeping track of time across the world is a much more difficult practice. 
Let’s look at the basics of how time works, and then we will see how we manage it in computers.\nUnderstanding UTC Coordinated Universal Time, or UTC, is the primary time standard by which the world regulates clocks and time. UTC isn’t an actual time zone; it is a time standard. It doesn’t change for Daylight Saving Time, and all other time zones are measured relative to UTC.\nTime zones around the world are expressed using positive or negative offsets from UTC, ranging from UTC-12:00 to UTC+14:00.\nIndia and Sri Lanka use Indian Standard Time (IST), which is offset UTC+5:30, i.e. 5 hours 30 minutes ahead of UTC.\nHow Computers record time In computing, an epoch is a fixed date and time used as a reference from which a computer measures system time.\nIt’s an arbitrary date with no intrinsic meaning.\nFor Unix systems it is 01 Jan 1970 00:00:00 UTC.\nFor Windows it is 01 Jan 1601 00:00:00 UTC.\nThe Unix time 0 is exactly midnight UTC on 1 January 1970, with Unix time incrementing by 1 for every second after this.\nEach day in this kind of system is exactly 24 x 60 x 60 = 86,400 seconds long.\nToday, as I am writing this article, it is 19 Nov 2023 3:53 PM and the Unix time is 1700389351 seconds.\nSo for computers, time is nothing but an incrementing integer value.\nFor precise time measurement, computers use other systems like PTSS, designed by engineers and synced with atomic clocks.\nA question then comes to mind: how do we read those big integer values in a human-readable format? That’s where programming languages help us: we programmers format these numbers into human-readable dates so that we do not have to count how many seconds have passed. It’s an abstraction over the complex technology used to measure time.\nOn Unix-like systems, the computer keeps its clock in UTC and applies an offset based on your time zone. 
For example, if the UTC timestamp is 1,700,389,351, then for Indian users the software adds 5.5 x 60 x 60 = 19,800 seconds to the UTC time to get Indian Standard Time.\nIn Windows, the system clock is stored as your local time, i.e. for Indian users it is already at UTC+5:30.\nLargest date in computer On a 32-bit operating system, Unix time is stored as a signed integer, so the largest value is 2^31 - 1, which is 2,147,483,647.\nSo the largest date possible is 19 Jan 2038; after that the value will overflow, and 32-bit systems will have to be recalibrated (the Year 2038 problem).\nOn a 64-bit operating system, the largest signed value is 2^63 - 1, which is 9,223,372,036,854,775,807.\nSo the overflow is roughly 292 billion years away, which is really, really far away, so we don\u0026rsquo;t have to worry about 64-bit systems.\nCurrent unix time 32-bit largest signed integer 64-bit largest signed integer 2^31 - 1 2^63 - 1 1,700,389,351 2,147,483,647 9,223,372,036,854,775,807 19 Nov 2023 19 Jan 2038 ~292 billion years away Network time protocol ( NTP ) Network Time Protocol (NTP) is an internet protocol used to synchronize computer clocks to a time source over a network.\nHow does NTP work?\nThe NTP client initiates a time-request exchange with the NTP server.\nThe client is then able to calculate the link delay and its local offset and adjust its local clock to match the clock at the server\u0026rsquo;s computer.\nAs a rule, six exchanges over a period of about five to 10 minutes are required to initially set the clock.\nOnce synchronized, the client updates the clock about once every 10 minutes, usually requiring only a single message exchange, in addition to client-server synchronization. This transaction occurs via User Datagram Protocol (UDP) on port 123.\nThere are thousands of NTP servers around the world. They have access to highly precise atomic clocks and Global Positioning System clocks.\nWhat are stratum levels?\nDegrees of separation from the UTC source are defined as strata. The various strata include the following:\nStratum 0. 
A reference clock receives true time from a dedicated transmitter or satellite navigation system. It is categorized as stratum 0.\nStratum 1. A device is directly linked to the reference clock.\nStratum 2. A device receives its time from a stratum 1 computer.\nStratum 3. A device receives its time from a stratum 2 computer.\nWorking with time in PHP After all this learning about time, we know that computers store dates and times as integer timestamps because an integer is easier to manipulate.\nFor example, to add one day to a timestamp, a program simply adds the corresponding number of seconds to the timestamp.\nPHP provides some helpful functions that manipulate timestamps effectively.\nGetting the current time \u0026lt;?php echo time(); // 1700389351 The return value is a big integer that represents the number of seconds since the Epoch.\nTo make the time human-readable, you use the date() function. For example:\n\u0026lt;?php $current_time = time(); echo date(\u0026#39;Y-m-d g:ia\u0026#39;, $current_time); // 2023-11-19 5:47am The date() function has two parameters.\nThe first parameter specifies the date and time format.\nThe second parameter is an integer that specifies the timestamp.\nSince the time() function returns a timestamp, we can add seconds to it.\nAdding and subtracting from timestamp The following example shows how to add a week to the current time:\n\u0026lt;?php $current_time = time(); // 7 days later $one_week_later = $current_time + 7 * 24 * 60 * 60; echo date(\u0026#39;Y-m-d g:ia\u0026#39;, $one_week_later); Timezone The timestamp from time() is always counted in UTC seconds; by default, the date() function formats it using the timezone specified in the PHP configuration file (php.ini).\nTo get the current timezone, you can use the date_default_timezone_get() function:\n\u0026lt;?php echo(date_default_timezone_get()); To set a specific timezone, you use the date_default_timezone_set() function. 
It’s recommended that you use the UTC timezone.\nThe following shows how to use the date_default_timezone_set() function to set the current timezone to the UTC timezone:\n\u0026lt;?php date_default_timezone_set(\u0026#39;UTC\u0026#39;); Making a unix timestamp To make a Unix timestamp, we use the mktime() function:\n\u0026lt;?php mktime( int $hour, int|null $minute = null, int|null $second = null, int|null $month = null, int|null $day = null, int|null $year = null ): int|false The mktime() function returns a Unix timestamp based on its arguments. If you omit an argument, the mktime() function will use the current value according to the local date and time instead.\nThe following example shows how to use the mktime() function to show that Nov 19, 2023, is on a Sunday:\n\u0026lt;?php echo \u0026#39;Nov 19, 2023 is on a \u0026#39; . date(\u0026#39;l\u0026#39;, mktime(0, 0, 0, 11, 19, 2023)); Conclusion Remember, time at its simplest is just a series of increasing seconds for a computer geek.\n","permalink":"https://9ovind.in/blogs/unixtime/","summary":"\u003cp\u003eTime at its simplest is just a series of increasing seconds.\u003c/p\u003e\n\u003cp\u003eHowever, keeping track of time across the world is a much more difficult practice. Let’s look at the basics of how time works and then we will see how we manage it in computers.\u003c/p\u003e\n\u003ch3 id=\"understanding-utc\"\u003eUnderstanding UTC\u003c/h3\u003e\n\u003cp\u003eCoordinated Universal Time, or UTC, is the primary time standard by which the world regulates clocks and time. UTC isn’t an actual time zone; it is a time standard. It doesn’t change for Daylight Saving Time, and all other time zones are measured relative to UTC.\u003c/p\u003e","title":"Behind the Scenes: How Computers Keep Track of Time"},{"content":"Programming languages are formal systems designed for expressing computations. 
They are used to instruct computers and create software applications. There are numerous programming languages, each with its own syntax, semantics, and purposes.\nTypes of programming languages Domain-specific, like SQL\nGeneral-purpose, like C++, Java, Python, PHP.\nDomain-specific languages are used within specific application domains. For example, SQL is a domain-specific language. It’s used mainly for querying data from relational databases, and it cannot be used for other purposes.\nOn the other hand, PHP is a general-purpose language because it can be used to develop various applications, though it is mainly used in web development.\nWhat can PHP do? PHP has two main applications:\nServer-side scripting - developing dynamic websites\nCommand-line scripting - like Python, we can run a PHP script from the command line to send mails or perform admin tasks.\nHow PHP works? To work with PHP we need the following software installed:\nApache ( Web server ) PHP ( Zend Engine ) PECL ( PHP Extension manager ) Composer ( PHP Library manager ) Note: To get started you can also try FrankenPHP\nAfter installation, you will see something like this in your terminal\ngovind@debian:~$ php -v PHP 8.3.11 (cli) (built: Sep 2 2024 15:06:27) (NTS) Copyright (c) The PHP Group Zend Engine v4.3.11, Copyright (c) Zend Technologies with Zend OPcache v8.3.11, Copyright (c), by Zend Technologies Running PHP code First, make a file named index.php and write some code in it, as given below\n\u0026lt;?php echo \u0026#34;Hello world\u0026#34;; There are two ways to run PHP code\nServe on the web php -S localhost:8000 Now go to the browser and type localhost:8000 in the address bar\nOn the command line php index.php Conclusion PHP continues to be a powerful and versatile language for both web and command-line applications. 
Its ability to dynamically generate content, combined with an extensive ecosystem of libraries and frameworks, makes it a popular choice for developers worldwide.\n","permalink":"https://9ovind.in/blogs/php/","summary":"\u003cp\u003eProgramming languages are formal systems designed for expressing computations. They are used to instruct computers and create software applications. There are numerous programming languages, each with its own syntax, semantics, and purposes.\u003c/p\u003e\n\u003ch3 id=\"types-of-programming-languages\"\u003eTypes of programming languages\u003c/h3\u003e\n\u003col\u003e\n\u003cli\u003e\n\u003cp\u003eDomain-specific, like SQL\u003c/p\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003eGeneral-purpose, like C++, Java, Python, PHP.\u003c/p\u003e\n\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eDomain-specific languages are used within specific application domains. For example, SQL is a domain-specific language. It’s used mainly for querying data from relational databases, and it cannot be used for other purposes.\u003c/p\u003e","title":"PHP a general purpose programming language"},{"content":"A static website built using JS, HTML and CSS for a trekking community.\nDeployment The application is deployed and accessible at the URL below\nTrekworld\nRequirements Browser Project setup Clone the repository from GitHub via Git on your laptop git clone https://github.com/9ovindyadav/trekworld.git Open the index.html file in a browser Contributing Contributions are welcome! 
If you\u0026rsquo;d like to contribute to the project, check out the repository below.\nGithub Repo\nSupport If you encounter any issue or have questions, please open an issue in the Issues section.\n","permalink":"https://9ovind.in/projects/trekworld/","summary":"\u003cp\u003eA static website built using JS, HTML and CSS for a trekking community.\u003c/p\u003e\n\u003ch2 id=\"deployment\"\u003eDeployment\u003c/h2\u003e\n\u003cp\u003eThe application is deployed and accessible at the URL below\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://trekworld.9ovind.in\"\u003eTrekworld\u003c/a\u003e\u003c/p\u003e\n\u003ch2 id=\"requirements\"\u003eRequirements\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eBrowser\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"project-setup\"\u003eProject setup\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003eClone the repository from GitHub via Git on your laptop\u003c/li\u003e\n\u003c/ol\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003egit clone https://github.com/9ovindyadav/trekworld.git\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003col start=\"2\"\u003e\n\u003cli\u003eOpen the index.html file in a browser\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"contribuiting\"\u003eContributing\u003c/h2\u003e\n\u003cp\u003eContributions are welcome! If you\u0026rsquo;d like to contribute to the project, check out the repository below.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/9ovindyadav/trekworld\"\u003eGithub Repo\u003c/a\u003e\u003c/p\u003e\n\u003ch2 id=\"support\"\u003eSupport\u003c/h2\u003e\n\u003cp\u003eIf you encounter any issue or have questions, please open an issue in the Issues section.\u003c/p\u003e","title":"Trekworld"},{"content":"Git and GitHub have become indispensable tools for developers and teams working on software projects. 
Git is a distributed version control system, and GitHub is a web-based platform that enhances collaboration and code sharing. In this blog, we\u0026rsquo;ll focus on four fundamental Git commands that every developer should know and expand on other essential topics related to Git and GitHub.\n1. Git Initialization ( git init ) To start version controlling your project with Git, you first need to initialize a repository. The git init command creates a new Git repository in your project directory. It establishes a .git directory that tracks changes and manages your project\u0026rsquo;s history.\ngit init 2. Adding and Committing ( git add and git commit ) The git add command stages your changes, and git commit records the staged changes in your project\u0026rsquo;s history with a message.\ngit add . git commit -m \u0026#34;Your message here\u0026#34; 3. Managing Branches (git branch) Branches are a vital part of Git workflows. They allow you to work on different features or bug fixes without affecting the main codebase. Use git branch to list existing branches and create new ones.\ngit branch git branch feature-branch 4. Remote Repositories (git remote) Git allows you to collaborate with others by connecting your local repository to remote repositories, such as GitHub. The git remote command helps you manage remote connections.\ngit remote git remote add origin \u0026lt;GitHub repository URL\u0026gt; Beyond the Basics While these four commands are fundamental, Git and GitHub offer much more:\nPushing and Pulling (git push and git pull) Use git push to send your local commits to a remote repository, and git pull to retrieve changes from a remote repository to your local project.\ngit push origin \u0026lt;branch-name\u0026gt; git pull origin \u0026lt;branch-name\u0026gt; Cloning Repositories (git clone) To create a local copy of a remote repository, use the git clone command. 
This is useful when starting to work on an existing project hosted on GitHub.\nAuthenticate your GitHub account with Git on your local machine to work seamlessly.\nGithub CLI tool GitHub provides a command-line interface (CLI) tool that streamlines interactions with GitHub repositories and issues. It offers features like creating repositories, managing issues, and more.\nSSH Key Authentication To enhance security and simplify access to your GitHub repositories, consider setting up SSH key authentication. This allows you to securely connect to your repositories without entering your credentials repeatedly.\nConclusion Mastering these essential Git and GitHub commands will empower you to efficiently track changes, collaborate with others, and manage your projects effectively. Git and GitHub offer a plethora of features and capabilities that can greatly enhance your development workflow, making them invaluable tools for any software developer or team. Explore and experiment with these tools to harness their full potential.\n","permalink":"https://9ovind.in/blogs/github/","summary":"\u003cp\u003eGit and GitHub have become indispensable tools for developers and teams working on software projects. Git is a distributed version control system, and GitHub is a web-based platform that enhances collaboration and code sharing. In this blog, we\u0026rsquo;ll focus on four fundamental Git commands that every developer should know and expand on other essential topics related to Git and GitHub.\u003c/p\u003e\n\u003ch3 id=\"1-git-initialization--git-init-\"\u003e1. Git Initialization ( git init )\u003c/h3\u003e\n\u003cp\u003eTo start version controlling your project with Git, you first need to initialize a repository. The git init command creates a new Git repository in your project directory. It establishes a .git directory that tracks changes and manages your project\u0026rsquo;s history.\u003c/p\u003e","title":"Git and GitHub- Basic commands for beginners"}]