<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Code & Compass]]></title><description><![CDATA[Tech insights from an engineer, solutions architect: bridging code and leadership, transforming complexity into clarity. Perspectives for engineers seeking stra]]></description><link>https://ronaldkainda.blog</link><generator>RSS for Node</generator><lastBuildDate>Mon, 20 Apr 2026 21:53:24 GMT</lastBuildDate><atom:link href="https://ronaldkainda.blog/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Strategic Foundations of Effective Tech Modernisation]]></title><description><![CDATA[In the ever-evolving world of technology, “modernisation” is a buzzword that conjures images of moving to the cloud, implementing cutting-edge technologies like artificial intelligence (AI), or adopting the latest software design patterns. For many t...]]></description><link>https://ronaldkainda.blog/the-strategic-foundations-of-effective-tech-modernisation</link><guid isPermaLink="true">https://ronaldkainda.blog/the-strategic-foundations-of-effective-tech-modernisation</guid><category><![CDATA[modernization]]></category><category><![CDATA[Strategic-alignment]]></category><category><![CDATA[business strategy]]></category><category><![CDATA[Technology Leadership]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Tue, 18 Nov 2025 09:00:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763404582330/0f7d4800-9b17-4efe-9aad-45abf2855eef.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the ever-evolving world of technology, “modernisation” is a buzzword that conjures images of moving to the cloud, implementing cutting-edge technologies like artificial intelligence (AI), or adopting the latest software design patterns. For many technologists, modernisation is often synonymous with embracing these trends, pushing for faster, more scalable systems, or diving headfirst into the next big thing.</p>
<p>However, this common perception is not only limiting but also misleading. Technology modernisation is not just about adopting new tools for the sake of it. In fact, it’s less about technology itself and more about aligning technology to business strategy—ensuring that technology is flexible, adaptable, and scalable enough to meet the evolving needs of the business over the next 10 to 20 years.</p>
<h3 id="heading-modernisation-is-more-than-picking-new-tech"><strong>Modernisation Is More Than Picking New Tech</strong></h3>
<p>It’s easy to get swept up in the latest technological trends. Cloud computing, AI, machine learning, microservices—these are all powerful tools that can greatly enhance a business’s operations. However, if we focus solely on these as the end goal of modernisation, we risk overlooking the true purpose: to serve the broader strategic needs of the business.</p>
<p>Technology modernisation should be viewed as an enabler, not a solution in and of itself. While the adoption of new technologies might play a role in modernisation, it is the alignment of technology with the overarching business goals that is truly transformative.</p>
<p>For example, a company’s strategic vision for the next 10 years might be focused on global expansion or enhancing customer experience. The technology stack that supports this vision should be flexible, scalable, and capable of evolving with business requirements. While cloud migration or the integration of AI may help achieve these goals, they are merely the tools, not the strategy itself.</p>
<h3 id="heading-aligning-technology-to-business-strategy"><strong>Aligning Technology to Business Strategy</strong></h3>
<p>To properly modernise, organisations must first understand their long-term business strategy. Without this clarity, technology becomes a disjointed collection of solutions that may not serve the greater purpose. Technology choices must be driven by business goals, rather than a desire to adopt the latest shiny objects.</p>
<p>For example, if a business goal is to enter a new market, the technology stack must support quick scaling, localisation, and seamless integration with local systems. If the goal is to enhance customer satisfaction, the technology must enable personalised, seamless experiences across all touchpoints. In both cases, the focus is not on the tools themselves (cloud, AI, etc.) but on the long-term outcomes they can enable.</p>
<p>To align technology with business goals, organisations should consider the following key principles:</p>
<ol>
<li><p><strong>Flexibility</strong>: The technology landscape should be adaptable to evolving business needs. This means moving away from rigid, monolithic systems to more modular, flexible architectures that allow for quicker iteration and change.</p>
</li>
<li><p><strong>Scalability</strong>: As a business grows, its technology needs will also evolve. Systems must be able to scale—whether it’s accommodating more customers, handling more data, or expanding into new markets. Cloud computing can play a role here.</p>
</li>
<li><p><strong>Collaboration</strong>: Technology and business leaders need to work hand-in-hand. Technologists must understand the broader business strategy, while business leaders must understand the potential and limitations of technology. This requires ongoing communication and collaboration across departments.</p>
</li>
<li><p><strong>Sustainability</strong>: Technology modernisation is not just about short-term improvements. It’s about ensuring that systems and processes are designed for long-term success. This includes considerations around environmental sustainability, availability of skills for a chosen tech stack, maintainability, and long-term cost efficiency.</p>
</li>
</ol>
<h3 id="heading-the-pitfalls-of-trend-driven-modernisation"><strong>The Pitfalls of Trend-Driven Modernisation</strong></h3>
<p>It’s tempting to rush into technology adoption based on the latest trends. After all, cloud computing, AI, and machine learning are exciting and can certainly provide significant benefits. But chasing trends without a clear strategic alignment can lead to costly missteps.</p>
<p>For instance, migrating to the cloud may not be the right decision for every business. Some businesses may have highly regulated data requirements or legacy systems that are difficult to migrate. Simply moving to the cloud because it's the current trend could create more complexity without providing the desired outcomes.</p>
<p>Similarly, while AI holds great promise, it is not a one-size-fits-all solution. AI requires substantial investment in data infrastructure, skilled talent, and long-term commitment. If the business has not yet defined clear goals around AI, its adoption may not deliver value and could result in wasted resources.</p>
<p>The key takeaway is that technology modernisation should be a thoughtful, strategic endeavour, rather than a reaction to current trends.</p>
<h3 id="heading-technology-modernisation-as-a-continuous-journey"><strong>Technology Modernisation as a Continuous Journey</strong></h3>
<p>Technology modernisation is not a one-off project. It is an ongoing journey that requires regular assessments, iteration, and alignment to shifting business goals. The business landscape is constantly evolving, and so too should the technology that supports it. What works today may not be the right fit in five or ten years.</p>
<p>This continuous nature of technology modernisation means organisations need to build a culture of adaptability. This includes fostering an environment where technology teams are not only skilled in the latest tools but also in agile methodologies, cross-functional collaboration, and forward-thinking problem-solving.</p>
<h3 id="heading-conclusion-modernisation-with-purpose"><strong>Conclusion: Modernisation with Purpose</strong></h3>
<p>In conclusion, technology modernisation is not simply about adopting the latest software or migrating to the cloud. It’s about aligning technology to long-term business strategy. The ultimate goal is to have technology that is flexible, scalable, and able to evolve in tandem with the business, ensuring that it can meet the demands of the next 10 to 20 years.</p>
<p>To achieve this, businesses must take a step back and ask: "How can our technology support our strategic goals?" Only by understanding the business’s vision, and aligning technology to it, can companies truly achieve meaningful, sustainable modernisation.</p>
]]></content:encoded></item><item><title><![CDATA[Living Through the AI Era: Bubble, Revolution, or Both?]]></title><description><![CDATA[Disclaimer: These are my personal reflections—anecdotal, observational, and written with the awareness that the AI landscape changes by the minute. By the time you finish reading this, parts of this essay may already be out of date.
The Financial Que...]]></description><link>https://ronaldkainda.blog/living-through-the-ai-era-bubble-revolution-or-both</link><guid isPermaLink="true">https://ronaldkainda.blog/living-through-the-ai-era-bubble-revolution-or-both</guid><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[AI]]></category><category><![CDATA[Workflow Automation]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Thu, 16 Oct 2025 12:22:00 GMT</pubDate><content:encoded><![CDATA[<p><strong>Disclaimer:</strong> <em>These are my personal reflections—anecdotal, observational, and written with the awareness that the AI landscape changes by the minute. By the time you finish reading this, parts of this essay may already be out of date.</em></p>
<h2 id="heading-the-financial-questionare-we-living-through-another-bubble">The Financial Question—Are We Living Through Another Bubble?</h2>
<h3 id="heading-the-investment-surge">The Investment Surge</h3>
<p>Billions of dollars are flowing into AI start-ups, infrastructure, and model providers. As of October 2025, venture capitalists have poured £153 billion ($192.7 billion) into AI start-ups so far this year—setting new global records and putting 2025 on track to be the first year where more than half of total VC dollars went into the industry.[1] In Q3 2025 alone, global venture funding jumped 38% to reach $97 billion, with the three largest rounds going to foundation model companies Anthropic ($13 billion), xAI ($5.3 billion), and Mistral AI ($2 billion).[2]</p>
<p>Established players like Microsoft, Google, and Amazon have committed staggering sums to AI-specific cloud infrastructure and GPU capacity. Microsoft is investing $80 billion in fiscal 2025, Amazon plans $100 billion, and Google is targeting $75 billion—with combined infrastructure investments from tech mega-caps projected to reach $364 billion.[3]</p>
<p>This surge feels eerily reminiscent of the dot-com bubble of 2000—a period when any company adding ".com" to its name could attract outsized valuations. The parallel is clear: FOMO-driven investment, speculative valuations, and promises of a "revolution" that few can fully articulate.</p>
<p>Yet there are differences. Unlike early web start-ups that could spin up overnight, AI at scale demands real physical capital: data centres, chips, fibre networks, and vast operational energy. That infrastructure is tangible, expensive, and (at least somewhat) defensible.</p>
<h3 id="heading-a-rational-bubble">A Rational Bubble?</h3>
<p>Could both things be true—that AI is overhyped, <em>and</em> it will transform industries? Possibly. Market bubbles often overstate short-term potential whilst underestimating long-term impact. The internet boom left behind massive overcapacity and bankruptcies, but also the foundations of today's online economy.</p>
<p>A similar pattern might play out here. Some AI start-ups will fade, but the infrastructure and capabilities they helped build could prove essential later.</p>
<p>I'm not qualified to call a definitive financial verdict—but we might be in a "mini-bubble" within a broader structural shift.</p>
<h2 id="heading-the-technical-questionwill-ai-deliver">The Technical Question—Will AI Deliver?</h2>
<h3 id="heading-its-just-pattern-matching">"It's Just Pattern Matching"</h3>
<p>Critics argue that modern AI isn't truly intelligent; it merely recombines statistical patterns from its training data. In this view, large language models (LLMs) are glorified autocomplete systems—powerful but fundamentally shallow. Recent research from Apple and IBM has highlighted these limitations. IBM's analysis notes that LLMs "can't do deductive reasoning" and are instead "set up to do pattern recognition", whilst Apple research demonstrates that AI models primarily rely on statistical pattern matching rather than genuine mathematical reasoning, with accuracy collapsing on high-complexity tasks. Columbia University Professor Vishal Misra has articulated similar concerns, arguing in his June 2025 essay that language models are fundamentally limited by their reliance on statistical patterns and cannot truly improve themselves through reasoning.[4]</p>
<p>The critique is fair: AI systems hallucinate, lack grounding in physical reality, and often fail at causal reasoning. But to dismiss them entirely misses something essential.</p>
<h3 id="heading-humans-are-pattern-matchers-too">Humans Are Pattern Matchers Too</h3>
<p>Human cognition relies heavily on pattern recognition. From reading facial expressions to completing sentences, much of our "thinking" involves comparing current stimuli with stored examples. We may encode richer context, but the process is still probabilistic and experience-based.</p>
<p>In that sense, AI's "pattern matching" is not alien—it's familiar. The question isn't whether AI uses patterns, but how far pattern-based learning can go before it hits cognitive or conceptual ceilings.</p>
<h3 id="heading-counterarguments-to-ai-is-just-pattern-matching">Counterarguments to "AI is Just Pattern Matching"</h3>
<p>There are several compelling counterarguments to this reductive claim, and experts increasingly emphasise that, whilst pattern matching is a fundamental mechanism, modern AI systems operate in ways that go beyond this basic principle:</p>
<p><strong>Hierarchical Reasoning and Abstraction:</strong> Advanced AI models leverage hierarchical representations that move beyond raw pattern recognition, enabling abstraction, analogical reasoning, and context-aware responses—facilitating problem-solving that resembles human cognition in specific domains. Recent research on hierarchical reasoning models demonstrates that AI systems inspired by the brain's multi-timescale processing can achieve exceptional performance on complex reasoning tasks, including the Abstraction and Reasoning Corpus (ARC), a key benchmark for measuring artificial general intelligence capabilities.[8]</p>
<p><strong>Generalisation and Zero-Shot Learning:</strong> State-of-the-art AI models, especially large language models, demonstrate the ability to generalise from previously unseen data ("zero-shot" tasks), inferring solutions through context rather than simple memorisation of past patterns. Research has shown that large language models are capable of zero-shot reasoning on complex multi-step tasks, performing competently on problems they have never been explicitly trained to solve.[9]</p>
<p><strong>Autonomous Planning and Multi-Step Reasoning:</strong> Agentic AI workflows not only identify patterns but plan workflows, adapt to novel data, and synthesise information across unstructured contexts, mimicking some aspects of human decision-making and creative synthesis.</p>
<p><strong>Emergent Behaviours:</strong> Modern deep learning systems exhibit emergent behaviours (such as generating code, composing music, or playing strategic games) that were not explicitly trained into them, suggesting a more complex form of intelligence than rote statistical pattern matching. As language models scale up, they demonstrate emergent abilities—capabilities that appear unpredictably at certain model sizes and were not present in smaller versions.[10]</p>
<p><strong>Human Cognition Relies on Patterns Too:</strong> Critiques of AI as "just pattern matching" sometimes overlook the reality that human cognition itself relies heavily on identifying, storing, and manipulating patterns; intelligence emerges from using those patterns adaptively.</p>
<p>Whilst pattern matching is a fundamental building block, contemporary AI systems utilise this capacity as a substrate for more sophisticated reasoning, planning, and abstraction—undermining claims that "AI is just pattern matching" and supporting a more nuanced view of how artificial intelligence operates in practice.</p>
<h3 id="heading-technical-perspective-what-does-ai-really-deliver">Technical Perspective: What Does AI Really Deliver?</h3>
<p>In domains rich in structured data and textual context—summarisation, translation, search, code generation, documentation—AI has already proved transformative. Developers routinely use tools such as Amazon Q, Replit, Lovable or Copilot-like assistants to draft, test, and debug code. Analysts use AI to summarise long documents or generate reports in minutes.</p>
<p>These are not trivial gains; they redefine productivity in knowledge work.</p>
<p>Many critics argue that today's AI systems are not "intelligent" in the way humans are, and are merely sophisticated pattern matchers or glorified text generators. The distinction between human and artificial intelligence is subtle: whilst humans certainly use pattern matching to solve problems, what separates us is our awareness of our limitations and our ability to self-correct.</p>
<p>That said, it's undeniable that AI has become remarkably effective in text generation, summarisation, and other language-heavy tasks. This has driven progress in fields like software development automation, customer service response, fraud detection, and medical research—domains where large volumes of data or text need to be managed rapidly and intelligently.</p>
<h3 id="heading-agentic-aia-new-name-for-workflow-automation">Agentic AI—A New Name for Workflow Automation?</h3>
<p>A new term is circulating: <em>agentic AI</em>. It describes systems that can plan, act, and self-correct through multiple steps, rather than executing a single prompt.</p>
<p>On closer inspection, it's reminiscent of workflow automation tools like IFTTT, Zapier, or n8n—systems built around conditional logic: <em>If This Then That</em>. The difference is that agentic AI can dynamically reason about tasks, make decisions under uncertainty, and learn from context.</p>
<p>Traditional automation tools—think of systems like IFTTT or Zapier—operate within rigid, rule-based frameworks. They execute predefined sequences: if X happens, do Y. These systems are dependable but inflexible. If something unexpected occurs—say, a missing input, an API change, or a new context—they typically fail silently or require manual intervention. Their strength lies in predictability, but their weakness is brittleness. They can't reason about why a step failed or decide on an alternative course of action.</p>
<p>More sophisticated platforms like n8n occupy a middle ground. n8n allows for branching, looping, and dynamic adaptation, offering greater flexibility and customisation, particularly for developers building complex automations.[14] Unlike simpler tools, n8n provides a visual interface where different apps connect like puzzle pieces through nodes, and it supports custom code alongside pre-built integrations. However, even n8n operates primarily through structured, event-driven logic—when something happens, something else follows.</p>
<p>Agentic AI represents a step beyond this paradigm. Rather than relying on static rules or even complex conditional logic, agentic systems learn from context. They can interpret goals described in natural language, decompose them into sub-tasks, and plan multi-step actions autonomously. When something goes wrong, they don't just stop—they use reasoning and feedback loops to attempt recovery, such as re-running a step, reformulating a query, or selecting a different tool. This adaptability is often powered by large language models, which allow these agents to parse flexible instructions and dynamically adjust their behaviour. Whilst traditional automation tools like n8n excel at structured workflows and event-driven logic, agentic AI systems are better suited for building intelligent systems where agents can think through problems and assess outcomes.[15]</p>
<p>In short, agentic AI blends the predictability of automation with the fluid intelligence of contextual reasoning—moving automation from a rigid "workflow engine" to something more akin to a digital collaborator.</p>
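<p>To make the contrast concrete, here is a minimal sketch in Python. The rule-based function mirrors a Zapier-style trigger; the agentic function shows the plan, act, observe, recover loop described above. The <code>plan</code>, <code>execute</code>, <code>recover</code>, <code>observe</code>, and <code>summarise</code> methods are hypothetical stand-ins for LLM and tool calls, not any particular framework's API.</p>
<pre><code class="lang-python"># A caricature of the two paradigms, not a production framework.

def rule_based_automation(event):
    # Zapier/IFTTT style: fixed trigger, fixed action. Anything
    # unexpected falls through and fails silently.
    if event == "new_invoice":
        return "send_to_accounting"
    return None  # brittle: no reasoning about why nothing matched

def agentic_automation(goal, llm, tools, max_steps=5):
    # Agentic style: decompose a natural-language goal, act, observe,
    # and re-plan on failure instead of simply stopping.
    steps = llm.plan(goal)                 # hypothetical: break goal into sub-tasks
    for step in steps[:max_steps]:
        result = tools.execute(step)       # hypothetical tool invocation
        if not result.ok:
            # feedback loop: reformulate and retry rather than halt
            step = llm.recover(goal, step, result.error)
            result = tools.execute(step)
        llm.observe(step, result)          # context for subsequent planning
    return llm.summarise(goal)
</code></pre>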
<h2 id="heading-where-is-the-world-changing-ai">Where Is the World-Changing AI?</h2>
<p>Take Sora 2, OpenAI's text-to-video model. Its results are visually stunning—cinematic clips rendered from short text prompts. Yet beyond creative or entertainment niches, its world-changing impact remains speculative.</p>
<p>Or look at Tesla's Optimus robot. Tesla released footage of Optimus performing kung fu–style movements, demonstrating fluid motion and balance.[11] It's an impressive technical feat, but one designed for controlled conditions. Whether such dexterity translates into robust, generalisable robotics is uncertain.</p>
<p>These are exciting glimpses, not revolutions. The path from demo to deployment is long and fraught with engineering, safety, and economic barriers. But, again, AI innovation is moving at terrific speed.</p>
<p>Whilst AI brings hype and headlines, genuine world-changing applications remain rare. Demonstrations like Sora 2 and Optimus, however polished and humanlike, are achieved in highly controlled environments; they are technical progress, not yet generalised solutions for complex real-world problems.</p>
<p>Despite major advances across industries—from agriculture to autonomous vehicles and fraud detection—these accomplishments have yet to redefine society on the scale promised by AI's loudest advocates.</p>
<h2 id="heading-the-infrastructure-backdroppower-scale-and-sustainability">The Infrastructure Backdrop—Power, Scale, and Sustainability</h2>
<h3 id="heading-the-cost-of-intelligence">The Cost of Intelligence</h3>
<p>AI doesn't exist in a vacuum—it runs on silicon and electricity. According to the International Energy Agency, data centres currently account for around 1–1.5% of global electricity consumption.[5] The IEA projects that data centre electricity consumption will more than double by 2030, with AI being a chief driver of this growth.[6] However, data centre demand growth is expected to account for less than 10% of global electricity demand growth between 2024 and 2030.[7] Training a single frontier model can cost tens of millions of pounds and emit thousands of tonnes of CO₂ equivalent.</p>
<p>New hyperscale data centres are under construction worldwide, often near renewable-energy hubs. But the scale raises hard questions:</p>
<ul>
<li><p>How sustainable is exponential growth in compute demand?</p>
</li>
<li><p>Can innovation in model efficiency outpace energy consumption?</p>
</li>
<li><p>What happens when power supply becomes a geopolitical bottleneck?</p>
</li>
</ul>
<h3 id="heading-scaling-limits">Scaling Limits</h3>
<p>Model performance improvements have so far followed predictable scaling laws—but the cost curve is steepening. Each new "state-of-the-art" model demands disproportionately more compute, memory, and data. Diminishing returns are already visible. Recent reports from Bloomberg and The Information indicate that OpenAI, Google, and Anthropic are experiencing slower improvement rates despite massive investments in computing power and data.[12] Research from Epoch AI shows that training costs for frontier models have grown by a factor of 2-3x per year over the past eight years, with today's most expensive models costing tens of millions of dollars to train—suggesting costs could exceed $1 billion by 2027.[13]</p>
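<p>As a rough illustration of that cost curve, with entirely assumed figures rather than Epoch AI's data: take a $50 million frontier training run today and compound it at the cited 2-3x per year.</p>
<pre><code class="lang-python"># Illustrative compounding only; the base cost and growth rates are assumptions.
base_cost_usd = 50e6           # assumed cost of a frontier run in 2025
for growth in (2.0, 3.0):      # the 2-3x annual growth range cited above
    cost_2027 = base_cost_usd * growth ** 2   # two years of compounding
    print(f"{growth:.0f}x/year gives ${cost_2027 / 1e9:.2f}bn by 2027")
# 2x/year gives $0.20bn; 3x/year gives $0.45bn. Crossing $1bn needs only a
# modestly larger base cost or one more year of growth.
</code></pre>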
<p>Future breakthroughs may depend less on bigger models and more on smarter architectures, efficiency gains, and hybrid reasoning systems that combine symbolic, causal, and neural elements.</p>
<h2 id="heading-tentative-conclusions">Tentative Conclusions</h2>
<p><strong>Financially,</strong> AI may contain pockets of speculative excess, but the underlying infrastructure is real and strategically important. A correction, if it comes, will likely prune hype rather than kill the field.</p>
<p><strong>Technically,</strong> AI delivers extraordinary capabilities in text-based reasoning and automation but remains narrow. General intelligence, common sense, and real-world generalisation remain unsolved.</p>
<p><strong>Agentic AI</strong> could evolve into the backbone of digital operations—the next generation of workflow orchestration rather than an entirely new paradigm.</p>
<p><strong>Sustainability</strong> will become the next frontier constraint: compute, power, and cooling capacity may shape AI's growth curve more than algorithms.</p>
<p>In short, AI is not a mirage—but neither is it omnipotent. It's a technological wave still searching for equilibrium between hype, utility, and feasibility.</p>
<blockquote>
<p>"We may be living through both a bubble and a revolution—the two are not mutually exclusive."</p>
</blockquote>
<hr />
<h2 id="heading-references">References</h2>
<p>[1]: Bloomberg. (October 2025). "AI Is Dominating 2025 VC Investing, Pulling in $192.7 Billion." <a target="_blank" href="https://www.bloomberg.com/news/articles/2025-10-03/ai-is-dominating-2025-vc-investing-pulling-in-192-7-billion">https://www.bloomberg.com/news/articles/2025-10-03/ai-is-dominating-2025-vc-investing-pulling-in-192-7-billion</a></p>
<p>[2]: Crunchbase. (October 2025). "Q3 Venture Funding Jumps 38% As More Massive Rounds Go To AI Giants." <a target="_blank" href="https://news.crunchbase.com/venture/global-vc-funding-biggest-deals-q3-2025-ai-ma-data/">https://news.crunchbase.com/venture/global-vc-funding-biggest-deals-q3-2025-ai-ma-data/</a></p>
<p>[3]: WebProNews. (October 2025). "Microsoft, Alphabet, Amazon Drive AI Supercycle with $364B Investments." <a target="_blank" href="https://www.webpronews.com/microsoft-alphabet-amazon-drive-ai-supercycle-with-364b-investments/">https://www.webpronews.com/microsoft-alphabet-amazon-drive-ai-supercycle-with-364b-investments/</a>; Microsoft On the Issues. (January 2025). "The golden opportunity for American AI." <a target="_blank" href="https://blogs.microsoft.com/on-the-issues/2025/01/03/the-golden-opportunity-for-american-ai/">https://blogs.microsoft.com/on-the-issues/2025/01/03/the-golden-opportunity-for-american-ai/</a></p>
<p>[4]: IBM Think. (April 2025). "How smart is machine intelligence? AI aces games but fails basic reality check." <a target="_blank" href="https://www.ibm.com/think/news/mit-study-evaluating-world-model-ai">https://www.ibm.com/think/news/mit-study-evaluating-world-model-ai</a>; MLQ.ai. (2025). "Apple Research Exposes Limits of AI Reasoning Models." <a target="_blank" href="https://mlq.ai/news/apple-research-exposes-limits-of-ai-reasoning-models-ahead-of-wwdc-2025/">https://mlq.ai/news/apple-research-exposes-limits-of-ai-reasoning-models-ahead-of-wwdc-2025/</a>; Misra, V. (June 2025). "The Illusion of Thinking: Why Language Models Can't Improve Themselves." <a target="_blank" href="https://medium.com/@vishalmisra/the-illusion-of-thinking-why-language-models-cant-improve-themselves-0c71a13811e2">https://medium.com/@vishalmisra/the-illusion-of-thinking-why-language-models-cant-improve-themselves-0c71a13811e2</a></p>
<p>[5]: International Energy Agency. (2024). "Data centres &amp; networks." <a target="_blank" href="https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks">https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks</a></p>
<p>[6]: Nature. (April 2025). "Data centres will use twice as much energy by 2030—driven by AI." <a target="_blank" href="https://www.nature.com/articles/d41586-025-01113-z">https://www.nature.com/articles/d41586-025-01113-z</a></p>
<p>[7]: International Energy Agency. (2024). "Energy demand from AI." <a target="_blank" href="https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai">https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai</a></p>
<p>[8]: Sapient Intelligence. (June 2025). "Hierarchical Reasoning Model." <a target="_blank" href="https://arxiv.org/abs/2506.21734">https://arxiv.org/abs/2506.21734</a>; ARC Prize. (2025). "The Hidden Drivers of HRM's Performance on ARC-AGI." <a target="_blank" href="https://arcprize.org/blog/hrm-analysis">https://arcprize.org/blog/hrm-analysis</a></p>
<p>[9]: Kojima, T., et al. (2022). "Large Language Models are Zero-Shot Reasoners." <a target="_blank" href="https://arxiv.org/abs/2205.11916">https://arxiv.org/abs/2205.11916</a></p>
<p>[10]: Wei, J., et al. (2022). "Emergent Abilities of Large Language Models." <a target="_blank" href="https://arxiv.org/abs/2206.07682">https://arxiv.org/abs/2206.07682</a>; Center for Security and Emerging Technology. (April 2024). "Emergent Abilities in Large Language Models: An Explainer." <a target="_blank" href="https://cset.georgetown.edu/article/emergent-abilities-in-large-language-models-an-explainer/">https://cset.georgetown.edu/article/emergent-abilities-in-large-language-models-an-explainer/</a></p>
<p>[12]: Marketing AI Institute. (November 2024). "The Great AI Scaling Debate: Have We Hit a Wall?" <a target="_blank" href="https://www.marketingaiinstitute.com/blog/scaling-laws-ai-wall">https://www.marketingaiinstitute.com/blog/scaling-laws-ai-wall</a>; Platformer. (November 2024). "AI companies hit a scaling wall." <a target="_blank" href="https://www.platformer.news/openai-google-scaling-laws-anthropic-ai/">https://www.platformer.news/openai-google-scaling-laws-anthropic-ai/</a></p>
<p>[13]: Epoch AI. (January 2025). "How much does it cost to train frontier AI models?" <a target="_blank" href="https://epoch.ai/blog/how-much-does-it-cost-to-train-frontier-ai-models">https://epoch.ai/blog/how-much-does-it-cost-to-train-frontier-ai-models</a>; Sevilla, J., et al. (May 2024). "The rising costs of training frontier AI models." <a target="_blank" href="https://arxiv.org/abs/2405.21015">https://arxiv.org/abs/2405.21015</a></p>
<p>[14]: n8n. (2025). "Advanced AI Workflow Automation Software &amp; Tools." <a target="_blank" href="https://n8n.io/ai/">https://n8n.io/ai/</a>; Hostinger. (September 2025). "What is n8n? Intro to a workflow automation tool." <a target="_blank" href="https://www.hostinger.com/tutorials/what-is-n8n">https://www.hostinger.com/tutorials/what-is-n8n</a></p>
<p>[15]: Gupta, M. (October 2025). "OpenAI AgentKit vs N8N: The best AI Workflow Builder?" <a target="_blank" href="https://medium.com/data-science-in-your-pocket/openai-agentkit-vs-n8n-the-best-ai-workflow-builder-da5eaf21aa10">https://medium.com/data-science-in-your-pocket/openai-agentkit-vs-n8n-the-best-ai-workflow-builder-da5eaf21aa10</a></p>
]]></content:encoded></item><item><title><![CDATA[Beyond the Hype: Practical Azure Cost Optimisation for Enterprise Workloads]]></title><description><![CDATA[The cloud computing landscape has been a rollercoaster of promises and challenges, with many organisations discovering that the initial allure of cloud migration does not always translate into the cost savings originally anticipated. Whilst cloud pla...]]></description><link>https://ronaldkainda.blog/beyond-the-hype-practical-azure-cost-optimisation-for-enterprise-workloads</link><guid isPermaLink="true">https://ronaldkainda.blog/beyond-the-hype-practical-azure-cost-optimisation-for-enterprise-workloads</guid><category><![CDATA[AzureOptimization]]></category><category><![CDATA[SustainableIT]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Cost management]]></category><category><![CDATA[EnterpriseIT]]></category><category><![CDATA[cloudstrategy]]></category><category><![CDATA[Cloud Economics]]></category><category><![CDATA[Tech Innovation,]]></category><category><![CDATA[Digital Transformation]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Wed, 15 Jan 2025 07:46:53 GMT</pubDate><content:encoded><![CDATA[<p>The cloud computing landscape has been a rollercoaster of promises and challenges, with many organisations discovering that the initial allure of cloud migration does not always translate into the cost savings originally anticipated. Whilst cloud platforms like Microsoft Azure offer unprecedented scalability and flexibility, a growing number of organisations are experiencing a phenomenon that would have seemed unthinkable just a few years ago: cloud repatriation.</p>
<p>It may seem a long time ago now, but companies such as Dropbox, which famously saved nearly $75 million by migrating away from public cloud infrastructure, and 37signals (creators of Basecamp), who documented their strategic move back to on-premises infrastructure, brought critical attention to the often-overlooked complexities of cloud economics. These case studies reveal that uncontrolled cloud adoption without meticulous cost management can lead to spiralling expenses that erode the very benefits organisations seek.</p>
<p>However, abandoning cloud strategies wholesale is not the answer. Instead, in this blog post I will delve into practical, actionable strategies for Azure cost optimisation that can help organisations maintain the agility of cloud computing while keeping expenditures firmly in check. From right-sizing resources to leveraging advanced cost management tools, I will explore how organisations can transform cloud spending from a potential financial burden into a strategic advantage.</p>
<h2 id="heading-cloud-repatriation-when-cloud-does-not-deliver">Cloud Repatriation: When Cloud Does not Deliver</h2>
<p>The narrative of cloud computing is not a simple tale of universal triumph. While cloud migration has been touted as an inevitable technological progression, some of the tech industry's most innovative companies found themselves swimming against the current, choosing to bring their infrastructure back in-house. I will briefly touch on the cases of Dropbox and Basecamp, as these are the most widely covered.</p>
<h3 id="heading-dropbox-a-strategic-infrastructure-exodus">Dropbox: A Strategic Infrastructure Exodus</h3>
<p>Dropbox's cloud repatriation journey is perhaps the most celebrated case of calculated infrastructure transformation. In 2016, the file-sharing giant made a bold decision to migrate approximately 90% of its infrastructure from Amazon Web Services (AWS) to a custom-built, private cloud infrastructure. This move was driven by a number of factors including having control to configure hardware precisely, performance, and pure economic pragmatism.</p>
<p>The financial implications were staggering. By building their own infrastructure, Dropbox saved approximately $75 million over two years. Their custom-built infrastructure provided remarkable advantages, allowing the company to precisely configure hardware tailored to their specific workloads. This approach enabled a significant reduction in per-unit infrastructure costs, offering greater control over performance and resource allocation while eliminating the cloud provider markup on infrastructure services.</p>
<p>The key challenges Dropbox faced with public cloud were multifaceted. The company experienced escalating costs that grew disproportionately with their scale. Standard cloud offerings provided limited customisation options, and the performance overhead associated with multi-tenant cloud environments became increasingly problematic as the company expanded.</p>
<h3 id="heading-basecamp-embracing-infrastructure-autonomy">Basecamp: Embracing Infrastructure Autonomy</h3>
<p>David Heinemeier Hansson, CTO of 37signals (Basecamp), has been vocally critical of the cloud-first approach. Their repatriation strategy was driven by a combination of cost considerations and a philosophical stance on infrastructure ownership.</p>
<p>Basecamp's migration illuminated several critical issues with cloud computing. The company struggled with unpredictable and rapidly escalating cloud costs, experiencing diminishing returns on cloud flexibility for their stable, well-understood workloads. They discovered that direct hardware management could provide superior performance and cost-efficiency compared to cloud-based solutions.</p>
<p>By moving back to dedicated hardware, Basecamp achieved a transformative infrastructure strategy. The company secured more predictable infrastructure expenses, enhanced performance through direct hardware control, reduced complexity in infrastructure management, and gained greater long-term cost predictability.</p>
<h2 id="heading-common-reasons-for-cloud-repatriation">Common Reasons for Cloud Repatriation</h2>
<p>While Dropbox and Basecamp represent different scales and approaches, their experiences reveal common challenges in cloud migration. Cost scalability emerged as a critical concern, with cloud expenses potentially growing exponentially and outpacing the perceived benefits of flexibility. Performance limitations became apparent, demonstrating that generic cloud infrastructure does not always match the efficiency of custom-built solutions.</p>
<p>Each organisation discovered an economic tipping point where owning infrastructure becomes more economical than renting. It is crucial to understand that cloud repatriation is not a universal solution. These companies did not abandon cloud computing entirely but rather made strategic decisions about where and how to deploy their computational resources.</p>
<p>The lesson is not that cloud is inherently flawed, but that a one-size-fits-all approach to cloud infrastructure is fundamentally misguided. Successful digital infrastructure strategy requires continuous evaluation, flexibility, and a willingness to challenge prevailing technological narratives.</p>
<h2 id="heading-azure-cost-optimisation-strategies">Azure Cost Optimisation Strategies</h2>
<p>Understanding the cautionary tales of cloud repatriation does not mean abandoning cloud strategies altogether. Instead, it calls for a more nuanced, strategic approach to cloud cost management. Microsoft Azure offers a robust ecosystem of tools and techniques that, when implemented thoughtfully, can transform cloud expenditure from a potential financial drain into a strategic business advantage.</p>
<p>Cloud cost optimisation is not a one-time exercise but a continuous process of analysis, refinement, and strategic alignment. It requires organisations to develop a holistic view of their cloud infrastructure, moving beyond simple cost-cutting to create a more intelligent, efficient computational environment. The most successful organisations approach Azure cost management as a dynamic discipline that balances performance, scalability, and financial prudence.</p>
<p>In this section, I will explore a suite of strategies that can help organisations extract maximum value from their Azure investments. From granular resource management to advanced cost prediction techniques, these approaches will empower IT leaders and financial managers to take control of their cloud economics. The goal is not just to reduce costs, but to create a more responsive, adaptable, and financially sustainable cloud infrastructure that directly supports business objectives.</p>
<h3 id="heading-1-right-sizing-resources">1. Right-Sizing Resources</h3>
<p>Right-sizing represents one of the most fundamental yet powerful strategies for optimising Azure infrastructure costs. At its core, right-sizing is about matching computational resources precisely to workload requirements, eliminating the costly practice of over-provisioning that plagues many organisations' cloud environments.</p>
<p>Most organisations inadvertently deploy virtual machines and cloud resources with excessive capacity, essentially paying for computational power they never utilise. Industry studies suggest that many organisations waste up to 35% of their cloud spending on unused or overprovisioned resources. Any engineer who has deployed on-premises knows that when requesting a VM, you ask for the highest specifications you can get away with, because it is almost impossible, mostly due to the paperwork involved, to request an upgrade to your on-prem VM (let alone a physical machine) at a later stage. Azure provides sophisticated tools that enable organisations to analyse resource utilisation with remarkable granularity, transforming cloud cost management from a guessing game to a data-driven discipline.</p>
<p>The right-sizing process begins with comprehensive monitoring and analysis. Azure Monitor and Azure Advisor become critical allies in this journey, offering detailed insights into resource consumption patterns. These tools track critical metrics such as CPU utilisation, memory consumption, network throughput, and storage performance across virtual machines and cloud services.</p>
<p>For virtual machines, right-sizing strategies can be divided into several approaches. Downsizing involves reducing virtual machine specifications to match actual workload requirements. This might mean transitioning from a premium D-series virtual machine with 16 cores to a more modest machine with 4 cores that can adequately handle the computational load. Similarly, organisations can leverage Azure's burstable virtual machine instances, which provide baseline performance with the ability to burst above that baseline when required, offering significant cost savings.</p>
<p>Reserved Instances represent another sophisticated right-sizing mechanism. By committing to one-year or three-year terms for specific virtual machine configurations, organisations can secure substantial discounts compared to pay-as-you-go pricing. These reservations work exceptionally well for stable, predictable workloads where computational requirements remain relatively consistent. This option, however, is not for everyone; it is best suited to larger organisations.</p>
<p>The economic implications are profound. A well-executed right-sizing strategy can potentially reduce cloud infrastructure costs by 30-50% without compromising performance or introducing additional complexity. However, right-sizing is not a one-time exercise but a continuous process requiring regular review and adjustment.</p>
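<p>To illustrate the decision logic (not Azure Advisor's actual algorithm), the sketch below recommends the smallest SKU whose capacity still covers observed peaks plus headroom. The SKU catalogue, thresholds, and figures are invented for the example:</p>
<pre><code class="lang-python"># Hypothetical SKU catalogue: name mapped to (vCPUs, memory in GiB). Real
# sizing would use the Azure VM size list and Azure Monitor metrics.
SKUS = {"D4s_v5": (4, 16), "D8s_v5": (8, 32), "D16s_v5": (16, 64)}

def recommend_sku(current, peak_cpu_pct, peak_mem_gib, headroom=1.3):
    """Suggest the smallest SKU covering observed peaks plus a safety margin."""
    cores, _ = SKUS[current]
    need_cores = cores * (peak_cpu_pct / 100) * headroom
    need_mem = peak_mem_gib * headroom
    adequate = [
        (c, m, name) for name, (c, m) in SKUS.items()
        if c >= need_cores and m >= need_mem
    ]
    return min(adequate)[2]  # smallest adequate SKU by cores, then memory

# A D16s_v5 peaking at 15% CPU and 10 GiB of RAM right-sizes to a D4s_v5.
print(recommend_sku("D16s_v5", peak_cpu_pct=15, peak_mem_gib=10))
</code></pre>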
<p>Key considerations for effective right-sizing include:</p>
<ul>
<li><p>Implementing continuous monitoring of resource utilisation</p>
</li>
<li><p>Establishing clear performance baseline metrics</p>
</li>
<li><p>Creating automated scaling policies</p>
</li>
<li><p>Regularly reviewing and adjusting resource allocations</p>
</li>
<li><p>Leveraging Azure's native cost management and recommendation tools</p>
</li>
</ul>
<p>Organisations must also develop a nuanced understanding of their workload characteristics. Some applications require consistent computational power, while others experience significant variability. Right-sizing strategies must be tailored to these unique workload profiles, recognising that a universal approach is fundamentally ineffective.</p>
<p>Technical teams, while they should not become finance managers, should understand the cost implications of the resources they are deploying. This approach ensures that computational resources are not just efficient, but directly aligned with broader organisational objectives.</p>
<h3 id="heading-2-advanced-cost-management-techniques">2. Advanced Cost Management Techniques</h3>
<p>Azure Cost Management and billing tools serve as the control centre for enterprise cloud spending. These native tools provide comprehensive visibility into resource consumption patterns and spending trends across subscriptions and resource groups. The platform's cost analysis features enable organisations to break down expenditure by service, location, and time period, while forecasting capabilities help predict future spending based on historical patterns.</p>
<p>Budget alerts and spending limits act as an early warning system for potential cost overruns. Organisations can establish multiple budget thresholds with automated notifications at different spending levels. When configured effectively, these alerts notify stakeholders via email when spending reaches predefined percentages of the budget, typically at 70%, 90%, and 100%. Critical workloads can be protected by implementing hard spending limits that automatically disable resource deployment when budgets are exceeded.</p>
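<p>The alerting pattern itself is simple: compare actual spend against the budget and fire a notification at each threshold crossed. In Azure this is configured declaratively on a budget resource; the function below, with its stand-in notify hook, merely illustrates the behaviour:</p>
<pre><code class="lang-python">def check_budget(spend, budget, thresholds=(0.70, 0.90, 1.00), notify=print):
    """Emit one alert per budget threshold the current spend has crossed."""
    for t in thresholds:
        if spend >= budget * t:
            notify(f"ALERT: ${spend:,.0f} spent, {t:.0%} of the ${budget:,.0f} budget reached")

check_budget(spend=9500, budget=10000)
# Fires the 70% and 90% alerts, but not the 100% one.
</code></pre>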
<p>A well-structured tagging strategy enables granular cost tracking and allocation. Tags should reflect business dimensions such as department, environment, application, and cost centre. This granular approach to resource labelling enables precise cost attribution and chargeback mechanisms. Through consistent tagging, organisations can generate detailed reports showing exactly how cloud resources are being consumed across different business units and projects. Effective tagging also facilitates automation of cost management policies and governance rules, ensuring resources are consistently tracked and managed according to organisational standards.</p>
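<p>A sketch of why consistent tagging makes chargeback mechanical: once every resource carries, say, a cost-centre tag, monthly cost rolls up with a simple group-by, and untagged resources surface immediately as unallocated spend. The tag names and figures here are illustrative:</p>
<pre><code class="lang-python">from collections import defaultdict

# Illustrative cost-export rows: (resource, monthly cost in £, tags).
resources = [
    ("vm-web-01",  420.0, {"env": "prod", "costCentre": "retail"}),
    ("vm-batch-7", 310.0, {"env": "dev",  "costCentre": "analytics"}),
    ("sqldb-core", 950.0, {"env": "prod", "costCentre": "retail"}),
    ("vm-orphan",  120.0, {}),  # untagged: shows up as unallocated spend
]

def chargeback(rows, tag="costCentre"):
    totals = defaultdict(float)
    for _, cost, tags in rows:
        totals[tags.get(tag, "UNALLOCATED")] += cost
    return dict(totals)

print(chargeback(resources))
# {'retail': 1370.0, 'analytics': 310.0, 'UNALLOCATED': 120.0}
</code></pre>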
<p>Integration of these three approaches creates a robust framework for cost governance. Real-time visibility, proactive alerts, and detailed tracking mechanisms work together to prevent unexpected costs while maintaining operational efficiency.</p>
<h3 id="heading-3-storage-optimisation">3. Storage Optimisation</h3>
<p>Storage costs in Azure can be effectively managed through a multi-layered optimisation strategy. Azure Blob Storage access tiers form the foundation of cost optimisation, offering Hot, Cool, Cold, and Archive tiers with decreasing storage costs but increasing access fees. Organisations should implement lifecycle management policies to automatically move data between these tiers based on access patterns and retention requirements.</p>
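<p>A lifecycle policy is a JSON rule set attached to the storage account. The shape below, written as a Python dict with illustrative day counts and a hypothetical <code>logs/</code> prefix, tiers blobs to Cool after 30 days and Archive after 180, then deletes them after roughly seven years:</p>
<pre><code class="lang-python"># Azure Storage lifecycle management policy, expressed as a Python dict.
# The day counts and prefix filter are illustrative, not recommendations.
lifecycle_policy = {
    "rules": [{
        "enabled": True,
        "name": "tier-down-logs",
        "type": "Lifecycle",
        "definition": {
            "actions": {
                "baseBlob": {
                    "tierToCool":    {"daysAfterModificationGreaterThan": 30},
                    "tierToArchive": {"daysAfterModificationGreaterThan": 180},
                    "delete":        {"daysAfterModificationGreaterThan": 2555},  # ~7 years
                },
            },
            "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["logs/"]},
        },
    }]
}
</code></pre>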
<p>Data redundancy choices significantly impact costs. While Geo-Redundant Storage (GRS) provides the highest durability, many workloads can safely utilise Locally Redundant Storage (LRS) or Zone-Redundant Storage (ZRS) at a lower cost. Organisations should evaluate their Recovery Point Objectives (RPO) and adjust redundancy accordingly.</p>
<p>Implementing effective data retention policies helps control storage growth. Regular clean-up of unused snapshots, old backups, and obsolete data can substantially reduce storage costs. Azure Storage Explorer enables identification of orphaned resources and unnecessary duplicates.</p>
<p>Compression and deduplication techniques further optimise storage usage. Implementing these at the application level before storing data in Azure can significantly reduce storage requirements and associated costs.</p>
<p>Premium storage should be reserved for truly I/O-intensive workloads, while standard storage suffices for most general-purpose applications. Regular monitoring of storage metrics helps identify opportunities to downgrade storage performance tiers without impacting application performance.</p>
<p>Another component of storage is databases. Azure offers a few options to choose from, including MS SQL, Cosmos DB, and open-source databases such as PostgreSQL and MongoDB. It is tempting for engineers to choose the hottest technology at any point in time. However, this may be detrimental to cost management. For example, while Cosmos DB offers features such as instant multi-region replication, multi-write replicated copies, and sub-millisecond reads, as well as APIs for MongoDB, SQL, etc., it comes at a cost. Organisations need to analyse their requirements and understand whether the additional benefits offered by Cosmos DB are truly what their services require.</p>
<h3 id="heading-4-network-cost-reduction">4. Network Cost Reduction</h3>
<p>Network costs in Azure often form a substantial portion of cloud spending, particularly for data-intensive applications. Effective network design starts with proper Virtual Network (VNet) architecture and region selection. By hosting interdependent services within the same region and availability zone, organisations can minimise inter-region data transfer costs while maintaining high availability.</p>
<p>Azure ExpressRoute proves cost-effective for organisations with high-volume data transfer requirements between on-premises and Azure environments. While initial setup costs are higher than VPN connections, ExpressRoute's predictable pricing and superior performance often result in long-term savings for large-scale deployments.</p>
<p>Content Delivery Network (CDN) implementation significantly reduces data transfer costs for globally distributed applications. Azure CDN caches content closer to end users, reducing both latency and egress charges. Similarly, careful placement of Azure Front Door and Application Gateway services optimises traffic routing and reduces unnecessary data transfer.</p>
<p>Network bandwidth costs can be controlled through effective use of Azure's Virtual Network service endpoints. These endpoints allow services to connect through Azure's backbone network rather than through public IP addresses, reducing both costs and security risks.</p>
<p>Data egress optimisation requires careful monitoring of cross-region and internet-bound traffic. Azure Network Watcher and Flow Logs provide visibility into traffic patterns, enabling identification of unexpected or costly data transfers. Regular review of these metrics helps identify opportunities for traffic optimisation and potential cost savings through architectural improvements.</p>
<p>Implementing bandwidth throttling and scheduling large data transfers during off-peak hours can help manage costs while maintaining service quality. Additionally, utilising compression for data transfers and implementing efficient caching strategies at the application level reduces overall network utilisation.</p>
<h3 id="heading-5-containerisation-and-serverless">5. Containerisation and Serverless</h3>
<p>Containerisation through Azure Kubernetes Service (AKS) and serverless computing via Azure Functions represent modern approaches to cost optimisation. AKS enables efficient resource utilisation through dynamic container orchestration, automatically scaling resources based on actual demand. This eliminates idle capacity costs while ensuring applications receive necessary resources during peak periods.</p>
<p>Azure Functions' consumption plan pricing model charges only for actual execution time and memory usage, measured in milliseconds. This granular pricing eliminates the overhead costs associated with maintaining traditional infrastructure. Functions automatically scale based on workload, optimising costs during varying demand levels.</p>
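<p>A back-of-the-envelope illustration of that pricing model; the rates and free grants below reflect published consumption-plan pricing but should be treated as indicative and re-checked against current Azure pricing:</p>
<pre><code class="lang-python"># Indicative consumption-plan arithmetic; verify current rates before use.
PRICE_PER_GB_SECOND = 0.000016          # assumed $/GB-second
PRICE_PER_MILLION_EXECUTIONS = 0.20     # assumed $ per million executions

def monthly_cost(executions, avg_duration_ms, memory_gb):
    gb_seconds = executions * (avg_duration_ms / 1000) * memory_gb
    compute = max(gb_seconds - 400_000, 0) * PRICE_PER_GB_SECOND      # 400k GB-s free grant
    requests = max(executions - 1_000_000, 0) / 1e6 * PRICE_PER_MILLION_EXECUTIONS
    return compute + requests

# 5M invocations a month at 300 ms and 0.5 GB is 750k GB-seconds:
print(f"${monthly_cost(5_000_000, 300, 0.5):.2f}/month")  # about $6.40
</code></pre>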
<p>Containerisation facilitates efficient resource sharing among applications, improving overall infrastructure utilisation. Containers' lightweight nature allows higher density deployment compared to traditional virtual machines, reducing per-application infrastructure costs. AKS's automated bin-packing capabilities ensure optimal resource distribution across the cluster.</p>
<p>Serverless architecture eliminates infrastructure management costs and reduces development overhead. Azure Functions' integration with other Azure services enables cost-effective event-driven architectures. The platform handles scaling, patching, and maintenance, reducing operational expenses.</p>
<p>Combined implementation of containers and serverless creates a hybrid approach optimised for different workload types. Long-running applications benefit from container-based deployment, while event-driven processes leverage serverless functions. This architecture maximises cost efficiency while maintaining application performance and scalability.</p>
<p>Both technologies support rapid deployment and testing, reducing development costs and time-to-market. Integration with Azure DevOps enables automated deployment pipelines, further optimising operational efficiency and resource utilisation.</p>
<p>Containerisation and serverless mean that organisations need to assess their on-prem workloads and work out how these workloads can best take advantage of cloud-native infrastructure. This may mean redesigning the workloads to best suit the cloud infrastructure. For start-ups, this is never a problem, as there are no on-prem workloads to migrate.</p>
<h2 id="heading-practical-recommendations">Practical Recommendations</h2>
<ol>
<li><p><strong>Conduct Regular Cost Audits</strong></p>
<p> Regular cost audits form the backbone of effective cloud financial management. Organisations should establish a systematic process to review Azure spending patterns monthly, focusing on identifying unexpected cost spikes and underutilised resources. These audits should examine resource utilisation across all subscriptions, comparing actual spending against budgeted amounts and historical trends.</p>
<p> Cost audits should integrate data from Azure Cost Management, emphasising high-impact areas such as virtual machines, storage, and network usage. The process should identify orphaned resources, non-production environments running outside business hours, and opportunities for reservation purchases. Organisations should also analyse usage patterns to detect potential cost anomalies or security incidents that manifest as unusual spending patterns. A comprehensive audit includes reviewing tag compliance, ensuring accurate cost allocation across business units, and validating that resources align with governance policies. The findings should drive actionable recommendations for immediate cost optimisation and long-term architectural improvements.</p>
</li>
<li><p><strong>Develop a Hybrid Strategy</strong></p>
<p> Developing a hybrid strategy is essential for optimising cloud infrastructure. Not all workloads are suited for a cloud-native environment, making it crucial to evaluate the specific needs of each application. A hybrid approach allows organisations to leverage the strengths of both cloud and on-premises solutions, ensuring that workloads are matched to the most appropriate infrastructure. This strategy involves integrating cloud services with existing on-premises systems, providing flexibility and scalability while maintaining control over critical data and applications. By adopting a multi-cloud or hybrid model, organisations can avoid vendor lock-in, enhance resilience, and optimise costs. This approach requires careful planning and execution, ensuring seamless interoperability between different environments. Organisations should continuously assess their infrastructure needs, adapting their strategy to align with evolving business objectives and technological advancements. A well-executed hybrid strategy not only optimises performance and cost-efficiency but also supports long-term growth and innovation.</p>
</li>
<li><p><strong>Invest in Cloud Financial Management</strong></p>
<p> Investing in cloud financial management is crucial for effective cost optimisation. Organisations should prioritise training teams in cloud economics to ensure a comprehensive understanding of cost dynamics. Establishing cross-functional FinOps teams can bridge the gap between finance and technology, fostering collaboration and informed decision-making. Developing clear cloud spending policies is essential to guide resource allocation and cost control. These policies should be aligned with organisational objectives, ensuring that cloud investments support business goals. Continuous monitoring and analysis of cloud expenditure enable proactive management, allowing organisations to identify cost-saving opportunities and address inefficiencies promptly. By integrating financial management practices into cloud operations, organisations can achieve greater transparency and accountability in their spending. This strategic approach not only optimises costs but also enhances the overall value derived from cloud investments, supporting sustainable growth and innovation in a competitive digital landscape.</p>
</li>
</ol>
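<p>To make the audit process concrete, below is a minimal C# sketch that pulls month-to-date cost grouped by resource group from the Azure Cost Management Query REST API - one possible input to a monthly audit. It assumes the <code>Azure.Identity</code> package is installed; the subscription ID is a placeholder, and the <code>api-version</code> and dimension name shown are assumptions to verify against the current API documentation.</p>
<pre><code class="lang-csharp">// Sketch: one input to a monthly cost audit - month-to-date cost per
// resource group, queried from the Cost Management Query API (REST).
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading;
using Azure.Core;
using Azure.Identity;

var subscriptionId = "&lt;your-subscription-id&gt;"; // placeholder

// Authenticate with whatever credential is available (CLI, managed identity, ...).
var credential = new DefaultAzureCredential();
var token = await credential.GetTokenAsync(
    new TokenRequestContext(new[] { "https://management.azure.com/.default" }),
    CancellationToken.None);

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", token.Token);

// Query definition: month-to-date actual cost, summed, grouped by resource group.
var body = """
{
  "type": "ActualCost",
  "timeframe": "MonthToDate",
  "dataset": {
    "granularity": "None",
    "aggregation": { "totalCost": { "name": "Cost", "function": "Sum" } },
    "grouping": [ { "type": "Dimension", "name": "ResourceGroupName" } ]
  }
}
""";

// The api-version below is an assumption - check the docs for the current one.
var url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
          "/providers/Microsoft.CostManagement/query?api-version=2023-03-01";
var response = await http.PostAsync(
    url, new StringContent(body, Encoding.UTF8, "application/json"));
response.EnsureSuccessStatusCode();

// Print the raw rows (cost per resource group) for review or export.
Console.WriteLine(await response.Content.ReadAsStringAsync());
</code></pre>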
<h2 id="heading-emerging-trends">Emerging Trends</h2>
<p>Emerging trends in cloud cost optimisation are reshaping how organisations approach their digital infrastructure. The rise of FinOps, a practice that combines financial accountability with cloud operations, is gaining traction as companies seek to align their cloud spending with business objectives. This approach encourages collaboration between finance, technology, and business teams, fostering a culture of cost awareness and strategic investment.</p>
<p>Advancements in artificial intelligence and machine learning are enabling more sophisticated cost management tools. These technologies provide predictive analytics and automated recommendations, allowing organisations to anticipate cost fluctuations and optimise resource allocation proactively.</p>
<p>The growing emphasis on sustainability is also influencing cloud strategies, with organisations seeking to reduce their carbon footprint through efficient resource utilisation and green cloud initiatives. As cloud providers enhance their offerings with energy-efficient infrastructure and carbon tracking capabilities, organisations are increasingly considering environmental impact alongside financial metrics. These emerging trends underscore the importance of a holistic approach to cloud cost optimisation, where financial prudence, technological innovation, and sustainability converge to drive long-term value. By staying attuned to these developments, organisations can navigate the complexities of cloud economics and harness the full potential of their digital investments.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In conclusion, while the allure of cloud migration is undeniable, it is not a solution for all infrastructure challenges. Successful Azure cost optimisation requires a strategic approach that balances the benefits of cloud flexibility with the need for financial prudence. By implementing continuous monitoring, strategic resource allocation, and flexible infrastructure planning, organisations can transform cloud spending from a potential financial burden into a strategic advantage. By learning from the experiences of companies like Dropbox and Basecamp, and embracing robust cost management techniques, organisations can harness the full potential of Azure, ensuring that their cloud investments are both economically and operationally sound.</p>
<h2 id="heading-references">References</h2>
<p>[1] Dropbox Engineering Blog, <a target="_blank" href="https://dropbox.tech/infrastructure/magic-pocket-infrastructure">"Scaling to Exabytes and Beyond" (2016)</a></p>
<p>[2] David Heinemeier Hansson, <a target="_blank" href="https://world.hey.com/dhh/why-we-re-leaving-the-cloud-654b47e0">"Why we're leaving the cloud" (2022)</a></p>
]]></content:encoded></item><item><title><![CDATA[Beyond the Code: Why Tech Leaders Need More Than Technical Expertise]]></title><description><![CDATA[In technology divisions, leadership potential is often equated with technical prowess. While deep technical knowledge is undoubtedly crucial, the most effective technical leaders possess a rich toolkit of both technical and non-technical skills that ...]]></description><link>https://ronaldkainda.blog/beyond-the-code-why-tech-leaders-need-more-than-technical-expertise</link><guid isPermaLink="true">https://ronaldkainda.blog/beyond-the-code-why-tech-leaders-need-more-than-technical-expertise</guid><category><![CDATA[tech leadership]]></category><category><![CDATA[Non Technical Skills]]></category><category><![CDATA[#emotional intelligence]]></category><category><![CDATA[engineering leadership]]></category><category><![CDATA[Leadership Skills]]></category><category><![CDATA[Career development ]]></category><category><![CDATA[Strategic Communication]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Mon, 16 Dec 2024 00:00:46 GMT</pubDate><content:encoded><![CDATA[<p>In technology divisions, leadership potential is often equated with technical prowess. While deep technical knowledge is undoubtedly crucial, the most effective technical leaders possess a rich toolkit of both technical and non-technical skills that are vital for success. Many engineers’ careers have stagnated partly because their organisations are full of managers but short of leaders. Managers are focused on processes, procedures, and achieving specific objectives. They rarely focus on the big picture, setting direction, or inspiring their teams. In this post, I explore the essential non-technical leadership competencies that help transform great engineers into exceptional leaders.</p>
<h3 id="heading-the-evolution-from-engineer-to-leader"><strong>The Evolution from Engineer to Leader</strong></h3>
<p>The transition from a hands-on engineer to a technical leader is often fraught with challenges. Many engineers are promoted due to their technical excellence, only to realise that the skills that made them exceptional engineers do not necessarily translate into effective leadership. As an engineer, you spend years perfecting your craft, whether it's designing, implementing, or maintaining scalable systems. However, upon entering a leadership role, your world shifts dramatically - your deliverables are no longer solely dependent on your efforts. You must rely on others to achieve your Objectives and Key Results (OKRs).</p>
<p>How do you navigate this new landscape? An engineer’s initial instinct might be, "I know my craft, and I don't trust others to do as good a job as I can. I'll just do it myself." Those who are more daring may allow others to contribute but will meticulously check and recheck everything to ensure it meets their standards. Neither approach is sustainable, and some engineers may eventually conclude that leadership is not for them, preferring to remain individual contributors.</p>
<p>But who is responsible when engineers find themselves in leadership roles for which they are unprepared? It's perfectly reasonable for an engineer to choose to remain an individual contributor, as long as it's an informed decision made with a clear understanding of the alternative. My hypothesis is that when leadership focuses on motivating and inspiring people, engineers can make such well-informed decisions. Leaders who possess more than just technical skills are able to support their teams effectively.</p>
<h3 id="heading-essential-non-technical-leadership-skills"><strong>Essential Non-Technical Leadership Skills</strong></h3>
<p>“I have delivered on the objectives we agreed upon at the beginning of the year, and I expected to be on the list of promoted employees, but I am not. What happened?” asks an engineer. The manager replies, “But I didn't know you wanted a promotion.” This conversation highlights a significant disconnect between the engineer and the manager. How did it reach this point? The engineer assumed that meeting objectives would naturally lead to a promotion, while the manager was unaware of the engineer's aspirations.</p>
<p>This scenario underscores the importance of clear communication and expectation management within teams. It is crucial for both engineers and managers to engage in open dialogues about career goals and development plans. Regular check-ins and feedback sessions can help bridge the gap, ensuring that both parties are aligned on expectations and aspirations.</p>
<p>Moreover, this situation illustrates the need for managers to develop essential leadership skills beyond technical expertise. Effective leaders must actively listen, understand their team members' career ambitions, and provide guidance and support to help them achieve their goals. By fostering an environment of transparency and mutual understanding, leaders can cultivate a motivated and engaged team.</p>
<p>In the following sections, I will discuss the non-technical skills necessary for leading successful technical teams. These skills help prevent misunderstandings, promote a culture of growth and development, ensure organisational goals are met, and support the development of future leaders.</p>
<ol>
<li><p><strong>Emotional Intelligence and Active Listening</strong></p>
<p> The ability to truly listen and understand team members' perspectives is fundamental to effective leadership. Active listening means deliberately being present during one-on-ones, giving your direct report the space and time to express their highs, lows, and aspirations. It also means reading between the lines when team members express concerns directly or indirectly, and acknowledging emotions without immediately jumping to solutions. For example, when an engineer says, "This deadline seems unrealistic," a good leader doesn't immediately defend the timeline but might respond, "Tell me more about your concerns. What specific challenges do you foresee?" Conversely, an engineer might say, "It's fine, I will deliver this by the deadline," when what they are really saying is, "I will work late nights and weekends to get this done." By applying emotional intelligence and active listening, a leader can read between the lines and grasp what their direct report actually means.</p>
</li>
<li><p><strong>Understanding and Supporting Neurodiversity</strong></p>
<p> Modern tech teams are increasingly diverse and may include team members who are neurodivergent. Effective leaders treat each member of their team as a unique individual and strive to understand what differentiates each person from others. This does not mean asking intrusive questions; by practising emotional intelligence and active listening, a leader can come to understand individual members better. When leaders use their one-to-ones as a safe space for their team to discuss what matters to them, they stand a much better chance of understanding them.</p>
<p> The multitude of communication channels available today presents both opportunities and challenges. Once you understand your team, provide multiple communication channels (chat, email, face-to-face) to accommodate different preferences, and set clear, explicit expectations. Where possible, offer flexible work environments that support different sensory needs. Aim to combine various modes when sharing information - for example, visual, audio, and text.</p>
</li>
<li><p><strong>Strategic Communication</strong></p>
<p> There are few cases where technical solutions do not have non-technical stakeholders. It is, therefore, critical that technical leaders can bridge the gap between technical and non-technical stakeholders. This involves translating complex technical concepts into business value and vice versa. Communication must, therefore, be adapted for different audiences. One way of doing so is by using storytelling to make technical information more accessible. Instead of telling your business stakeholders "We need to refactor our monolithic architecture into microservices," an effective leader might say, "We can reduce our time-to-market by 40% and cut deployment risks in half by modernising our software architecture."</p>
</li>
<li><p><strong>Coaching and Mentorship</strong></p>
<p> Great technical leaders develop their team members through regular growth-focused feedback sessions and individualised development plans, and by identifying and nurturing both technical and leadership potential. As a technical leader, coaching and mentorship are critical responsibilities that go far beyond simply managing tasks. These practices involve deliberately investing in the professional and personal growth of team members, creating an environment of continuous learning and development.</p>
<p> Effective coaching means providing targeted guidance that helps individuals identify their strengths and address developmental areas. This is not about dictating solutions, but about asking probing questions that enable team members to discover insights independently. A technical leader coaches by offering constructive feedback and helping team members troubleshoot complex problems.</p>
<p> Mentorship, meanwhile, takes a broader, more holistic view of an individual's career trajectory. It involves sharing personal experiences, industry insights, and strategic career advice. A technical mentor helps junior professionals navigate technical challenges, understand unwritten organisational dynamics, and develop both technical and soft skills critical for long-term success.</p>
</li>
<li><p><strong>Conflict Resolution and Difficult Conversations</strong></p>
<p> Both technical and non-technical disagreements can become personal without proper leadership. Effective leaders focus on facts and impact rather than blame. They create safe spaces for healthy debate and guide teams toward consensus while ensuring all voices are heard. For instance, when two senior engineers disagree on a technical approach, a good leader might structure a decision-making framework that evaluates options against agreed-upon criteria, turning a potential conflict into a collaborative problem-solving exercise.</p>
</li>
<li><p><strong>Understanding Diverse Perspectives</strong></p>
<p> Tech teams are often composed of individuals from diverse backgrounds and experiences. As a leader, understanding diverse perspectives is fundamental to creating an inclusive, innovative, and high-performing environment. This goes beyond mere tolerance - it requires active listening, genuine curiosity, and a commitment to creating psychological safety where all team members feel valued and heard.</p>
<p> Diverse perspectives bring richness to problem-solving by introducing varied insights, challenging assumptions, and uncovering blind spots. A leader must intentionally create spaces for different voices to emerge, whether through structured feedback mechanisms or inclusive meeting formats.</p>
<p> The key is to move from passive acknowledgment to active integration. This means not just hearing different viewpoints, but actively seeking them out, creating mechanisms for their expression, and demonstrating how these perspectives directly influence decision-making. When team members see their unique contributions genuinely respected and incorporated, it fosters trust, engagement, and collective ownership.</p>
</li>
<li><p><strong>Recognising and Addressing Emotional Needs</strong></p>
<p> As a leader, recognising and addressing emotional needs is a critical skill that transcends traditional management approaches. It requires deep emotional intelligence and the ability to create a supportive environment where team members feel psychologically safe and understood.</p>
<p> Effective leaders acknowledge that emotions are not separate from professional performance, but integral to it. They create space for authentic conversations, actively listen without judgment, and demonstrate empathy. This means recognising signs of stress, burnout, or personal challenges, and responding with compassion and practical support.</p>
<p> The goal is not to become a therapist, but to foster a human-centred workplace where individuals feel valued beyond their productivity. This involves checking in genuinely about team members' well-being, offering flexible support during challenging times, creating mechanisms for open, non-punitive communication, and modelling emotional vulnerability and resilience. By prioritising emotional intelligence, leaders build trust, enhance team cohesion, and create an environment where individuals can bring their whole, authentic selves to work.</p>
</li>
<li><p><strong>Fostering Talent</strong></p>
<p> As a leader, fostering talent is a strategy that goes beyond traditional performance management. It's about creating an ecosystem where individual potential can be discovered, nurtured, and fully realised.</p>
<p> Effective talent cultivation requires a multifaceted approach. This means providing clear developmental pathways, offering challenging assignments that stretch capabilities, and creating opportunities for continuous learning. Leaders must act as talent architects, designing personalised growth plans that align individual aspirations with organisational goals.</p>
<p> Key strategies for talent cultivation include conducting regular, meaningful career development conversations, providing targeted mentorship and coaching, creating exposure opportunities, investing in skill development programs, and recognising and rewarding potential, not just current performance.</p>
</li>
</ol>
<h3 id="heading-practical-steps-for-developing-non-technical-leadership-skills">Practical Steps for Developing Non-Technical Leadership Skills</h3>
<ol>
<li><p><strong>Seek Feedback Actively</strong></p>
<p> Seeking feedback actively is a transformative leadership practice that requires genuine vulnerability and commitment to personal growth. Effective leaders create multiple channels for receiving candid, constructive input from peers, team members, and mentors. This means establishing regular, structured feedback mechanisms like 360-degree reviews, informal check-ins, and open communication platforms that encourage honest dialogue.</p>
<p> The key is to approach feedback not as a defensive exercise, but as a strategic opportunity for development. Leaders must demonstrate openness by actively listening, asking clarifying questions, and visibly implementing insights gained. This involves setting aside ego, embracing potential areas of improvement, and modelling a growth mindset that encourages continuous learning. By consistently seeking and acting on feedback, leaders signal their commitment to personal and organisational improvement, creating a culture of transparency, trust, and ongoing professional development.</p>
</li>
<li><p><strong>Find a Leadership Mentor</strong></p>
<p> Finding a leadership mentor is a strategic investment in personal and professional development. Successful leaders actively seek mentors who can provide nuanced guidance, share strategic insights, and offer perspectives gained from extensive leadership experience. This relationship goes beyond traditional coaching, creating a trusted partnership where wisdom is transferred through deep, reflective conversations.</p>
<p> The most effective mentorship connections are not formalised transactions, but organic relationships built on mutual respect, shared values, and genuine curiosity. Leaders should seek mentors who challenge their thinking, expose them to different leadership approaches, and provide honest, constructive feedback.</p>
</li>
<li><p><strong>Practice Deliberately</strong></p>
<p> Practicing deliberately is a methodical approach to leadership skill development that requires intentional, focused effort and continuous self-reflection. Unlike casual learning, deliberate practice involves setting specific leadership development goals, creating structured scenarios to challenge existing capabilities, and systematically analysing performance and outcomes.</p>
<p> Deliberate practice demands vulnerability - embracing discomfort, acknowledging limitations, and treating each leadership experience as a learning opportunity. By maintaining a disciplined, self-aware approach to skill development, leaders can progressively expand their capabilities, transforming potential into tangible leadership expertise through consistent, intentional effort.</p>
</li>
<li><p><strong>Invest in Learning</strong></p>
<p> Investing in learning is a critical leadership strategy that goes beyond traditional training programs. Successful leaders view continuous learning as a strategic imperative, allocating time, resources, and mental energy to expanding their knowledge and skills. This means actively seeking diverse learning opportunities, from formal educational programs and workshops to reading, podcasts, and cross-functional experiences.</p>
<p> Effective learning investment involves creating a personalised development plan that aligns with both personal growth objectives and organisational needs. Leaders must be intentional about exploring areas outside their comfort zone, embracing interdisciplinary knowledge that can provide fresh perspectives on leadership challenges.</p>
<p> Effective leaders cultivate a growth mindset that sees learning as a lifelong journey, continuously challenging themselves, remaining curious, and demonstrating a commitment to personal and professional development that inspires their teams.</p>
</li>
</ol>
<h3 id="heading-the-impact-of-strong-leadership"><strong>The Impact of Strong Leadership</strong></h3>
<p>When technical leaders develop these non-technical skills, the results are significant. Strong leadership represents a powerful fusion of technical expertise and profound interpersonal capabilities that fundamentally transforms organisational potential. Technical skills provide the foundational knowledge and strategic understanding necessary to navigate complex technical environments, while soft skills enable the translation of that knowledge into meaningful, collaborative action.</p>
<p>A truly impactful leader creates an ecosystem where technical proficiency and human-centred approaches coexist harmoniously. This means understanding intricate technical challenges while simultaneously cultivating an environment of psychological safety, trust, and continuous learning. Technical skills ensure strategic direction and operational excellence, while soft skills like emotional intelligence, communication, and empathy enable leaders to inspire, motivate, and align diverse teams toward shared objectives.</p>
<p>The most transformative leaders recognise that technical competence alone is insufficient. They build bridges between complex technical concepts and human potential, creating narratives that make strategic goals meaningful and engaging. They translate technical challenges into opportunities for growth, innovation, and collective achievement.</p>
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>While technical expertise might get you into a leadership position, it is the mastery of non-technical skills that will determine your success as a leader. The most respected technical leaders are those who can balance technical knowledge with strong people skills, creating environments where both technology and people can thrive. Leadership is a journey, not a destination. The most successful technical leaders are those who remain committed to growing both their technical and non-technical capabilities throughout their careers.</p>
]]></content:encoded></item><item><title><![CDATA[Is Azure Kubernetes Service Dead?]]></title><description><![CDATA[In the last year or so, one question I have been persistently asked by both junior and senior software engineers concerns where they should deploy their containerised services. Not too long ago, this was never even a question - at least for Azure clo...]]></description><link>https://ronaldkainda.blog/is-azure-kubernetes-service-dead</link><guid isPermaLink="true">https://ronaldkainda.blog/is-azure-kubernetes-service-dead</guid><category><![CDATA[Azure]]></category><category><![CDATA[#kubernetes #container ]]></category><category><![CDATA[azure-container-apps]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Wed, 04 Dec 2024 07:57:48 GMT</pubDate><content:encoded><![CDATA[<p>In the last year or so, one question I have been persistently asked by both junior and senior software engineers concerns where they should deploy their containerised services. Not too long ago, this was never even a question - at least on Azure. Azure provided only a managed Kubernetes service. You could, in theory, deploy your container to Azure Web Apps, but there was little benefit unless your service's dependencies were unavailable in the Azure environment. Anyone containerising applications at the time had only one platform to deploy to. However, in the ever-evolving landscape of cloud computing, Azure now offers two prominent services for containerised applications: Azure Kubernetes Service (AKS) and Azure Container Apps. With the rising popularity of Container Apps, many engineers (and organisations) are questioning which service to deploy their containerised apps to and whether AKS is becoming obsolete. I may have contributed to this thinking among the engineers I have interacted with, partly because most of their use cases fit Container Apps rather than AKS. In this article, I want to dive deep into both services to clarify their distinct value propositions and use cases. I will conclude by answering the question, “Is AKS dead?”.</p>
<p><strong>The Rise of Azure Container Apps</strong></p>
<p>Azure Container Apps, introduced in 2021, represents Microsoft's serverless container offering. It provides a simplified approach to deploying containerised applications without the complexity of managing a Kubernetes cluster. Think of it as a middle ground between Azure Functions and AKS. It is a fully managed serverless container platform that simplifies application deployment and scaling, eliminating the need to manage infrastructure or Kubernetes clusters and allowing developers to focus on building and deploying applications.</p>
<p><strong>Key Features of Container Apps:</strong></p>
<ul>
<li><p><strong>Serverless:</strong> Automatic scaling based on HTTP traffic, events, or triggers supported by Kubernetes Event-Driven Autoscaling (KEDA).</p>
</li>
<li><p><strong>Built-in Dapr integration:</strong> The Distributed Application Runtime (Dapr) makes it easier to implement state management, message brokering, and service-to-service communication.</p>
</li>
<li><p><strong>Built-in service discovery and ingress:</strong> Each container app in your environment can communicate with others using its <strong>DNS name</strong>. There’s no need to configure or manage complex networking settings manually.</p>
</li>
<li><p><strong>Fully managed:</strong> Azure handles infrastructure provisioning, scaling, and security.</p>
</li>
<li><p><strong>Simplicity</strong>: Developers do not need to understand Kubernetes internals to deploy and manage applications.</p>
</li>
<li><p><strong>Pay-per-use pricing model:</strong> You can scale your services to zero instances and thereby save during periods of no activity in your applications.</p>
</li>
</ul>
<p><strong>Azure Kubernetes Service: Enterprise-Grade Container Orchestration</strong></p>
<p>AKS is a fully managed Kubernetes service that provides a platform for deploying and managing containerised applications. It simplifies the complexities of Kubernetes by handling infrastructure provisioning, cluster management, and security. AKS is well-suited for a wide range of applications, from simple microservices to complex, stateful workloads.</p>
<p>Although AKS is a managed service, Microsoft manages only the control plane, the underlying infrastructure, and security. Engineers remain responsible for a number of components, such as:</p>
<ul>
<li><p><strong>Node Pool Management:</strong></p>
<ul>
<li><p>Engineers are responsible for configuring node pools, including VM sizes, auto-scaling, and labels for workload segregation.</p>
</li>
<li><p>They also need to ensure the underlying worker nodes (VMs) have sufficient capacity to run workloads efficiently.</p>
</li>
</ul>
</li>
<li><p><strong>Cluster Networking:</strong></p>
<ul>
<li><p>Setting up appropriate network models (Azure CNI or Kubenet) and configuring pod-to-pod and pod-to-service communication.</p>
</li>
<li><p>Implementing ingress controllers (e.g., NGINX, Application Gateway) and setting up DNS.</p>
</li>
<li><p>Managing IP address ranges, subnets, and custom VNET configurations.</p>
</li>
</ul>
</li>
</ul>
<p>Despite the simplicity of Container Apps, AKS remains a powerhouse for complex, enterprise-scale container orchestration. It offers complete control over your Kubernetes environment while abstracting away the control plane management.</p>
<p><strong>Key Features of AKS:</strong></p>
<ul>
<li><p><strong>Full Kubernetes API access</strong>: AKS gives you the full power of Kubernetes, allowing custom configurations and integrations with third-party tools.</p>
</li>
<li><p><strong>Granular Control</strong>: Developers and DevOps teams can fine-tune deployment strategies, manage scaling policies, and implement advanced networking and security configurations.</p>
</li>
<li><p><strong>Multi-Container Support</strong>: Ideal for microservices architectures that require inter-container communication and advanced workflows.</p>
</li>
<li><p><strong>Horizontal scaling</strong>: Easily scale applications up or down to meet changing demands.</p>
</li>
<li><p><strong>Support for stateful applications:</strong> Supports persistent storage for stateful workloads like databases or message queues.</p>
</li>
</ul>
<p><strong>Key Differences from a Deployment Perspective</strong></p>
<ul>
<li><strong>Development Complexity</strong></li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-comment"># Container Apps</span>
<span class="hljs-comment"># container-apps.yaml</span>
<span class="hljs-attr">resources:</span>
  <span class="hljs-attr">containerApps:</span>
    <span class="hljs-comment"># name of the container app</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">my-cool-app</span>
    <span class="hljs-attr">containers:</span>
      <span class="hljs-comment">#container to be deployed to this container app</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">myregistry.azurecr.io/mycoolapp:v1</span>
        <span class="hljs-attr">env:</span>
          <span class="hljs-comment">#environment variables to be passed to the container at runtime</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">PORT</span>
            <span class="hljs-attr">value:</span> <span class="hljs-string">"80"</span>
<span class="hljs-comment"># Additional configuration required...</span>
</code></pre>
<p>This YAML configuration defines an Azure Container App deployment, specifying a container named "my-cool-app" that will use an image from an Azure Container Registry (<code>myregistry.azurecr.io/mycoolapp:v1</code>). The configuration sets up a basic container deployment with a single environment variable (PORT set to 80), indicating how the container should be initialised and run. While this snippet provides the fundamental structure for deploying a containerised application, it represents only a partial configuration and would require additional settings for a complete container app deployment in an Azure environment. The snippet is only used for illustration.</p>
<pre><code class="lang-yaml"><span class="hljs-comment">#AKS</span>
<span class="hljs-comment"># deployment.yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-cool-app</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-comment">#3 pods for this deployment</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-comment">#must match the label defined under template</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">my-cool-pp</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">my-cool-app</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-comment">#one container specified</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-cool-app</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">myregistry.azurecr.io/mycoolapp:v1</span>
        <span class="hljs-attr">env:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">PORT</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">"80"</span>

<span class="hljs-comment">#required to create a Kubernetes Service, which exposes </span>
<span class="hljs-comment">#the pods created by the Deployment.</span>
<span class="hljs-comment"># service.yaml</span>

<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-cool-app-service</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">my-cool-app</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>    <span class="hljs-comment"># Port exposed by the Service</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span> <span class="hljs-comment"># Port the container listens on</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">ClusterIP</span> <span class="hljs-comment"># Internal access within the cluster</span>
<span class="hljs-comment"># Additional configuration required...</span>
</code></pre>
<p>This Kubernetes configuration defines a deployment and service for an application in Azure Kubernetes Service (AKS). The Deployment resource creates three identical pods (replicas) running a container from a specified Azure Container Registry image, ensuring high availability and consistent application instances. The accompanying Service resource provides internal cluster networking by exposing these pods on port 80, using a ClusterIP type that allows internal communication within the Kubernetes cluster. Together, these resources enable a scalable and accessible containerised application deployment, with built-in load balancing and replication to enhance reliability and performance.</p>
<ul>
<li><strong>Scaling Mechanisms</strong></li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-comment"># Container Apps handles scaling automatically</span>
<span class="hljs-attr">scale:</span>
  <span class="hljs-attr">minReplicas:</span> <span class="hljs-number">0</span>
  <span class="hljs-attr">maxReplicas:</span> <span class="hljs-number">10</span>
  <span class="hljs-attr">rules:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">http-rule</span>
      <span class="hljs-attr">http:</span>
        <span class="hljs-attr">metadata:</span>
          <span class="hljs-attr">concurrentRequests:</span> <span class="hljs-string">"100"</span>
</code></pre>
<p>This configuration establishes a flexible scaling strategy that allows the application to scale automatically from 0 to 10 replicas based on HTTP traffic. The rule named "http-rule" sets a threshold of 100 concurrent requests: when simultaneous requests reach or exceed 100, the platform adds replicas to handle the load, up to a maximum of 10 instances. Conversely, during periods of low traffic, the application can scale down to zero replicas, which optimises resource utilisation and reduces costs by running instances only when needed.</p>
<pre><code class="lang-yaml"><span class="hljs-comment">#AKS requires explicit configuration</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">autoscaling/v2</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">HorizontalPodAutoscaler</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-cool-app-hpa</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">scaleTargetRef:</span>
    <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
    <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">my-cool-app</span>
  <span class="hljs-attr">minReplicas:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">maxReplicas:</span> <span class="hljs-number">10</span>
  <span class="hljs-attr">metrics:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">Resource</span>
    <span class="hljs-attr">resource:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">cpu</span>
      <span class="hljs-attr">target:</span>
        <span class="hljs-attr">type:</span> <span class="hljs-string">Utilization</span>
        <span class="hljs-attr">averageUtilization:</span> <span class="hljs-number">50</span>
</code></pre>
<p>This Kubernetes Horizontal Pod Autoscaler (HPA) configuration automatically scales the "my-cool-app" deployment based on CPU utilisation. It targets the specified deployment and allows the number of pod replicas to dynamically adjust between 1 and 10 instances. The scaling rule is triggered when the average CPU usage across all pods exceeds 50%, meaning if the deployment's pods collectively consume more than half of their allocated CPU resources, Kubernetes will automatically add more replicas to distribute the load, up to a maximum of 10 pods. Conversely, if CPU usage drops, the number of replicas will be reduced, ensuring efficient resource utilisation and maintaining application performance under varying workload conditions.</p>
<p>The code snippets above are intended to provide an idea of the differences when it comes to deploying your containers to either environment and not provide a full working deployment. For AKS, getting to the container deployment stage requires a lot of plumbing to ensure your cluster is production ready and able to handle the workload. Moreover, there will be ongoing admin overheads to maintain the cluster. Given the above, the question of when to use what still remains.</p>
<p><strong>When to Use What?</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>Azure Kubernetes Service (AKS)</strong></td><td><strong>Azure Container Apps</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Infrastructure Management</td><td>Managed control plane and underlying infrastructure; engineers manage node pools, networking, and workloads</td><td>Fully managed</td></tr>
<tr>
<td>Complexity</td><td>Requires Kubernetes expertise</td><td>No Kubernetes expertise required</td></tr>
<tr>
<td>Scaling</td><td>Manual or automatic scaling</td><td>Automatic scaling</td></tr>
<tr>
<td>Networking</td><td>Advanced networking features</td><td>Simplified networking</td></tr>
<tr>
<td>Use Case</td><td>Advanced microservices, stateful apps</td><td>Event-driven apps, lightweight workloads, stateless apps</td></tr>
<tr>
<td>Cost</td><td>Higher operational costs. You pay for the VMs</td><td>Cost-efficient for intermittent usage. Scale to zero</td></tr>
<tr>
<td>Developer Experience</td><td>Requires deep operational expertise</td><td>Developer-friendly, minimal overhead</td></tr>
</tbody>
</table>
</div><p><strong>The Verdict</strong></p>
<p>Is Azure Kubernetes Service dead? Far from it. While Azure Container Apps offers an excellent solution for many modern application scenarios, AKS continues to serve as the backbone for complex, enterprise-grade container orchestration. The choice between the two depends on your specific requirements, team expertise, and application architecture.</p>
<p>Think of Container Apps as a high-level abstraction perfect for teams wanting to focus purely on application logic, while AKS provides the full power and flexibility of Kubernetes for teams requiring complete control over their container infrastructure.</p>
<p>Rather than viewing them as competitors, consider them complementary services in Azure's container ecosystem. Many organisations successfully use both: Container Apps for simpler, event-driven workloads and AKS for complex, stateful applications requiring fine-grained control.</p>
<p>The future of containerisation in Azure likely involves both services evolving to serve their distinct use cases better, providing developers with the right tool for the right job. To the many engineers I have directed towards Container Apps, I hope this article summarises the various discussions we have had and serves as a reference the next time the question comes up.</p>
]]></content:encoded></item><item><title><![CDATA[Enhancing Code Quality Through Peer Reviews: A Collaborative Approach]]></title><description><![CDATA[Code quality is fundamental to successful software development projects, influencing stability, maintainability, and overall efficiency. Among various quality assurance techniques, peer code reviews stand out as particularly effective. These reviews ...]]></description><link>https://ronaldkainda.blog/enhancing-code-quality-through-peer-reviews-a-collaborative-approach</link><guid isPermaLink="true">https://ronaldkainda.blog/enhancing-code-quality-through-peer-reviews-a-collaborative-approach</guid><category><![CDATA[Quality Assurance]]></category><category><![CDATA[QA]]></category><category><![CDATA[code review]]></category><category><![CDATA[Collaboration]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Sat, 23 Nov 2024 21:29:19 GMT</pubDate><content:encoded><![CDATA[<p>Code quality is fundamental to successful software development projects, influencing stability, maintainability, and overall efficiency. Among various quality assurance techniques, peer code reviews stand out as particularly effective. These reviews involve developers examining each other's code to identify issues, suggest improvements, and ensure adherence to coding standards. This article explores the benefits of peer reviews and provides practical guidelines for conducting effective code reviews to enhance code quality.</p>
<h3 id="heading-benefits-of-peer-code-reviews">Benefits of Peer Code Reviews:</h3>
<p><strong>Early Detection of Issues:</strong></p>
<p>One of the primary benefits of peer code reviews is the early detection of issues. By having multiple developers scrutinise the code, errors, bugs, and potential issues can be identified at an early stage. This proactive approach significantly reduces the likelihood of bugs reaching production, thereby minimising costs associated with bug fixing and maintenance.</p>
<p><strong>Knowledge Sharing and Learning:</strong></p>
<p>Peer code reviews naturally foster a culture of knowledge sharing and continuous learning among team members. During the review process, developers are exposed to different coding styles, techniques, and best practices. Reviewers can share their expertise, insights, and alternative approaches, enabling others to learn and grow. This collaborative environment promotes the exchange of ideas and encourages developers to stay updated with the latest industry trends and practices.</p>
<p><strong>Improved Code Readability:</strong></p>
<p>Peer reviews emphasise code readability and maintainability. Reviewers have the opportunity to suggest improvements to code structure, naming conventions, and documentation. By striving for clean and understandable code, the overall readability and maintainability of the codebase are improved. This, in turn, leads to easier debugging, refactoring, and long-term maintenance.</p>
<p><strong>Consistency and Adherence to Standards:</strong></p>
<p>Peer reviews play a vital role in enforcing coding standards and ensuring consistency across the codebase. Reviewers can identify deviations from established guidelines and suggest necessary modifications to align with coding standards. This practice leads to a more uniform and cohesive codebase, making it easier for developers to understand and collaborate on the project. Consistency in coding style and practices also contributes to better code integration and reduces potential conflicts during code merging.</p>
<p><strong>Enhanced Collaboration and Team Building:</strong></p>
<p>Peer reviews encourage collaboration and foster a sense of camaraderie within the development team. Developers come together to review each other's code, exchange ideas, discuss implementation approaches, and share their insights. This collaborative environment strengthens teamwork, improves communication, and builds trust among team members. It also fosters innovation and creative problem-solving as different perspectives and expertise are brought to the table.</p>
<h3 id="heading-tips-for-effective-peer-code-reviews">Tips for Effective Peer Code Reviews:</h3>
<p><strong>Define Clear Objectives:</strong></p>
<p>To ensure effective peer code reviews, it is crucial to establish clear objectives and goals. Clearly communicate the purpose of the code review process, specifying the aspects to focus on, such as functionality, performance, security, or readability. This helps reviewers understand the specific areas they need to scrutinise, allowing for more targeted feedback and suggestions.</p>
<p><strong>Encourage Constructive Feedback:</strong></p>
<p>Creating a positive and respectful environment for the code review process is essential. Encourage reviewers to provide specific and actionable feedback, focusing on the code itself rather than criticising the developer. Constructive criticism helps identify areas for improvement without demoralising the developer. Encourage reviewers not only to flag issues but also to highlight the positive aspects of the code.</p>
<p><strong>Keep Reviews Focused and Manageable:</strong></p>
<p>To ensure efficient and effective code reviews, it is essential to limit the scope of each review to a manageable size. Reviewing smaller chunks of code improves efficiency, allows reviewers to provide more detailed feedback, and makes the process less overwhelming. Avoid overwhelming reviewers with excessively large code submissions that may result in important issues being overlooked.</p>
<p><strong>Use Tools for Efficiency:</strong></p>
<p>Leverage code review tools that provide a structured framework for conducting reviews. These tools allow reviewers to leave comments directly on the code, track changes, and facilitate discussions. Popular code review tools include GitHub Pull Requests, GitLab Merge Requests, and Atlassian Crucible. Utilising these tools streamlines the review process, ensures transparency, and provides a central platform for collaboration and documentation.</p>
<p><strong>Rotate Reviewers:</strong></p>
<p>To encourage diverse perspectives and prevent blind spots, it is beneficial to rotate the responsibility of code reviewing among team members. Different reviewers bring fresh insights and experiences, which can lead to the identification of issues that may have been missed otherwise. Rotation also helps distribute the workload more evenly, prevents reviewer fatigue, and ensures that everyone benefits from the learning experience of being a reviewer.</p>
<p><strong>Follow Up on Feedback:</strong></p>
<p>To maximise the impact of peer code reviews, it is crucial to encourage developers to actively address the feedback received during the process. Engage in meaningful discussions with reviewers, clarify any doubts, and resolve any concerns raised. Regularly revisit previous feedback to track progress and ensure that suggested improvements are implemented. This iterative approach fosters continuous improvement and helps developers grow their skills and knowledge.</p>
<h3 id="heading-conclusion">Conclusion:</h3>
<p>Peer code reviews offer significant benefits in improving code quality throughout the software development lifecycle. By leveraging the expertise and knowledge of team members, code reviews promote collaboration, enhance code readability, and detect issues early on. Implementing effective peer review practices in software development workflows leads to more robust, maintainable, and high-quality codebases. Embrace peer reviews as an integral part of your development process, and watch your code quality soar to new heights. The combined effort and collaboration of team members will ultimately result in software that is more stable, efficient, and aligned with coding standards. It is important, however, to be mindful that code review is just one tool for improving code quality, and it should be used in combination with others.</p>
]]></content:encoded></item><item><title><![CDATA[C# Minimal API - the node js migration]]></title><description><![CDATA[Minimal API is a feature introduced in .NET 6 that simplifies the process of building APIs. This feature is designed to help developers write less code, reduce boilerplate code, and increase productivity. Minimal API provides a lightweight, streamlin...]]></description><link>https://ronaldkainda.blog/c-minimal-api-the-node-js-migration</link><guid isPermaLink="true">https://ronaldkainda.blog/c-minimal-api-the-node-js-migration</guid><category><![CDATA[C#]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[minimal-apis]]></category><category><![CDATA[.net 6.0]]></category><category><![CDATA[.net core vs nodejs]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Fri, 19 May 2023 13:35:11 GMT</pubDate><content:encoded><![CDATA[<p>Minimal API is a feature introduced in .NET 6 that simplifies the process of building APIs. This feature is designed to help developers write less code, reduce boilerplate code, and increase productivity. Minimal API provides a lightweight, streamlined syntax for creating APIs that are easy to understand, maintain, and extend.</p>
<p>In this article, I will explore the concept of minimal API in C# and look at how a similar approach is achieved in Node.js.</p>
<p><strong>What is Minimal API in C#?</strong></p>
<p>Minimal API is a way of building APIs that are designed to be lightweight and easy to use. Minimal API uses a feature introduced in C# 9 called top-level statements, which allows developers to write code without an explicit Program class or Main method.</p>
<p>Minimal API is based on the concept of convention over configuration, which means that developers can write less code by following a set of conventions. This approach reduces the need for boilerplate code, and it helps developers to focus on writing the core functionality of their APIs.</p>
<p>It also provides a simplified routing syntax that allows mapping endpoints to methods without the need for attributes or controllers. This approach makes it easy to build APIs without having to worry about the details of the routing system.</p>
<p><strong>How to use Minimal API?</strong></p>
<p>To use Minimal API, you need to install at least .NET 6 and create a new project. You can do this using the following steps:</p>
<ol>
<li><p>Install .NET 6: To install .NET 6, go to the official .NET website and download the latest version of the .NET SDK.</p>
</li>
<li><p>Create a new project: Once you have installed .NET 6, open the command prompt or terminal and run the following command:</p>
<p> <code>dotnet new web -o minapi</code></p>
<p> This command creates a new web project named "minapi" in the current directory.</p>
</li>
<li><p>Add a minimal API endpoint: Now, open the <code>Program.cs</code> file in the minapi project and replace the code with the following:</p>
</li>
</ol>
<pre><code class="lang-csharp">
WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
WebApplication app = builder.Build();

app.MapGet(<span class="hljs-string">"/"</span>, () =&gt; <span class="hljs-string">"Hello World!"</span>);
app.Run(<span class="hljs-string">"http://localhost:5000"</span>);
</code></pre>
<p>In this code, we are using the <code>WebApplication.CreateBuilder</code> method to create a new instance of the <code>WebApplicationBuilder</code> class. We call the <code>Build</code> method to create an instance of <code>WebApplication</code> Then we use the <code>MapGet</code> method to map a GET request to the root endpoint ("/") to a lambda expression that returns the string "<code>Hello World!</code>".</p>
<p>Finally, we use the <code>Run</code> method to start the web server and listen for incoming requests.</p>
<ol>
<li><p>Run the application: To run the application, open the command prompt or terminal and navigate to the "minapi" directory. Then run the following command:</p>
<p> <code>dotnet run</code></p>
<p> This command will start the web server and listen for incoming requests.</p>
</li>
<li><p>Test the endpoint: Now, open a web browser and navigate to <a target="_blank" href="http://localhost:5000/"><strong>http://localhost:5000/</strong></a>. You should see the message "Hello World!" displayed in the browser.</p>
</li>
</ol>
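<p>Going beyond hello world, the same terse style extends to route parameters and JSON responses. The endpoints and the <code>Order</code> record below are my own illustrative additions rather than part of the template:</p>
<pre><code class="lang-csharp">var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Route parameters are bound by name and type from the URL.
app.MapGet("/greet/{name}", (string name) =&gt; $"Hello {name}!");

// Objects returned from a handler are serialised to JSON automatically;
// the Results helpers give explicit control over status codes.
app.MapGet("/orders/{id:int}", (int id) =&gt;
    id &gt; 0
        ? Results.Ok(new Order(id, "Widget", 2))
        : Results.NotFound());

app.Run("http://localhost:5000");

// A record makes a lightweight response type.
record Order(int Id, string Product, int Quantity);
</code></pre>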
<p>While minimal APIs have only been around since .NET 6, one could argue that their design is driven by what has already been implemented in other languages and frameworks. Let's consider how a web server is set up in Node.js.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> http = <span class="hljs-built_in">require</span>(<span class="hljs-string">'http'</span>);

<span class="hljs-keyword">const</span> server = http.createServer(<span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  res.statusCode = <span class="hljs-number">200</span>;
  res.setHeader(<span class="hljs-string">'Content-Type'</span>, <span class="hljs-string">'text/plain'</span>);
  res.end(<span class="hljs-string">'Hello World\n'</span>);
});

server.listen(<span class="hljs-number">3000</span>, <span class="hljs-string">'127.0.0.1'</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Server running at http://127.0.0.1:3000/`</span>);
});
</code></pre>
<p>This server creates a basic HTTP server using the built-in Node.js <code>http</code> module. It listens on port <code>3000</code> of the <a target="_blank" href="http://localhost"><code>localhost</code></a> IP address (<code>127.0.0.1</code>). When a client makes a request to the server, the server responds with a simple <code>Hello World</code> message.</p>
<p>To run this server, save the code in a file with a <code>.js</code> extension (e.g. <code>server.js</code>), navigate to the file's directory in your terminal or command prompt, and run the command <code>node server.js</code>. This will start the server and print <code>Server running at</code> <a target="_blank" href="http://127.0.0.1:3000/"><code>http://127.0.0.1:3000/</code></a> to the console. You can then open a web browser and visit <a target="_blank" href="http://localhost:3000/"><code>http://localhost:3000/</code></a> to see the <code>Hello World</code> message. Note that you can only run this code if you have Node.js installed on your machine.</p>
<p>A lot of developers have always liked how simple creating a server in Node.js is. With only a few lines of code, you have your Node.js server up and running. In C#, the same result used to require a lot of boilerplate code and pipeline configuration. Minimal API has changed all this!</p>
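<p>For comparison, here is a from-memory sketch of how the same hello-world endpoint was typically wired up before .NET 6, using the classic Program/Startup split (trimmed for brevity; the details varied between template versions):</p>
<pre><code class="lang-csharp">using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Program
{
    // Bootstraps the generic host and hands control to Startup.
    public static void Main(string[] args) =&gt;
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =&gt; webBuilder.UseStartup&lt;Startup&gt;())
            .Build()
            .Run();
}

public class Startup
{
    // Service registrations for dependency injection go here.
    public void ConfigureServices(IServiceCollection services) { }

    // The HTTP request pipeline is configured here.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =&gt;
        {
            endpoints.MapGet("/", async context =&gt;
                await context.Response.WriteAsync("Hello World!"));
        });
    }
}
</code></pre>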
<p><strong>Conclusion</strong></p>
<p>Minimal API is a feature introduced in .NET 6 that simplifies the process of building APIs. It is designed to help developers write less code, reduce boilerplate code, and increase productivity. It provides a lightweight, streamlined syntax for creating APIs that are easy to understand, maintain, and extend.</p>
<p>In this article, I explored the concept of minimal API and looked at some examples to understand how it works. I also discussed how to use minimal API by creating a new project and adding a minimal API endpoint. C# developers now have a Node.js equivalent in terms of the simplicity of setting up a web service.</p>
]]></content:encoded></item><item><title><![CDATA[C# and record types]]></title><description><![CDATA[Records provide a way to create and work with structured data in a more efficient and organized manner. In this article, I will explore what C# records are and how they can be used to improve the design and functionality of your code.
Records were in...]]></description><link>https://ronaldkainda.blog/c-and-record-types</link><guid isPermaLink="true">https://ronaldkainda.blog/c-and-record-types</guid><category><![CDATA[.NET]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[.net 6.0]]></category><category><![CDATA[c# records]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Mon, 13 Mar 2023 18:15:08 GMT</pubDate><content:encoded><![CDATA[<p>Records provide a way to create and work with structured data in a more efficient and organized manner. In this article, I will explore what C# records are and how they can be used to improve the design and functionality of your code.</p>
<p>Records were introduced in C# 9.0 and provide a way to define immutable classes with value semantics. They are similar to classes but offer several advantages over traditional class definitions. Records can be reference types (class records) or value types (struct records).</p>
<p>Records are defined using the <code>record</code> keyword, followed by the name of the record and an optional list of positional parameters enclosed in parentheses. For example, the following code defines a record for a person with a name and age property:</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> record <span class="hljs-title">Person</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> Name, <span class="hljs-keyword">int</span> Age</span>)</span>;
</code></pre>
<p>This record definition creates a new class record with two properties, <code>Name</code> and <code>Age</code>. These properties are initialized when a new instance of the record is created and cannot be modified afterwards. The positional declaration above is roughly equivalent to the expanded form below (the compiler also generates a <code>Deconstruct</code> method and value-based equality members):</p>
<pre><code class="lang-csharp">public record Person
{
    public string Name { get; init; }
    public int Age { get; init; }

    public Person(string name, int age)
    {
        Name = name;
        Age = age;
    }
}
</code></pre>
<p>Just like classes, you can define records with mutable properties by declaring a <code>set</code> accessor instead of <code>init</code>.</p>
<p>Records offer several advantages over traditional class definitions:</p>
<ol>
<li><p>Simplified syntax: Records provide a more concise syntax for defining classes, making it easier to create and maintain your code. This is achieved by generating a lot of boilerplate code automatically, reducing the amount of code that you need to write. Here are some examples of how C# records simplify syntax:</p>
<ol>
<li><p>Constructor: With traditional C# classes, you need to write a constructor that assigns each property, which can be time-consuming and verbose. With records, a suitable constructor is generated automatically, which means that you don't need to define it yourself. This simplifies the syntax and reduces the amount of code you need to write.</p>
</li>
<li><p>Properties: In a traditional C# class, you need to define properties for each field, which can also be time-consuming and verbose. With records, you can define properties using a simplified syntax, like this:</p>
<pre><code class="lang-csharp"> <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Name { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">init</span>; }
</code></pre>
<p> This syntax defines a read-only property called <code>Name</code> that can be set during object initialization.</p>
</li>
<li><p>Equality: C# records generate an equality method automatically, which means that you don't need to write code to compare object values. The generated equality method compares all properties of the record, simplifying the syntax and reducing the amount of code you need to write. Remember that two class instances are considered equal only if they share the same reference in memory; to achieve equality based on the individual properties of two instances with different references, you would need to override the <code>Equals</code> method and implement the <code>IEquatable&lt;T&gt;</code> interface.</p>
</li>
<li><p>Built-in formatting for display: C# records generate a <code>ToString()</code> method that provides a default string representation of the record. This is useful for debugging and testing, and it eliminates the need to write a custom <code>ToString()</code> method.</p>
</li>
</ol>
</li>
</ol>
<p>    Overall, C# records simplify syntax by generating a lot of boilerplate code automatically. This reduces the amount of code you need to write, making your code easier to read, write, and maintain.</p>
<ol start="2">
<li><p>Immutable by default: In C#, immutability refers to the property of an object whose state cannot be changed after it has been initialized. In other words, once an object is created, its values cannot be modified.</p>
<p> Immutability is important because it helps to prevent bugs that can occur when objects are modified unexpectedly. For example, if an object is modified by one part of your code, but not updated properly in another part of your code, you can end up with unexpected behaviour and hard-to-debug errors.</p>
<p> In C#, there are several ways to create immutable objects. One way is to use the <code>readonly</code> keyword to declare fields that can only be set once, either in the constructor or in the field declaration itself. Another way is to use immutable collections, which are collections that cannot be modified after they are created.</p>
<p> In addition to preventing bugs, immutability can also improve the performance of your code by allowing the compiler to optimize your code more effectively.</p>
<p> Immutable objects can be safely shared between threads without the need for locking or synchronization, which can also help to improve performance.</p>
<p> Overall, immutability is an important concept in C# and in programming in general. By designing your code with immutability in mind, you can create more robust, maintainable, and performant applications.</p>
<p> Records are immutable by default, which means that their values cannot be modified once they are created. This makes them safer to use in multi-threaded environments and reduces the risk of bugs caused by unintended modifications.</p>
</li>
<li><p>Value semantics: Records are designed to have value semantics, which means that they are compared based on their property values rather than their memory addresses. This makes it easier to compare and work with records in your code.</p>
<p> Value semantics in C# refers to the way that values are compared and passed around in memory. When a value type is used, its value is copied and passed around in memory. This means that when two variables of a value type are compared, they are compared based on their values, rather than their memory locations.</p>
<p> For example, consider the following code:</p>
<pre><code class="lang-csharp"> <span class="hljs-keyword">int</span> a = <span class="hljs-number">5</span>;
 <span class="hljs-keyword">int</span> b = <span class="hljs-number">5</span>;
 <span class="hljs-keyword">bool</span> areEqual = a == b; <span class="hljs-comment">// true</span>
</code></pre>
<p> In this case, the values of <code>a</code> and <code>b</code> are compared, and they are considered equal because they have the same value.</p>
<p> Value semantics are different from reference semantics, which are used with reference types. When a reference type is used, a reference to the object is passed around in memory, rather than the actual object itself. This means that when two variables of a reference type are compared, they are compared based on their memory locations, rather than their values.</p>
<p> For example, consider the following code:</p>
<pre><code class="lang-csharp"> <span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">PersonClass</span>
 {
     <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> FirstName { <span class="hljs-keyword">get</span>; }
     <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> LastName { <span class="hljs-keyword">get</span>; }
     <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">PersonClass</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> firstName,<span class="hljs-keyword">string</span> lastName</span>)</span>
     {
         FirstName = firstName;
         LastName = lastName;
     }    
 }

 PersonClass p1= <span class="hljs-keyword">new</span> PersonClass(<span class="hljs-string">"John"</span>, <span class="hljs-string">"Doe"</span>);
 PersonClass p2= <span class="hljs-keyword">new</span> PersonClass(<span class="hljs-string">"John"</span>, <span class="hljs-string">"Doe"</span>);
 <span class="hljs-keyword">bool</span> same = p1==p2; <span class="hljs-comment">//false</span>
</code></pre>
<p> In this case, <code>p1</code> and <code>p2</code> are compared by reference, so they are not considered equal even though their property values are identical, because they point to different memory locations.</p>
<p> Value semantics are important in C# because they help to ensure that values are compared and passed around correctly in memory. This can help to prevent bugs and improve the overall performance of your code.</p>
</li>
<li><p>Pattern matching: Records can be used with pattern matching, which allows you to write more expressive and concise code. For example, you can use pattern matching to check if a record has a certain value or property, as the sketch after this list shows.</p>
</li>
</ol>
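<p>The following self-contained sketch pulls these advantages together (it re-declares the <code>Person</code> record so it runs standalone): value-based equality, non-destructive mutation with a <code>with</code> expression, and property pattern matching.</p>
<pre><code class="lang-csharp">using System;

public record Person(string Name, int Age);

public static class Program
{
    public static void Main()
    {
        var p1 = new Person("John", 30);
        var p2 = new Person("John", 30);

        // Value semantics: equal because all properties match,
        // even though p1 and p2 are different references.
        Console.WriteLine(p1 == p2); // True

        // Immutability: a with-expression creates a modified copy
        // instead of mutating the original.
        var older = p1 with { Age = 31 };
        Console.WriteLine(older); // Person { Name = John, Age = 31 }

        // Property pattern matching with relational patterns.
        string category = p1 switch
        {
            { Age: &lt; 18 } =&gt; "minor",
            { Age: &gt;= 65 } =&gt; "senior",
            _ =&gt; "adult"
        };
        Console.WriteLine(category); // adult
    }
}
</code></pre>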
<p>To use records in your C# code, you first need to define a record using the <code>record</code> keyword. Once you have defined a record, you can create new instances of the record using the same syntax as a regular class:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> person = <span class="hljs-keyword">new</span> Person(<span class="hljs-string">"John Doe"</span>, <span class="hljs-number">30</span>);
</code></pre>
<p>You can access the properties of a record using the dot notation:</p>
<pre><code class="lang-csharp">Console.WriteLine(person.Name); <span class="hljs-comment">// Output: John Doe</span>
Console.WriteLine(person.Age); <span class="hljs-comment">// Output: 30</span>
</code></pre>
<p>Records can be used with other C# features, such as inheritance, interfaces, and generics. For example, you can define an interface that requires a record with certain properties:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">interface</span> <span class="hljs-title">IPerson</span>
{
    <span class="hljs-keyword">string</span> Name { <span class="hljs-keyword">get</span>; }
    <span class="hljs-keyword">int</span> Age { <span class="hljs-keyword">get</span>; }
}
</code></pre>
<p>You can then use this interface to create a method that accepts any record that implements the IPerson interface:</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">PrintPerson</span>(<span class="hljs-params">IPerson person</span>)</span>
{
    Console.WriteLine(<span class="hljs-string">$"Name: <span class="hljs-subst">{person.Name}</span>, Age: <span class="hljs-subst">{person.Age}</span>"</span>);
}
</code></pre>
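<p>Note that for this to work, the record must declare that it implements the interface. With the positional record from earlier, that is a one-line change; the generated <code>get</code>/<code>init</code> properties satisfy the interface's getters:</p>
<pre><code class="lang-csharp">public record Person(string Name, int Age) : IPerson;
</code></pre>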
<p>In summary, records are a powerful feature in C# that provide a more efficient and organized way to work with structured data. They offer several advantages over traditional class definitions including simplified syntax, immutability by default, value semantics, and pattern matching.</p>
<p>By using records in your C# code, you can create more expressive and concise code that is easier to maintain and less prone to bugs. Whether you are working on a small project or a large-scale application, records can help you improve the design and functionality of your code.</p>
]]></content:encoded></item><item><title><![CDATA[Why I write unit tests]]></title><description><![CDATA[Unit testing is a software testing technique that involves testing individual units of source code to ensure that they work as expected. Unit tests are automated tests that execute small pieces of code and validate their behaviour. These tests help d...]]></description><link>https://ronaldkainda.blog/why-i-write-unit-tests</link><guid isPermaLink="true">https://ronaldkainda.blog/why-i-write-unit-tests</guid><category><![CDATA[C#]]></category><category><![CDATA[Testing]]></category><category><![CDATA[test driven development]]></category><category><![CDATA[test-automation]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Mon, 06 Mar 2023 05:13:32 GMT</pubDate><content:encoded><![CDATA[<p>Unit testing is a software testing technique that involves testing individual units of source code to ensure that they work as expected. Unit tests are automated tests that execute small pieces of code and validate their behaviour. These tests help developers catch defects early in the development cycle before they become more difficult and costly to fix.</p>
<p>There are two main approaches to writing unit tests: code-first and test-first. In code-first, a developer writes the functional component of the application before writing unit tests for that component. This approach is popular with inexperienced developers, and it is common in legacy code that has no unit tests in place. In both cases, writing unit tests as an afterthought is a difficult task, because it usually results in rewriting (refactoring) some of the code under test.</p>
<p>The test-first approach is where unit tests are written before the actual functional code. Inexperienced developers struggle with this approach as it requires a different way of thinking: writing unit tests for code that does not yet exist. With the test-first approach, no code needs refactoring in order to be testable. Unit tests written in advance also provide guardrails for functional code: they force a developer to write code that is loosely coupled, maintainable and modular. This is, however, not a substitute for well-thought-out design patterns and clean code principles.</p>
<p>In my view, the benefits of unit testing cannot be overstated. There are arguments against unit testing mostly centred around resource management: time spent writing unit tests can be spent writing functional code. However, the opposite is true: unit testing saves you a ton of time in the long run! Some of the benefits of unit testing include:</p>
<ol>
<li><p>Reduced bugs: One of the primary benefits of unit testing is the ability to detect and fix bugs early in the development process. By testing individual units of code in isolation, developers can identify issues before they are integrated into the larger codebase, making them easier to resolve. Imagine refactoring code that has no unit tests: for code with any meaningful complexity, it is hard to be sure that refactoring has not introduced any bugs. With unit tests in place, you only have to make sure that the tests pass before and after refactoring.</p>
</li>
<li><p>Better code quality: Unit testing forces developers to write better code by ensuring that each unit of code not only behaves as expected but is also structured in a testable way. This leads to higher-quality code that is easier to maintain, modify, and extend. As pointed out earlier, a unit test acts as a guardrail for your functional code not just functionally but structurally as well. It will force the developer to write smaller, single-purpose methods and use abstractions rather than relying on implementations.</p>
</li>
<li><p>Faster development: Unit testing helps speed up the development process by identifying bugs early and preventing them from becoming more significant issues later on. This allows developers to focus on writing new code and improving existing code rather than fixing bugs. Unit tests also mean that less time is spent on manually retesting existing code when new code is introduced.</p>
</li>
<li><p>Simplified debugging: When unit tests fail, developers can easily pinpoint the location of the bug and fix it quickly. This helps simplify the debugging process and makes it easier to maintain the codebase.</p>
</li>
</ol>
<p>Unit tests are not a silver bullet for bad coding practices, nor do they eliminate bugs from code bases. They are an effective tool when properly employed with the right mindset. When writing unit tests becomes about code coverage numbers, they are not being employed effectively. Let's look at some of the downsides of unit tests:</p>
<ol>
<li><p>Over-reliance on unit tests: It's easy to fall into the trap of relying solely on unit tests to catch bugs, leading to a false sense of security. Unit tests can only test individual units of code, and bugs can still exist in the larger codebase that are not caught by unit tests. This is why integration tests must be part of a test suite for any business application.</p>
</li>
<li><p>Time-consuming: Writing unit tests can be time-consuming, especially for large codebases. Developers must strike a balance between writing enough tests to ensure adequate code coverage while still delivering code on time. This is especially true in legacy code bases. In greenfield applications, it is easier to maintain good test coverage with less effort compared to legacy applications.</p>
</li>
<li><p>False positives and negatives: Sometimes, unit tests can fail for reasons other than bugs, such as environmental issues or test setup problems. These false alarms can lead to wasted time and effort and make it harder to identify real bugs. Sufficient effort needs to be spent remediating the causes of such failures to ensure that tests are reliable and consistent.</p>
</li>
</ol>
<p>I want to end this article with an example of a unit test using the test-first approach. This example is trivial compared to what you will encounter in practice. However, it demonstrates the thought process when writing a unit test before actual code.</p>
<p>Let's say you are asked to implement a class that calculates the sum of two numbers. The first question will be what name makes sense for this class? I will call it <code>Calculator</code>. The second question is what methods will the class have? Since we have only been given one functional requirement, i.e. calculate the sum of two numbers, I will have a single method called <code>Add</code>. I expect this method to take 2 parameters i.e. the numbers to be added and return the result. There are additional questions you must ask yourself here such as what is the number type (int, double, decimal etc.), how will parameters be passed, what is the return type etc. Below is an example of how we can write a test for this imaginary class and method using the NUnit testing framework:</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">TestFixture</span>]
<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">CalculatorTests</span>
{
    [<span class="hljs-meta">Test</span>]
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">TestAddition</span>(<span class="hljs-params"></span>)</span>
    {
        Calculator calculator = <span class="hljs-keyword">new</span> Calculator();
        <span class="hljs-keyword">long</span> result = calculator.Add(<span class="hljs-number">2</span>, <span class="hljs-number">3</span>);
        Assert.AreEqual(<span class="hljs-number">5</span>, result); <span class="hljs-comment">// NUnit takes the expected value first, then the actual</span>
    }
}
</code></pre>
<p>In this example, we're creating a new instance of the <code>Calculator</code> (class under test) class and calling its <code>Add</code> method with two arguments. We're then using the Assert.AreEqual method to check that the result of the <code>Add</code> method is equal to the expected value of 5. We do not care how <code>Add</code> is performing the actual addition of the two numbers. All we care about is that given inputs of 2 and 3, we should get 5 as the output. We can then go ahead and create our <code>Calculator</code> class with the <code>Add</code> method. When our unit test starts passing then we know our work is done.</p>
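<p>One minimal implementation that would make this test pass might look like the sketch below; how <code>Add</code> works internally is irrelevant to the test, as long as it returns the correct sum:</p>
<pre><code class="lang-csharp">public class Calculator
{
    // Satisfies the behaviour pinned down by the test above.
    public long Add(long a, long b)
    {
        return a + b;
    }
}
</code></pre>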
<p>In conclusion, unit testing is a critical aspect of software development that helps catch bugs early and improve code quality. In the long term, it speeds up development by enabling developers to only focus on new functionality and not manually retesting existing code. However, developers should be aware of the pitfalls of unit testing, such as false positives/negatives and over-reliance on unit tests, to ensure that they are getting the most out of their testing efforts.</p>
]]></content:encoded></item><item><title><![CDATA[Designing Resilient Systems]]></title><description><![CDATA[In our interconnected and rapidly evolving world, organizations face a range of threats that can significantly impact their operations, from cyberattacks to natural disasters. To minimize losses and maintain operations, building resilient systems has...]]></description><link>https://ronaldkainda.blog/designing-resilient-systems</link><guid isPermaLink="true">https://ronaldkainda.blog/designing-resilient-systems</guid><category><![CDATA[Resilience]]></category><category><![CDATA[Disaster recovery]]></category><category><![CDATA[Redundancy]]></category><category><![CDATA[scalability]]></category><category><![CDATA[high availability]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Sun, 05 Mar 2023 04:53:21 GMT</pubDate><content:encoded><![CDATA[<p>In our interconnected and rapidly evolving world, organizations face a range of threats that can significantly impact their operations, from cyberattacks to natural disasters. To minimize losses and maintain operations, building resilient systems has become increasingly crucial. Resilient systems are designed to endure disruptions and swiftly recover from them while minimizing damages. This article will examine the key features of resilient systems, their benefits, and their components, as well as the challenges involved in building them. It will also offer actionable steps for organizations to create and sustain resilient systems, ensuring uninterrupted operations and reputational resilience in the event of disruptions.</p>
<p>Resilient systems are those that are designed and developed to withstand and rapidly recover from disruptions, failures, or unexpected events while maintaining continuity of operations and minimizing damages. Such systems are characterized by their ability to adapt to changing conditions, anticipate potential risks, and leverage redundancy and diverse resources to mitigate the impacts of disruptions. Resilient systems can be found in various domains, such as critical infrastructure, supply chains, healthcare, and cybersecurity, and are essential to ensure the sustainability and stability of operations in the face of uncertainty and volatility.</p>
<p>Resilient systems play a crucial role in ensuring the sustainability and stability of operations in the face of disruptions and uncertainties. They provide organizations with the ability to rapidly adapt to changing conditions, anticipate and mitigate risks, and maintain continuity of critical operations. Building resilient systems can lead to reduced downtime, losses, and reputational damage in the event of disruptions. Additionally, resilient systems can improve stakeholder confidence, enhance operational efficiency, and support long-term growth and competitiveness. In short, investing in resilient systems is vital for organizations that seek to thrive in today's volatile and interconnected environment.</p>
<p>Resilient systems should have the following elements:</p>
<ul>
<li><p><strong>Availability</strong>: Availability refers to the ability of a system to remain operational and accessible to users. The people, processes, technology, data, and facilities all contribute to ensuring availability. Effective management of these elements can help ensure that the system remains available to users even during disruptions.</p>
</li>
<li><p><strong>Scalability</strong>: Scalability refers to the ability of a system to handle increasing workloads and data volumes. The technology and facilities elements are critical to ensuring scalability. The technology must be designed to scale up or down quickly to handle changes in workload, while the facilities must have adequate resources to support the increased workload.</p>
</li>
<li><p><strong>Maintainability</strong>: Maintainability refers to the ease and speed with which a system can be repaired and restored to normal operation after a disruption. The processes, technology, and facilities elements all play a critical role in maintainability. Effective processes can reduce repair times, while resilient technology and facilities can help ensure that the system can be restored quickly.</p>
</li>
<li><p><strong>Recoverability</strong>: Recoverability refers to the ability of a system to recover from disruption and return to normal operation. The people, processes, technology, data, and facilities elements are all critical to ensuring recoverability. Effective management of these elements can help ensure that the system can recover quickly and minimise the impact of disruptions.</p>
</li>
<li><p><strong>Security</strong>: Security refers to the protection of a system from unauthorized access, data breaches, and other malicious activities. The people, processes, technology, data, and facilities elements are all critical to ensuring security. Effective management of these elements can help ensure that the system remains secure even during disruptions or cyber-attacks.</p>
</li>
</ul>
<p>Several design patterns can be used to ensure a system has some or all of the above elements of resilience:</p>
<ul>
<li><p><strong>Circuit Breaker Pattern</strong>: The Circuit Breaker pattern helps prevent cascading failures in a distributed system. In a distributed system, failures in one component can cause failures in other components, leading to a cascade of failures that can bring down the entire system. The Circuit Breaker pattern works by monitoring requests to a particular service, and if the number of failures exceeds a certain threshold, it breaks the circuit and stops sending requests to that service. This approach can help isolate failures and prevent them from spreading to other parts of the system, improving overall resilience. The Circuit Breaker pattern is often used in conjunction with other techniques such as load balancing and auto-scaling to improve system availability and scalability. A minimal code sketch of this pattern follows this list.</p>
</li>
<li><p><strong>Load Balancing and Auto-Scaling</strong>: Load balancing and auto-scaling are techniques used to distribute workloads across multiple instances and automatically adjust resources in response to changes in demand. Load balancing involves distributing incoming traffic across multiple instances to ensure that no single instance is overloaded, while auto-scaling involves automatically adding or removing instances in response to changes in demand. By distributing workloads across multiple instances, organizations can reduce the risk of overloading any one instance, and by auto-scaling, organizations can ensure that resources are always available to handle peak demand. This approach can help maintain system availability and scalability, even during periods of high demand.</p>
</li>
<li><p><strong>Backup and Disaster Recovery Strategies</strong>: Backup and disaster recovery strategies involve regularly backing up data and storing it in a safe location, and developing plans to quickly recover from a disaster. By regularly backing up data, organizations can minimize the risk of data loss, and by developing disaster recovery plans, organizations can quickly restore critical systems in the event of a disaster. Backup and disaster recovery strategies are critical components of any resilient system, as they help maintain system recoverability and availability.</p>
</li>
<li><p><strong>Redundancy and Fault Tolerance</strong>: Creating redundant systems is one of the most important best practices for designing resilient systems. Redundancy can involve deploying multiple instances of critical components, using distributed databases, or implementing failover mechanisms that can automatically switch to backup systems. This approach ensures that even if one or more components fail, the system can continue to operate without any disruption.</p>
<p>  In addition to redundancy, fault tolerance is also crucial for resilient system design. Fault tolerance refers to the ability of a system to continue operating even if one or more components experience failure. To increase fault tolerance, organizations can use techniques such as isolation, encapsulation, and graceful degradation. Isolation involves separating different components of a system so that a failure in one component does not affect other components. Encapsulation involves shielding components from each other to prevent failures from spreading. Graceful degradation involves designing components so that they can continue to function at a reduced capacity even if other components fail.</p>
</li>
<li><p><strong>Monitoring and Logging</strong>: Monitoring and logging are essential for identifying and diagnosing problems. There are various tools and services that enable organizations to monitor and log various aspects of their systems, including system health, performance, and security. By monitoring and logging key metrics, organizations can identify potential problems early and take corrective action before they become more serious. Monitoring and logging solutions can also provide real-time visibility into system performance, making it easier to detect and respond to issues as they arise. This approach can help improve system maintainability and recoverability.</p>
</li>
</ul>
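<p>To make the Circuit Breaker pattern concrete, here is a minimal, illustrative C# sketch. It is deliberately simplified (not thread-safe, with only a single retry window rather than a full half-open state); production systems typically reach for a battle-tested library such as Polly instead:</p>
<pre><code class="lang-csharp">using System;

public class CircuitBreaker
{
    private readonly int _failureThreshold;
    private readonly TimeSpan _openDuration;
    private int _failureCount;
    private DateTime _openedAt;
    private bool _isOpen;

    public CircuitBreaker(int failureThreshold, TimeSpan openDuration)
    {
        _failureThreshold = failureThreshold;
        _openDuration = openDuration;
    }

    public T Execute&lt;T&gt;(Func&lt;T&gt; action)
    {
        if (_isOpen)
        {
            // Fail fast while the circuit is open.
            if (DateTime.UtcNow - _openedAt &lt; _openDuration)
                throw new InvalidOperationException("Circuit is open; failing fast.");

            // Cool-down elapsed: close the circuit and allow a trial call.
            _isOpen = false;
        }

        try
        {
            T result = action();
            _failureCount = 0; // a success resets the failure count
            return result;
        }
        catch
        {
            // Too many consecutive failures: open the circuit.
            if (++_failureCount &gt;= _failureThreshold)
            {
                _isOpen = true;
                _openedAt = DateTime.UtcNow;
            }
            throw;
        }
    }
}
</code></pre>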
<p>By incorporating these specific techniques into the system design process, organizations can further improve the resilience of their systems. Each of these techniques addresses a specific aspect of resilience, such as availability, scalability, recoverability, and maintainability, and can help organizations develop systems that are better able to withstand disruptions and maintain continuity of critical operations. Overall, a comprehensive approach to designing resilient systems should include a combination of these techniques, tailored to the specific needs and requirements of the organization and its applications.</p>
<p>Specific practices are required for designing resilient systems. Moreover, even well-designed resilient systems require processes and practices to be in place to work effectively. Some of these processes and practices include:</p>
<ul>
<li><p><strong>Automated Testing and Deployment</strong>: Automated testing and deployment can help organizations identify and address potential problems before they become more serious. By automating testing and deployment processes, organizations can catch issues early in the development cycle and quickly make changes to address them. This can help reduce the risk of outages caused by software defects or configuration issues.</p>
<p>  Automated testing and deployment processes can include tools such as continuous integration and continuous delivery (CI/CD). CI/CD allows organizations to automatically test and deploy code changes to production environments. By automating these processes, organizations can quickly identify and address issues, reducing the risk of outages and minimizing downtime.</p>
</li>
<li><p><strong>Incident Response Planning</strong>: Incident response planning is essential for quickly and effectively responding to system disruptions. By developing a comprehensive incident response plan, organizations can minimize the impact of system disruptions and quickly restore critical systems.</p>
<p>  An incident response plan should include clear procedures for identifying, diagnosing, and responding to incidents. The plan should also identify the roles and responsibilities of each team member involved in the response effort. Additionally, the plan should include communication procedures to ensure that all team members are aware of the incident and can quickly respond.</p>
</li>
<li><p><strong>Communication and Collaboration:</strong> Communication and collaboration are critical for effective incident response and maintaining resilient systems. By fostering a culture of open communication and collaboration, organizations can ensure that team members can quickly share information, identify potential problems, and work together to address issues.</p>
<p>  To improve communication and collaboration, organizations can use tools such as incident management systems and performance monitoring dashboards. Incident management systems allow teams to track incidents and collaborate on incident response efforts. Performance monitoring dashboards provide real-time visibility into system performance, allowing teams to quickly identify and address potential issues.</p>
</li>
</ul>
<p>Overall, following these best practices can help organizations design and maintain resilient systems that can withstand disruptions and maintain continuity of critical operations. By creating redundancy, increasing fault tolerance, automating processes, planning for incidents, and improving communication and collaboration, organizations can build systems that are better able to adapt to changing conditions and provide reliable and consistent services to users.</p>
]]></content:encoded></item><item><title><![CDATA[Tightly coupled code]]></title><description><![CDATA[Tightly coupled code is a common issue in software development, where modules or components are so dependent on each other such that a change in one requires a change in the other, making it challenging to modify or maintain code. In C#, tight coupli...]]></description><link>https://ronaldkainda.blog/tightly-coupled-code</link><guid isPermaLink="true">https://ronaldkainda.blog/tightly-coupled-code</guid><category><![CDATA[C#]]></category><category><![CDATA[test driven development]]></category><category><![CDATA[clean code]]></category><category><![CDATA[dependency injection]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Sun, 05 Mar 2023 04:13:43 GMT</pubDate><content:encoded><![CDATA[<p>Tightly coupled code is a common issue in software development, where modules or components are so dependent on each other such that a change in one requires a change in the other, making it challenging to modify or maintain code. In C#, tight coupling often arises when classes and methods are too intertwined, making it difficult to make changes without impacting other parts of the code. In this article, I will explore the concept of tightly coupled code and provide C# examples.</p>
<p>Consider a simple example where we have a class called <code>User</code> and another class called <code>Authenticator</code>. The <code>Authenticator</code> class is responsible for authenticating users and relies heavily on the <code>User</code> class to do so. Here is an example of how these two classes might be implemented in a tightly coupled way:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">User</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Username { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Password { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Authenticator</span>
{
    <span class="hljs-keyword">private</span> User _user;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">Authenticator</span>(<span class="hljs-params"></span>)</span>
    {
        _user = <span class="hljs-keyword">new</span> User { Username = <span class="hljs-string">"admin"</span>, Password = <span class="hljs-string">"password"</span> };
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">bool</span> <span class="hljs-title">Authenticate</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> username, <span class="hljs-keyword">string</span> password</span>)</span>
    {
        <span class="hljs-keyword">if</span> (username == _user.Username &amp;&amp; password == _user.Password)
        {
            <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
        }

        <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
    }
}
</code></pre>
<p>In this example, the <code>Authenticator</code> class creates an instance of the <code>User</code> class and uses it to authenticate users. This creates tight coupling between the two classes because the <code>Authenticator</code> class is dependent on the <code>User</code> class. Let's try to test this code. A test might look like below code:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Tests</span>
    {
        [<span class="hljs-meta">Test</span>]
        <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">WHenPasswordDoesNotMatchResultIsFalse</span>(<span class="hljs-params"></span>)</span>
        {
            Authenticator auth = <span class="hljs-keyword">new</span>();
            <span class="hljs-keyword">bool</span> authResult = auth.Authenticate(<span class="hljs-string">"admin"</span>, <span class="hljs-string">"password1"</span>);
            authResult.Should().Be(<span class="hljs-literal">false</span>);
        }
    }
</code></pre>
<p>Any problems with the above test? The test passes fine but what happens when another developer updates the <code>Authenticator</code> class to create a user whose password is now <em>password1</em>? The test above will need updating to match the change in <code>Authenticator</code> class. Let's say we had a test for a positive case of a username and a password that matches, and negative cases for either username or password not matching. That's 3 tests already where you need to change the code.</p>
<p>Let's improve this a bit to avoid such changes.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">User</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Username { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Password { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Authenticator</span>
{
    <span class="hljs-keyword">private</span> User _user;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">Authenticator</span>(<span class="hljs-params">User user</span>)</span>
    {
        _user = user;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">bool</span> <span class="hljs-title">Authenticate</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> username, <span class="hljs-keyword">string</span> password</span>)</span>
    {
        <span class="hljs-keyword">if</span> (username == _user.Username &amp;&amp; password == _user.Password)
        {
            <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
        }
        <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
    }
}
</code></pre>
<p>This is better! Our tests now do not rely on a user object created in the <code>Authenticator</code> class. We can create user objects and just inject them into <code>Authenticator</code>. However, <code>Authenticator</code> is still dependent on the <code>User</code> class. For each User to be authenticated, we need a new instance of <code>Authenticator</code>.</p>
<p>To fully decouple the <code>Authenticator</code> and <code>User</code> classes, and avoid creating a new instance of <code>Authenticator</code> each time a user needs authenticating, we can use another object that is responsible for creating a <code>User</code> object. In fact, in a real application, users will be stored somewhere where a lookup can be performed when there is a login attempt. Let's look at the code snippet below.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">User</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Username { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Password { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">interface</span> <span class="hljs-title">IUserRepository</span>
{
    <span class="hljs-function">User <span class="hljs-title">GetUser</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> username</span>)</span>;
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Authenticator</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> IUserRepository _userRepository;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">Authenticator</span>(<span class="hljs-params">IUserRepository userRepository</span>)</span>
    {
        _userRepository = userRepository;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">bool</span> <span class="hljs-title">Authenticate</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> username, <span class="hljs-keyword">string</span> password</span>)</span>
    {
        User user = _userRepository.GetUser(username);

        <span class="hljs-keyword">if</span> (user != <span class="hljs-literal">null</span> &amp;&amp; password == user.Password)
        {
            <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
        }

        <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
    }
}
</code></pre>
<p>In this example, we define an interface called <code>IUserRepository</code> that specifies the methods for getting a <code>User</code> object. We then modify the <code>Authenticator</code> class to accept an instance of this interface via its constructor. This enables us to inject a different implementation of the <code>IUserRepository</code> interface, depending on our needs. As a start, we may have users in a database for the actual application but we may also have users in a file for our unit tests. We can then implement a <code>FileUserRepository</code> for our tests and a <code>DbUserRepository</code> for actual application code.</p>
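<p>To illustrate the payoff, here is a hypothetical in-memory implementation (the <code>InMemoryUserRepository</code> name is mine, not from any library) together with a test that exercises <code>Authenticator</code> without touching a database:</p>
<pre><code class="lang-csharp">using System.Collections.Generic;
using FluentAssertions;
using NUnit.Framework;

// A hypothetical in-memory repository, convenient for unit tests.
public class InMemoryUserRepository : IUserRepository
{
    private readonly Dictionary&lt;string, User&gt; _users = new();

    public void Add(User user) =&gt; _users[user.Username] = user;

    public User GetUser(string username) =&gt;
        _users.TryGetValue(username, out var user) ? user : null;
}

[TestFixture]
public class AuthenticatorTests
{
    [Test]
    public void WhenPasswordDoesNotMatchResultIsFalse()
    {
        var repository = new InMemoryUserRepository();
        repository.Add(new User { Username = "admin", Password = "password" });

        var auth = new Authenticator(repository);

        auth.Authenticate("admin", "password1").Should().Be(false);
    }
}
</code></pre>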
<p>You may also argue that you may want to abstract <code>User</code> class by creating an <code>IUser</code> interface that can be implemented by classes representing different types of users.</p>
<p>Developers sometimes write tightly coupled code because it feels easier at the beginning. Loosely coupled code requires thinking and planning upfront. However, tightly coupled code quickly becomes difficult to maintain, as a change in one place may result in changes in many other places.</p>
<p>One practice that forces developers to put in the effort to implement loosely coupled code is Test-Driven Development. Following the principle of the path of least resistance, a developer can write unit tests upfront without being constrained by the structure of the eventual implementation. When the actual code is developed, that is where the effort is required: aligning the implementation with the unit tests.</p>
]]></content:encoded></item><item><title><![CDATA[To await or not - C# and asynchronous programming]]></title><description><![CDATA[Since the introduction of async/await in C# 5 (2012), new developers still struggle to understand how it works and when to use it. I have conducted numerous interviews in which even experienced developers have struggled to explain how async/await wor...]]></description><link>https://ronaldkainda.blog/to-await-or-not-c-and-asynchronous-programming</link><guid isPermaLink="true">https://ronaldkainda.blog/to-await-or-not-c-and-asynchronous-programming</guid><category><![CDATA[C#]]></category><category><![CDATA[async]]></category><category><![CDATA[.NET]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[Microsoft]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Sun, 26 Feb 2023 06:11:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4T5MTKMrjZg/upload/60125ad45b698696cc4a2b9818c4598e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Since the introduction of <em>async/await</em> in C# 5 (2012), new developers still struggle to understand how it works and when to use it. I have conducted numerous interviews in which even experienced developers have struggled to explain how <em>async/await</em> works. In this article, I will try to explain what is <em>async/await</em>, how it works, how to use it and what to avoid when using it.</p>
<p>In C#, the <em>await</em> keyword is used to asynchronously wait for a task to complete. When an asynchronous method is called, it returns a task that represents the ongoing work. When the <em>await</em> keyword is encountered, control returns to the calling method while the awaited task continues executing. Once the <em>awaited</em> task is complete, the calling method continues with the code following the <em>awaited</em> method call.</p>
<p>On the face of it, it seems as though a new thread is created to run the <em>awaited</em> method; after all, how can control return to the caller if the same thread is being used? This misconception catches both new and experienced developers. One critical idea to understand is that <em>async/await</em> is aimed at I/O tasks. In synchronous programming, the calling thread is blocked and waits for the result of the I/O operation. Using <em>await</em> means that the calling thread does not have to be blocked and can be used for other work while waiting for the I/O result. Under the hood, a state machine is created that allows execution to continue after the <em>awaited</em> task is complete. I will not go into details about the state machine, but you can read <a target="_blank" href="https://devblogs.microsoft.com/premier-developer/dissecting-the-async-methods-in-c/">a good article here</a>.</p>
<p><em>Async/await</em> provides a cleaner way of writing asynchronous code. Before <em>async/await</em>, C# developers had other ways of achieving asynchronous programming, including the background worker, raw threads and the Task Parallel Library (TPL). The <em>async</em> keyword is used to mark a method as asynchronous. A method marked as <em>async</em> should return a <code>Task</code> or <code>Task&lt;T&gt;</code> (returning <code>void</code> is allowed but is best reserved for event handlers). A caller of this method can use the <em>await</em> keyword in front of the method call. Let's look at an example to make this concrete:</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;<span class="hljs-keyword">string</span>&gt; <span class="hljs-title">GetDataAsync</span>(<span class="hljs-params"></span>)</span>
{
    <span class="hljs-keyword">using</span> (HttpClient client = <span class="hljs-keyword">new</span> HttpClient())
    {
        <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> client.GetAsync(<span class="hljs-string">"https://api.example.com/data"</span>);
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">await</span> response.Content.ReadAsStringAsync();
    }
}

<span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">PrintData</span>(<span class="hljs-params"></span>)</span>
{
    <span class="hljs-keyword">var</span> data = <span class="hljs-keyword">await</span> GetDataAsync();
    Console.WriteLine(data);
}
</code></pre>
<p>In the above code snippet, when <em>PrintData</em> calls <em>GetDataAsync</em>, the control flow returns to the caller of <em>PrintData</em> while <em>GetDataAsync</em> is executing; the code after the call to <em>GetDataAsync</em> does not execute yet. Once <em>GetDataAsync</em> completes, <em>PrintData</em> continues to execute the rest of the code in the method. Notice that the call to <em>GetDataAsync</em> is non-blocking, but that does not mean the code after the call to <em>GetDataAsync</em> will execute before the task completes.</p>
<p>Using <em>await</em> has several benefits. First, it allows for more efficient use of resources. When a method is <em>awaited</em>, the calling code can continue executing, rather than being blocked until the task completes. This can lead to a significant boost in performance, especially when working with I/O-bound operations such as network or file access.</p>
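<p>Building on <em>GetDataAsync</em> above, a short sketch of how non-blocking calls compose: starting two I/O operations before awaiting either lets them run concurrently rather than back to back.</p>
<pre><code class="lang-csharp">private async Task PrintBothAsync()
{
    // Start both requests before awaiting either, so the two
    // I/O operations are in flight at the same time.
    Task&lt;string&gt; first = GetDataAsync();
    Task&lt;string&gt; second = GetDataAsync();

    // Task.WhenAll completes when both tasks have completed.
    string[] results = await Task.WhenAll(first, second);
    Console.WriteLine(results[0]);
    Console.WriteLine(results[1]);
}
</code></pre>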
<p>Another benefit of using <em>await</em> is that it makes asynchronous code easier to read and understand. Without <em>await</em>, asynchronous code can easily become complex and difficult to follow, with multiple levels of nested callbacks. By using <em>await</em>, the code can be written more linearly and predictably, making it easier to debug and maintain.</p>
<p>However, there are scenarios where it might not be appropriate to use <em>await</em>. For example, if an operation is expected to complete immediately, using <em>await</em> might add unnecessary overhead. In these cases, it might be better to simply use synchronous code.</p>
<p>Additionally, if an application requires a high degree of parallelism, it might be more appropriate to use the Task.Run method to run operations in parallel, rather than using <em>await</em>. Let's look at the example below. Notice that <em>CalculateTotal</em> awaits the <em>Calculate</em> method. There are several issues with this code. First, the operations in <em>Calculate</em> are trivial and the method will return almost immediately.</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;<span class="hljs-keyword">decimal</span>&gt; <span class="hljs-title">Calculate</span>(<span class="hljs-params"><span class="hljs-keyword">int</span> quantity, <span class="hljs-keyword">decimal</span> price</span>)</span>
{
    <span class="hljs-keyword">decimal</span> subtotal = quantity * price;
    <span class="hljs-keyword">decimal</span> total = subtotal += subtotal * VAT_CONST;
}

<span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">CalculateTotal</span>(<span class="hljs-params"><span class="hljs-keyword">int</span> quantity, <span class="hljs-keyword">decimal</span> price</span>)</span>
{
    <span class="hljs-keyword">decimal</span> total = <span class="hljs-keyword">await</span> Calculate(quantity, price);
    Console.WriteLine(total);
}
</code></pre>
<p>Using <em>await</em> in this scenario is most likely to create more overheads than using synchronous code. Remember that a call to <em>await</em> will result in a state machine that keeps track of the state of each awaited task.</p>
<p>The second issue with the above code is that <em>Calculate</em> is a CPU-bound task. The method is still consuming CPU cycles and <em>await</em> offers no benefit here. In many cases where code is written this way, the developer's intention is usually to run the task in the background. In such a scenario, Task.Run may be more suitable.</p>
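<p>A sketch of both alternatives follows, assuming <code>VAT_CONST</code> is defined elsewhere and the hypothetical <code>ExpensiveCalculation</code> stands in for genuinely heavy CPU-bound work:</p>
<pre><code class="lang-csharp">// Trivial CPU-bound work: keep it synchronous.
public decimal Calculate(int quantity, decimal price)
{
    decimal subtotal = quantity * price;
    return subtotal + subtotal * VAT_CONST;
}

// Genuinely expensive CPU-bound work that must not block the caller
// (e.g. a UI thread) can be offloaded to the thread pool instead.
private async Task CalculateTotalInBackgroundAsync(int quantity, decimal price)
{
    decimal total = await Task.Run(() =&gt; ExpensiveCalculation(quantity, price));
    Console.WriteLine(total);
}
</code></pre>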
<p>In general, whether to use <em>await</em> or not depends on the specific requirements of the task and the nature of the operations being performed. For I/O-bound operations, <em>await</em> is generally recommended to improve performance and readability. For CPU-bound operations, it may be more appropriate to use synchronous code or the Task.Run method.</p>
]]></content:encoded></item><item><title><![CDATA[Cost optimisation in public clouds]]></title><description><![CDATA[Introduction
Public cloud is a type of cloud computing that delivers shared computing resources and data over the internet on a pay-as-you-go model. It allows organizations to access and utilize computing resources, such as servers, storage, and data...]]></description><link>https://ronaldkainda.blog/cost-optimisation-in-public-clouds</link><guid isPermaLink="true">https://ronaldkainda.blog/cost-optimisation-in-public-clouds</guid><category><![CDATA[Cloud Computing]]></category><category><![CDATA[cost-optimisation]]></category><category><![CDATA[Azure]]></category><category><![CDATA[GCP]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Mon, 30 Jan 2023 06:25:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/A9_IsUtjHm4/upload/d74f05446727596981be833613448c00.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Introduction</strong></p>
<p>Public cloud is a type of cloud computing that delivers shared computing resources and data over the internet on a pay-as-you-go model. It allows organizations to access and utilize computing resources, such as servers, storage, and databases, without having to invest in and maintain their own infrastructure. Public cloud services are provided by major technology companies such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).</p>
<p>Cost control is a critical aspect of using public cloud services as it directly impacts the overall budget and profitability of an organization. With the pay-as-you-go model, organizations only pay for the resources they use, but without proper management and optimization, costs can quickly spiral out of control. This is especially true for organizations that have a high degree of variability in their workloads or are experiencing rapid growth.</p>
<p>In this article, we will discuss how cloud costs can easily become more expensive than on-prem ones. We first look at the various pricing models of public cloud services and then at some of the actions and inactions that can lead to higher costs, and how to mitigate them.</p>
<p><strong>Understanding Public Cloud Pricing</strong></p>
<p>Understanding public cloud pricing is an essential step in controlling costs when using public cloud services. The main pricing models for the public cloud are pay-as-you-go (often called on-demand), reserved instances, and spot instances.</p>
<p><strong>Pay-as-you-go</strong>: This is the most common pricing model. Organizations only pay for the resources they use, such as computing power, storage, and data transfer. This model is ideal for organizations that have a high degree of variability in their workloads or are experiencing rapid growth.</p>
<p><strong>Reserved instances</strong>: This pricing model is similar to pay-as-you-go, but with a commitment to use a certain amount of resources over a specific period. Organizations that know they will need a certain amount of resources over a long period can save money by purchasing reserved instances.</p>
<p><strong>Spot instances</strong>: This pricing model allows organizations to bid on spare computing capacity at a discounted price. This is ideal for organizations that have flexible workloads and can stop or start instances as needed.</p>
<p><strong>On-demand pricing</strong>: most providers use this term for their standard pay-as-you-go rates: resources are billed as they are used, with no commitment at all. This is ideal for organizations that want maximum flexibility and are willing to pay a premium to avoid long-term commitments.</p>
<p>Organizations need to understand these pricing models and choose the one that best suits their needs. Additionally, it is also important to regularly review and optimize resource usage to ensure that costs are kept under control.</p>
<p><strong>Cost control challenges</strong></p>
<p>Given all the benefits of public clouds, why then would they cost more than on-prem data centres and services? Many organisations, especially small ones, move to the cloud without a strict cost-optimisation policy. This can result in hefty bills and make an organisation reconsider whether using a public cloud is the right strategy in the first place. There are cases, however, where cost optimisation was front and centre and costs still proved prohibitive. One such example can be found <a target="_blank" href="https://world.hey.com/dhh/why-we-re-leaving-the-cloud-654b47e0">here</a>. This article is not intended to address such scenarios; it is intended to encourage organisations to plan and strategise for cost optimisation. Let's look at areas that can be costly and how to address them.</p>
<p><strong>Not right-sizing resources:</strong> A key feature of public clouds is the elasticity of resources. If you are migrating from on-prem and running a service on a 1TB RAM machine with 32 cores, do you need the same spec in the cloud? Once such a server is provisioned, you pay for it whether it is in use or not. To optimise cost, analyse how your service is currently used. If memory and CPU demands fluctuate depending on the time of day, week, month or year, it may be optimal to provision a server with minimal specs and scale up and down based on demand. Another consideration is whether you can decompose your service into several smaller services that require far fewer resources and can still scale vertically or horizontally when required. Right-sizing means using the right amount of resources for each workload and not over-provisioning.</p>
<p><strong>Lack of automation</strong>: Managing cloud costs requires ensuring that the right amount of resource capacity is available for a given task. This entails reporting on resource under- and over-utilisation and making adjustments accordingly. Automating tasks such as scaling up or down, monitoring and reporting can help organizations save money by reducing manual errors and inefficiencies.</p>
<p><strong>Not taking advantage of long-term commitment</strong>: Organisations with a medium- to long-term budget for cloud usage can save money by entering into a long-term commitment with a cloud provider. Reserved instances are a pricing model that allows customers to reserve capacity in advance for a discounted price, saving money compared to on-demand instances. In the case of Amazon Web Services (AWS), reserved instances can be purchased with one- or three-year terms, with the option of partial or full upfront payment for even greater discounts. In Azure, reserved instances are called "Reserved Virtual Machine Instances (RIs)" and allow customers to save on virtual machine costs by committing to use virtual machines for a one- or three-year term.</p>
<p><strong>Not taking advantage of low-cost options</strong>: Cloud providers offer pricing tiers for most resources. Organizations with flexible workloads can save money by using the cheapest tier wherever possible. For example, AWS Spot Instances (Spot Virtual Machines in Azure) run on spare computing capacity at a discounted price. This can be an effective strategy, especially in lower environments where consistent performance is usually not required.</p>
<p><strong>Optimizing data storage</strong>: Public cloud providers offer a range of storage options, each with different costs. Organizations should choose the right storage options for their data and regularly review their usage to ensure they are not paying for unnecessary storage. Most cloud providers offer different storage tiers that are optimal for specific use cases. Understanding the use case of your data will help in choosing the right tier for your storage.</p>
<p><strong>Lack of monitoring and reporting</strong>: Regularly monitoring and reporting on resource usage can help organizations identify and address issues that drive up costs. Without monitoring and reporting, it is impossible to tell which resources are the major cost contributors. It is important that reporting is granular enough to drill down to individual teams or business areas and to specific resources. This can highlight resources that were wrongly provisioned or are no longer in use.</p>
<p><strong>Lack of policies:</strong> The ease with which resources can be provisioned makes controlling costs challenging. In on-prem environments, teams normally raise paperwork that has to go through many approvals before new resources can be provisioned. This is no longer the case with public clouds; developers and teams can provision resources within minutes by running a few commands in a terminal. This is where policies become important - to limit what can be provisioned. For example, you can put policies in place that limit the size and power of VMs that can be created in development environments. You can also limit which resource tier (free, standard or premium) can be provisioned. The goal is to minimise costs by only provisioning resources with the right capacity and tier for a given use.</p>
<p><strong>Summary</strong></p>
<p>In summary, controlling costs in public clouds is an important aspect of using these services. Key strategies for controlling costs include right-sizing resources, automation, reserved instances, spot instances, optimizing data storage, monitoring &amp; reporting and policies.</p>
<p>The ease with which resources can be provisioned can easily result in expensive bills. Having policies in place to control what can be provisioned in which environment is key. Even with policies in place, organisations need to continually monitor and report on cloud resource utilisation and cost. This enables continued optimisation, possibly through refining policies and user education.</p>
<p>Even before embarking on a cloud journey, organisations need to analyse their existing environments and plan their migration to take advantage of the cost-optimisation options available in public clouds. As cloud architects, our role is to architect solutions that are fit for purpose while minimising costs. Our goal is not to use the latest and greatest, but to advise organisations on what is best for them given their workloads.</p>
]]></content:encoded></item><item><title><![CDATA[Is it a record or a Record]]></title><description><![CDATA[When records were introduced in C# 9, it reminded me of Pascal's records. Pascal was the first language I programmed in. Despite the existence of VB, C++, Java and even C#, my teachers felt Pascal was just the perfect language to teach us programming...]]></description><link>https://ronaldkainda.blog/is-it-a-record-or-a-record</link><guid isPermaLink="true">https://ronaldkainda.blog/is-it-a-record-or-a-record</guid><category><![CDATA[C#]]></category><category><![CDATA[pascal]]></category><category><![CDATA[programming languages]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Sun, 01 Jan 2023 21:14:07 GMT</pubDate><content:encoded><![CDATA[<p>When records were introduced in C# 9, it reminded me of Pascal's records. Pascal was the first language I programmed in. Despite the existence of VB, C++, Java and even C#, my teachers felt Pascal was just the perfect language to teach us programming. Reading about C# records years later took me down memory lane, and I started wondering whether there are any similarities with Pascal records. I know what you are thinking - with all the advances in computing power and modern compilers, how can there be similarities? Nevertheless, I was on a journey to satisfy my curiosity.</p>
<p>Let's briefly look at records in Pascal. Just like in C#, a record in Pascal was a way to create complex structured data types. Let's create a Person record:</p>
<pre><code class="lang-plaintext">type Person = record firstName : string; lastName : string; city : string; end;

var person1: Person;
</code></pre>
<p>We can then use the created instance to assign and read individual properties using dot notation similar to modern languages.</p>
<pre><code class="lang-plaintext">person1.firstName = 'john';
person1.lastName = 'doe';

writeln('First name is: ', person1.firstName)
</code></pre>
<p>You can also access properties using the <em>with</em> keyword. You can compare Pascal records using the @ operator (@person1 = @person2), which compares addresses and is equivalent to ReferenceEquals in C#. You can also overload comparison operators to customise the equality check of objects. Pascal records are limited in that they do not support encapsulation and do not enforce immutability.</p>
<h3 id="heading-c-records-what-are-they-and-for-what"><strong>C# records, what are they and for what</strong></h3>
<p>A record is a reference type, similar to classes, that provides value-based equality out of the box. C# 10 goes further and adds record structs - value-type records. You can achieve value-based equality using classes as well, but you will not get any help from the compiler. Let's look at some examples to make sense of this:</p>
<pre><code class="lang-csharp"><span class="hljs-comment">//define a record with 2 properties</span>
<span class="hljs-function"><span class="hljs-keyword">public</span> record class <span class="hljs-title">PersonRecord</span> (<span class="hljs-params"><span class="hljs-keyword">string</span> FirstName, <span class="hljs-keyword">string</span> LastName</span>)</span>;

<span class="hljs-comment">//create two instances of person record</span>
PersonRecord personRecord1 = <span class="hljs-keyword">new</span> PersonRecord(<span class="hljs-string">"John"</span>, <span class="hljs-string">"Doe"</span>);
PersonRecord personRecord = <span class="hljs-keyword">new</span> PersonRecord(<span class="hljs-string">"John"</span>, <span class="hljs-string">"Doe"</span>);

Console.WriteLine(<span class="hljs-string">$"<span class="hljs-subst">{personRecord == personRecord1}</span>"</span>);
Console.WriteLine(<span class="hljs-string">$"<span class="hljs-subst">{personRecord.Equals(personRecord1)}</span>"</span>);

<span class="hljs-comment">//Outputs</span>
True
True
</code></pre>
<p>First, note the definition of the record class - there is no explicit definition of the FirstName and LastName properties. However, you can still refer to them just as you would class-defined properties, i.e. personRecord1.FirstName. Second, you might be wondering why both comparison statements output True. Let's look at how you define a similar normal class:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">PersonClass</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> FirstName { <span class="hljs-keyword">get</span>; }
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> LastName { <span class="hljs-keyword">get</span>; }
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">PersonClass</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> firstName,<span class="hljs-keyword">string</span> lastName</span>)</span>
    {
        FirstName = firstName;
        LastName = lastName;
    }
}


PersonClass personClass = <span class="hljs-keyword">new</span> PersonClass(<span class="hljs-string">"John"</span>, <span class="hljs-string">"Doe"</span>);
PersonClass personClass1 = <span class="hljs-keyword">new</span> PersonClass(<span class="hljs-string">"John"</span>, <span class="hljs-string">"Doe"</span>);
Console.WriteLine(<span class="hljs-string">$"<span class="hljs-subst">{personClass == personClass1}</span>"</span>);
Console.WriteLine(<span class="hljs-string">$"<span class="hljs-subst">{personClass.Equals(personClass1)}</span>"</span>);

<span class="hljs-comment">//Outputs </span>
False
False
</code></pre>
<p>Notice how much more code you have to write to define the 'equivalent' of the record class. Even after writing so much more code, you get different results when comparing two instances of the class. To get the same results as the record when comparing two distinct instances, you need to implement the IEquatable&lt;T&gt; interface, override the == and != operators, and override Equals and GetHashCode, as sketched below.</p>
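<p>For illustration, here is a minimal hand-written sketch of that boilerplate (with C# nullable annotations; the record generates the equivalent for you):</p>
<pre><code class="lang-csharp">public class PersonClass : IEquatable&lt;PersonClass&gt;
{
    public string FirstName { get; }
    public string LastName { get; }

    public PersonClass(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    // Value-based equality: compare contents, not references
    public bool Equals(PersonClass? other) =&gt;
        other is not null &amp;&amp; FirstName == other.FirstName &amp;&amp; LastName == other.LastName;

    public override bool Equals(object? obj) =&gt; Equals(obj as PersonClass);

    public override int GetHashCode() =&gt; HashCode.Combine(FirstName, LastName);

    public static bool operator ==(PersonClass? left, PersonClass? right) =&gt;
        left is null ? right is null : left.Equals(right);

    public static bool operator !=(PersonClass? left, PersonClass? right) =&gt; !(left == right);
}
</code></pre>
<p>With this in place, the two comparisons print True, matching the record's behaviour.</p>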
<p>What is happening under the hood is that, in addition to generating public properties on the record type based on the constructor parameters, the compiler also generates an implementation of the IEquatable&lt;T&gt; interface along with the equality operator overrides.</p>
<p>Records provide many additional features. Among them is non-destructive mutation: the positional constructor parameters generate init-only properties, so instead of changing a record in place you copy it with a <em>with</em> expression, replacing only the properties you name, as the sketch below shows. A compiler-generated ToString() override also provides formatted output for record instances. Record types can only inherit from other record types; they cannot inherit from classes or structs, and vice versa.</p>
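<p>A quick sketch of both features, using the PersonRecord defined earlier:</p>
<pre><code class="lang-csharp">PersonRecord original = new PersonRecord("John", "Doe");

// 'with' copies the record, replacing only the named properties;
// the original instance is left untouched
PersonRecord changed = original with { LastName = "Smith" };

// The compiler-generated ToString() prints the type name and its properties
Console.WriteLine(original); // PersonRecord { FirstName = John, LastName = Doe }
Console.WriteLine(changed);  // PersonRecord { FirstName = John, LastName = Smith }
</code></pre>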
<p>In addition to a record class, you also have a record struct. What is the difference? A record struct is a value type, as opposed to a record class, which is a reference type. Remember that for reference types you pass around a reference to your object, not the actual object; for a value type, you pass a copy of the actual object. The decision between a record class and a record struct therefore largely depends on how large your object can be. If it is small enough that copying it around is not an overhead and you want the features of a record, a record struct makes sense; otherwise a record class is ideal. A minimal sketch follows.</p>
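<p>The Point type here is purely illustrative (the readonly modifier is optional but makes the value immutable):</p>
<pre><code class="lang-csharp">public readonly record struct Point(double X, double Y);

Point p1 = new Point(1.0, 2.0);
Point p2 = p1;               // assignment copies the whole value, not a reference

Console.WriteLine(p1 == p2); // True - value-based equality, as with record classes
</code></pre>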
<p>The record type is one piece of evidence of what Microsoft's C# and Visual Studio teams have been paying attention to in the last few years: developer productivity. That is a topic for another day, but it is worth mentioning here. In all, there are few similarities between C# and Pascal records!</p>
]]></content:encoded></item><item><title><![CDATA[Azure Infrastructure as Code]]></title><description><![CDATA[Azure Resource Manager
Before diving into Azure Resource Manager (ARM) templates and Terraform, let's briefly look into what ARM is and how important it is to the deployment and management of resources in Azure. ARM provides a consistent layer throug...]]></description><link>https://ronaldkainda.blog/azure-infrastructure-as-code</link><guid isPermaLink="true">https://ronaldkainda.blog/azure-infrastructure-as-code</guid><category><![CDATA[Azure]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[ARM]]></category><category><![CDATA[arm template]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Tue, 22 Nov 2022 06:03:09 GMT</pubDate><content:encoded><![CDATA[<h3 id="heading-azure-resource-manager"><strong>Azure Resource Manager</strong></h3>
<p>Before diving into Azure Resource Manager (ARM) templates and Terraform, let's briefly look into what ARM is and how important it is to the deployment and management of resources in Azure. ARM provides a consistent layer through which you create, update and delete resources in Azure. Through ARM, you can also manage security, access control, locks and tags of your resources.</p>
<p>Imagine you want to create a virtual machine in your subscription. When you submit a request for the VM to be created, Azure Resource Manager first performs authentication and authorisation to ensure that the request is coming from a valid user and that the user has the authorisation to create a VM in the given subscription. Once authentication and authorisation are complete, the request is passed to the VM service which in turn will create the requested VM. Updates or delete requests for the VM also follow the same path.</p>
<p>To submit a request to Azure Resource Manager, one can use Azure portal, REST API endpoints, Azure Command Line Interface (CLI) or Azure PowerShell. Because all requests go through ARM, you get a consistent result no matter what tool you use to submit your request. Microsoft has done a great job ensuring that Azure Portal, CLI, PowerShell and REST API offer the same capability.</p>
<p>A newcomer to Azure trying out a few things might find the portal more user-friendly and can quickly get results without a steep learning curve. However, as soon as one gets past the initial stages of an introduction to Azure, the portal's limitations are quickly exposed. The portal, in itself, is not limited, but it limits the user. Let's look at some scenarios where the Azure portal limits the user:</p>
<ul>
<li><p>Recreating resources: when you create a resource through the portal, you need to go through the same steps again if you want to create similar resources e.g. additional VMs.</p>
</li>
<li><p>Reproducing environments: your pilot in a DEV subscription has worked like a charm and you want all the resources in that environment to be recreated in a Test environment. You will need to recreate each resource manually.</p>
</li>
</ul>
<p>The above two scenarios sound very much like an on-prem environment setup: you buy servers for your dev environment, set them up and check everything works as expected, then repeat the steps for your Test, UAT, PROD and DR environments. The appeal of the cloud for most organisations is reducing the Total Cost of Ownership (TCO). The ability to get resources at the click of a button (a resource with the exact same specification every time!) reduces the cost of setting up new hardware to match the existing one.</p>
<h3 id="heading-infrastructure-as-code-iac"><strong>Infrastructure as Code (IaC)</strong></h3>
<p>Azure enables developers to describe their infrastructure in the form of code. This code can then be deployed and redeployed with the same result. Developers are familiar with the DRY principle, and IaC helps ensure it is adhered to. You can define your entire infrastructure (VMs, Storage, Service Bus, Azure Cache for Redis, Web Apps etc.) and deploy it to your dev subscription. When ready to create that infrastructure in your Test environment, at the press of a button you get an exact copy of your dev infrastructure in Test. Any serious infrastructure setup and maintenance will use continuous delivery (CD) for deploying resources, similar to CI/CD for deploying software. Microsoft provides the Azure DevOps platform for creating and running your infrastructure deployment pipeline; other tools, including GitLab and GitHub, can also be used.</p>
<h3 id="heading-arm-templates"><strong>ARM templates</strong></h3>
<p>To implement Infrastructure as Code in Azure, you use ARM templates. A template is a declarative definition of your infrastructure, written in JSON or Bicep syntax. There are plenty of example templates for different resources on the web. If you are trying out ARM templates for the first time, bear in mind that templates adhere to a defined schema and many examples on the web are based on older versions. If you are copying snippets from one example to another, ensure that they are from the same version of the schema you are using, otherwise you may get errors.</p>
<p>A template can have five sections:</p>
<ol>
<li><p>Parameters: placeholders for values that you pass at deployment time rather than hard-coding. For example, a VM name is a good candidate for parameterisation; otherwise, every time you want to create a new VM you will need to change your template. You can parameterise as much or as little as you want. You may want to provision low-specification VMs in dev/test environments and high-specification ones in prod; by parameterising the VM spec, you will not need to change your template when deploying to these two environments - you just pass different parameter values.</p>
</li>
<li><p>Variables: values generated at deployment time, usually by combining one or more parameters with other values. For example, if you want your VM name to include the environment, you can create a variable that combines the name and environment parameters.</p>
</li>
<li><p>User-defined functions: this section lets you combine multiple standard template functions into more complex, reusable functions.</p>
</li>
<li><p>Resources: this is the section where you define all the resources you want to deploy.</p>
</li>
<li><p>Outputs: this section returns values, such as properties of the deployed resources, at the end of a template deployment.</p>
</li>
</ol>
<p>At first, ARM templates could only be written in JSON. Microsoft later developed Bicep as an alternative, more concise language. I can see developers preferring Bicep to JSON, as it feels more natural to a developer. Let's look at an example to demonstrate how concise Bicep is compared to JSON syntax:</p>
<p><strong>JSON</strong></p>
<pre><code class="lang-json"><span class="hljs-string">"parameters"</span>: {
    <span class="hljs-attr">"vnetName"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span>,
        <span class="hljs-attr">"defaultValue"</span>: <span class="hljs-string">"myCoolVnet"</span>
     }
 }
</code></pre>
<p><strong>Bicep</strong></p>
<pre><code class="lang-bash">param vnetName string = <span class="hljs-string">'myCoolVnet'</span>
</code></pre>
<p>The above snippets are both defining a string parameter and assigning a default value.</p>
<h3 id="heading-terraform"><strong>Terraform</strong></h3>
<p>Terraform is an open-source tool developed by HashiCorp. It uses the HashiCorp Configuration Language (HCL) to define infrastructure as code declaratively. Terraform takes the idea of predictable and repeatable IaC further than ARM templates.</p>
<p>Imagine you are experimenting with Azure. You have your IaC scripted in ARM templates, and then suddenly you want to experiment with Amazon Web Services (AWS). How do you redeploy what you have in Azure to AWS? Terraform is a multi-cloud IaC tool that lets you define and deploy infrastructure across different cloud providers using a single language and workflow.</p>
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>Infrastructure as Code is a powerful concept. It enables organisations to deploy and maintain infrastructure predictably and consistently. Using IaC, organisations can deploy resources to different environments or regions with little overhead. Azure provides ARM templates, using JSON or Bicep syntax, as its solution. HashiCorp provides Terraform as a cross-cloud IaC tool that supports most of the major cloud platforms.</p>
]]></content:encoded></item><item><title><![CDATA[The Road to Microsoft's .Net]]></title><description><![CDATA[I was thinking of what I should write as my first post and struggled to narrow down on a subject that will be interesting to me and, hopefully, others too. This blog will cover a range of technologies from open source to proprietary. However, I felt ...]]></description><link>https://ronaldkainda.blog/the-road-to-microsofts-net</link><guid isPermaLink="true">https://ronaldkainda.blog/the-road-to-microsofts-net</guid><category><![CDATA[.NET]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Ronald Kainda]]></dc:creator><pubDate>Thu, 10 Nov 2022 06:10:30 GMT</pubDate><content:encoded><![CDATA[<p>I was thinking about what I should write as my first post and struggled to narrow it down to a subject that would be interesting to me and, hopefully, others too. This blog will cover a range of technologies, from open source to proprietary. However, since I will be covering many subjects on Microsoft's .Net, I felt it would be prudent to provide a whirlwind history of how the platform has developed over the years to where we are today (the .Net 7 release candidate).</p>
<p>Without further ado, I will highlight the key features released under each major version of .Net Framework and .Net.</p>
<h5 id="heading-what-is-net-and-net-framework">What is .Net and .Net Framework</h5>
<p>Microsoft defines .Net and .Net Framework as follows:</p>
<ul>
<li><p><strong>.NET Framework</strong> is the original implementation of .NET. It supports running websites, services, desktop apps, and more on Windows.</p>
</li>
<li><p><strong>.NET</strong> is a cross-platform implementation for running websites, services, and console apps on Windows, Linux, and macOS. .NET is open source on GitHub. .NET was previously called .NET Core.</p>
</li>
</ul>
<h5 id="heading-microsoft-net-framework-10">Microsoft .Net Framework 1.0</h5>
<ul>
<li>The original version of .Net Framework was released in 2002.</li>
</ul>
<h5 id="heading-microsoft-net-framework-20">Microsoft .Net Framework 2.0</h5>
<ul>
<li><p>This was a major upgrade to the previous version and brought the following features:</p>
</li>
<li><p>Generics - write once, use with any type, while improving performance by avoiding boxing and unboxing</p>
</li>
<li><p>Debugger edit-and-continue - a feature every developer has come to love</p>
</li>
<li><p>64-bit support, improvements to <a target="_blank" href="http://ASP.NET">ASP.NET</a> and ClickOnce deployment</p>
</li>
</ul>
<h5 id="heading-microsoft-net-framework-30">Microsoft .Net Framework 3.0</h5>
<ul>
<li><p>Windows CardSpace - Microsoft's attempt to address issues with password security and user identity</p>
</li>
<li><p>Windows Communication Foundation - aimed at client-server communication models</p>
</li>
<li><p>Windows Workflow Foundation - ability to implement business processes using a visual workflow.</p>
</li>
<li><p>Windows Presentation Foundation - for developing desktop applications using a declarative syntax called XAML</p>
</li>
</ul>
<h5 id="heading-microsoft-net-framework-35">Microsoft .Net Framework 3.5</h5>
<ul>
<li>The main features included expression trees, the HashSet collection, and LINQ</li>
</ul>
<h5 id="heading-microsoft-net-framework-40">Microsoft .Net Framework 4.0</h5>
<ul>
<li><p>Garbage Collection - improved performance by introducing background garbage collection, which replaced concurrent garbage collection</p>
</li>
<li><p>Dynamic Language Runtime - enabled runtime type binding.</p>
</li>
</ul>
<p>Other features included</p>
<ul>
<li><p><a target="_blank" href="https://msdn.microsoft.com/en-us/library/dd289498(v=vs.100)">Monitor.Enter(Object, Boolean)</a></p>
</li>
<li><p>System.TimeSpan</p>
</li>
<li><p><a target="_blank" href="https://msdn.microsoft.com/en-us/library/dd992422(v=vs.100)">String.IsNullOrWhiteSpace</a></p>
</li>
<li><p><a target="_blank" href="https://msdn.microsoft.com/en-us/library/xb636w5t(v=vs.100)">String.Concat</a></p>
</li>
<li><p><a target="_blank" href="https://msdn.microsoft.com/en-us/library/dd992720(v=vs.100)">StringBuilder.Clear</a></p>
</li>
<li><p>BigInteger and Complex Numbers</p>
</li>
<li><p>Tuples</p>
</li>
<li><p><a target="_blank" href="https://msdn.microsoft.com/en-us/library/dd990128(v=vs.100)">Stopwatch.Restart</a></p>
</li>
<li><p><a target="_blank" href="https://msdn.microsoft.com/en-us/library/dd642331(v=vs.100)">System.Lazy&lt;T&gt;</a></p>
</li>
<li><p><a target="_blank" href="https://msdn.microsoft.com/en-us/library/dd412070(v=vs.100)">System.Collections.Generic.SortedSet&lt;T&gt;</a></p>
</li>
</ul>
<h5 id="heading-microsoft-net-framework-45">Microsoft .Net Framework 4.5</h5>
<ul>
<li><p>Support for arrays larger than 2 gigabytes (GB) on 64-bit platforms</p>
</li>
<li><p>Background just-in-time (JIT) compilation</p>
</li>
<li><p>Improvements to:</p>
</li>
<li><p>Windows Presentation Foundation (WPF)</p>
</li>
<li><p>Windows Communication Foundation (WCF)</p>
</li>
<li><p>Windows Workflow Foundation (WF)</p>
</li>
</ul>
<h5 id="heading-microsoft-net-framework-451-452-46-461-462-47-471-472-and-48">Microsoft .Net Framework 4.5.1, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, and 4.8</h5>
<ul>
<li>These are in-place updates to .Net Framework 4.5, and version 4.5 is itself an in-place upgrade of .Net Framework 4.0. This means all these versions share the same runtime, even though newer versions ship new assemblies with new features. An application targeting a lower runtime version than the one installed on the target machine will run without recompilation. An application targeting a higher version of the framework than the one installed may still run fine, provided it does not reference any APIs that are absent from the older version of the Framework</li>
</ul>
<h5 id="heading-microsoft-net-core-10">Microsoft .Net Core 1.0</h5>
<ul>
<li>Released in 2016 as an open-source project and the start of a new journey to bring Microsoft .Net to other platforms such as Linux and macOS. Support for this version ended in 2019</li>
</ul>
<h5 id="heading-microsoft-net-core-20">Microsoft .Net Core 2.0</h5>
<ul>
<li>This version brought improvements to tooling (e.g. implicitly running restore during project build), support for C# 7.1 and Visual Basic, and support for .Net Standard 2.0</li>
</ul>
<h5 id="heading-microsoft-net-core-30">Microsoft .Net Core 3.0</h5>
<ul>
<li>Added support for .Net Standard 2.1 and C# 8.0, and introduced new types such as System.Index.</li>
</ul>
<h5 id="heading-microsoft-net-50">Microsoft .Net 5.0</h5>
<ul>
<li><p>Improvements to System.Text.Json and relational pattern matching</p>
</li>
<li><p>Introduced records and top-level statements</p>
</li>
</ul>
<h5 id="heading-microsoft-net-60">Microsoft .Net 6.0</h5>
<ul>
<li><p>Support for C# 10</p>
</li>
<li><p>Minimal APIs</p>
</li>
<li><p>Single page applications</p>
</li>
<li><p>Unified platform across browser, cloud, desktop, IoT, and mobile apps</p>
</li>
<li><p>Performance improvements</p>
</li>
<li><p>Improvements to Blazor and Blazor WebAssembly (Wasm)</p>
</li>
</ul>
]]></content:encoded></item></channel></rss>