<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Tech-Journey]]></title><description><![CDATA[This blog explores Linux (Ubuntu), backend systems, system design, and DevOps through hands-on learning. It covers APIs, security, automation, and infrastructur]]></description><link>https://blog.tech-journey.co.za</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1755777052912/9fd0df23-9b76-43eb-ae26-1c791c59c7e8.png</url><title>Tech-Journey</title><link>https://blog.tech-journey.co.za</link></image><generator>RSS for Node</generator><lastBuildDate>Sun, 10 May 2026 19:27:05 GMT</lastBuildDate><atom:link href="https://blog.tech-journey.co.za/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Local Dev Tools and Security Implications]]></title><description><![CDATA[Modern development environments are powerful by design. That power comes with a trade-off that is often ignored in day-to-day work: third-party extensions running with local access to your machine.
A ]]></description><link>https://blog.tech-journey.co.za/how-your-code-editor-can-become-a-security-risk</link><guid isPermaLink="true">https://blog.tech-journey.co.za/how-your-code-editor-can-become-a-security-risk</guid><category><![CDATA[vscode extensions]]></category><category><![CDATA[IDEs]]></category><category><![CDATA[vulnerability]]></category><category><![CDATA[devtools]]></category><category><![CDATA[Security]]></category><category><![CDATA[cybersecurity]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Sat, 14 Feb 2026 11:21:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1771435374776/9bfd3395-6009-4f46-b2d4-ec51c3f5f134.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Modern development environments are powerful by design. That power comes with a trade-off that is often ignored in day-to-day work: third-party extensions running with local access to your machine.</p>
<p>A recent write-up from OX Security highlights this clearly, documenting vulnerabilities in popular IDE extensions that can lead to remote code execution and local file exfiltration.</p>
<p>This isn’t limited to software engineers. Anyone using an IDE for scripting, automation, or data work is operating in the same environment.</p>
<blockquote>
<p>Reference:<br /><a href="https://www.ox.security/blog/four-vulnerabilities-expose-a-massive-security-blind-spot-in-ide-extensions/">https://www.ox.security/blog/four-vulnerabilities-expose-a-massive-security-blind-spot-in-ide-extensions/</a></p>
</blockquote>
<hr />
<h2>The Trust Boundary Problem</h2>
<p>VS Code and similar editors tend to feel like local tools. They are familiar, fast, and deeply integrated into daily workflows. Over time, that familiarity creates a subtle assumption: that everything inside the editor is safe by default.</p>
<p>In practice, many extensions run with broad local permissions. They may be able to access:</p>
<ul>
<li><p>Local source code and build outputs</p>
</li>
<li><p>Environment variables and configuration files</p>
</li>
<li><p>Secrets such as API tokens, SSH keys, and cloud credentials</p>
</li>
</ul>
<p>These permissions are rarely reviewed with the same rigor as backend dependencies. Extensions are installed ad hoc, updated automatically, and often left in place indefinitely.</p>
<p>The risk is not that extensions are inherently malicious, but that they operate inside a trust boundary that is often undefined, unenforced, and rarely audited.</p>
<hr />
<h2>What the Research Highlights</h2>
<p>The OX Security report shows that this trust model does not hold under scrutiny, even for widely used extensions with large install bases.</p>
<p>Key findings include:</p>
<ul>
<li><p>Remote code execution via exposed extension functionality</p>
</li>
<li><p>Local file exfiltration triggered through crafted inputs</p>
</li>
<li><p>Weak assumptions about the origin and integrity of incoming data</p>
</li>
</ul>
<p>These issues arise because extensions operate in a hybrid model: local execution combined with network access, without the same controls typically applied to production services.</p>
<hr />
<h2>The Visibility Gap</h2>
<p>This is not primarily a tooling problem. It is a visibility problem.</p>
<p>In most environments:</p>
<ul>
<li><p>Extension inventories are not centrally tracked</p>
</li>
<li><p>Permissions are not regularly reviewed</p>
</li>
<li><p>There is no lifecycle process for reassessment or removal</p>
</li>
</ul>
<p>Over time, IDE extensions become a silent part of the attack surface—present on every developer machine, but rarely included in security reviews.</p>
<hr />
<h2>Listing Installed VS Code Extensions on macOS</h2>
<p>If you want to audit extensions, the first step is simply knowing what is installed.</p>
<p>On macOS, you can list all your VS Code extensions by running:</p>
<pre><code class="language-bash">ls ~/.vscode/extensions
</code></pre>
<p>Each directory follows this format:</p>
<pre><code class="language-bash">publisher.extension-version
</code></pre>
<p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Important:</mark> Only include the Extension ID and <mark class="bg-yellow-200 dark:bg-yellow-500/30">NOT</mark> the version number</strong></p>
<p>Extension scanning and auditing tools, such as <a href="https://vscan.dev/">VSCan</a>, expect the extension ID in the following format; otherwise, the tool will not recognise it:</p>
<pre><code class="language-bash">publisher.extension #nothing else
</code></pre>
<p>To extract clean extension IDs on macOS, run this in your terminal:</p>
<pre><code class="language-bash">for dir in ~/.vscode/extensions/*; do
  name=$(basename "$dir")   # strip the directory path
  echo "${name%-*}"         # strip the trailing "-version" suffix
done
</code></pre>
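<p>As a quick sanity check of that stripping logic, here it is applied to two hypothetical directory names (if the <code>code</code> CLI is on your PATH, <code>code --list-extensions</code> prints these clean IDs directly, with no parsing needed):</p>
<pre><code class="language-bash"># Hypothetical directory names as they would appear under ~/.vscode/extensions
for name in "ms-python.python-2024.2.1" "eamodio.gitlens-14.9.0"; do
  # "%-*" removes the shortest trailing "-..." suffix, i.e. the version
  echo "${name%-*}"
done
</code></pre>
<p>This prints <code>ms-python.python</code> and <code>eamodio.gitlens</code>, exactly the format scanning tools expect.</p>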
<p>This list can then be used for:</p>
<ul>
<li><p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Manual review</mark></strong> – Check if extensions like <code>ms-python.python</code> or <code>eamodio.gitlens</code> are really needed, and <strong>remove</strong> any that are unused.</p>
</li>
<li><p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Feeding into scanning tools</mark></strong> – tools such as <strong>VSCan</strong> can analyse each extension for risky behaviours or suspicious permissions.</p>
</li>
<li><p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Cross-referencing against vulnerability disclosures and advisories</mark></strong> – See if installed extensions match those flagged in vulnerability reports, such as the OX Security blog highlighting <code>Live Server</code> or <code>Markdown Preview Enhanced</code>.</p>
</li>
<li><p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Staying informed</mark></strong> – Devs should consider following sources like <a href="https://thehackernews.com/">The Hacker News</a>, security blogs, and advisory feeds to stay up to date on new extension vulnerabilities and patches.</p>
</li>
</ul>
<p>Here’s an example of the output you can expect to see from VSCan:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1771434463907/f7be4ae1-1e04-47c4-9ea3-a916c31d592d.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">Risk indicators are not absolute</mark>: Even well-maintained extensions can introduce exposure depending on how they are configured, what files they can access, and whether they have network reach. The actual risk is determined less by the extension itself and more by the environment it runs in and the privileges it operates under.</p>
</blockquote>
<hr />
<h2>Auditing Extensions Without Overengineering</h2>
<p>Unlike system packages or backend dependencies, IDE extensions don't have a mature, centralised vulnerability tracking ecosystem. That makes conventional dependency scanning only partially effective.</p>
<p>In environments where no formal process exists, tools such as <a href="https://vscan.dev/">VSCan</a> can help by analysing installed extensions for risky behaviour patterns and known security concerns. The goal is to replace assumption-based trust with observable facts about what each extension actually does.</p>
<p>From there, a lightweight operational baseline is often enough:</p>
<ul>
<li><p>Periodic review of installed extensions to ensure each one still serves a purpose</p>
</li>
<li><p>Removal of unused or redundant extensions to reduce overall attack surface</p>
</li>
<li><p>Preference for extensions that are actively maintained or published by trusted sources</p>
</li>
</ul>
<p>This approach avoids unnecessary process overhead while still reducing exposure in a meaningful way.</p>
<hr />
<h2>A Small Shift in Perspective</h2>
<p>The key takeaway is straightforward. IDE extensions sit inside the same trust boundary as other dependencies, even if they are often treated as harmless convenience tools.</p>
<p>Addressing this doesn't require heavy governance or restrictive controls. It starts with basic visibility and a consistent habit of asking a simple question:</p>
<p>What is actually executing inside this environment?</p>
<p>Whether you're writing application code, working with data, or managing infrastructure, the editor is part of the software supply chain. It deserves to be treated with that same level of scrutiny.</p>
]]></content:encoded></item><item><title><![CDATA[Part 1: Ubuntu System Hardening and Execution Baseline]]></title><description><![CDATA[A fresh Ubuntu installation is designed to be permissive, catering equally to desktop, server, and container workloads without assuming a strict security model.
However, the moment a system is exposed]]></description><link>https://blog.tech-journey.co.za/part-1-ubuntu-system-hardening-and-execution-baseline</link><guid isPermaLink="true">https://blog.tech-journey.co.za/part-1-ubuntu-system-hardening-and-execution-baseline</guid><category><![CDATA[Ubuntu]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Fri, 02 Jan 2026 08:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/4a6f2900-c753-424a-bdbe-346974739180.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A fresh Ubuntu installation is designed to be permissive, catering equally to desktop, server, and container workloads without assuming a strict security model.</p>
<p>However, the moment a system is exposed to a network (internal/external), its threat model changes. It transitions from an isolated compute environment into a reachable execution target. What follows is the process of moving from this default, permissive state to a controlled execution environment, ending with an operational monitoring model which will be covered in <strong>Part Two</strong>.</p>
<h2>System Exposure and Baseline Inspection</h2>
<p>Before altering the system, you need to understand its current exposure.<br /><strong>Network listeners:</strong></p>
<pre><code class="language-shell">sudo ss -tulnp
</code></pre>
<p>This command reveals which processes are actively bound to network interfaces. At this stage, the network layer makes no distinction between intended services (like SSH) and accidental exposure (like a default database binding to <code>0.0.0.0</code>).</p>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">Example output:</mark></p>
<img src="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/d7add587-6e72-4d4b-8c3d-d9e8bfb17176.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Installed package drift:</strong></p>
<pre><code class="language-shell">apt list --upgradable
</code></pre>
<p>This command shows the gap between what’s installed locally and what the repositories currently provide. That gap is where security debt accumulates.</p>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">Example output:</mark></p>
<img src="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/8362eb02-12e4-45f4-ae17-8aabd680cde7.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Enabled services:</strong></p>
<pre><code class="language-shell">systemctl list-unit-files --state=enabled
</code></pre>
<p>This command lists everything configured to execute automatically during boot. It's the most accurate representation of the system's baseline behaviour.</p>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">Example output:</mark></p>
<img src="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/128cb36a-899f-4746-88d2-9c9798078afd.png" alt="" />

<hr />
<h3>Package State Alignment</h3>
<p>A system's security posture degrades over time if it isn't consistently aligned with upstream security patches.</p>
<p><strong>Update alignment:</strong></p>
<pre><code class="language-shell">sudo apt update &amp;&amp; sudo apt upgrade -y
</code></pre>
<p><strong>Verification:</strong></p>
<pre><code class="language-shell">apt list --upgradable
</code></pre>
<p>The expected steady state is an <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">empty</mark></strong> upgrade list. Anything else represents drift that needs to be addressed.</p>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">Expected output:</mark></p>
<img src="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/0c251773-4b7f-4fdb-ba45-5f6fae4b704e.png" alt="" />

<hr />
<h3>Automating Security Updates</h3>
<p>Manual update cycles rely on human consistency, which is inherently flawed. Security patch latency (the window between vulnerability disclosure, package availability, and human execution) is where the majority of exploitation occurs.</p>
<p><strong>Enable unattended updates:</strong></p>
<pre><code class="language-shell">sudo apt install unattended-upgrades -y
</code></pre>
<blockquote>
<p><em>Note: you might already have this package installed on your system.</em></p>
</blockquote>
<p><strong>Verify behaviour configuration:</strong></p>
<pre><code class="language-shell">cat /etc/apt/apt.conf.d/20auto-upgrades
</code></pre>
<p>This file dictates whether update scheduling is successfully delegated to the system. You should see <code>APT::Periodic::Update-Package-Lists "1";</code> and <code>APT::Periodic::Unattended-Upgrade "1";</code>.</p>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">Example output:</mark></p>
<img src="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/b9f5eddd-5869-46e2-81d6-ef6dcc43c37e.png" alt="" />
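<p>For reference, a correctly configured <code>20auto-upgrades</code> contains just these two directives (a <code>"0"</code> in either position disables that step):</p>
<pre><code class="language-plaintext">APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
</code></pre>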

<hr />
<h3>SSH as an Execution Boundary</h3>
<p>SSH operates as a remote execution boundary at the operating system level. Once access is established, the session effectively represents direct command execution on the host. When password authentication is enabled, that boundary is governed primarily by password strength and any rate-limiting controls enforced by the service.</p>
<p>Removing password authentication reduces the system to key-based identity verification only, but this change must be executed with a verified fallback path already in place.</p>
<blockquote>
<p><strong>Warning:</strong> Before disabling password authentication, ensure you have successfully generated and copied your SSH public key to <code>~/.ssh/authorized_keys</code> on the remote server. Otherwise, you will lock yourself out.</p>
</blockquote>
<h3>Precondition: Verify SSH Key Access</h3>
<p>From your local machine, confirm you can log in using SSH:</p>
<pre><code class="language-shell">ssh user@server_ip
</code></pre>
<p>If a password is still required, key authentication hasn't been configured yet.</p>
<p>You can explicitly test key usage:</p>
<pre><code class="language-shell">ssh -i ~/.ssh/id_rsa user@server_ip
</code></pre>
<p>Ensure your public key exists on the remote server:</p>
<pre><code class="language-shell">cat ~/.ssh/authorized_keys
</code></pre>
<p>If it's missing, copy it using:</p>
<pre><code class="language-shell">ssh-copy-id user@server_ip
</code></pre>
<p>Make sure permissions are correct:</p>
<pre><code class="language-shell">chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
</code></pre>
<blockquote>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">Keep an active SSH session open while making changes, in case rollback is needed.</mark></p>
</blockquote>
<p><strong>SSH configuration:</strong><br />Modifying the <code>/etc/ssh/sshd_config</code> file.</p>
<pre><code class="language-shell">sudo sed -i 's/^#*PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#*PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
</code></pre>
<p>If the stock defaults are present, the commands above leave the following directives set (always verify afterwards, since <code>sed</code> only rewrites lines it matches):</p>
<pre><code class="language-plaintext">PasswordAuthentication no
PermitRootLogin no
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/41441b3a-072d-4a63-90db-69b0715185a3.png" alt="" />

<p><strong>Apply and verify the changes:</strong></p>
<pre><code class="language-shell">sudo systemctl restart ssh
sudo sshd -T | grep -i passwordauthentication
</code></pre>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">Example output:</mark></p>
<img src="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/9b45eff8-97f3-49f3-8e18-89a273488692.png" alt="" />

<p>After this change, SSH access depends entirely on key-based authentication. Password login is no longer accepted by the service configuration.</p>
<p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Example of key-based authentication:</mark></strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/d2140bde-56c4-4453-a4b3-6cfbce2dcca3.png" alt="" />

<hr />
<h3>The Firewall as Traffic Definition</h3>
<p>A firewall does not inherently secure a vulnerable application, but it enforces strict network boundaries, defining exactly which paths are valid.</p>
<p><strong>Configure UFW (Uncomplicated Firewall):</strong></p>
<pre><code class="language-shell">sudo ufw allow OpenSSH
sudo ufw enable
</code></pre>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">Example output:</mark></p>
<img src="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/ea1cfde1-169f-412c-8c31-a653896af182.png" alt="" />

<p>Verification:</p>
<pre><code class="language-shell">sudo ufw status verbose
</code></pre>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">Example output:</mark></p>
<img src="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/9f008b51-460e-46a5-a416-e95ebd329811.png" alt="" />

<blockquote>
<p>Note: If SSH is not explicitly allowed before enabling the firewall, new SSH connections will be blocked and you can lose remote access to the host.</p>
</blockquote>
<hr />
<h3>Service Minimisation</h3>
<p>Every enabled system service increases the memory footprint, dependency surface during boot, and potential attack vectors. Unless you actively audit them, there is no practical distinction between a service you deliberately exposed and one installed silently as a dependency.</p>
<p>Review your active listeners (<code>sudo ss -tulnp</code>) and disable anything unnecessary:</p>
<pre><code class="language-shell">sudo systemctl disable --now &lt;service_name&gt;
</code></pre>
<h3>The Operational Monitoring Layer</h3>
<p>At this stage, the system is secured but not yet structured for robust observability.</p>
<p>Linux produces vital telemetry, natively split between <strong>runtime event streams</strong> and <strong>persisted file logs</strong> (traditionally in <code>/var/log</code>). Historically, managing this was fragmented. Systemd unifies this execution and monitoring model.</p>
<h3>Why systemd supersedes legacy execution</h3>
<p>Historically, service execution was handled by a mix of SysV init scripts, cron-based scheduling, and ad-hoc supervision tools. This resulted in:</p>
<ul>
<li><p>No unified dependency graph.</p>
</li>
<li><p>Inconsistent startup ordering.</p>
</li>
<li><p>Manual supervision required for long-running processes.</p>
</li>
<li><p>Fragmented logging channels.</p>
</li>
</ul>
<p>Systemd is often reductively called a "service manager", but it is actually a unified execution and dependency management framework. Instead of executing disparate scripts, systemd treats the system as a strict dependency graph of managed units.</p>
<p><strong>Core systemd components:</strong></p>
<ul>
<li><p><strong>PID 1:</strong> The systemd daemon itself, controlling the boot lifecycle.</p>
</li>
<li><p><strong>Unit files:</strong> Declarative service definitions.</p>
</li>
<li><p><strong>Journald:</strong> The integrated, binary log collection subsystem.</p>
</li>
<li><p><strong>Timers:</strong> The scheduled execution model.</p>
</li>
</ul>
<h3>Why systemd timers replace cron</h3>
<p>Cron operates on a rudimentary premise: <em>"<mark class="bg-yellow-200 dark:bg-yellow-500/30">execute command X at time Y</mark>"</em>.<br />It has no contextual awareness of whether the previous execution is still running, if the system is overloaded, or if prerequisite services are available.</p>
<p>Systemd timers resolve these operational blind spots:</p>
<ul>
<li><p><strong>Contextual execution:</strong> Execution is tied strictly to service units, allowing for dependency mapping (e.g., "only run this script if the network is up").</p>
</li>
<li><p><strong>State awareness:</strong> You can query execution state reliably via <code>systemctl</code>.</p>
</li>
<li><p><strong>Built-in supervision:</strong> Overlapping executions can be prevented automatically.</p>
</li>
<li><p><strong>Unified logging:</strong> Standard output (stdout) and errors (stderr) from timed tasks are captured automatically by <code>journald</code>, ensuring scheduled tasks are monitored exactly like persistent daemons.</p>
</li>
</ul>
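<p>As a concrete sketch, a cron entry like <code>0 2 * * * /usr/local/bin/backup.sh</code> translates into a service/timer unit pair (the unit and script names here are hypothetical):</p>
<pre><code class="language-plaintext"># /etc/systemd/system/backup.service
[Unit]
Description=Nightly backup job
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer
[Unit]
Description=Schedule backup.service

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
</code></pre>
<p>Enable it with <code>sudo systemctl enable --now backup.timer</code>; its runs then appear in <code>journalctl -u backup.service</code> like any other unit.</p>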
<p>The shift from cron to systemd timers reflects a move toward system-aware scheduling that integrates with service state, logging, and lifecycle management. Because systemd inherently understands execution state, manages dependencies, and natively captures logs, it provides the robust execution engine required for the active alerting architecture we will implement in <strong>Part Two</strong>.</p>
]]></content:encoded></item><item><title><![CDATA[REST API Concepts, Illustrated with Rugby]]></title><description><![CDATA[In South Africa, rugby is not just a sport but a language many of us speak. Whether its the timing of a counter-attack in a Test match or the precision of a backline move involving Grant Williams, Sac]]></description><link>https://blog.tech-journey.co.za/understanding-rest-apis-through-rugby</link><guid isPermaLink="true">https://blog.tech-journey.co.za/understanding-rest-apis-through-rugby</guid><category><![CDATA[REST API]]></category><category><![CDATA[Python 3]]></category><category><![CDATA[json]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Sat, 08 Nov 2025 09:23:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1771207318739/c0050ac9-1569-4a80-9972-0c2f8d4f908b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In South Africa, rugby is not just a sport but a language many of us speak. Whether its the timing of a counter-attack in a Test match or the precision of a backline move involving Grant Williams, Sacha Feinberg-Mngomezulu, Andre Esterhuizen, Damian Willemse, Cheslin Kolbe and Kurt-Lee Arendse, the game works because everyone knows the rules. Players, referees and coaches all understand what is allowed, what is expected, and what happens when the rules are followed.</p>
<p>APIs are the same. Once you understand the rules, what first seems chaotic becomes structured. If you can follow a rugby match from kickoff to final whistle, you can understand how a web server talks to the outside world.</p>
<h2>What is a REST API?</h2>
<p>To most people, "API" sounds like dense technical jargon. In reality, an <strong>Application Programming Interface</strong> is just a <strong>formal agreement</strong> (or a digital contract) between two systems.</p>
<p>It defines exactly how one piece of software asks for information and what it gets back in return. Without these strict rules, different programs would be "lost in translation," like two people trying to collaborate while speaking different languages and following different social customs. An API ensures they both speak the same dialect.</p>
<p>Why this agreement is considered a contract:</p>
<ul>
<li><p><strong>Predictable Inputs:</strong> The contract states, "If you want this data, you must ask for it in <em>this</em> specific format" (e.g., a specific URL or a piece of JSON code).</p>
</li>
<li><p><strong>Guaranteed Outputs:</strong> The system promises, "If you ask correctly, I will always give you the data in <em>this</em> specific structure."</p>
</li>
<li><p><strong>Stability:</strong> If the team decides to change their internal training regime or swap out the locks in the engine room (the internal database), it doesn’t matter to the rest of the backline. As long as the fly-half executes the same "contracted" pass to the center, the play continues smoothly. The internal "gym work" might change, but the <strong>delivery</strong> on the pitch remains predictable so the team doesn't lose its rhythm.</p>
</li>
</ul>
<p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">REST</mark></strong> (Representational State Transfer) is the most popular "architectural style" for building web APIs. Think of it as a set of <strong>design rules</strong> rather than a specific piece of software.</p>
<p>It works by piggybacking on <strong>HTTP</strong>, the same language your browser uses to load this page. By following REST’s <strong>predictable</strong> standards, developers can move data across the internet as easily as you navigate from one website to another.</p>
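<p>"Piggybacking on HTTP" is quite literal: a REST call is an ordinary HTTP message. A request for a player's stats might look like this on the wire (the host is fictional):</p>
<pre><code class="language-plaintext">GET /players/kolbe HTTP/1.1
Host: tech-journeys.com
Accept: application/json
</code></pre>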
<p><strong>A Quick Note on Other API Styles</strong></p>
<p>Other API <em><strong>styles</strong></em> exist to handle specific needs. For example, <strong>SOAP</strong> is a rigid, XML-based protocol often used in legacy enterprise systems, while <strong>gRPC</strong> is a high-performance binary system designed for speed and is often more specialised than a standard web application requires.</p>
<blockquote>
<p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">I’m not very familiar with these other styles </mark> <em><mark class="bg-yellow-200 dark:bg-yellow-500/30">yet</mark></em></strong>, and for now my main focus is <strong>REST</strong> as it is widely used and sufficient for most web applications.</p>
</blockquote>
<h3>API Types</h3>
<p>APIs can also be classified by <em><strong>who</strong></em> can use them and <em><strong>how</strong></em> they are intended to be <em><strong>consumed</strong></em>. In this context, “<em>consumed</em>” simply means <strong>how a client or system uses the API</strong> to get data or perform actions.</p>
<p>Common API types include:</p>
<ul>
<li><p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Public APIs</mark></strong> – available for anyone to use. External developers can consume them freely, often following published documentation.</p>
</li>
<li><p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Private APIs</mark></strong> – intended for internal use only. They are consumed by systems or teams within the organisation.</p>
</li>
<li><p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Partner APIs</mark></strong> – shared with specific business partners. Only <em><strong>authorised</strong></em> partners can consume these APIs under controlled conditions.</p>
</li>
</ul>
<p>Understanding both <strong>API styles</strong> (how an API works) and <strong>API types</strong> (who can use it) gives you a complete picture of how APIs are designed and organised.</p>
<hr />
<h2>How REST Works</h2>
<p>In REST, every piece of data you interact with is called a <strong>resource</strong>. A resource is any identifiable "thing" the system manages, such as a player like Cheslin Kolbe, a match, or even the entire United Rugby Championship league.</p>
<p>To manage and reference these resources, we use <strong>identifiers</strong>. This is where <strong>URI</strong> and <strong>URL</strong> come in.</p>
<h3>URI (Uniform Resource Identifier)</h3>
<p>A <strong>URI</strong> is like a player’s <strong>unique ID in the league database</strong>. It identifies the resource itself, regardless of where it’s stored or how you access it.</p>
<ul>
<li>Example: <code>player:911</code><br />This identifies player 911 permanently, without specifying how to fetch their data.</li>
</ul>
<p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Key point:</mark></strong> A URI identifies something but doesn’t tell you where or how to access it.</p>
<h3>URL (Uniform Resource Locator)</h3>
<p>A <strong>URL</strong> is a <strong>special type of URI</strong> that not only identifies the resource but also tells you <strong>how and where to access it</strong> over a network.</p>
<ul>
<li>Example: <code>https://tech-journeys.com/players/kolbe</code><br />This URL identifies Cheslin Kolbe and also tells your system to fetch his data via HTTPS from this exact location.</li>
</ul>
<p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Key point:</mark></strong> All URLs are URIs because they identify a resource, but not all URIs are URLs.</p>
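<p>You can see the "locator" parts of a URL concretely with Python's standard library, which splits out the <em>how</em> (scheme) and <em>where</em> (network location) that a bare identifier need not carry:</p>
<pre><code class="language-python">from urllib.parse import urlparse

# Split the fictional player URL into its locator components
url = urlparse("https://tech-journeys.com/players/kolbe")
print(url.scheme)   # how to access it: "https"
print(url.netloc)   # where it lives: "tech-journeys.com"
print(url.path)     # which resource: "/players/kolbe"
</code></pre>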
<h3>Quick Rule of Thumb</h3>
<ol>
<li><p><strong>URI:</strong> identifies a resource (who/what)</p>
</li>
<li><p><strong>URL:</strong> identifies a resource <strong>and</strong> tells you how to locate it (who/what + where/how)</p>
</li>
<li><p><strong>Relationship:</strong> Every URL is a URI, but not every URI is a URL.</p>
</li>
</ol>
<h3>Rugby Analogy Summary</h3>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1771213348254/c6ba985c-680b-41e1-ae84-afc55082090d.png" alt="" style="display:block;margin:0 auto" />

<p>Here’s an example written in Python:</p>
<pre><code class="language-python"># Define a list of players (unique identifiers in the system)
player_uris = [
    "players/kolbe",
    "players/smith",
    "players/duplessis",
    "players/mapimpi"
]

# Base URL of our rugby API or website
base_url = "https://tech-journeys.com/"

# Loop through each player and generate their full URL
for uri in player_uris:
    player_url = f"{base_url}{uri}"
    print("Player URI:", uri)
    print("Player URL:", player_url)
    print("-" * 40)  # separator for readability
</code></pre>
<h3>Output</h3>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1771212641833/4cdcc6b0-ade7-4ca9-be88-2b355212f22a.png" alt="" />

<hr />
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">Scenario</mark></h3>
<p>Imagine we are building an international rugby statistics platform. One of our resources is a player.</p>
<p>To interact with that player, REST allows a small set of actions called <strong>HTTP</strong> <em><strong>methods</strong></em>. These are the legal moves of the game, <strong>defining what you can do with each resource</strong>.</p>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">GET</mark> – Scout Watching the Match</h3>
<p>A <code>GET</code> request <em><strong>asks</strong></em> the server <em><strong>for</strong></em> information without changing anything. Think of it as a scout reviewing footage, observing without interfering.</p>
<pre><code class="language-python">import requests

# Get Cheslin Kolbe's stats
response = requests.get("https://tech-journeys.com/players/kolbe") #This URL doesn't really exist :)
player_data = response.json()
print(player_data)
</code></pre>
<h3>Output</h3>
<pre><code class="language-json">{
  "name": "Cheslin Kolbe",
  "team": "South Africa",
  "position": "Wing",
  "special_move": "Ankle Breaker",
  "world_cups": [2019, 2023]
}
</code></pre>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">POST</mark> – Adding a New Player</h3>
<p>A <code>POST</code> request <em><strong>sends</strong></em> <em><strong>new</strong></em> information to the server to create something that <strong>does not <em>yet</em> exist</strong>. This is like officially adding a new player to the squad.</p>
<pre><code class="language-python">import requests

new_player = {
    "name": "Canan Moodie",
    "team": "South Africa",
    "position": "Centre",
    "world_cups": [2023]
}

response = requests.post("https://tech-journeys.com/players", json=new_player)
print(response.status_code)  # 201 means created
</code></pre>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">PUT</mark> – Updating a Player</h3>
<p>A <code>PUT</code> request <em><strong>replaces</strong></em> an <strong>existing</strong> resource with the version you send, so the body should contain the full updated resource (partial updates are conventionally done with <code>PATCH</code>). For example, updating a player after a tactical position change.</p>
<pre><code class="language-python">import requests

updated_player = {
    "name": "Cheslin Kolbe",
    "team": "South Africa",
    "position": "Fullback",  # changed from "Wing"
    "world_cups": [2019, 2023]
}

response = requests.put("https://tech-journeys.com/players/kolbe", json=updated_player)
print(response.status_code)  # 200 means success
</code></pre>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">DELETE</mark> – Removing a Player</h3>
<p>A <code>DELETE</code> request <em><strong>removes</strong></em> a resource <strong>completely</strong>. This is like a <strong>permanent</strong> substitution.</p>
<pre><code class="language-python">import requests

response = requests.delete("https://tech-journeys.com/players/retired_player_id")
print(response.status_code)  # 204 means no content, successfully deleted
</code></pre>
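<p>The status codes in the examples above follow a common convention. As a quick illustrative reference (the mapping below is a sketch covering only the codes used here, not an exhaustive list), they can be summarised in Python:</p>
<pre><code class="language-python">def describe_status(code):
    """Map the common REST status codes used above to their meaning."""
    meanings = {
        200: "OK: request succeeded",
        201: "Created: new resource added",
        204: "No Content: deleted successfully",
        404: "Not Found: resource does not exist",
    }
    return meanings.get(code, "Unknown status")

print(describe_status(201))  # Created: new resource added
print(describe_status(418))  # Unknown status
</code></pre>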
<hr />
<h2>A Simple System Design View</h2>
<p><strong>System design</strong> is the process of planning and organising all the parts of a software system so it works reliably and efficiently and can handle real-world use. You can think of it as designing a rugby stadium and playbook: you need to know where everything goes, how players move, and how referees, coaches, and spectators interact.</p>
<p>For a REST API, the high-level view looks like this:</p>
<pre><code class="language-plaintext">[Client] ----------&gt; [REST API Server] ---------&gt; [Database]
GET/POST/PUT/DELETE                               CRUD operations
</code></pre>
<ul>
<li><p><strong>Client</strong> – could be a web app, mobile app, or Python script.</p>
</li>
<li><p><strong>REST API Server</strong> – receives requests, enforces rules (HTTP methods), and manages resources.</p>
</li>
<li><p><strong>Database</strong> – stores the resources like players, matches, and stats.</p>
</li>
</ul>
<p>Good system design ensures your API behaves <strong>predictably,</strong> <strong>scales</strong> well, and is <strong>easier to maintain</strong>, like a well-coached rugby team.</p>
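<p>The three parts above can be sketched in plain Python, with a dictionary standing in for the database and a single function playing the role of the REST API server. This is only an illustration of how HTTP methods map to CRUD operations, not a real server:</p>
<pre><code class="language-python"># In-memory "database" of players (stand-in for the real database layer)
players = {"kolbe": {"name": "Cheslin Kolbe", "position": "Wing"}}

def handle_request(method, player_id, body=None):
    """Route a request the way a REST API server would."""
    if method == "GET":                    # Read
        return players.get(player_id, {"error": "404 Not Found"})
    if method == "POST":                   # Create
        players[player_id] = body
        return {"status": "201 Created"}
    if method == "PUT":                    # Update (replace)
        players[player_id] = body
        return {"status": "200 OK"}
    if method == "DELETE":                 # Delete
        players.pop(player_id, None)
        return {"status": "204 No Content"}
    return {"error": "405 Method Not Allowed"}

print(handle_request("GET", "kolbe"))
print(handle_request("POST", "moodie", {"name": "Canan Moodie", "position": "Centre"}))
print(handle_request("DELETE", "moodie"))
</code></pre>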
<hr />
<h3>Looking at the Language of the Game: <mark class="bg-yellow-200 dark:bg-yellow-500/30">JSON</mark></h3>
<p>When a server responds, it doesn’t send pictures of Kolbe. It sends <em><strong>data</strong></em>.</p>
<p>That data is usually formatted as <code>JSON</code>, JavaScript Object Notation, <strong>a lightweight text-based structure designed to be easy for both humans and machines to read</strong>. JSON organises information into <em><strong>key-value pairs</strong></em>, much like a stat sheet in rugby.</p>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">Example of a player in JSON:</mark></p>
<pre><code class="language-json">{
  "name": "Cheslin Kolbe",
  "team": "South Africa",
  "position": "Wing",
  "special_move": "Ankle Breaker",
  "world_cups": [2019, 2023]
}
</code></pre>
<p>Here, <code>"name"</code>, <code>"team"</code>, <code>"position"</code>, <code>"special_move"</code> and <code>"world_cups"</code> are the <em><strong>keys</strong></em>, and the <em><strong>values</strong></em> <em>describe</em> the player.</p>
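<p>In Python, the standard library’s <code>json</code> module converts that text into a dictionary, so each value can be looked up by its key:</p>
<pre><code class="language-python">import json

# The same player record as raw JSON text (what the server actually sends)
player_json = """{
  "name": "Cheslin Kolbe",
  "team": "South Africa",
  "position": "Wing",
  "world_cups": [2019, 2023]
}"""

player = json.loads(player_json)  # parse JSON text into a Python dict
print(player["name"])             # values are looked up by key: Cheslin Kolbe
print(player["world_cups"][0])    # JSON arrays become Python lists: 2019
</code></pre>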
<hr />
<h2>API Security</h2>
<p>You wouldn’t just let anyone walk into the changing room, and you shouldn’t just let anyone access your data. So make sure security is <strong>always</strong> at the forefront.</p>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">Authentication and Authorisation</mark></h3>
<p><strong>Authentication</strong> answers the question, <strong>who are you?</strong> It verifies identity. In APIs, this is commonly done using an API key or a JWT, JSON Web Token, which is a signed token proving identity for a limited time.</p>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">Python example:</mark></p>
<pre><code class="language-python">import requests

headers = {"Authorization": "Bearer YOUR_JWT_TOKEN"}
response = requests.get("https://tech-journeys.com/players/kolbe", headers=headers)
print(response.json())
</code></pre>
<p><strong>Authorisation</strong> answers the question, <strong>what are you allowed to do?</strong> Even after identity is confirmed, the system checks permissions. Just because you have a ticket to the stadium doesn’t mean you can sit in the coaching box (although it would’ve been nice listening to the mastermind Rassie himself). Both checks happen on <strong>every request</strong>.</p>
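<p>Authorisation can be pictured as a lookup from role to allowed moves. The roles and permissions below are hypothetical and for illustration only; real systems usually rely on an RBAC or ACL framework rather than a hard-coded mapping:</p>
<pre><code class="language-python"># Hypothetical role-to-permission mapping for illustration only
PERMISSIONS = {
    "scout": {"GET"},
    "coach": {"GET", "POST", "PUT"},
    "admin": {"GET", "POST", "PUT", "DELETE"},
}

def is_authorised(role, method):
    """Return True if the given role may perform the given HTTP method."""
    return method in PERMISSIONS.get(role, set())

print(is_authorised("scout", "GET"))     # True: a scout may watch
print(is_authorised("scout", "DELETE"))  # False: but may not remove players
</code></pre>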
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">Encryption</mark></h3>
<p>Data travelling between a client and a server <strong>must</strong> be protected. <a href="https://www.cloudflare.com/learning/ssl/what-is-https/">HTTPS</a> uses <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/">TLS</a>, Transport Layer Security, to encrypt communication. This ensures that if a "hacker" intercepts the data while it’s travelling between the Cape Town server and your phone, all they see is scrambled nonsense.</p>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">Rate Limiting</mark></h3>
<p>APIs must defend against overload. Rate limiting <strong>restricts</strong> <strong>how many requests a client can make in a given period</strong>, for example 100 requests per minute. This prevents abuse and protects the system from <a href="https://www.cloudflare.com/learning/ddos/glossary/denial-of-service/">Denial-of-Service</a> (<strong>DoS</strong>) attacks, which try to overwhelm servers by flooding them with traffic. You can think of it as crowd control at a sold-out international match, because without limits, the system would collapse under the pressure of <strong>too many requests</strong>.</p>
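<p>One common implementation is a sliding-window limiter, which remembers the timestamps of recent requests and rejects anything beyond the allowed count. The sketch below is illustrative, not production-grade (real deployments typically rate-limit at a gateway or with a shared store such as Redis):</p>
<pre><code class="language-python">import time
from collections import deque

class RateLimiter:
    """Allow at most `limit` requests per sliding `window` of seconds."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.timestamps = deque()  # times of recently allowed requests

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Discard requests that have aged out of the window
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.limit:
            return False           # over the limit: reject
        self.timestamps.append(now)
        return True

limiter = RateLimiter(limit=3, window=60)
for i in range(5):                 # five requests arriving at the same instant
    print(f"request {i + 1}:", "allowed" if limiter.allow(now=0.0) else "rejected")
</code></pre>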
<hr />
<h2>How I Approach Complex Systems</h2>
<p>Over time, I’ve learned that most systems only feel complex until you relate them to something familiar. Once you do that, the moving parts become easier to reason about.</p>
<p>A <strong>GET</strong> request is just observing state.<br />A <strong>POST</strong> request is introducing change.<br />Access control is no different from checking permissions at a gate.</p>
<p>The specific analogy doesn’t matter. It could be rugby, football, baking, or fixing cars. What matters is mapping abstract behaviour to something you already understand.</p>
<p>That shift in thinking turns technical concepts into something more durable. Instead of memorising terminology, you recognise patterns. And once you recognise the pattern, it’s much easier to apply it when you’re building or debugging real systems.</p>
]]></content:encoded></item><item><title><![CDATA[Would You Hire Yourself? The Difference Between Stagnation and Growth]]></title><description><![CDATA[With over eight years in the tech industry, I’ve learned a simple but uncomfortable truth. Effort compounds. The work you put in today pays off later in ways you cannot predict when you’re just starti]]></description><link>https://blog.tech-journey.co.za/the-difference-between-stagnation-and-growth</link><guid isPermaLink="true">https://blog.tech-journey.co.za/the-difference-between-stagnation-and-growth</guid><category><![CDATA[Productivity]]></category><category><![CDATA[Career]]></category><category><![CDATA[networking]]></category><category><![CDATA[jobs]]></category><category><![CDATA[problem solving skills]]></category><category><![CDATA[success]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Sat, 11 Oct 2025 16:54:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1771460784652/51607bd0-f080-4f7d-bb19-8be8c0081107.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>With over eight years in the tech industry, I’ve learned a simple but uncomfortable truth. Effort compounds. The work you put in today pays off later in ways you cannot predict when you’re just starting out.</p>
<blockquote>
<h3>Credentials open doors. Consistent work determines progression.</h3>
</blockquote>
<p>Degrees, diplomas, and certifications have value, but that value is limited. They may help your CV pass initial screening or get you into an interview. Beyond that, they carry little weight on their own.</p>
<p>Once you're in a conversation, the focus shifts to how you reason, how you approach problems, and what you've actually built. Without concrete work to reference, there is little to anchor that discussion, and that gap becomes visible quickly.</p>
<hr />
<h2>No Shortcuts in Competitive Markets</h2>
<p>The job market is tough, and there’s no real bypass around that reality.</p>
<p>A common assumption, especially among graduates and career switchers, is that companies will provide structured, in-depth training after hiring. In practice, that expectation often doesn’t match how teams operate.</p>
<p>Attitude does matter, but it has limited weight on its own. Intent is not a substitute for exposure, repetition, or hands-on experience with real systems.</p>
<hr />
<h2>Think Like a Hiring Manager</h2>
<p>From a hiring manager’s perspective, the role exists to offload execution so they can focus on higher-level work and system direction.</p>
<p>In that context, the expectation is rarely to onboard someone from zero. The preference is usually for someone who can contribute meaningfully within a short time frame, even if they're still growing in the role.</p>
<p>Support and guidance are part of the environment, but most positions assume a baseline of practical competence is already in place before day one.</p>
<hr />
<h2>Credentials Without Evidence Have Limited Reach</h2>
<p>If getting interviews or traction is difficult, relying only on qualifications on paper is often not enough. At some point, visibility and demonstrated ability start to matter more than listed credentials.</p>
<p>That visibility comes from consistent output. Whether through writing, short technical notes, videos, or documenting personal projects, the goal is the same: <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">show</mark></strong> how you think, how you approach problems, and how you communicate technical ideas.</p>
<p>Networking and public work are part of engineering practice: they surface how you think, communicate, and apply technical understanding in context, which a CV alone cannot convey.</p>
<p>Whether the goal is a first role or progression within an existing one, the underlying principle stays consistent: make your work and reasoning <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">visible</mark></strong>.</p>
<hr />
<h2>Ask the Hard Question</h2>
<p>At some point, self-assessment becomes necessary.</p>
<p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Would you hire yourself for this role, based on what you can demonstrate today?</mark></strong></p>
<p>If the answer is yes, there should be clear, practical reasons you can point to (work you’ve done, problems you’ve solved, and evidence of how you operate).  </p>
<p>If your answer is no, then you’ve just identified a clear roadmap for your <strong>Tech Journey</strong>🙂</p>
<p>Hiring is centred on problem-solving capability. Stating that ability carries little weight on its own. What matters is showing how you approach problems, how you break them down, and whether you can carry that process through consistently.</p>
<hr />
<h2>Documentation Is Part of the Job</h2>
<p>In technical environments, documentation is a requirement for system longevity. People rotate, teams change, but the systems remain.</p>
<p>Good documentation reduces operational risk, improves maintainability, and lowers the cost of future change. The act of writing it also exposes gaps in understanding, making it a form of validation rather than just record-keeping.</p>
<blockquote>
<p><em><mark class="bg-yellow-200 dark:bg-yellow-500/30">Tip</mark></em>: If you are already in a role and want to remain effective over time, avoid outsourcing all reasoning to others. Asking questions is part of the workflow, but doing so without first attempting to understand the system yourself will eventually slow both learning and contribution.</p>
</blockquote>
<hr />
<h2>Experience Doesn’t Automatically Translate to Growth</h2>
<p>Be careful not to confuse tenure with progression. Years of experience can accumulate without a corresponding increase in depth, ownership, or impact.</p>
<p>This often becomes visible when progression is expected but not granted. Being reliable within a defined scope is valuable, but it does not always translate into readiness for broader responsibility.</p>
<p>In many cases, the gap is not time spent in the industry, but the extent to which work has evolved in complexity, scope, and impact.</p>
<hr />
<h2>Use AI Carefully and Intentionally</h2>
<p>Over-reliance on AI to handle so-called tedious tasks can hide a deeper issue. Over time, people stop engaging directly with problems that once forced them to think critically and work through uncertainty. That kind of mental effort is what builds real technical strength.</p>
<p>AI is useful and should be part of the workflow, but as a <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">support tool</mark></strong> rather than a replacement for understanding. Every output still needs to be interpreted, validated, and understood in context. <em><strong>The reasoning behind a solution remains <mark class="bg-yellow-200 dark:bg-yellow-500/30">your responsibility</mark>, regardless of how it was produced.</strong></em></p>
<hr />
<h2>You Don’t Need Expensive Tools to Learn Properly</h2>
<p>Homelabbing since 2020 has made one thing clear: effective learning is not dependent on premium software or high-end hardware. Constraints often force a better understanding of fundamentals.</p>
<p>Free tiers and open-source tools are widely used in industry environments. They expose the same underlying concepts as enterprise platforms, which makes the skills <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">transferable</mark></strong> across stacks.</p>
<p>Tools such as Git, GitHub, Linux, Docker, Nginx, Python, Bash, Prometheus, Grafana, the ELK stack, n8n, Proxmox, and others can all be learned without financial cost. Some require account creation, while others can be installed and run locally for experimentation and practice.</p>
<p>The focus is not the tool itself, but the problem it solves and the system behaviour it represents.</p>
<hr />
<h2>Stop Waiting and Start Building</h2>
<p>At some point, planning stops being productive. Prolonged preparation without execution leads to stagnation while others gain experience through repetition and real work.</p>
<p>If there is intent to get serious about learning, it often requires practical trade-offs. That can include saving for an affordable, second-hand, upgradeable machine. Appearance is irrelevant. Stability and usability matter more than specifications or aesthetics.</p>
<p>Once a usable setup is in place, the workflow becomes straightforward: set up a lab, build and deploy something functional (<strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">don't worry about perfection</mark></strong>), break it, recover it, and document the process. That cycle is where understanding develops.</p>
<blockquote>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">NB</mark>: Avoid switching between tools or technologies without direction. Constantly chasing new stacks creates surface-level familiarity without depth. Sustained progress comes from selecting a small set of relevant tools and working with them long enough to understand their behaviour in real scenarios.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[A Comprehensive Guide to ITIL v4]]></title><description><![CDATA[ITIL v4, or the Information Technology Infrastructure Library version 4, is a framework designed to help organisations manage their IT services effectively. It provides a comprehensive set of best pra]]></description><link>https://blog.tech-journey.co.za/a-comprehensive-guide-to-itil-v4</link><guid isPermaLink="true">https://blog.tech-journey.co.za/a-comprehensive-guide-to-itil-v4</guid><category><![CDATA[ITIL]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Sun, 10 Aug 2025 07:03:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772441716/f9e125a7-47c2-48c4-b208-8f7e0fdb6db6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>ITIL v4, or the <strong>Information Technology Infrastructure Library version 4</strong>, is a framework designed to <strong>help organisations manage their IT services effectively</strong>. It provides a comprehensive set of best practices for making sure IT services are properly aligned with the needs of the business, ensuring IT consistently delivers value. This blog will explore the main components of ITIL v4, its principles, and how it operates to improve service management.</p>
<h4>What is ITIL v4?</h4>
<p>ITIL v4 is the most recent version of the ITIL framework, which has been widely adopted globally since its beginnings in the 1980s. It marks a significant shift from previous versions by incorporating modern philosophies such as <a href="https://agilealliance.org/agile101/"><strong>Agile</strong></a>, <a href="https://www.atlassian.com/devops"><strong>DevOps</strong></a>, and <a href="https://theleanway.net/The-Five-Principles-of-Lean"><strong>Lean</strong></a>, making it much more relevant in today’s fast-paced digital world. ITIL v4 takes a holistic approach to service management, <strong>focusing on the co-creation of value through collaborative efforts between IT and the business,</strong> rather than just on the IT department as a silo. It recognises that the entire <strong>organisation must work together to deliver valuable services</strong>.</p>
<hr />
<h3>Key Components of ITIL v4</h3>
<h3>1. The Service Value System (SVS)</h3>
<p>At the heart of ITIL v4 is the <strong>Service Value System</strong> (SVS). This is a comprehensive model that shows how all of an organisation’s components and activities work together to facilitate value creation. The SVS <strong>ensures that the organisation is aligned and works together as a cohesive unit</strong>. The SVS includes several key elements:</p>
<ul>
<li><p><strong>Guiding Principles</strong>: These are universal recommendations that guide an organisation in all circumstances, regardless of changes in its goals or strategies.</p>
</li>
<li><p><strong>Governance</strong>: This component ensures that policies and continual improvement are aligned with the organisation’s overall objectives and are directed by the governing body.</p>
</li>
<li><p><strong>Service Value Chain</strong>: A flexible operating model that outlines the key activities required to respond to demand and create value.</p>
</li>
<li><p><strong>Practices</strong>: These are the sets of organisational resources designed for performing work or accomplishing an objective. ITIL v4 presents 34 practices that build on the processes from previous versions.</p>
</li>
<li><p><strong>Continual Improvement</strong>: A recurring activity that ensures the organisation is always enhancing its services and practices. This is a core theme throughout the entire framework.</p>
</li>
</ul>
<h3>2. The Guiding Principles</h3>
<p>ITIL v4 has <strong>seven guiding principles</strong> that help organisations adopt and adapt the framework to their specific needs. These principles are designed to be practical and applicable in any situation:</p>
<ol>
<li><p><strong>Focus on Value</strong>: Everything you do should directly or indirectly create value for stakeholders.</p>
</li>
<li><p><strong>Start Where You Are</strong>: Don’t rip and replace your existing systems. Instead, assess what you have and how it can be improved.</p>
</li>
<li><p><strong>Progress Iteratively with Feedback</strong>: Implement changes in small, manageable steps and continuously gather feedback to refine your approach.</p>
</li>
<li><p><strong>Collaborate and Promote Visibility</strong>: Encourage teamwork and transparency across all levels and departments to avoid silos.</p>
</li>
<li><p><strong>Think and Work Holistically</strong>: Recognise that services are part of a larger, interconnected system. Consider all components and how they interact.</p>
</li>
<li><p><strong>Keep It Simple and Practical</strong>: Avoid unnecessary complexity and bureaucracy. Focus on what is essential to achieve your objectives.</p>
</li>
<li><p><strong>Optimise and Automate</strong>: Streamline manual processes and use automation where it makes sense to improve efficiency and reduce human error.</p>
</li>
</ol>
<h3>3. The Service Value Chain (SVC)</h3>
<p>The <strong>Service Value Chain</strong> is a central element of the SVS. It is an operational model that consists of six activities that organisations can use to create value in response to demand. <strong>The SVC is highly flexible and can be adapted for any scenario</strong>. Its activities are:</p>
<ul>
<li><p><strong>Plan</strong>: Understand the organisation’s vision, current status, and objectives, and create plans for improvement.</p>
</li>
<li><p><strong>Improve</strong>: Continuously enhance products, services, and practices across the value chain.</p>
</li>
<li><p><strong>Engage</strong>: Foster relationships with all stakeholders, including customers, suppliers, and partners, to understand their needs and requirements.</p>
</li>
<li><p><strong>Design and Transition</strong>: Develop and implement new or changed services, ensuring they meet expectations for quality, cost, and time.</p>
</li>
<li><p><strong>Obtain/Build</strong>: Acquire or develop the necessary resources, whether hardware, software, or personnel, for service delivery.</p>
</li>
<li><p><strong>Deliver and Support</strong>: Ensure services are delivered effectively and provide support to users, managing any incidents and requests.</p>
</li>
</ul>
<h3>4. The Practices</h3>
<p>ITIL v4 expands on the traditional processes by introducing <strong>34 practices</strong> that encompass various aspects of service management. These practices are sets of resources and capabilities that are flexible and can be tailored to an organisation’s specific needs. They are grouped into <strong>three</strong> main categories:</p>
<ol>
<li><p><strong>General Management Practices</strong>: These are practices adopted and adapted for service management from general business management domains (e.g., Risk Management, Information Security Management).</p>
</li>
<li><p><strong>Service Management Practices</strong>: These were developed specifically for service management and are based on a long history of best practices (e.g., Incident Management, Service Desk).</p>
</li>
<li><p><strong>Technical Management Practices</strong>: These are adapted from the technology management domain for specific technical services (e.g., Infrastructure and Platform Management).</p>
</li>
</ol>
<hr />
<h3>How ITIL v4 Works in an Organisation</h3>
<p>Implementing ITIL v4 is a journey of continuous improvement, not a one-time project. It typically involves a few key steps:</p>
<ol>
<li><p><strong>Assessment</strong>: An organisation begins by assessing its current service management practices to understand its strengths and weaknesses, and to identify areas for improvement.</p>
</li>
<li><p><strong>Training and Awareness</strong>: Staff members are trained on ITIL v4 principles and practices to ensure a common understanding and to get buy-in from all levels.</p>
</li>
<li><p><strong>Adoption of Practices</strong>: Organisations select the most relevant practices from ITIL v4 and integrate them into their existing processes, tailoring them to their specific context.</p>
</li>
<li><p><strong>Continual Improvement</strong>: An organisation establishes a culture of continual improvement, regularly reviewing and refining its practices based on feedback, performance metrics, and a changing business environment.</p>
</li>
</ol>
<h3>Conclusion</h3>
<p>ITIL v4 provides a robust framework for organisations seeking to enhance their IT service management capabilities. By focusing on value creation, adopting the guiding principles, and using the Service Value Chain and practices, organisations can align their IT services with business objectives, boost efficiency, and foster better collaboration. As the digital landscape continues to evolve, ITIL v4 remains a vital and flexible tool for organisations aiming to thrive in a competitive environment.</p>
]]></content:encoded></item><item><title><![CDATA[Why Documentation Isn’t Just a Chore]]></title><description><![CDATA[If you’ve ever spent hours deciphering someone else’s cryptic configuration, or frantically tried to recall that one crucial step you performed months ago, you’ll understand the silent agony of poor (]]></description><link>https://blog.tech-journey.co.za/importance-of-documentation</link><guid isPermaLink="true">https://blog.tech-journey.co.za/importance-of-documentation</guid><category><![CDATA[Technical writing ]]></category><category><![CDATA[documentation]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Tue, 05 Aug 2025 18:01:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772362240/52744ff4-1314-421c-a78b-10641b057cf4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you’ve ever spent hours deciphering someone else’s cryptic configuration, or frantically tried to recall that one crucial step you performed months ago, you’ll understand the silent agony of poor (or non-existent) documentation. Just like overthinking can stall our progress, the lack of clear documentation can cripple our teams and hinder our individual growth.</p>
<p>In the world of <strong>IT,</strong>  whether we’re on the front lines of helpdesk support, managing critical infrastructure, building automation pipelines, or writing integrations, it’s easy to prioritise <em><strong>doing</strong></em> over <em><strong>explaining</strong></em>. We troubleshoot, we fix, we deploy, and we move on. Documentation often feels like red tape, a slowdown we can’t afford.</p>
<p>But what if documentation isn’t a burden? What if it’s a <em>force multiplier</em> i.e, a way to increase the value of the work we’ve already done?</p>
<hr />
<h3>What Good Documentation Looks Like</h3>
<p>Good documentation isn’t about writing <em>War and Peace</em> for every single task. It’s about creating clear, concise, and easily accessible resources that help us and our colleagues understand:</p>
<ul>
<li><p><strong>What was done:</strong> The exact steps taken to configure a system, resolve an issue, or implement a change.</p>
</li>
<li><p><strong>Why it was done:</strong> The reasoning behind decisions, the context of the problem, and the intended outcome.</p>
</li>
<li><p><strong>How it was done:</strong> The specific tools, commands, and configurations used.</p>
</li>
<li><p><strong>Where to find it:</strong> A centralised and organised system for storing and retrieving information.</p>
</li>
</ul>
<hr />
<h3>A Real-World Process for Writing Better Documentation</h3>
<blockquote>
<p>Creating documentation isn’t about perfection, it’s more about <strong>clarity</strong>, <strong>consistency</strong>, and <strong>actionability</strong>. The flow below outlines a practical approach I use to create high-impact, team-friendly documentation.</p>
</blockquote>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772359680/75721d4b-c1a4-4799-b28c-92d32c52ec95.gif" alt="This diagram was created using draw.io" />

<p><strong>Documentation can take many forms:</strong></p>
<ul>
<li><p>Knowledge base articles.</p>
</li>
<li><p>How-to guides.</p>
</li>
<li><p>Workflows.</p>
</li>
<li><p>Troubleshooting guides.</p>
</li>
<li><p>Release notes.</p>
</li>
<li><p>Onboarding/Off-boarding processes.</p>
</li>
<li><p>User manuals.</p>
</li>
<li><p>Infrastructure diagrams and playbooks for operations teams.</p>
</li>
</ul>
<p>The key is to tailor the format and depth to <strong>suit the <em>audience and purpose</em>   without compromising clarity</strong>.</p>
<hr />
<h3>The Hidden Power of Documented Processes</h3>
<p>Just as <a href="https://medium.com/@luqmaanmarthinus93/from-overthinking-to-execution-confronting-analysis-paralysis-in-tech-18ee46701e2d">analysis paralysis</a> can stem from a desire to “get it right,” the reluctance to document can sometimes come from a feeling that “it’s faster if I just do it myself.” While this might be true in the short term, it creates <strong>significant long-term risks</strong>.</p>
<h3><strong>Preventing the Single Point of Failure:</strong></h3>
<p>One of the most critical benefits of thorough documentation is its ability to <strong>mitigate</strong> the risk of <strong>a single point of failure</strong>. Imagine a scenario where only one person on your team knows how to troubleshoot a critical system or perform a key deployment. What happens when that person is unavailable due to illness, vacation, or a change in employment?</p>
<p>Suddenly, the team is scrambling, productivity grinds to a halt, and stress levels skyrocket. This is where documentation acts as a vital safety net. By clearly outlining processes and solutions, we empower the entire team to:</p>
<ul>
<li><p><strong>Troubleshoot independently:</strong> When issues arise, well-documented procedures allow other team members to diagnose and resolve problems without relying solely on the “expert.”</p>
</li>
<li><p><strong>Onboard new team members efficiently:</strong> Comprehensive documentation provides a valuable resource for new hires to quickly get up to speed on systems and processes.</p>
</li>
<li><p><strong>Maintain consistency:</strong> Documented standards and procedures ensure that tasks are performed consistently, reducing errors and improving reliability.</p>
</li>
<li><p><strong>Share knowledge and learn from each other:</strong> Documentation becomes a living repository of team knowledge, fostering collaboration and continuous learning.</p>
</li>
<li><p><strong>Reduce reliance on tribal knowledge:</strong> Tacit knowledge held only by individuals is a significant risk. Documentation helps to codify this knowledge and make it accessible to everyone.</p>
</li>
</ul>
<hr />
<h3>My Journey Towards Documentation Discipline</h3>
<p>Like many, I used to view documentation as a necessary evil. It felt time-consuming and often got pushed to the bottom of the priority list. However, after experiencing firsthand the chaos and frustration caused by a lack of documentation, I’ve come to see it as an indispensable part of effective IT practice.</p>
<p>I’ve started making a conscious effort to document as I go, even for seemingly small tasks. I’ve also championed the creation of a centralised knowledge base for my team. The initial investment of time has already paid dividends in terms of reduced support requests, faster troubleshooting, and a more resilient team.</p>
<hr />
<h3>A Call to Document: Empower Yourself and Your Team</h3>
<p>It’s time we shift our perspective on documentation: it’s an investment in our collective knowledge, our team’s resilience, and our own future success.</p>
<p>Just as we challenged ourselves to embrace “comfortable incompletion” in the pursuit of progress, let’s commit to embracing consistent and clear documentation. It’s a simple yet powerful way to prevent bottlenecks, empower our colleagues, and ensure that the knowledge we gain doesn’t walk out the door with any single individual.</p>
<p>Start small. Document one key process this week. Share your knowledge. You’ll be surprised by the positive impact it has on you and your team. 🙂</p>
]]></content:encoded></item><item><title><![CDATA[Desktop vs Laptop vs Server: Understanding System Roles & Real-World Trade-offs]]></title><description><![CDATA[In computing, the terms desktop, laptop, and server are often used loosely, as if they’re interchangeable. But like comparing a sports car, a motorcycle, and a heavy-duty truck, each is engineered for]]></description><link>https://blog.tech-journey.co.za/desktop-laptop-or-server</link><guid isPermaLink="true">https://blog.tech-journey.co.za/desktop-laptop-or-server</guid><category><![CDATA[proxmox]]></category><category><![CDATA[Homelab]]></category><category><![CDATA[self-hosted]]></category><category><![CDATA[server]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Sun, 06 Jul 2025 16:36:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772473873/b235cb42-b4d2-41d2-bada-9a08e8823c7b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In computing, the terms <em>desktop</em>, <em>laptop</em>, and <em>server</em> are often used loosely, as if they’re interchangeable. But like comparing a sports car, a motorcycle, and a heavy-duty truck, each is engineered for a fundamentally different purpose.</p>
<p>This guide breaks down those differences clearly, and more importantly, shows why an old desktop or laptop can still become a powerful asset in a home lab.</p>
<h2><strong>Desktops: The Personal Productivity Machine</strong></h2>
<p><strong>What is a Desktop?</strong><br />A desktop computer is designed for individual use. It’s your daily driver for tasks like browsing, editing documents, gaming, content creation, or video conferencing. It prioritises user experience, responsiveness, and visual performance.</p>
<h3><strong>Typical Hardware Features:</strong></h3>
<ul>
<li><p><strong>CPU</strong>: High-clock-speed processors like Intel Core i5/i7/i9 or AMD Ryzen, which are great for single-threaded workloads</p>
</li>
<li><p><strong>RAM</strong>: 8–32GB (non-ECC), optimised for <strong>general</strong> productivity</p>
</li>
<li><p><strong>Storage</strong>: Fast SSDs for OS/apps + optional HDDs for data. Usually no RAID</p>
</li>
<li><p><strong>GPU</strong>: Often dedicated, supporting gaming, design, and media workloads</p>
</li>
<li><p><strong>Networking</strong>: Single Gigabit Ethernet port and Wi-Fi</p>
</li>
<li><p><strong>Redundancy</strong>: Minimal—hardware failure usually means downtime</p>
</li>
<li><p><strong>Form Factor</strong>: Towers, small form factor (SFF), or all-in-one</p>
</li>
<li><p><strong>Operating Systems</strong>: Windows, macOS, or Linux distros with a GUI</p>
</li>
</ul>
<h3><strong>Common Use Cases</strong></h3>
<ul>
<li><p>Browsing, email and office work</p>
</li>
<li><p>Gaming and creative content</p>
</li>
<li><p>Media streaming and light personal hosting</p>
</li>
<li><p>Development and learning environments</p>
</li>
</ul>
<hr />
<h2>Laptops: Portable, But Compromised</h2>
<p><strong>What is a Laptop?</strong></p>
<p>A laptop delivers desktop-like functionality in a portable form. It’s designed for mobility, not sustained performance or long-term uptime.</p>
<h3><strong>Hardware Characteristics</strong></h3>
<ul>
<li><p><strong>CPU:</strong> Power-efficient versions of desktop processors</p>
</li>
<li><p><strong>RAM:</strong> Typically 8–32GB, often soldered or limited in upgradeability</p>
</li>
<li><p><strong>Storage:</strong> Usually a single SSD/NVMe slot</p>
</li>
<li><p><strong>GPU</strong>: Integrated or lower-tier dedicated GPU.</p>
</li>
<li><p><strong>Networking</strong>: Wi-Fi primary, <em>sometimes</em> Ethernet (less common in newer models).</p>
</li>
<li><p><strong>Redundancy</strong>: None.</p>
</li>
<li><p><strong>Battery</strong>: Built-in UPS equivalent, but not suited for continuous 24/7 operation</p>
</li>
<li><p><strong>Form Factor:</strong> Compact, all-in-one design</p>
</li>
<li><p><strong>Operating Systems:</strong> Same as desktops</p>
</li>
</ul>
<h3><strong>Common Use Cases</strong></h3>
<ul>
<li><p>Mobile productivity and remote work</p>
</li>
<li><p>Learning and experimentation</p>
</li>
<li><p>Remote administration of servers</p>
</li>
<li><p>Light-duty home lab workloads</p>
</li>
</ul>
<hr />
<h2><strong>Servers: Built for Reliability &amp; Scale</strong></h2>
<p><strong>What is a Server?</strong><br />A server is purpose-built to deliver services to other systems over a network. It’s designed for continuous operation, high reliability, and scalability under load. This is where critical infrastructure lives—web apps, databases, authentication systems, and more.</p>
<h3>Typical Hardware Characteristics</h3>
<ul>
<li><p><strong>CPU:</strong> Multi-core, multi-socket processors (Intel Xeon, AMD EPYC)</p>
</li>
<li><p><strong>RAM:</strong> 64GB to 1TB+ with ECC for data integrity</p>
</li>
<li><p><strong>Storage:</strong> RAID arrays, hot-swappable drives, enterprise-grade disks</p>
</li>
<li><p><strong>GPU:</strong> Minimal or none unless required for specialised workloads</p>
</li>
<li><p><strong>Networking:</strong> Multiple NICs, often 10GbE+, with redundancy and bonding</p>
</li>
<li><p><strong>Redundancy:</strong> Dual PSUs, redundant cooling, failover systems</p>
</li>
<li><p><strong>Form Factor:</strong> Rack-mounted or enterprise towers (loud, heavy, and built for data centres)</p>
</li>
<li><p><strong>Operating Systems:</strong> Ubuntu Server, Debian, RHEL, and others.<br /><strong>Note</strong>: I’m <strong>not</strong> a big Windows fan, so you won’t see much content relating to Windows OS (you’re <strong>most</strong> welcome 😄).</p>
</li>
</ul>
<h3>Common Use Cases</h3>
<ul>
<li><p>Hosting web applications and APIs</p>
</li>
<li><p>Running databases and storage systems</p>
</li>
<li><p>Virtualisation and container platforms</p>
</li>
<li><p>Identity services (LDAP/AD), backups, and monitoring</p>
</li>
</ul>
<hr />
<h2>The Key Difference</h2>
<p>A desktop or laptop is built for <em>one user, interactive workloads, and visual output</em>.<br />A server is built for <em>many users, continuous workloads, and service delivery</em>.</p>
<h3>Turning Desktops and Laptops into Home Lab Servers</h3>
<p>I’ve found that in resource-constrained environments, repurposing a desktop or laptop as a home lab server is a practical and effective approach, and it’s one I’ve been using since 2020.</p>
<ul>
<li><p><strong>Low upfront cost:</strong> Leverage hardware you already own instead of investing in new equipment (at least for now)</p>
</li>
<li><p><strong>Reduced power consumption:</strong> More efficient than enterprise gear, especially where electricity costs are high</p>
</li>
<li><p><strong>Hands-on learning:</strong> Ideal for building real-world skills with Linux, containerisation, and virtualisation platforms such as Docker and Proxmox</p>
</li>
</ul>
<h2>Practical Repurposing Tips</h2>
<p>When converting a desktop or laptop into a server, treat it like a production system with the following best practices:</p>
<p><strong>Optimisation</strong></p>
<ul>
<li><p>Remove unnecessary peripherals (webcams, audio devices)</p>
</li>
<li><p>Disable unused hardware in BIOS</p>
</li>
<li><p>Remove the dedicated GPU (if unused): lower idle power, less heat</p>
</li>
</ul>
<p><strong>Power &amp; Stability</strong></p>
<ul>
<li><p>Use a UPS, especially in regions with unstable power (<strong>load shedding</strong>)</p>
</li>
<li><p>Configure automatic power-on after outages in BIOS</p>
</li>
<li><p>Clean regularly, as dust kills airflow and shortens lifespan</p>
</li>
</ul>
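<p>The UPS point can go further than just plugging one in: if the UPS is managed by Network UPS Tools (NUT), a small script can trigger a clean shutdown before the battery runs flat. A minimal sketch, under the assumption that NUT is installed and a UPS named <code>myups</code> is configured (the name and the 20% threshold are placeholders for your own setup):</p>

```shell
#!/bin/sh
# Hedged sketch: poll a NUT-managed UPS and shut down cleanly on low battery.
# "myups" is a placeholder; replace it with the UPS name from your ups.conf.
CHARGE=$(upsc myups@localhost battery.charge 2>/dev/null)

if [ -n "$CHARGE" ] && [ "$CHARGE" -lt 20 ]; then
  logger "UPS battery at ${CHARGE}%; shutting down"
  shutdown -h now
fi
```

<p>Run it from cron every minute or so. NUT’s own <code>upsmon</code> daemon can handle shutdowns for you as well; the script just makes the logic explicit.</p>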
<p><strong>Cooling &amp; Noise</strong></p>
<ul>
<li><p>Improve airflow or run with the side panel open</p>
</li>
<li><p>Monitor thermals under load</p>
</li>
</ul>
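<p>To make the “monitor thermals” point concrete, here’s a minimal sketch using lm-sensors (an assumption: install the <code>lm-sensors</code> package and run <code>sensors-detect</code> once) that logs readings once a minute for an hour:</p>

```shell
#!/bin/sh
# Hedged sketch: sample temperatures once a minute for an hour.
# Assumes the lm-sensors package is installed and sensors-detect has run.
LOG="$HOME/thermal.log"

for i in $(seq 1 60); do
  { date -Is; sensors; echo; } >> "$LOG"   # timestamped reading per sample
  sleep 60
done
```

<p>Generate load in another terminal with something like <code>stress-ng --cpu "$(nproc)" --timeout 10m</code>, then scan the log for the peak temperatures.</p>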
<p><strong>Understand the Limitations</strong></p>
<ul>
<li><p>Software RAID instead of hardware RAID.</p>
</li>
<li><p>Accept the lack of ECC RAM and mitigate with regular backups.</p>
</li>
<li><p>No hot-swap? Keep backups and spare parts.</p>
</li>
</ul>
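<p>On the software RAID point: <code>mdadm</code> is the usual Linux tool. A hedged sketch for mirroring two spare disks; the device names <code>/dev/sdb</code> and <code>/dev/sdc</code> are placeholders, and creating the array destroys whatever is on them, so verify with <code>lsblk</code> first:</p>

```shell
# Sketch: build a RAID 1 mirror from two spare disks with mdadm.
# WARNING: this wipes /dev/sdb and /dev/sdc; double-check device names first.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/data
sudo mount /dev/md0 /mnt/data

# Watch the initial sync, then check array health
cat /proc/mdstat
sudo mdadm --detail /dev/md0
```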
<p><strong>Final Thoughts</strong><br />A desktop/laptop is your personal workstation: responsive and graphics-ready. A server is designed for non-stop, multi-user workloads. But when approached intelligently (<strong>do your research</strong>), a desktop/laptop can serve as a lab-grade server, giving you a platform to learn, experiment, and even deploy services that you’d use in the real world.</p>
<hr />
<h3>A Real Example</h3>
<p>This entire philosophy isn’t theoretical.</p>
<p>Below are some pictures of a Dell Latitude 5500 (8th Gen Core i5) that I purchased in <strong>July 2024</strong> for just 750 ZAR. It has been running Proxmox VE reliably ever since.</p>
<p>It started with:</p>
<ul>
<li><p>256GB NVMe SSD</p>
</li>
<li><p>4GB RAM</p>
</li>
</ul>
<p>And was gradually upgraded to:</p>
<ul>
<li><p>512GB NVMe SSD</p>
</li>
<li><p>32GB RAM</p>
</li>
</ul>
<p>It’s not pretty (or is it? Check the last picture), and it doesn’t look like a “real” server. However, it works, and that’s what matters most.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772470426/f1990c8c-c1eb-44ef-843c-b6e2859b331b.jpeg" alt="" />

<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772472104/5d697182-62b7-41de-a921-1b1ff7a05dee.jpeg" alt="" />

<h3>My creative work</h3>
<img src="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/5159531f-7803-4892-96fc-1b0b274b7a5f.jpg" alt="" style="display:block;margin:0 auto" />

<hr />
<p>In constrained environments—whether limited by budget, power, or access to enterprise hardware—waiting for the “ideal” setup only slows progress.</p>
<p>Start with what you have.<br />Observe how systems behave under real conditions.<br />Build something that works.</p>
<p>That repurposed machine on your desk may not look like enterprise infrastructure—but the experience you gain from running, breaking, and improving it is far more valuable than idle hardware sitting in a rack.</p>
]]></content:encoded></item><item><title><![CDATA[Understanding Virtualisation and Containerisation]]></title><description><![CDATA[In the ever-evolving landscape of IT, the quest for efficiency, scalability, and resource optimisation has led to the widespread adoption of two powerful technologies: virtualisation and containerisat]]></description><link>https://blog.tech-journey.co.za/understanding-virtualisation-and-containerisation</link><guid isPermaLink="true">https://blog.tech-journey.co.za/understanding-virtualisation-and-containerisation</guid><category><![CDATA[containers]]></category><category><![CDATA[containerization]]></category><category><![CDATA[virtualization]]></category><category><![CDATA[virtual machine]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Sat, 21 Jun 2025 16:46:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772369909/2dee21d4-5e6e-46dc-8a2e-0524dcaf0e6d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the ever-evolving landscape of IT, the quest for efficiency, scalability, and resource optimisation has led to the widespread adoption of two powerful technologies: <strong>virtualisation</strong> and <strong>containerisation</strong>. While often discussed in the same breath, they represent distinct approaches to abstracting applications and their environments, each with its unique strengths and use cases. Let’s delve into what they are, the problems they solve, and their fundamental differences.</p>
<hr />
<h3><strong>Virtualisation: The Power of the Virtual Machine</strong></h3>
<p><strong>What it is</strong>:<br />At its core, virtualisation is the technology that allows you to create <strong>software-based</strong>, or “<strong>virtual</strong>,” versions of computing resources, such as servers, storage devices, networks, and even operating systems. Imagine having a single powerful physical server, and being able to run multiple, independent “virtual machines” (VMs) on it, each behaving like a completely separate physical computer.</p>
<p>This magic is performed by a piece of software called a <a href="https://aws.amazon.com/what-is/hypervisor/">hypervisor</a>. The hypervisor sits directly on the physical hardware (Type-1, or bare-metal, like Proxmox VE (my favourite) or VMware ESXi) or on top of an existing operating system (Type-2, or hosted, like VirtualBox or VMware Workstation). It allocates the physical resources (CPU, RAM, storage, network) to each VM and manages their execution, ensuring they run in isolation from each other. Each VM includes its own operating system (the “guest OS”) and applications.</p>
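<p>As a taste of what that looks like in practice, here’s a hedged sketch of creating a VM from the Proxmox VE shell with the <code>qm</code> tool. The VM ID, the <code>local-lvm</code> storage name, and the ISO filename are all assumptions; adjust them to your own node’s storage layout:</p>

```shell
# Hedged sketch: create and start a small VM on a Proxmox VE node.
# VM ID 100, the "local-lvm" storage and the ISO path are placeholders.
qm create 100 \
  --name lab-vm \
  --memory 2048 \
  --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 \
  --ide2 local:iso/ubuntu-24.04-live-server-amd64.iso,media=cdrom \
  --boot order='scsi0;ide2'

qm start 100
```

<p>The same VM can be built through the web UI; the CLI version is simply easier to script and repeat.</p>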
<p><strong>Problems it solves</strong>:<br />Before virtualisation, deploying a new application often meant dedicating a physical server to it which led to:</p>
<ul>
<li><p><strong>Underutilisation of hardware:</strong> Most servers are not constantly running at 100% capacity, meaning significant computational power was often wasted.</p>
</li>
<li><p><strong>Server sprawl:</strong> A growing number of physical servers meant increased power consumption, cooling costs, and physical space requirements.</p>
</li>
<li><p><strong>Deployment complexity:</strong> Setting up a new server, installing the OS, and configuring applications was a time-consuming process.</p>
</li>
<li><p><strong>Resource conflicts:</strong> Different applications on the same physical server could have conflicting software dependencies, leading to instability.</p>
</li>
<li><p><strong>Disaster recovery challenges:</strong> Recovering a failed physical server and its applications could be a lengthy and complex ordeal.</p>
</li>
</ul>
<p><strong>Virtualisation directly addresses these issues by</strong>:</p>
<ul>
<li><p><strong>Maximising hardware utilisation:</strong> Multiple VMs can share the same physical hardware, significantly increasing the return on investment for server infrastructure.</p>
</li>
<li><p><strong>Reducing physical infrastructure:</strong> Fewer physical servers mean lower power consumption, reduced cooling costs, and a smaller data center footprint.</p>
</li>
<li><p><strong>Faster provisioning:</strong> VMs can be quickly cloned, deployed from templates, or spun up on demand, drastically reducing deployment times.</p>
</li>
<li><p><strong>Enhanced isolation:</strong> Each VM operates in its own isolated environment, preventing conflicts between applications and ensuring stability.</p>
</li>
<li><p><strong>Improved disaster recovery:</strong> VMs can be easily backed up, replicated, and restored, leading to quicker recovery times in the event of hardware failure or data loss.</p>
</li>
</ul>
<hr />
<h3><strong>Containerisation: The Lightweight Revolution</strong></h3>
<p><strong>What it is</strong>:<br />Containerisation is a more <strong>lightweight</strong> form of virtualisation that packages an application and all its dependencies (code, runtime, libraries, configuration files) into a single, self-contained unit called a “container.” Unlike VMs, containers <strong>do not</strong> include a full operating system. Instead, they share the host operating system’s kernel.</p>
<p>A “container engine” (like Docker) manages these containers, providing the necessary isolation and resource management. Think of it as a highly efficient, portable, and isolated environment specifically for running applications.</p>
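<p>A quick illustration of that portability, assuming Docker is installed (the image tag and the <code>my-app</code> name below are just examples):</p>

```shell
# The same image behaves identically on any host with a container engine;
# Python does not need to be installed on the host itself.
docker run --rm python:3.12-slim python -c 'print("hello from a container")'

# Packaging your own app works the same way, given a Dockerfile in the
# current directory ("my-app" is a hypothetical image name):
docker build -t my-app .
docker run --rm my-app
```

<p>The first command pulls a public Python image and runs a one-liner inside it; <code>--rm</code> removes the container as soon as it exits.</p>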
<p><strong>Problems it solves:</strong></p>
<p>While virtualisation solved many problems, new challenges emerged, particularly in the realm of application development and deployment:</p>
<ul>
<li><p><strong>“It works on my machine” syndrome:</strong> Developers often faced issues where applications ran perfectly on their local development environment but failed in testing or production environments due to subtle differences in underlying configurations or dependencies.</p>
</li>
<li><p><strong>Slower deployment and startup:</strong> Even with VMs, booting a full operating system for each application could be time-consuming.</p>
</li>
<li><p><strong>Resource overhead for microservices:</strong> As applications became more modular (microservices), spinning up a separate VM for each small service was resource-intensive and inefficient.</p>
</li>
<li><p><strong>Portability across environments:</strong> Ensuring an application behaved consistently across different cloud providers or on-premises infrastructure could be challenging.</p>
</li>
</ul>
<hr />
<h3><strong>Containerisation tackles these problems head-on:</strong></h3>
<ul>
<li><p><strong>Environmental consistency:</strong> By bundling all dependencies, containers ensure that an application runs consistently from development to testing to production, eliminating environment-related bugs.</p>
</li>
<li><p><strong>Rapid deployment and startup:</strong> Containers start up in seconds (or even milliseconds) because they don’t need to boot an entire OS, enabling faster development cycles and quicker scaling.</p>
</li>
<li><p><strong>Lightweight and efficient:</strong> Sharing the host kernel means containers consume significantly fewer resources than VMs, allowing for higher density of applications on a single server.</p>
</li>
<li><p><strong>Unparalleled portability:</strong> A container can run on any system that has a compatible container engine, regardless of the underlying infrastructure, making it ideal for hybrid and multi-cloud strategies.</p>
</li>
<li><p><strong>Simplified dependency management:</strong> All application dependencies are packaged within the container, simplifying installation and preventing conflicts with other applications on the host system.</p>
</li>
</ul>
<hr />
<p><strong>Key Differences: A Side-by-Side Comparison:</strong></p>
<p>While both technologies abstract resources, their approach and scope differ fundamentally:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772367674/9061491d-03da-4e34-84f3-f6b5bb868400.png" alt="" />

<hr />
<h3><strong>Conclusion</strong></h3>
<p>Neither virtualisation nor containerisation is inherently “better” than the other. They are powerful tools that solve different problems and excel in different scenarios.<br /><em><strong>Virtualisation</strong></em> provides robust isolation and the flexibility to run diverse operating systems, making it a cornerstone of cloud computing and traditional server consolidation.<br /><em><strong>Containerisation</strong></em>, with its lightweight nature and rapid deployment capabilities, has become the de facto standard for modern, agile application development and microservices architectures.</p>
<p>Understanding their distinct characteristics allows us to choose the right tool for the job, building efficient, scalable, and resilient IT infrastructure in our home labs or businesses.</p>
]]></content:encoded></item><item><title><![CDATA[ISO 27001: An Essential Guide to Information Security]]></title><description><![CDATA[ISO 27001 is an internationally recognised standard for Information Security Management Systems (ISMS). It provides a systematic approach to managing sensitive company information, ensuring its confid]]></description><link>https://blog.tech-journey.co.za/iso-27001-an-essential-guide-to-information-security</link><guid isPermaLink="true">https://blog.tech-journey.co.za/iso-27001-an-essential-guide-to-information-security</guid><category><![CDATA[ISO 27001]]></category><category><![CDATA[compliance ]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Fri, 23 May 2025 10:42:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772355024/e3c0a244-b2b5-4e8f-9f77-1c85126e8972.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>ISO 27001 is an internationally recognised standard for <strong>Information Security Management Systems</strong> (ISMS). It provides a systematic approach to managing sensitive company information, ensuring its confidentiality, integrity, and availability. This blog delves into the intricacies of ISO 27001, exploring its framework, principles, implementation process, and the benefits it offers to organisations.</p>
<h3>What is ISO 27001?</h3>
<p>ISO 27001 is part of the <a href="https://www.iso.org/standard/iso-iec-27000-family">ISO/IEC 27000 family of standards</a>, which focuses on <strong>Information Security Management</strong>. The standard outlines the requirements for establishing, implementing, maintaining, and continually improving an ISMS. It is designed to <strong>help organisations manage</strong> their <strong>information security risks</strong> effectively, ensuring that they can <strong>protect</strong> their <strong>data from unauthorised access</strong>, <strong>breaches</strong>, and other <strong>threats</strong>.</p>
<hr />
<h3>Key Components of ISO 27001</h3>
<h3>1. Information Security Management System (ISMS)</h3>
<p>At the core of ISO 27001 is the ISMS, which is a systematic approach to managing sensitive information. It encompasses <strong>people</strong>, <strong>processes</strong>, and <strong>technology</strong>, ensuring that all aspects of information security are addressed. The ISMS is designed to identify, assess, and manage information security risks, providing a framework for continuous improvement.</p>
<h3>2. Risk Assessment and Treatment</h3>
<p>ISO 27001 emphasises <strong>the importance of risk assessment and treatment</strong>. Organisations must identify potential security risks, evaluate their impact, and determine appropriate measures to mitigate them. This process involves:</p>
<p>• <strong>Risk Identification</strong>: Recognising potential threats and vulnerabilities.</p>
<p>• <strong>Risk Analysis</strong>: Evaluating the likelihood and impact of identified risks.</p>
<p>• <strong>Risk Evaluation</strong>: Prioritising risks based on their significance.</p>
<p>• <strong>Risk Treatment</strong>: Implementing controls to manage and mitigate risks.</p>
<h3>3. Control Objectives and Controls</h3>
<p>ISO 27001 includes a <strong>comprehensive set of control objectives and controls that organisations can implement to address identified risks</strong>. These controls are categorised into various domains, such as:</p>
<p>• <strong>Access Control</strong>: Ensuring that only authorised personnel can access sensitive information.</p>
<p>• <strong>Asset Management</strong>: Identifying and managing information assets to protect their value.</p>
<p>• <strong>Incident Management</strong>: Establishing procedures for responding to information security incidents.</p>
<p>• <strong>Compliance</strong>: Ensuring adherence to legal, regulatory, and contractual obligations.</p>
<h3>4. Continual Improvement</h3>
<p>A <strong>fundamental principle of ISO 27001</strong> is the concept of <strong>continual improvement</strong>. Organisations are encouraged to regularly review and update their ISMS to adapt to changing risks and business environments. This involves conducting internal audits, management reviews, and ongoing training for staff.</p>
<hr />
<h3>Implementation Process</h3>
<p>Implementing ISO 27001 involves several key steps:</p>
<p><strong>1. <mark class="bg-yellow-200 dark:bg-yellow-500/30">Define the Scope</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">:</mark> Determine the boundaries of the ISMS, including the information assets to be protected.</p>
<p><strong>2. <mark class="bg-yellow-200 dark:bg-yellow-500/30">Conduct a Risk Assessment</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">:</mark> Identify and assess risks to information security.</p>
<p><strong>3. <mark class="bg-yellow-200 dark:bg-yellow-500/30">Develop a Risk Treatment Plan</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">:</mark> Outline the controls to be implemented to mitigate identified risks.</p>
<p><strong>4. <mark class="bg-yellow-200 dark:bg-yellow-500/30">Implement Controls</mark></strong>: Put in place the necessary security measures and policies.</p>
<p><strong>5.</strong> <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Monitor and Review</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">:</mark> Continuously monitor the effectiveness of the ISMS and make adjustments as needed.</p>
<p><strong>6. <mark class="bg-yellow-200 dark:bg-yellow-500/30">Internal Audit</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">:</mark> Conduct regular audits to ensure compliance with ISO 27001 requirements.</p>
<p><strong>7. <mark class="bg-yellow-200 dark:bg-yellow-500/30">Management Review</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">:</mark> Senior management should review the ISMS to ensure it remains effective and aligned with organisational goals.</p>
<p><strong>8. <mark class="bg-yellow-200 dark:bg-yellow-500/30">Certification</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">:</mark> Organisations can seek certification from an accredited body to demonstrate compliance with ISO 27001.</p>
<hr />
<h3>Benefits of ISO 27001</h3>
<p>Implementing ISO 27001 offers numerous benefits, including:</p>
<p>• <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Enhanced Security</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">:</mark> A structured approach to managing information security risks leads to improved protection of sensitive data.</p>
<p>• <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Regulatory Compliance</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">:</mark> Helps organisations comply with legal and regulatory requirements related to information security.</p>
<p>• <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Increased Trust</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">:</mark> Certification can enhance an organisation’s reputation and build trust with clients and stakeholders.</p>
<p>• <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Operational Efficiency</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">:</mark> Streamlined processes and improved risk management can lead to greater operational efficiency.</p>
<p>• <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Competitive Advantage</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">:</mark> Organisations that demonstrate a commitment to information security can differentiate themselves in the marketplace.</p>
<hr />
<h3>Conclusion</h3>
<p>ISO 27001 is a vital standard for organisations seeking to establish a robust <strong>Information Security Management System</strong>. By understanding its framework, implementing its principles, and committing to continual improvement, organisations can effectively manage their information security risks and protect their valuable data assets. The journey towards ISO 27001 certification not only enhances security but also fosters a culture of accountability and resilience in the face of evolving threats.</p>
]]></content:encoded></item><item><title><![CDATA[From Overthinking to Execution: Confronting Analysis Paralysis in Tech]]></title><description><![CDATA[Analysis Paralysis
If you’ve ever gone down a rabbit hole trying to find the “perfect” resolution for a challenging ticket, rewritten your troubleshooting notes five times just to make it sound right,]]></description><link>https://blog.tech-journey.co.za/analysis-paralysis-in-tech</link><guid isPermaLink="true">https://blog.tech-journey.co.za/analysis-paralysis-in-tech</guid><category><![CDATA[decision making]]></category><category><![CDATA[analysis paralysis]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Fri, 11 Apr 2025 17:46:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772349781/0390fdc6-953f-426b-b3b3-d84a6bb47cf2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Analysis Paralysis</h2>
<p>If you’ve ever gone down a rabbit hole trying to find the “perfect” resolution for a challenging ticket, rewritten your troubleshooting notes five times just to make it sound right, or delayed starting a certification because you didn’t feel 100% ready, you’re not the only one. Most of us in IT have been there. It’s not laziness or lack of discipline. It’s analysis paralysis, a common experience among professionals in all areas of IT.</p>
<p>This article explores how overthinking can subtly stall <strong>career growth</strong>, <strong>personal development</strong>, and <strong>technical confidence</strong>. More importantly, it offers practical strategies I’m using to break through it.</p>
<h3>What Is Analysis Paralysis?</h3>
<p>Analysis paralysis is a state of inaction caused by over-analysing choices, potential outcomes, or the fear of making the wrong decision. It’s often mistaken for diligence or caution, but in reality, it <strong>undermines progress</strong>.</p>
<p>In IT, it typically shows up as:</p>
<ul>
<li><p>Constantly switching between tools, methods, or documentation without actually resolving the issue.</p>
</li>
<li><p>Obsessing over the “perfect” solution instead of implementing a working fix.</p>
</li>
<li><p>Consuming endless content but producing no tangible results.</p>
</li>
</ul>
<p>Over time, this leads to frustration, imposter syndrome, and burnout. Not from working too hard, but from overthinking for too long.</p>
<h3>Why It’s Common in IT Roles</h3>
<p>IT professionals, especially those who are detail-oriented, introverted, and/or self-taught, are particularly vulnerable to this trap. We’re trained to anticipate failure scenarios, weigh up options, and avoid misconfiguration. However, that mindset, when applied to every single task, can become counterproductive.</p>
<p><strong>Contributing Factors:</strong></p>
<ul>
<li><p><strong>Perfectionism:</strong> The belief that only flawless work is worth sharing or implementing.</p>
</li>
<li><p><strong>Fear of failure:</strong> Especially in front of a user or a manager.</p>
</li>
<li><p><strong>Tool overload:</strong> The paralysis that comes from having too many options for a given task.</p>
</li>
<li><p><strong>Imposter syndrome:</strong> Feeling the need to “know more” before helping a user or starting a project.</p>
</li>
</ul>
<p>When these factors combine, even small decisions, such as choosing the best approach to a ticket or deciding on a new piece of software to test, can feel overwhelming, especially when they impact the business.</p>
<h3>How I’m Working Through It</h3>
<p>I haven’t completely overcome this, but I’ve developed practical strategies, coupled with mindfulness, that help me take consistent action while staying technically disciplined. The working methods listed below are what I’ve been using daily in my work and personal projects.</p>
<ol>
<li><p><strong>Prioritise Delivery Over Perfection</strong>: Rather than waiting until something is “perfectly documented” or “fully automated,” I commit to delivering a first iteration. A working solution, however basic, creates momentum, builds confidence, and uncovers genuine problems worth solving.</p>
</li>
<li><p><strong>Impose Intentional Constraints:</strong> Rather than keeping all options open, an approach that often leads to decision fatigue, I intentionally narrow my choices. For each project, I commit to a single scripting language, one troubleshooting methodology, or one specific tool. This practice prevents me from falling into the “<a href="https://thetreasureswithin.net/shiny-object-syndrome/#:~:text=new%20shiny%20idea.-,What%20Is%20Shiny%20Object%20Syndrome?,for%20new%20trends%20to%20follow.">shiny object syndrome</a>” trap, where chasing new technologies distracts from execution. Clear constraints reduce cognitive overhead and force deeper focus, leading to better outcomes and a more disciplined workflow.</p>
</li>
<li><p><strong>Document While You Learn:</strong> Creating documentation, whether in a ticketing system, a team wiki, or a personal notebook, forces structure into learning. It transforms passive reading into active knowledge-building. If I can explain a concept or a solution clearly, I understand it well enough to act on it.</p>
</li>
<li><p><strong>Set External Accountability:</strong> Self-imposed deadlines are easy to ignore. External ones are harder. I’ve started sharing ticket progress or project updates with colleagues. It creates a gentle pressure to follow through.</p>
</li>
<li><p><strong>Normalise Imperfect Output:</strong> It’s tempting to compare ourselves to polished help guides or perfectly organised repositories. But what actually earns respect in the industry is consistently turning up, visibly solving problems, and documenting the journey.</p>
</li>
</ol>
<h3>The Core Skill: Comfortable Incompletion</h3>
<p>In an environment saturated with tools, knowledge bases, and endless updates, the most valuable skill is the ability to act without <strong>knowing everything</strong> (which is impossible anyway).</p>
<p>Progress usually comes from execution under uncertainty.<br />In other words: <strong>start first. Refine later.</strong></p>
<h3>A Challenge, If You’re Currently Stuck</h3>
<p>If you’ve been delaying a project or ticket, or are stuck in research mode, try this:</p>
<ul>
<li><p>Choose one task you’ve postponed.</p>
</li>
<li><p>Ask yourself: <strong>“Realistically, how long would this take me if I focused on just getting it done?”</strong> This simple question helps cut through overthinking and gives you a rough timebox to work within. It brings clarity to what might otherwise feel vague or overwhelming.</p>
</li>
<li><p>Use <a href="https://pomofocus.io/app"><strong>Pomofocus</strong></a> to help you get the ball rolling.</p>
</li>
<li><p><strong>DON’T</strong> be afraid to ask for help. 🙂</p>
</li>
<li><p>Write a short summary of what you learned or struggled with.</p>
</li>
</ul>
<p>Do this not to impress anyone, but to rewire the habit.</p>
<h3>Conclusion</h3>
<p>Overthinking is a symptom of caring deeply about doing things right. This same drive, when channelled into action, becomes a powerful strength.</p>
<p>To unlock this strength, we must shift our focus from seeking perfection to embracing progress. We can achieve this by <strong>designing, documenting, testing, and iterating</strong>, even when things aren’t perfect. By <strong>normalising building before we feel completely ready</strong>, we can transform our tendency to overthink into a powerful force for creation and innovation.</p>
]]></content:encoded></item><item><title><![CDATA[Smart Monitoring  - Get Notified About Ubuntu Updates with n8n workflows]]></title><description><![CDATA[Inspired by n8n’s official documentation, this workflow ensures your Ubuntu server stays updated with the latest software patches and security enhancements  without the hassle of manual checks.

Overv]]></description><link>https://blog.tech-journey.co.za/smart-monitoring-get-notified-about-ubuntu-updates-with-n8n</link><guid isPermaLink="true">https://blog.tech-journey.co.za/smart-monitoring-get-notified-about-ubuntu-updates-with-n8n</guid><category><![CDATA[n8n]]></category><category><![CDATA[automation]]></category><category><![CDATA[workflow]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Sat, 08 Mar 2025 12:34:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772393206/674e3310-5835-4f77-8988-d2b305380c93.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Inspired by n8n’s <a href="https://n8n.io/workflows/2925-send-email-if-server-has-upgradable-packages/">official documentation</a>, this workflow ensures your Ubuntu server stays updated with the latest software patches and security enhancements, without the hassle of manual checks.</p>
</blockquote>
<h4>Overview</h4>
<p>Keeping your server secure and up to date is crucial, but constantly checking for package upgrades can be time-consuming. This automated workflow streamlines the process by running a daily script that scans for available updates and instantly notifies you via email, ensuring you never miss a critical update.</p>
<h3>How It Works</h3>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772386348/35ac21e1-faff-4a3a-b1fd-a09416554941.png" alt="" />

<p>This workflow checks for upgradable packages and sends instant email notifications.</p>
<p><strong>Daily Monitoring</strong></p>
<ul>
<li>The workflow connects to your Ubuntu server once a day and checks for any upgradable packages.</li>
</ul>
<p><strong>Instant Email Alerts</strong></p>
<ul>
<li>If updates are available, the system triggers an automatic email notification with details of the pending upgrades.</li>
</ul>
<h3>Getting Started</h3>
<p>Setting up this workflow requires just two key steps:</p>
<p><strong>Configuring SSH Access</strong> - Allow the workflow to securely connect to your Ubuntu server and check for updates.</p>
<p><strong>Setting Up Email Notifications</strong> - Enable automated alerts by providing SMTP credentials.</p>
<h3>1. Configure SSH Access</h3>
<p>Provide your Ubuntu server’s SSH credentials so the workflow can securely check for available updates. I’ll be using a <strong>username/password</strong>, but you can also <strong>import your private key</strong> for authentication:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772387929/deebbfa7-c6d4-4398-bc4b-170d044112bb.png" alt="" />

<p>This command checks for packages that are ready to be upgraded.</p>
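<p>For reference, a check along these lines can be run from any shell. The exact command in the workflow may differ, and the sample output below is illustrative; counting the <code>[upgradable</code> markers in <code>apt list --upgradable</code> output gives the number of pending packages:</p>
<pre><code class="language-bash"># Sample of what `apt list --upgradable` prints (illustrative package names)
sample_output='Listing... Done
curl/jammy-updates 7.81.0-1ubuntu1.16 amd64 [upgradable from: 7.81.0-1ubuntu1.15]
openssl/jammy-updates 3.0.2-0ubuntu1.18 amd64 [upgradable from: 3.0.2-0ubuntu1.17]'

# Count lines that mark a pending upgrade
count_upgradable() { grep -c '\[upgradable'; }

upgradable=$(printf '%s\n' "$sample_output" | count_upgradable)
echo "$upgradable package(s) can be upgraded"

# On a live server: run `sudo apt-get update -qq` first,
# then pipe `apt list --upgradable` into count_upgradable
</code></pre>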
<h3>2. Set Up Email Notifications</h3>
<p>Enter your <strong>SMTP credentials</strong> to enable the system to send real-time email alerts whenever updates are available.</p>
<h3>Testing the Workflow</h3>
<p>Once everything is set up, it’s time to test the workflow and ensure it’s working as expected.</p>
<h4>1. Running the Workflow</h4>
<ul>
<li><p>Manually trigger the workflow in <strong>n8n</strong> or wait for the scheduled execution.</p>
</li>
<li><p>The workflow connects to your Ubuntu server and checks for available package updates.</p>
</li>
</ul>
<h4>2. Checking the Output</h4>
<ul>
<li><p>If upgradable packages are found, an email notification should be sent to your inbox.</p>
</li>
<li><p>Verify that the email contains a list of pending updates.</p>
</li>
</ul>
<h4>3. Confirming Success</h4>
<ul>
<li><p><strong>SSH Connection Successful</strong> - The workflow successfully accessed the Ubuntu server.</p>
</li>
<li><p><strong>Update Check Completed</strong> - The script detected available upgrades.</p>
</li>
<li><p><strong>Email Notification Received</strong> - An alert was delivered with the update details.</p>
</li>
</ul>
<p><strong>Results:</strong> The workflow ran smoothly, automatically monitoring and notifying about pending updates.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772389515/aeb95399-371b-41db-8b62-d3409941fc1e.png" alt="" />

<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772391433/82215a17-da76-4c31-a1b5-43ac43a70ccc.png" alt="" />

<p>Email containing the list of packages ready to be upgraded.</p>
<h3>Why Use This Workflow?</h3>
<p><strong>Stay Secure &amp; Up to Date</strong><br />Receive instant notifications about available software updates, ensuring your server remains secure and performs optimally.</p>
<p><strong>Hands-Free Automation</strong><br />No more manual checks - this workflow continuously monitors and alerts you about upgrades, saving you time and effort.</p>
<p><strong>Fully Customisable</strong><br />Easily adjust the check frequency and fine-tune notification settings to match your specific needs.</p>
<p>With this <strong>n8n-powered automation</strong>, you can focus on critical tasks while keeping your Ubuntu servers secure and up to date.</p>
<h3>Conclusion</h3>
<p>By setting up a workflow to monitor and notify you about available package upgrades, you can ensure your server remains secure and up to date.</p>
<p>This workflow is <strong>fully customisable</strong>, allowing you to adjust the frequency of checks and notification settings to fit your needs.</p>
]]></content:encoded></item><item><title><![CDATA[Automate System Monitoring & Alerting with Uptime Kuma and Freshdesk]]></title><description><![CDATA[This guide covers setting up Uptime Kuma to monitor a system, trigger email alerts to Freshdesk on downtime, and automatically create tickets with the correct priority and assignment.



This setup sh]]></description><link>https://blog.tech-journey.co.za/automate-system-monitoring-alerting-with-uptime-kuma-and-freshdesk</link><guid isPermaLink="true">https://blog.tech-journey.co.za/automate-system-monitoring-alerting-with-uptime-kuma-and-freshdesk</guid><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Tue, 04 Mar 2025 15:35:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772463646/5f78df6d-b44f-4384-a0cf-ceca0a854e12.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This guide covers setting up Uptime Kuma to monitor a system, trigger email alerts to Freshdesk on downtime, and automatically create tickets with the correct priority and assignment.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772446909/ef937fce-6a41-41b1-ad95-f031263c3b88.gif" alt="" />

<blockquote>
<p>This setup showcases a real-world integration that streamlines incident management by ensuring downtime alerts are promptly turned into actionable tickets, reducing manual intervention and improving response times.</p>
</blockquote>
<h3>Prerequisites</h3>
<p>Before we begin, ensure you have the following:</p>
<ul>
<li><p><a href="https://docs.docker.com/engine/install/ubuntu/"><strong>Docker</strong></a> <strong>&amp;</strong> <a href="https://docs.docker.com/compose/install/linux/"><strong>Docker Compose</strong></a> installed on an <strong>Ubuntu 22.04 server</strong>.</p>
</li>
<li><p>A <strong>Freshdesk account</strong> (<a href="https://www.freshworks.com/freshdesk/lp/home/?tactic_id=6780391&amp;utm_source=google-adwords&amp;utm_medium=FD-Search-Brand-Broad-MEA-Tier-2&amp;utm_campaign=FD-Search-Brand-Broad-MEA-Tier-2&amp;utm_term=freshdesk%20plans&amp;device=c&amp;matchtype=e&amp;network=g&amp;gclid=CjwKCAiAw5W-BhAhEiwApv4goJ0heygRCG1iS9pt0HGjg7lLmOnCU9dvi_M7qGcBRFsHFMnMgY9KBhoCVfsQAvD_BwE&amp;audience=kwd-47765166336&amp;ad_id=718584053892&amp;gad_source=1">Free plan</a>  - simply  scroll to the bottom of their website to sign up).</p>
</li>
<li><p>A <strong>custom Freshdesk email address</strong> for ticket creation (e.g., <code>support@yourcompany.freshdesk.com</code>).</p>
</li>
<li><p><strong>SMTP credentials</strong> for sending emails from <strong>Uptime Kuma</strong>.</p>
</li>
</ul>
<hr />
<h3>What is Uptime Kuma?</h3>
<p>Uptime Kuma is a self-hosted monitoring tool that allows you to track the uptime and performance of your <strong>websites, APIs, and services</strong>.</p>
<h3>Key Features:</h3>
<ul>
<li><p><strong>Supports multiple monitoring options</strong>: <strong>HTTP, HTTPS, TCP, ICMP Ping, DNS</strong>, and more.</p>
</li>
<li><p><strong>Customisable alerts</strong>: <strong>Email, webhooks, Slack, Discord</strong>, etc.</p>
</li>
<li><p><strong>User-friendly web UI</strong> for viewing uptime history and logs.</p>
</li>
<li><p><strong>Multi-user support</strong> for team-based monitoring.</p>
</li>
<li><p><strong>Easy Docker deployment</strong> for quick setup.</p>
</li>
</ul>
<h3>Problem Statement</h3>
<p>Manual system monitoring is <strong>inefficient</strong>: if your system goes down, you need to be notified immediately.</p>
<h3>How This Automation Works:</h3>
<ul>
<li><p><strong>Uptime Kuma</strong> detects <strong>downtime</strong> and sends an <strong>alert email</strong> to <strong>Freshdesk</strong>.</p>
</li>
<li><p><strong>Freshdesk</strong> automatically <strong>creates a ticket</strong> based on the alert.</p>
</li>
<li><p>An <strong>automation rule</strong> in Freshdesk ensures that:</p>
</li>
</ul>
<ol>
<li><p>The <strong>ticket priority</strong> is set to <strong>Urgent</strong>.</p>
</li>
<li><p>The <strong>ticket type</strong> is classified as an <strong>Incident</strong>.</p>
</li>
<li><p>The ticket is <strong>assigned to a specific group</strong> for streamlined handling.</p>
</li>
<li><p>The ticket is <strong>automatically assigned to a designated agent</strong>.</p>
</li>
</ol>
<p>This setup <strong>removes the need for manual intervention</strong>, ensuring that critical system outages are promptly addressed by the right team. It also <strong>enhances incident response time</strong> by guaranteeing that no downtime goes unnoticed.</p>
<hr />
<h3>Step 1: Deploy Uptime Kuma with Docker Compose</h3>
<p>Create a directory for <strong>Uptime Kuma</strong> and a <code>docker-compose.yml</code> file:</p>
<pre><code class="language-bash">mkdir -p ~/uptime-kuma &amp;&amp; cd ~/uptime-kuma
</code></pre>
<pre><code class="language-bash">nano docker-compose.yml
</code></pre>
<p>Paste the following configuration in your <code>docker-compose.yml</code> file:</p>
<pre><code class="language-yaml">---
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1  # NB: ALWAYS use pinned versions
    container_name: uptime-kuma
    restart: always

    ports:
      - "3001:3001"

    volumes:
      - uptime-kuma-data:/app/data  # persistent storage

    environment:
      - TZ=Africa/Johannesburg  # set to your local timezone
      - UMASK=0022  # file permission control

    networks:
      - kuma_network

    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3001"]
      interval: 30s
      retries: 3
      start_period: 10s
      timeout: 5s

    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

volumes:
  uptime-kuma-data:

networks:
  kuma_network:
    driver: bridge
</code></pre>
<p>Save and exit (<code>CTRL+X</code>, <code>Y</code>, <code>Enter</code>), then start the container:</p>
<pre><code class="language-bash">docker compose up -d
</code></pre>
<p>Make sure Uptime Kuma is running:</p>
<pre><code class="language-bash">docker ps
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/62370a26-76ea-45a5-9b12-9b4c04a5a111.png" alt="" style="display:block;margin:0 auto" />

<p>Container status is healthy</p>
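<p>As an optional extra check, the healthcheck result can be read straight from the Docker CLI. The container name matches the compose file above; the command below falls back to a placeholder status if the container (or Docker itself) is unavailable:</p>
<pre><code class="language-bash"># Read the healthcheck status Docker reports for the container
status=$(docker inspect --format '{{.State.Health.Status}}' uptime-kuma 2>/dev/null || echo "not-running")
echo "uptime-kuma health: ${status}"
</code></pre>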
<p>Access <strong>Uptime Kuma</strong> at:<br /><code>http://&lt;your-host-ip&gt;:3001</code></p>
<p>Create your admin account:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772449990/9e01aba1-400c-49e9-a7b5-257164ba8a04.png" alt="" />

<hr />
<h3>Step 2: Adding a system to monitor</h3>
<ol>
<li><p>Click on the <strong>“Add New Monitor”</strong> button at the top-left.</p>
</li>
<li><p>In the <strong>“Add New Monitor”</strong> window, fill in the details:</p>
</li>
<li><p><strong>Monitor Type:</strong> Choose <code>HTTP(s)</code> for website monitoring.</p>
</li>
<li><p><strong>Name:</strong> Enter a descriptive name for your monitor (e.g., <code>Network IP Scanner</code>).</p>
</li>
<li><p><strong>URL:</strong> Enter your website URL (e.g., <code>https://mysite.com</code>).</p>
</li>
<li><p><strong>Method:</strong> Select <code>GET</code> (default).</p>
</li>
<li><p><strong>Heartbeat Interval:</strong> Set the time interval for Uptime Kuma to check your site (e.g., 30 seconds).</p>
</li>
<li><p><strong>Retries:</strong> Configure how many times Kuma should retry if the check fails.</p>
</li>
<li><p><strong>Notification Settings:</strong> Choose the method you want to use for your alerts; in this demo, I’m using Email (SMTP).</p>
</li>
<li><p><strong>Tags:</strong> (Optional) Add tags to organise your monitors.</p>
</li>
<li><p>Click <strong>“Save”</strong> to start monitoring.</p>
</li>
<li><p>The monitor should now appear in your <strong>Uptime Kuma dashboard</strong>.</p>
</li>
<li><p>If the site is online, it should show a green <strong>“UP”</strong> status (screenshot below):</p>
</li>
</ol>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772451684/3a15effe-2af2-4d8f-a0bf-b6605b6d6fae.png" alt="" />

<hr />
<h3>Step 3: Configure Notifications (Email)</h3>
<ol>
<li><p>Go to <strong>Settings &gt; Notification</strong>.</p>
</li>
<li><p>Click <strong>Add New Notification</strong> and select a notification method (<strong>Email SMTP</strong>); this requires a valid user mailbox along with your SMTP provider settings.</p>
</li>
</ol>
<blockquote>
<p><strong>Note</strong>: I’ve added my Freshdesk email address in the <strong>To:</strong> field.</p>
</blockquote>
<p>3. Follow your email provider setup instructions and save the settings.</p>
<p>4. Link the notification to the service you want to monitor.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772453126/17341450-08c9-4f44-a936-c567663ca247.png" alt="" />

<h3>Verifying ticket creation after an outage (screenshot below)</h3>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772454699/d8e01af2-9d73-4c28-bd64-628a2c5a1a68.png" alt="" />

<p>As you can see, Uptime Kuma has successfully created a ticket. However, Freshdesk applies default settings, requiring manual updates that can be easily overlooked as more tickets come in. <strong>Let’s automate this to streamline the process</strong>.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772455966/25191b13-1210-4596-9278-4a7de6d34c56.png" alt="" />

<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772457518/3d686f0d-f465-4482-a969-ff0d89aeb7c4.png" alt="" />

<p>Uptime Kuma ticket created in Freshdesk with default settings.</p>
<hr />
<h3>Step 4: Automatically Assign Tickets and Set Priority to Urgent</h3>
<p>To ensure downtime alerts are handled efficiently, we’ll create an <strong>automation rule</strong> in Freshdesk to automatically assign tickets and set the appropriate priority.</p>
<h4><mark class="bg-yellow-200 dark:bg-yellow-500/30">Steps to Create the Rule:</mark></h4>
<ol>
<li><p>In Freshdesk, navigate to <strong>Admin → Automations</strong>.</p>
</li>
<li><p>Click <strong>New Rule</strong> and give it a name, e.g., <strong>“Uptime Kuma Alerts”</strong>.</p>
</li>
<li><p>Apply the following <strong>conditions and actions</strong>:</p>
</li>
</ol>
<h4><mark class="bg-yellow-200 dark:bg-yellow-500/30">Rule Configuration:</mark></h4>
<p><strong>Event:</strong></p>
<ul>
<li>When a <strong>ticket is created.</strong></li>
</ul>
<p><strong>Conditions:</strong></p>
<ul>
<li><p>If <strong>Requester Email</strong> is <code>uptime_kuma@yourdomain.com</code></p>
</li>
<li><p>AND if <strong>Subject</strong> contains <code>[? Down]</code></p>
</li>
</ul>
<p><strong>Actions:</strong></p>
<ul>
<li><p>Set <strong>Type</strong> to <strong>Incident</strong></p>
</li>
<li><p>Set <strong>Priority</strong> to <strong>Urgent</strong></p>
</li>
<li><p>Assign to <strong>Group: Technical Support</strong></p>
</li>
<li><p>Assign to <strong>Agent: Luqmaan</strong></p>
</li>
</ul>
<blockquote>
<p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Note:</mark></strong> Before creating this rule, identify a <strong>consistent part of the subject line</strong> that never changes (e.g., <code>[? Down]</code>). First, test how Uptime Kuma formats its email subjects by allowing it to generate a ticket. Then, use that fixed pattern in your rule to ensure accuracy.</p>
</blockquote>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772459236/b2c181fb-2822-4022-afd1-d9da36e516cc.png" alt="" />

<p>After previewing, saving, and enabling the rule, your configuration should look like this:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772460448/1a95dc3d-f656-4389-b03b-3b23554dc23f.png" alt="" />

<blockquote>
<p>You can create multiple rules to customise ticket handling for different services, ensuring that only specific alerts have predefined priorities.</p>
</blockquote>
<hr />
<h3>Final Testing</h3>
<p>The automation is now in action. After testing, the rules worked exactly as expected: tickets were created with the correct priority, type, and assignments. Your system is now set up for seamless incident management.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772462090/9c740fd6-51c2-4545-8d60-ecdab0d926ec.png" alt="" />

<p>Our automation rules are now in action, ensuring immediate response when a critical system goes offline.</p>
<h3>Taking It a Step Further</h3>
<p>To fully automate the incident lifecycle, feel free to create another Freshdesk automation rule that <strong>automatically closes tickets</strong> when Uptime Kuma detects the service is back online. Since Uptime Kuma raises a separate alert when the system recovers, you can use a similar rule to match the subject line and set the ticket status to <strong>Closed</strong>.</p>
<h3>Conclusion</h3>
<p>Integrating Uptime Kuma with Freshdesk creates a basic incident workflow where downtime is detected, a ticket is generated, and ownership is assigned without manual handling.</p>
<p>This reduces the need for constant monitoring and keeps incident tracking consistent.</p>
<p>The same approach applies beyond this specific setup. The core value is in understanding how systems can communicate, trigger actions, and remove manual steps from operational workflows.</p>
]]></content:encoded></item><item><title><![CDATA[Why Proxmox VE Became My New Home After VirtualBox]]></title><description><![CDATA[VirtualBox served as my primary tool for desktop virtualisation for several years. It provided an accessible platform for testing operating systems, developing software locally, and exploring new envi]]></description><link>https://blog.tech-journey.co.za/why-proxmox-ve-became-my-new-home-after-virtualbox</link><guid isPermaLink="true">https://blog.tech-journey.co.za/why-proxmox-ve-became-my-new-home-after-virtualbox</guid><category><![CDATA[proxmox]]></category><category><![CDATA[self-hosted]]></category><category><![CDATA[Homelab]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Sat, 01 Feb 2025 07:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772379897/d8d8ba67-9213-4747-9c9d-802548a34c81.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>VirtualBox served as my primary tool for desktop virtualisation for several years. It provided an accessible platform for testing operating systems, developing software locally, and exploring new environments. It remains a functional, free solution for basic desktop requirements. However, as my infrastructure needs matured alongside my software development practices, the limitations of a desktop grade hypervisor became apparent. This necessitated a shift toward a more robust, enterprise level solution, resulting in my migration to Proxmox Virtual Environment (VE). This transition was driven by specific technical requirements regarding performance, automation, and scalability, marking a definitive move from a desktop utility to a bare metal hypervisor.</p>
<hr />
<h2><strong>The Architectural Differences Between Type 2 and Type 1 Hypervisors</strong></h2>
<p>VirtualBox operates as a Type 2 hypervisor, similar to VMware Workstation. It requires an underlying host operating system to function. This architecture inherently introduces a layer of software abstraction between the virtual machines and the physical hardware. Hardware resource allocation is brokered through the host operating system kernel. When running multiple intensive virtual machines concurrently, this brokering process creates noticeable performance bottlenecks. Specifically, I experienced significant latency with disk I/O operations and restricted network throughput, even when the underlying physical hardware had ample resources available.</p>
<p>Conversely, Proxmox VE operates as a Type 1 bare-metal hypervisor based on Debian Linux. It installs directly onto the hardware, bypassing the need for a separate desktop host operating system. This architecture allows the hypervisor to allocate CPU cycles, memory, and storage directly to the virtual machines. The reduction in software overhead translates directly into measurable improvements in performance, lower system latency, and highly efficient resource utilisation. This bare-metal approach is critical when simulating production environments accurately for application development and testing.</p>
<hr />
<h2><strong>Integrating Kernel-based Virtual Machines and Linux Containers</strong></h2>
<p>A primary technical driver for adopting Proxmox VE was its native, dual support for both Kernel-based Virtual Machines (KVM) and Linux Containers (LXC). While VirtualBox is restricted to full hardware virtualisation, Proxmox VE provides the flexibility to choose the most appropriate isolation method for a specific workload.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772376246/24e4ddad-dc51-44b1-bd54-d7550a795a36.png" alt="" />

<p>LXC containers offer a significantly lighter footprint compared to traditional virtual machines. Because containers share the host system kernel and <strong>isolate</strong> <strong>processes</strong> via <a href="https://blog.nginx.org/blog/what-are-namespaces-cgroups-how-do-they-work">cgroups</a> and <a href="https://blog.nginx.org/blog/what-are-namespaces-cgroups-how-do-they-work">namespaces</a>, they consume a fraction of the CPU and RAM. Boot times are nearly instantaneous. This architecture is optimal for deploying independent microservices, database instances, or continuous integration runners where the overhead of an entire guest operating system is redundant. Managing both full KVM instances for legacy applications and LXC containers for modern, lightweight services from a unified platform has drastically improved my deployment workflow and maximised hardware efficiency.</p>
<hr />
<h2><strong>Advanced Networking and Storage Capabilities</strong></h2>
<p>Beyond basic compute resource allocation, Proxmox VE excels in network and storage management. For application development, replicating complex network topologies is often necessary. Proxmox natively supports Linux bridges, Open vSwitch, and VLAN tagging. This allows for the logical segmentation of development, testing, and production simulation environments, mimicking real world network constraints and security policies.</p>
<p>Storage management is equally robust. While VirtualBox relies heavily on standard virtual disk images stored on a standard file system, Proxmox VE integrates deeply with advanced storage technologies. The native support for ZFS (Zettabyte File System) is particularly advantageous. ZFS provides software-defined RAID, continuous integrity checking, and instantaneous snapshots. The ability to take an immediate, block-level snapshot of a database server before running a risky migration script provides a highly resilient and forgiving development environment.</p>
<hr />
<h2><strong>Centralised Management and Automation via API</strong></h2>
<p>Managing a growing fleet of virtual machines via the VirtualBox GUI or basic CLI tools becomes inefficient at scale. Proxmox VE resolves this through a comprehensive web-based management interface that handles clustering, storage, networking, and virtual machine state from a single control panel.</p>
<p>More importantly for a software development context, Proxmox VE is built around a fully functional REST API. Every action available in the web interface can be executed programmatically. This capability facilitates Infrastructure as Code (IaC) practices. By integrating tools like Terraform or Ansible, it becomes possible to define infrastructure requirements in code, version control those definitions, and provision identical development environments automatically. This level of automation is not feasible with standard desktop virtualisation tools and is a fundamental requirement for modern application development lifecycles.</p>
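<p>As a small illustration of that API, the sketch below builds a token-authenticated request for the node list. The host and token values are placeholders, and the <code>curl</code> call is left commented out so the snippet is safe to run anywhere:</p>
<pre><code class="language-bash"># Target the standard Proxmox VE REST endpoint (placeholder host and token)
PVE_HOST="${PVE_HOST:-pve.example.lan}"
PVE_TOKEN="${PVE_TOKEN:-root@pam!iac=00000000-0000-0000-0000-000000000000}"
API_URL="https://${PVE_HOST}:8006/api2/json/nodes"

echo "GET ${API_URL}"
# On a real host, uncomment:
# curl -ks -H "Authorization: PVEAPIToken=${PVE_TOKEN}" "${API_URL}"
</code></pre>
<p>The same <code>/api2/json</code> endpoint family backs the web interface itself, which is what lets tools like Terraform and Ansible drive Proxmox programmatically.</p>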
<h2><strong>Enterprise-Grade Features for Development Environments</strong></h2>
<p>The platform also includes several enterprise grade features natively:</p>
<ul>
<li><p><strong>Live Migration:</strong> Virtual machines can be migrated between physical nodes in a cluster without interrupting service availability, which is useful for testing high availability application architectures.</p>
</li>
<li><p><strong>High Availability (HA):</strong> Critical services can be configured to restart automatically on surviving nodes if a physical server fails.</p>
</li>
<li><p><strong>Integrated Backup and Restore:</strong> Proxmox Backup Server integration or native VZDump capabilities allow for scheduled, deduplicated backups at the hypervisor level. This ensures rapid disaster recovery without relying on guest operating system agents.</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772377914/f1ec58c2-cc91-4dd3-87be-61d09cfe14b8.png" alt="" />

<hr />
<h2>The Community and Future-Proofing</h2>
<p>Finally, the strong and active community surrounding Proxmox VE was a significant draw. When encountering challenges or seeking best practices, the wealth of information available through forums, wikis, and documentation is immense. This vibrant ecosystem provides a sense of confidence that the platform will continue to evolve and be well-supported in the long term.</p>
<p>While VirtualBox remains an excellent tool for quick, single-user virtualisation on a desktop, for anyone looking to build a more robust, scalable, and efficient home lab or small-scale server environment, Proxmox VE is a clear winner. The transition was a learning curve, but the benefits in performance, flexibility, and management have been undeniable. My home lab has truly found its new, more powerful home and it’s very unlikely that I’ll be turning back.</p>
]]></content:encoded></item><item><title><![CDATA[Part 2: Organising Your Digital Workspace with Homepage]]></title><description><![CDATA[Bringing Order to Your Digital Workspace
Introduction
While keeping a physical workspace tidy is essential for focus and productivity, the same principles apply to our digital environments. Over time,]]></description><link>https://blog.tech-journey.co.za/part-2-organising-your-digital-workspace-with-homepage</link><guid isPermaLink="true">https://blog.tech-journey.co.za/part-2-organising-your-digital-workspace-with-homepage</guid><category><![CDATA[Homelab]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Docker compose]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Mon, 06 Jan 2025 19:17:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772487205/214e8fad-e637-4d98-babd-d8f40abfd09d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Bringing Order to Your Digital Workspace</h2>
<h3>Introduction</h3>
<p>While keeping a physical workspace tidy is essential for focus and productivity, the same principles apply to our digital environments. Over time, I realised that having multiple browser tabs open, scattered bookmarks, and disorganised files slowed me down just as much as a cluttered desk. To streamline my workflow, I explored <a href="https://gethomepage.dev/"><strong>Homepage</strong></a>, a self-hosted dashboard that centralises access to all essential work resources.</p>
<h3>Why Use a Self-Hosted Dashboard?</h3>
<p>If you frequently juggle multiple tools, platforms, and websites, a dashboard can significantly improve efficiency. Instead of rummaging through bookmarks or typing URLs manually, everything is accessible in one place. Here’s how <strong>Homepage</strong> helped me bring order to my digital workspace:</p>
<h3>“What’s this ‘Homepage’ you speak of?”</h3>
<ul>
<li>It’s a lightweight, highly customisable dashboard that acts as a central hub for quick access to applications, services, and web links.</li>
</ul>
<p>Let’s dive into it…</p>
<hr />
<h3>1. Deploying with Docker</h3>
<p>Since I run most of my self-hosted services in Docker, setting up Homepage is pretty straightforward.</p>
<p>Here’s how you can deploy it:</p>
<pre><code class="language-bash"># create a directory called homepage and cd into it. 
mkdir -p ~/dashboard/homepage &amp;&amp; cd ~/dashboard/homepage
</code></pre>
<pre><code class="language-bash"># create a docker-compose.yml file
nano docker-compose.yml
</code></pre>
<p>Paste the following configuration (be sure to read the comments):</p>
<pre><code class="language-yaml">---
services:
  homepage:
    image: ghcr.io/gethomepage/homepage:v0.9.2
    container_name: homepage
    ports:
      - 3000:3000
    env_file: .env # Make sure to create this .env file in your current working directory (use touch .env)
    volumes:
      - ./config:/app/config # Homepage will create this for you.
      - /var/run/docker.sock:/var/run/docker.sock # (optional) For docker integrations, see alternative methods
    environment:
      PUID: $PUID # Set this to your user ID (you can find it by running `id -u`)
      PGID: $PGID # Set this to your group ID (you can find it by running `id -g`)
      TZ: Africa/Johannesburg # set to your own timezone
    restart: unless-stopped
</code></pre>
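<p>The compose file expects a <code>.env</code> file in the same directory supplying <code>PUID</code> and <code>PGID</code>. A quick way to create it (run from <code>~/dashboard/homepage</code>; the values come from your own user and group IDs):</p>
<pre><code class="language-bash"># create the .env file referenced by docker-compose.yml
echo "PUID=$(id -u)" &gt; .env
echo "PGID=$(id -g)" &gt;&gt; .env
cat .env # confirm the values before starting the container
</code></pre>
<p>With the <code>.env</code> file in place, bring the stack up with <code>docker compose up -d</code>.</p>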
<hr />
<h3>2. Accessing the default dashboard</h3>
<p>Once deployed, you can access the dashboard via <code>http://&lt;your-server-ip&gt;:3000</code>.</p>
<p>Homepage has a user-friendly YAML-based configuration file, making it easy to customise layouts, themes, categories, and more.</p>
<blockquote>
<p>Note: The icons we will be using are sourced from <a href="https://github.com/homarr-labs/dashboard-icons/blob/main/ICONS.md">Homarr Labs Dashboard Icons</a>, where you can explore a wide variety of icons to further personalise your dashboard.</p>
</blockquote>
<p>Your default dashboard should look like this:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772479181/6e1befbc-7bbc-4cba-ad16-163807cdc9d7.png" alt="" />

<hr />
<h3>3. Updating the <code>services.yaml</code> File</h3>
<p>Now let’s customise the dashboard with your own services.</p>
<h4>Step 1: Locate the <code>services.yaml</code> file in the config directory:</h4>
<pre><code class="language-bash">cd ~/dashboard/homepage/config &amp;&amp; nano services.yaml
</code></pre>
<h4>Step 2: Add Your Services</h4>
<p>Now, you can copy and paste the following configuration into the <code>services.yaml</code> file. The example config below will organise your dashboard with categories for project management, automation, cloud services, and networking but feel free to customise it according to your needs:</p>
<pre><code class="language-yaml">services:
  - Project Management:
      - Asana:
          icon: https://cdn.jsdelivr.net/gh/homarr-labs/dashboard-icons/png/asana.png
          href: http://app.asana.com
          description: Task &amp; project management
      - Jira:
          icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/jira.png
          href: http://your-jira-link-here
          description: Agile project tracking
      - Notion:
          icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/notion.png
          href: http://your-notion-link-here
          description: Notes, docs, and collaboration

  - Automation &amp; Workflows:
      - n8n:
          icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/n8n.png
          href: http://your-n8n-link-here
          description: Workflow automation

  - Cloud Services:
      - Azure:
          icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/azure.png
          href: http://portal.azure.com
          description: Microsoft cloud platform
      - Microsoft 365:
          icon: https://cdn.jsdelivr.net/gh/homarr-labs/dashboard-icons/png/microsoft-365.png
          href: http://your-microsoft-365-link-here
          description: Cloud productivity suite
      - SharePoint:
          icon: https://cdn.jsdelivr.net/gh/homarr-labs/dashboard-icons/png/microsoft-sharepoint.png
          href: http://your-sharepoint-link-here
          description: Document management &amp; collaboration

  - Networking &amp; Proxy:
      - Nginx Proxy Manager:
          icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/nginx-proxy-manager.png
          href: http://your-nginx-proxy-manager-link-here
          description: Reverse proxy &amp; SSL management
</code></pre>
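<p>Since the compose file mounts the Docker socket, Homepage can also show live container status alongside a service. Going by the Homepage docs, this needs a <code>docker.yaml</code> file in the same config directory; the server name below (<code>my-docker</code>) is just an illustrative label:</p>
<pre><code class="language-yaml"># config/docker.yaml
my-docker:
  socket: /var/run/docker.sock
</code></pre>
<p>A service entry in <code>services.yaml</code> can then opt in with <code>server: my-docker</code> and <code>container: &lt;container-name&gt;</code>.</p>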
<h4>Step 3: Save and Close the File</h4>
<p>Save and close the file (in nano: press <code>Ctrl + O</code>, <code>Enter</code> to save, then <code>Ctrl + X</code> to exit).</p>
<h4>Step 4: Reload the page or restart the container:</h4>
<pre><code class="language-bash"># make sure you're in the homepage directory
docker compose restart
</code></pre>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772480729/865c9845-dc82-44b3-b394-3c185a39ede5.png" alt="" />

<p>Tools and services all neatly organised into categories.</p>
<hr />
<h3>4. Adding a Background Image</h3>
<p>Homepage allows you to easily add a custom background image via the <code>settings.yaml</code> file. This is a great way to personalise your dashboard further. Follow these steps to add your background image:</p>
<h4>Step 1: Locate the <code>settings.yaml</code> file:</h4>
<pre><code class="language-bash">cd ~/dashboard/homepage/config &amp;&amp; nano settings.yaml
</code></pre>
<h4>Step 2: Add the Background Image Setting:</h4>
<pre><code class="language-yaml">background:
  image: https://images.unsplash.com/photo-1502790671504-542ad42d5189?auto=format&amp;fit=crop&amp;w=2560&amp;q=80
</code></pre>
<p>You can also adjust filters (blur, brightness, saturation) to achieve the look you want. These settings are based on Tailwind CSS classes, so you can use the values described in the <a href="https://tailwindcss.com/docs/filter">Tailwind CSS documentation</a>.</p>
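<p>As a sketch based on the docs, the filter options sit alongside the image entry in <code>settings.yaml</code> (the values below are illustrative):</p>
<pre><code class="language-yaml">background:
  image: https://images.unsplash.com/photo-1502790671504-542ad42d5189?auto=format&amp;fit=crop&amp;w=2560&amp;q=80
  blur: sm # none, sm, md, xl...
  saturate: 50
  brightness: 50
  opacity: 50 # 0-100
</code></pre>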
<blockquote>
<p>P.S. I’m no expert in this regard; I’m just following the documentation <em>🙂</em></p>
</blockquote>
<p>For more detailed information on background image settings, including filter options, check out the <a href="https://gethomepage.dev/configs/settings/#background-image">Homepage docs</a> and <a href="https://unsplash.com/">Unsplash</a> (make sure to <strong>copy the image URL</strong> when referencing it in your settings.yaml file):</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772482421/fb7036a6-5e34-4992-a29e-451597f9a392.png" alt="" />

<h4>Step 3: Save and Close the File</h4>
<h4>Step 4: Restart Homepage:</h4>
<pre><code class="language-bash">docker compose restart
</code></pre>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772485021/8b8c6d09-2772-46ed-931f-b7519a1d6216.png" alt="" />

<hr />
<h3>Conclusion</h3>
<p>Just as a tidy desk improves focus, an organised digital environment boosts productivity.</p>
<p>Don’t shy away from exploring more customisations, integrations, and features to truly tailor your dashboard to fit your needs. The possibilities are endless, and with Homepage, you have the control to make your workspace as productive and personalised as you want it to be.</p>
]]></content:encoded></item><item><title><![CDATA[Runtipi: Self-Hosting Made Incredibly Simple]]></title><description><![CDATA[Why Self-Hosting Shouldn’t Be a Headache
Have you ever considered self-hosting your own apps but got overwhelmed by endless Docker commands, YAML files, and troubleshooting? You’re not alone. While th]]></description><link>https://blog.tech-journey.co.za/runtipi-self-hosting-made-incredibly-simple</link><guid isPermaLink="true">https://blog.tech-journey.co.za/runtipi-self-hosting-made-incredibly-simple</guid><category><![CDATA[Homelab]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Docker compose]]></category><category><![CDATA[self-hosted]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Sun, 05 Jan 2025 21:18:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772405176/0f10dc2b-cc3b-4c0c-a6d4-5511e30c1fad.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Why Self-Hosting Shouldn’t Be a Headache</h3>
<p>Have you ever considered self-hosting your own apps but got overwhelmed by endless Docker commands, YAML files, and troubleshooting? You’re not alone. While the reward of getting everything to work is great, the journey can be frustrating. I know the pain of trying to set up a simple service, only to hit roadblocks that eat up hours of time.</p>
<p>That’s where Runtipi comes in. It eliminates the hassle and lets you deploy self-hosted apps with ease, even if you’ve never touched Docker before (you’re missing out, though 😄).</p>
<h3>What is Runtipi, and Why Should You Care?</h3>
<p>Runtipi simplifies self-hosting by handling container management, reverse proxy setup, SSL certificates, and app installations, all from an easy-to-use web dashboard. No need to get lost in config files or manually install dependencies. Just choose an app, hit install, and you’re good to go.</p>
<h3>Who’s Runtipi For? (Spoiler: Almost Everyone 🙂)</h3>
<h3>1. Newbies Who Want a Frictionless Start</h3>
<p>Want to self-host without feeling like you need a degree in DevOps? Runtipi has your back.</p>
<h3>2. Techies Who Don’t Want to Waste Time</h3>
<p>You know your way around a terminal, but you don’t always have time to manually configure <code>docker-compose.yml</code> files for every app.</p>
<h3>3. Home Lab Enthusiasts Who Value Simplicity</h3>
<p>Tinkering is fun, but spending hours debugging container issues is the worst. Runtipi keeps things simple so you can spend more time <strong>USING</strong> your apps.</p>
<h3>Getting started:</h3>
<h3>Step 1: Installing Runtipi</h3>
<p>Run this single command on your Linux server:</p>
<pre><code class="language-bash">curl -L https://setup.runtipi.io | bash
</code></pre>
<h3>Step 2: Open the Web Dashboard</h3>
<p>Once installed, navigate to <a href="http://your-server-ip">http://your-server-ip</a> in your browser to access the Runtipi web dashboard (it listens on the default HTTP port, so there’s no need to specify one).</p>
<h3>Step 3: Demo</h3>
<blockquote>
<p>For a quick overview, check out the demo <a href="https://drive.google.com/file/d/1bv-KBvgVZq4Yty9hkOktcwBui1MgLjT4/view?usp=drive_link"><strong>here</strong></a>. Note that during this demo, I wasn’t prompted to set up an account, as I had already tested the setup prior to posting this blog. If you’re setting it up fresh, you might be asked to create an account, which is part of the usual process.</p>
</blockquote>
<h3>Step 4: Deploy an App</h3>
<p>Choose any app from the list and install it with just one click. It’s that easy! Of course, you can always add extra layers of security if you want.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772403232/c983eeac-32ae-4a0a-911f-bb4b634b5cd1.png" alt="" />

<p>Example of how easy it is to install an app.</p>
<h3>Runtipi Makes Self-Hosting Easier, But Challenges are Worth Embracing</h3>
<p>While Runtipi simplifies the process of deploying self-hosted applications with its one-click install feature, it’s important to remember that there’s value in tackling challenges. <strong>Efficient workflows</strong> are essential, but <strong>embracing complex problems and finding innovative solutions also leads to growth</strong>. This setup makes it easier for everyone, from newcomers to seasoned pros, but don’t shy away from opportunities to expand your skills by tackling more intricate tasks.</p>
<h3>Ready to Give It a Shot?</h3>
<p>Head over to <a href="https://runtipi.io/">Runtipi’s official website</a> and see for yourself. Self-hosting doesn’t have to be complicated! 🙂🕺💃</p>
]]></content:encoded></item><item><title><![CDATA[Part 1: The importance of a Neat and Tidy Office Space to boost Productivity and Organisation]]></title><description><![CDATA[The Impact of an Organised Workspace
Introduction
When the shift to hybrid and remote work took place during COVID, I found myself working from home more often than not. While this offered flexibility]]></description><link>https://blog.tech-journey.co.za/the-importance-of-a-neat-and-tidy-office-space-boosting-productivity-and-organisation</link><guid isPermaLink="true">https://blog.tech-journey.co.za/the-importance-of-a-neat-and-tidy-office-space-boosting-productivity-and-organisation</guid><category><![CDATA[Productivity]]></category><category><![CDATA[Time management]]></category><category><![CDATA[stress management]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Fri, 03 Jan 2025 17:32:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772398866/5c04a844-72fc-44a3-aa87-5556648eeff4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>The Impact of an Organised Workspace</h3>
<h4>Introduction</h4>
<p>When the shift to hybrid and remote work took place during COVID, I found myself working from home more often than not. While this offered flexibility, it also presented the unexpected challenge of keeping my workspace tidy and organised. Over time, I discovered just how much my physical environment affected my focus, productivity, and overall well-being.</p>
<p>In this blog, I’ll share my personal experience with maintaining an organised workspace, the negative impact of clutter, and how I extended this organisation into my digital workflow.</p>
<blockquote>
<p><strong>Note: Part 1</strong> is aimed at a general audience, but <strong>Part 2</strong>, which will be published in my upcoming blog, is more intended for a technical audience or anyone curious about exploring the technical side of things. In Part 2, we’ll dive into setting up <a href="https://gethomepage.dev/installation/docker/"><strong>Homepage</strong></a>, a self-hosted dashboard to help streamline navigation across work tools so stay tuned! 🙂</p>
</blockquote>
<hr />
<h3>Why a Tidy Office Space is so important</h3>
<p>A clean workspace isn’t just about aesthetics; it directly impacts productivity and mental clarity. Here’s how keeping an organised office has personally helped me and how it can benefit you:</p>
<h4>1. Increased Productivity</h4>
<p>A clutter-free desk means fewer distractions, allowing for deeper focus. When everything is in its place, you don’t waste time searching for documents, cables, or notes.</p>
<h4>2. Reduced Stress and Anxiety</h4>
<p>At one point, my workspace became overwhelming. Papers filled with scribbled notes, along with the research books I use, began to pile up. Cables were tangled (and still are to some extent, though I’m making progress with better cable management 🙂), and my desk just felt chaotic. After decluttering, I immediately noticed a reduction in stress.</p>
<p>Studies show that cluttered environments contribute to anxiety, and I can certainly attest to that (<a href="https://www.verywellmind.com/how-mental-health-and-cleaning-are-connected-5097496">Verywell Mind</a>).</p>
<h4>3. Better Time Management</h4>
<p>Once I organised my space, I realised how much time I had been wasting searching for misplaced items. Keeping things tidy naturally led to better time management.</p>
<h4>4. Improved Professionalism</h4>
<p>Working remotely means being on video calls often. A clean background not only makes a strong impression but also reflects an organised approach to work.</p>
<h4>5. Better Physical Health</h4>
<p>Dust and clutter can lead to allergies and poor air quality. Cleaning regularly improved my workspace’s air quality, which in turn boosted my energy levels.</p>
<hr />
<h3>The Negative Impact of a Cluttered Office</h3>
<p>From my experience, a disorganised office can lead to:</p>
<ul>
<li><p><strong>Reduced efficiency</strong> - Wasting time searching for things.</p>
</li>
<li><p><strong>Mental overload</strong> - Clutter can overwhelm your brain, making it harder to focus.</p>
</li>
<li><p><strong>Missed deadlines</strong> - Important tasks get buried in the chaos.</p>
</li>
<li><p><strong>Increased frustration</strong> - A messy environment can create unnecessary stress.</p>
</li>
</ul>
<p>By tidying up my workspace (both physical and digital), I felt more in control of my day.</p>
]]></content:encoded></item><item><title><![CDATA[G-Suite Migration: Automated Credential Distribution Using Python and GAM CLI]]></title><description><![CDATA[We recently migrated our organisation from G-Suite(Google Workspace) to Microsoft. While we used a third-party migration tool to shift the historical Google data across, we hit a practical roadblock: ]]></description><link>https://blog.tech-journey.co.za/g-suite-migration-automated-credential-distribution-using-python-and-gam-cli</link><guid isPermaLink="true">https://blog.tech-journey.co.za/g-suite-migration-automated-credential-distribution-using-python-and-gam-cli</guid><category><![CDATA[Google Workspace]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[Python]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Sat, 22 Jun 2024 08:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/951cf691-8a0f-4f90-bdab-164b109f01fd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We recently migrated our organisation from G-Suite(Google Workspace) to Microsoft. While we used a third-party migration tool to shift the historical Google data across, we hit a practical roadblock: how do we <em><strong>securely</strong></em> distribute new Microsoft credentials to over 100 users?</p>
<p>The initial plan proposed by the team was entirely manual. We were going to create the Microsoft accounts with temporary passwords, divide the user list into batches, and have the engineering team manually share the credentials one by one via a password manager.</p>
<p>With a workload full of higher-priority tasks, spending a day manually distributing credentials felt like a poor use of time. The goal was to find a secure way to handle distribution without the manual overhead.</p>
<h3>Scripting the Process with Python and GAM CLI</h3>
<p>Instead of generating and distributing secure links, I used an environment users were already authenticated into: their existing Google Drive.</p>
<p>The approach was straightforward. Generate the credentials, write them to a per-user text file, and upload each file directly into the corresponding Google Drive account under the appropriate ownership.</p>
<hr />
<h3>Generating the Data</h3>
<p>The first step was preparing a clean CSV mapping staff to their new email addresses and temporary passwords. Simple scripts often fail when dealing with multi-part surnames, so I used a small Python script to parse the <code>names.txt</code> file, normalise the formatting, and generate a 16-character temporary password for each user.</p>
<pre><code class="language-python">import csv
import random
import secrets
import string

# Generate a normalised email using first initial + last name and a random domain
def generate_email(first_name, last_name):
    domains = ["tech-journey.co.za", "example.com", "test.com", "company.com"]
    username = f"{first_name[0].lower()}{last_name.lower()}"
    username = ''.join(e for e in username if e.isalnum())  # sanitise username
    return f"{username}@{random.choice(domains)}"

# Generate a 16-character password; secrets (rather than random) draws from a
# cryptographically secure source, which matters when generating credentials
def generate_random_password():
    characters = string.ascii_letters + "!@%^$&amp;*#"
    return ''.join(secrets.choice(characters) for _ in range(16))

# Read and clean input names
with open('names.txt', 'r') as file:
    names = file.readlines()

data = []
for name in names:
    name = name.strip()
    if not name:
        continue  # skip empty lines

    parts = name.split()
    if len(parts) &lt; 2:
        print(f"Skipping invalid line: {name}")  # enforce first + last name
        continue

    first_name = parts[0]
    last_name = ' '.join(parts[1:])  # support multi-part surnames

    data.append({
        "email": generate_email(first_name, last_name),
        "password": generate_random_password()
    })

# Write results to CSV with fixed schema
with open('generated_email_password_pairs.csv', mode='w', newline='') as file:
    writer = csv.DictWriter(file, fieldnames=["email", "password"])
    writer.writeheader()
    writer.writerows(data)

print("Email and password pairs added to generated_email_password_pairs.csv")
</code></pre>
<p><strong>Example Output:</strong></p>
<p>The script outputs a <code>generated_email_password_pairs.csv</code> file, which is then used in the next step.</p>
<img src="https://cdn.hashnode.com/uploads/covers/68a5e6b9bf57f369891da8e0/e030c5e6-ccf6-4879-9edc-f02674fa786c.png" alt="" />

<hr />
<h3>Delivering the Credentials via GAM</h3>
<p>With the CSV prepared, I used the GAM CLI (Google Apps Manager) to handle distribution.  </p>
<p>GAM is an incredibly powerful command-line tool for managing G-Suite. If you haven't got it set up yet, you'll need to configure your API access and service accounts first. I highly recommend following the <a href="https://github.com/jay0lee/GAM/wiki">official GAM setup instructions</a> to get that running.</p>
<p>By iterating through the CSV using a simple Bash loop, I used GAM to generate a text file containing each password and upload it directly into the user’s Google Drive root directory, assigning ownership to the corresponding account.</p>
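<p>The loop itself was only a few lines. Here’s a hedged sketch of the idea (not my exact script): a two-row sample CSV stands in for the real one, and the <code>echo</code> in front of <code>gam</code> makes it a dry run that prints the upload commands instead of executing them. Remove the <code>echo</code> to run it for real, assuming GAM is installed and authorised for your domain; the <code>add drivefile</code> syntax follows the GAM wiki:</p>
<pre><code class="language-bash">#!/usr/bin/env bash
set -euo pipefail

# Sample stand-in for the real generated_email_password_pairs.csv
cat &gt; generated_email_password_pairs.csv &lt;&lt;'EOF'
email,password
jdoe@example.com,Xy7pQ2rT9sWaBcDe
asmith@example.com,Qa4mN8vB6kJzLtRw
EOF

tail -n +2 generated_email_password_pairs.csv | while IFS=',' read -r email password; do
  outfile="${email%%@*}_credentials.txt" # per-user text file
  printf 'Temporary Microsoft password: %s\n' "$password" &gt; "$outfile"
  # Upload into the user's Drive root, owned by that user (dry run):
  echo gam user "$email" add drivefile localfile "$outfile" drivefilename "Microsoft Credentials"
done
</code></pre>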
<hr />
<h3>The Payoff</h3>
<p>Writing and testing the script, along with the GAM <a href="https://github.com/GAM-team/GAM/wiki/GoogleDriveManagement/dfcfd0f15b1ca272392f7297df72bde7d20e0942#creating-and-uploading-drive-files-for-users">commands</a>, took a few hours. Rolling it out to 100+ users took around 15 minutes.</p>
<p>There was no manual copying into a password manager, no credentials passed through Slack or email, and no additional handling once the process was in place. Users signed into Google and retrieved their Microsoft credentials directly from their own drives.</p>
<p>The full Python script is available on GitHub <a href="https://github.com/luqmarthinus/email-password-gen-scripts">here</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Spice up your Github profile to stand out from the rest]]></title><description><![CDATA[In this guide, we’ll explore how to create a well-crafted GitHub profile that you can use as your gateway to new opportunities and career growth. Whether you’re transitioning to a higher role in tech or aiming to inspire others, this guide will help ...]]></description><link>https://blog.tech-journey.co.za/how-to-create-your-github-profile</link><guid isPermaLink="true">https://blog.tech-journey.co.za/how-to-create-your-github-profile</guid><category><![CDATA[GitHub]]></category><category><![CDATA[profile]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Fri, 07 Jun 2024 00:55:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755773833318/959188ee-5f41-467b-8ab4-e3376717b064.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this guide, we’ll explore how to create a well-crafted GitHub profile that you can use as your gateway to new opportunities and career growth. Whether you’re transitioning to a higher role in tech or aiming to inspire others, this guide will help you create a profile that truly reflects your tech journey and capabilities.</p>
<h3 id="heading-getting-started">Getting started</h3>
<ul>
<li>To create a <em>special</em> repository, i.e. one whose README.md file appears on our GitHub profile, we need to ensure our repo name <strong>matches</strong> our username:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772527189/c42c12e1-47c5-476d-81c7-d89a9377e0f0.png" alt /></p>
<ul>
<li><p>As seen in the message displayed above, we need to set the visibility of our repo to <code>Public</code>.</p>
</li>
<li><p>Add a <code>README.md</code> file to this repository which we’ll use to showcase our skills &amp; link our projects.</p>
</li>
</ul>
<p>Note: Even if you don’t have any projects to showcase yet, setting up an attractive profile is still a good starting point, so let’s keep going 🙂.</p>
<ul>
<li>Click Create repository.</li>
</ul>
<p>Once done, it’s time to spice up our README.md file:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772528712/f4f5b8cf-2712-4614-9967-93cb6d27bcb2.png" alt /></p>
<p>When clicking on Edit README or the pen (🖊️) icon, you will see an auto-generated template you can use to add your info. However, if you’re looking to make it more eye-catching, I’ve included a few sites to help you generate your GitHub README according to your needs: 🙂</p>
<p><a target="_blank" href="https://profile-readme-generator.com/">Profile README Generator (option 1)</a></p>
<p><a target="_blank" href="https://rahuldkjain.github.io/gh-profile-readme-generator/">Profile README Generator (option 2)</a></p>
<p><a target="_blank" href="https://github.com/abhisheknaiidu/awesome-github-profile-readme">Awesome GitHub Profile README (a curated collection of examples)</a></p>
<p>If you want to get the image URLs or GIFs, you can use your preferred search engine or even utilise ChatGPT to help generate the proper URL that you can add to your README.md file.</p>
<p>Another useful site to check out is <a target="_blank" href="https://shields.io/badges/static-badge">Shields.io</a> to edit static badges with customised text and colors.</p>
<p>Once you’re happy, go ahead and commit the changes to see how your profile looks (this is what a potential employer will see, and I’m sure they’ll be impressed 🙂):</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772530267/1b9e14af-58f6-4a4b-8c7a-812110e64efd.png" alt /></p>
<p>Github profile(preview)</p>
<p>Below is what the code looks like so please feel free to use it and customise it according to your needs:</p>
<pre><code class="lang-markdown"><span class="hljs-section"># Hi there, I'm [Your<span class="hljs-emphasis">_Name] 👋


## About Me
As a Desktop Support Manager, I am deeply motivated to progress into more technical roles within the industry whilst actively assisting others in their career transitions. With a blend of foundational and advanced experience across various tools and technologies, I am here to share my journey, skills, and projects, aiming to inspire and support anyone seeking to maximize their potential.

## 🛠 Tools &amp; Technologies

![<span class="hljs-string">AWS</span>](<span class="hljs-link">https://img.shields.io/badge/AWS-232F3E?style=for-the-badge&amp;logo=amazon-aws&amp;logoColor=white</span>)
![<span class="hljs-string">Azure</span>](<span class="hljs-link">https://img.shields.io/badge/Azure-0078D4?style=for-the-badge&amp;logo=microsoft-azure&amp;logoColor=white</span>)
![<span class="hljs-string">Google Workspace</span>](<span class="hljs-link">https://img.shields.io/badge/Google_Workspace-4285F4?style=for-the-badge&amp;logo=google&amp;logoColor=white</span>)
![<span class="hljs-string">GAM Tool</span>](<span class="hljs-link">https://img.shields.io/badge/GAM-00A82D?style=for-the-badge&amp;logoColor=white</span>)
![<span class="hljs-string">imapsync</span>](<span class="hljs-link">https://img.shields.io/badge/imapsync-24A1C1?style=for-the-badge&amp;logo=imapsync&amp;logoColor=white</span>)
![<span class="hljs-string">Zoom</span>](<span class="hljs-link">https://img.shields.io/badge/Zoom-2D8CFF?style=for-the-badge&amp;logo=zoom&amp;logoColor=white</span>)
![<span class="hljs-string">Asana</span>](<span class="hljs-link">https://img.shields.io/badge/Asana-FC636B?style=for-the-badge&amp;logo=asana&amp;logoColor=white</span>)
![<span class="hljs-string">Freshdesk</span>](<span class="hljs-link">https://img.shields.io/badge/Freshdesk-2B79D8?style=for-the-badge&amp;logo=freshdesk&amp;logoColor=white</span>)
![<span class="hljs-string">Bitdefender</span>](<span class="hljs-link">https://img.shields.io/badge/Bitdefender-FF1100?style=for-the-badge&amp;logo=bitdefender&amp;logoColor=white</span>)
![<span class="hljs-string">AnyDesk</span>](<span class="hljs-link">https://img.shields.io/badge/AnyDesk-EA1F24?style=for-the-badge&amp;logo=anydesk&amp;logoColor=white</span>)
![<span class="hljs-string">TeamViewer</span>](<span class="hljs-link">https://img.shields.io/badge/TeamViewer-0E8EE9?style=for-the-badge&amp;logo=teamviewer&amp;logoColor=white</span>)
![<span class="hljs-string">Docker</span>](<span class="hljs-link">https://img.shields.io/badge/Docker-2496ED?style=for-the-badge&amp;logo=docker&amp;logoColor=white</span>)
![<span class="hljs-string">Python</span>](<span class="hljs-link">https://img.shields.io/badge/Python-3776AB?style=for-the-badge&amp;logo=python&amp;logoColor=white</span>)
![<span class="hljs-string">Bash</span>](<span class="hljs-link">https://img.shields.io/badge/Bash-4EAA25?style=for-the-badge&amp;logo=gnu-bash&amp;logoColor=white</span>)
![<span class="hljs-string">UniFi</span>](<span class="hljs-link">https://img.shields.io/badge/UniFi-0559C9?style=for-the-badge&amp;logo=ubiquiti&amp;logoColor=white</span>)
![<span class="hljs-string">Ubuntu</span>](<span class="hljs-link">https://img.shields.io/badge/Ubuntu-E95420?style=for-the-badge&amp;logo=ubuntu&amp;logoColor=white</span>)
![<span class="hljs-string">Debian</span>](<span class="hljs-link">https://img.shields.io/badge/Debian-A81D33?style=for-the-badge&amp;logo=debian&amp;logoColor=white</span>)
![<span class="hljs-string">Windows</span>](<span class="hljs-link">https://img.shields.io/badge/Windows_10-0078D6?style=for-the-badge&amp;logo=windows&amp;logoColor=white</span>)
![<span class="hljs-string">Windows</span>](<span class="hljs-link">https://img.shields.io/badge/Windows_11-0078D6?style=for-the-badge&amp;logo=windows&amp;logoColor=white</span>)
![<span class="hljs-string">macOS</span>](<span class="hljs-link">https://img.shields.io/badge/macOS-000000?style=for-the-badge&amp;logo=apple&amp;logoColor=white</span>)
![<span class="hljs-string">pfSense</span>](<span class="hljs-link">https://img.shields.io/badge/pfSense-336791?style=for-the-badge&amp;logo=pfsense&amp;logoColor=white</span>)
![<span class="hljs-string">OPNsense</span>](<span class="hljs-link">https://img.shields.io/badge/OPNsense-342B0E?style=for-the-badge&amp;logoColor=white</span>)
![<span class="hljs-string">Grafana</span>](<span class="hljs-link">https://img.shields.io/badge/Grafana-F46800?style=for-the-badge&amp;logo=grafana&amp;logoColor=white</span>)
![<span class="hljs-string">Zabbix</span>](<span class="hljs-link">https://img.shields.io/badge/Zabbix-00BFFF?style=for-the-badge&amp;logo=zabbix&amp;logoColor=white</span>)
![<span class="hljs-string">OpenLDAP</span>](<span class="hljs-link">https://img.shields.io/badge/OpenLDAP-2C2255?style=for-the-badge&amp;logo=openldap&amp;logoColor=white</span>)
![<span class="hljs-string">Prometheus</span>](<span class="hljs-link">https://img.shields.io/badge/Prometheus-E6522C?style=for-the-badge&amp;logo=prometheus&amp;logoColor=white</span>)
![<span class="hljs-string">Pi-hole</span>](<span class="hljs-link">https://img.shields.io/badge/Pi--hole-F60D1A?style=for-the-badge&amp;logo=pihole&amp;logoColor=white</span>)
![<span class="hljs-string">Traefik</span>](<span class="hljs-link">https://img.shields.io/badge/Traefik-24A1C1?style=for-the-badge&amp;logo=traefik&amp;logoColor=white</span>)

## 🔧 Projects
Here are some of the projects I've worked on:

### [<span class="hljs-string">Project Name</span>](<span class="hljs-link">https://github.com/yourusername/projectname</span>)
![<span class="hljs-string">Project Screenshot</span>](<span class="hljs-link">https://via.placeholder.com/600x400.png?text=Project+Screenshot</span>) <span class="xml"><span class="hljs-comment">&lt;!-- Replace with your image URL --&gt;</span></span>
- <span class="hljs-strong">**Description:**</span> A brief description of the project.
- <span class="hljs-strong">**Technologies Used:**</span> List the technologies used in the project.
- <span class="hljs-strong">**Features:**</span> Highlight key features of the project.

### [<span class="hljs-string">Another Project</span>](<span class="hljs-link">https://github.com/yourusername/anotherproject</span>)
![<span class="hljs-string">Project Screenshot</span>](<span class="hljs-link">https://via.placeholder.com/600x400.png?text=Project+Screenshot</span>) <span class="xml"><span class="hljs-comment">&lt;!-- Replace with your image URL --&gt;</span></span>
- <span class="hljs-strong">**Description:**</span> A brief description of the project.
- <span class="hljs-strong">**Technologies Used:**</span> List the technologies used in the project.
- <span class="hljs-strong">**Features:**</span> Highlight key features of the project.

## 🌱 Currently Learning
I'm currently expanding my knowledge in:
- Advanced Docker
- Cloud Security
- Python
- Working with APIs
- PowerShell

## 📫 How to Reach Me

![<span class="hljs-string">Contact Me</span>](<span class="hljs-link">https://via.placeholder.com/1200x300.png?text=Contact+Me</span>) <span class="xml"><span class="hljs-comment">&lt;!-- Replace with your image URL --&gt;</span></span>

- <span class="hljs-strong">**Email:**</span> your.email@example.com
- <span class="hljs-strong">**LinkedIn:**</span> [<span class="hljs-string">Your LinkedIn Profile</span>](<span class="hljs-link">https://linkedin.com/in/yourprofile</span>)
- <span class="hljs-strong">**Twitter:**</span> [<span class="hljs-string">@yourhandle</span>](<span class="hljs-link">https://twitter.com/yourhandle</span>)
- <span class="hljs-strong">**Blog:**</span> [<span class="hljs-string">Tech-Journey</span>](<span class="hljs-link">https://yourblog.com</span>)

## 📈 GitHub Stats
![<span class="hljs-string">Your GitHub Stats</span>](<span class="hljs-link">https://github-readme-stats.vercel.app/api?username=yourusername&amp;show_icons=true&amp;theme=radical</span>)
![<span class="hljs-string">Top Languages</span>](<span class="hljs-link">https://github-readme-stats.vercel.app/api/top-langs/?username=yourusername&amp;layout=compact&amp;theme=radical</span>)

## 🏆 Achievements
![<span class="hljs-string">Achievements</span>](<span class="hljs-link">https://via.placeholder.com/1200x300.png?text=Achievements</span>) <span class="xml"><span class="hljs-comment">&lt;!-- Replace with your image URL --&gt;</span></span>
- GitHub Star
- Open Source Contributor

## 🖥️ Setup
Here are some of the tools and environments I work with:
![<span class="hljs-string">Setup</span>](<span class="hljs-link">https://via.placeholder.com/1200x300.png?text=Setup</span>) <span class="xml"><span class="hljs-comment">&lt;!-- Replace with your image URL --&gt;</span></span>

- <span class="hljs-strong">**Editors:**</span> VSCode
- <span class="hljs-strong">**Version Control:**</span> Git, GitHub, GitLab
- <span class="hljs-strong">**CI/CD:**</span> GitHub Actions

Thank you for visiting my profile! Feel free to check out my repositories and connect with me.</span></span>
</code></pre>
<h3 id="heading-well-done">Well done!</h3>
<p>We’ve covered how to enhance your GitHub README profile to showcase your skills, experience and projects effectively. Remember that your README is not just a technical document; it’s also an opportunity to share your journey, one that can open doors at your current workplace or at a new company. As an added bonus, you’re inspiring others to do the same.</p>
]]></content:encoded></item><item><title><![CDATA[Setting up Google Workspace with a custom domain]]></title><description><![CDATA[What exactly is a Google workspace?
In short, Google workspace is a popular business suite consisting of world class productivity & collaboration apps and tools which includes a professional business ]]></description><link>https://blog.tech-journey.co.za/setting-up-google-workspace-with-a-custom-domain</link><guid isPermaLink="true">https://blog.tech-journey.co.za/setting-up-google-workspace-with-a-custom-domain</guid><category><![CDATA[Google]]></category><category><![CDATA[Google Workspace]]></category><category><![CDATA[dns]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Fri, 03 May 2024 18:31:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772431634/cd2b7bd8-0676-4ebf-b438-efeb5b30e922.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>What exactly is a Google workspace?</h3>
<p>In short, Google Workspace is a popular business suite consisting of world-class productivity &amp; collaboration apps and tools, including a professional business email service that lets you use your own domain name and email addresses within the Google ecosystem.</p>
<p>To get started, you need an existing domain or you’ll have to purchase one from a domain registrar; in this guide we’ll assume you already have one.</p>
<p>Once you have your domain, head to <a href="https://workspace.google.com">https://workspace.google.com</a>.</p>
<p>Next, follow the on-screen prompts as seen below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772411130/9d46f384-b483-4849-89ba-99de139676de.png" alt="" /></p>
<p>Create your <strong>Business name</strong>, add the <strong>Number of employees</strong> &amp; select your <strong>Region</strong>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772412473/41e11ff9-56a5-4b59-8ef8-32db5962669b.png" alt="" /></p>
<p>In the next step, you’ll need to provide your <strong>contact info</strong> and then click <strong>Next</strong>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772414108/f4a2a9fd-8530-41aa-ba71-ed91e824aa51.png" alt="" /></p>
<p>Since you have an existing domain, select <strong>Yes</strong>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772415645/92ec0bde-6041-415e-98a0-68f69b812d40.png" alt="" /></p>
<p>You will then go ahead and add your existing domain and click <strong>Next</strong>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772417098/4ae3fc58-dbd8-4cd3-b557-7b5bed0a72bd.png" alt="" /></p>
<p>As you can see, emails sent to your domain won’t be affected until you’ve set up your email with your new account:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772418315/00c3e088-524f-471b-92b0-108f82de3b89.png" alt="" /></p>
<p>Create a username to sign into your Google Workspace account:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772419976/8a56c7f0-6a5f-4a39-ac23-074f496085cd.png" alt="" /></p>
<p>After agreeing to the Ts &amp; Cs, you’ll be asked to sign into your Google workspace account.</p>
<p>Next, you’ll need to <strong>verify</strong> your Identity:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772421405/e5150eb9-9704-4a2a-b6fd-f9400f3624d4.png" alt="" /></p>
<p>After you’ve verified your number, agree to the Ts &amp; Cs and select the 14-day free trial, which requires your credit card information.</p>
<blockquote>
<p><strong>Important note</strong>: If this is for testing purposes, I would strongly suggest setting a reminder to delete your account before the trial period expires.</p>
</blockquote>
<p>Now it’s time for the “fun” part 🙂</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772423146/99205ee2-d134-468b-9ef3-508a363bba7f.png" alt="" /></p>
<p>Before proceeding to step two, follow the on-screen prompt that’s instructing you to log into your Domain registrar’s website where you’ve purchased your domain from.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772424695/05803984-343d-40ba-9e00-94d8cf04cc03.png" alt="" /></p>
<p>Next, you guessed it: you need to <strong>add Google’s MX records</strong> to your domain as per the instructions below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772426640/df260b6b-ff42-4493-b505-076155c6f257.png" alt="" /></p>
<p>Here’s an example of what it should look like:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772428032/640a5372-45f2-412c-9c1c-05664fbc4252.png" alt="" /></p>
<p>As per the instructions, you need to <strong>add another MX record</strong> with the <strong>priority set to 15</strong> and the unique Verification code.</p>
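<p>Once the records have propagated, you can sanity-check them from any machine before activating Gmail. A minimal check with <code>dig</code> (the domain below is a placeholder, use your own):</p>
<pre><code class="language-bash"># Query the MX records for your domain (replace example.com with your own)
dig +short MX example.com

# The entries should point at Google's mail servers, e.g.:
# 1 aspmx.l.google.com.
# 5 alt1.aspmx.l.google.com.
</code></pre>
<p>If the output still shows your old mail servers, the change hasn’t propagated yet; DNS changes can take a while to become visible.</p>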
<p>Once you’ve added these two entries, go back to the admin console and click <strong>Activate Gmail</strong>. If all is correct, you should see the following window and then click <strong>Finish</strong>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772429808/245db634-e1f2-4943-b35a-9d90be817e04.png" alt="" /></p>
<h3>Next steps</h3>
<p>Now that you’ve completed setting up your Google Workspace account with your own domain, you can go ahead and create users, groups etc. and explore all the features this suite has to offer.</p>
]]></content:encoded></item><item><title><![CDATA[UniFi Controller Deployment on Ubuntu 22.04 Using Bash]]></title><description><![CDATA[Encountering hurdles during software installations can be a frustrating experience, especially when outdated instructions lead to dependency issues, errors as well as security concerns. Recently, whil]]></description><link>https://blog.tech-journey.co.za/how-to-install-unifi-controller-using-bash</link><guid isPermaLink="true">https://blog.tech-journey.co.za/how-to-install-unifi-controller-using-bash</guid><category><![CDATA[Ubiquiti]]></category><category><![CDATA[Ubuntu]]></category><category><![CDATA[Bash]]></category><category><![CDATA[automation]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Luqmaan Marthinus]]></dc:creator><pubDate>Tue, 18 Oct 2022 20:21:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772521363/04296b6c-3a54-4dca-a6ba-a06106e1eba8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Encountering hurdles during software installations can be a frustrating experience, especially when outdated instructions lead to dependency issues, errors as well as security concerns. Recently, while setting up my Unifi software on Ubuntu 22.04, I faced such challenges. The latest Unifi controller version required MongoDB 3.6 (up to v4.4), yet the <a href="https://help.ui.com/hc/en-us/articles/220066768-Updating-and-Installing-Self-Hosted-UniFi-Network-Servers-Linux">instructions</a> referred to an expired GPG key for MongoDB 3.6. Additionally, encountering <a href="https://askubuntu.com/questions/1403619/mongodb-install-fails-on-ubuntu-22-04-depends-on-libssl1-1-but-it-is-not-insta">dependency</a> problems added to the complexity.</p>
<p>Overcoming these obstacles required thorough research &amp; perseverance, qualities I strongly encourage you to embrace when confronted with similar challenges. Demonstrating resourcefulness in this manner can lead to successful outcomes as long as you don’t give up.</p>
<h2>Purpose</h2>
<p>Eager to get this system up &amp; running with minimal steps involved, my goals were to streamline the setup process, ensure consistency &amp; easily reproduce the installation on multiple servers to manage my network infrastructure.</p>
<p>With that being said, let’s move on to the good stuff :)</p>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">Script overview:</mark></h3>
<p>Our script performs the following actions:</p>
<ul>
<li><p>Temporarily adds the Focal security repository to resolve unmet dependencies for <code>libssl1.1</code>.</p>
</li>
<li><p>Updates package lists &amp; upgrades installed packages on our system.</p>
</li>
<li><p>Adds the Unifi repository to the sources list for easy access to Unifi packages.</p>
</li>
<li><p>Downloads the Unifi repository GPG key to verify the authenticity of the downloaded packages.</p>
</li>
<li><p>Installs MongoDB 4.4 to meet the Unifi controller’s dependencies.</p>
</li>
<li><p>Installs the Unifi controller package.</p>
</li>
<li><p>Starts &amp; enables the Unifi service to ensure it runs automatically.</p>
</li>
<li><p>Verifies the status of the Unifi service to ensure it’s running properly.</p>
</li>
<li><p>Writes the output to the screen as well as to a log file in our current directory.</p>
</li>
</ul>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">Running the script:</mark></h3>
<p>To utilise the script, we need to:</p>
<ol>
<li><p>Copy/paste the script into your preferred text editor.</p>
</li>
<li><p>Customise the script as needed (if you know what you’re doing).</p>
</li>
<li><p>Make the script executable by running <code>chmod +x name_of_your_script.sh</code>.</p>
</li>
<li><p>Run the script with <code>./name_of_your_script.sh</code>.</p>
</li>
<li><p>Follow the on-screen prompts for installation.</p>
</li>
<li><p>Ensure port 8443 is open for Unifi controller access.<br />The Unifi controller runs on port 8443, so make sure the port is open (if ufw is active, simply run <code>sudo ufw allow 8443 &amp;&amp; sudo ufw reload</code>)</p>
</li>
</ol>
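<p>For reference, the steps above condense to the following commands (the script name is a placeholder, use your own):</p>
<pre><code class="language-bash"># Make the script executable, run it, and open the controller port
chmod +x install_unifi.sh
./install_unifi.sh
sudo ufw allow 8443 &amp;&amp; sudo ufw reload   # only needed if ufw is active
</code></pre>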
<pre><code class="language-bash">#!/bin/bash

#Write the output to a log file (Optional)
LOG_FILE="unifi_installation.log"

#Set the script to exit immediately if any command returns a non-zero status
set -e

#Print an error message &amp; exit
print_error_and_exit() {
  echo "Error: $1" | tee -a "$LOG_FILE"
  exit 1
}

#Check whether the command was successful.
check_success() {
  if [ $? -ne 0 ]; then
    print_error_and_exit "$1"
  fi
}

#Function to log messages, which can be very useful for troubleshooting.
log_message() {
  echo "$1" | tee -a "$LOG_FILE"
}

#Add Focal security repository temporarily to fix unmet dependencies.
log_message "Adding Focal security repository temporarily to fix unmet dependencies..."
echo "deb http://security.ubuntu.com/ubuntu focal-security main" | sudo tee /etc/apt/sources.list.d/focal-security.list | tee -a "$LOG_FILE"
check_success "Failed to add Focal security repository."
sudo apt update
check_success "Failed to update packages."

#Install dependency(update this line if any additional dependencies are required or remove or comment it out if you already have this library installed.)
sudo apt install -y libssl1.1
check_success "Failed to install libssl1.1."
sudo rm /etc/apt/sources.list.d/focal-security.list
check_success "Failed to remove Focal security repository."
sudo apt update
check_success "Failed to update packages after removing Focal security repository."

#Update package lists &amp; upgrade installed packages
log_message "Updating package lists &amp; upgrading installed packages..."
sudo apt update &amp;&amp; sudo apt upgrade -y | tee -a "$LOG_FILE"
check_success "Failed to update and upgrade packages."

#Add Unifi's repository to sources list.
log_message "Adding Unifi repository to sources list..."
echo 'deb https://www.ui.com/downloads/unifi/debian stable ubiquiti' | sudo tee /etc/apt/sources.list.d/100-ubnt-unifi.list | tee -a "$LOG_FILE"
check_success "Failed to add Unifi's repository to sources list."

#Download Unifi's repository GPG key.
log_message "Downloading Unifi's repository GPG key..."
wget -qO - https://dl.ui.com/unifi/unifi-repo.gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/unifi-repo.gpg | tee -a "$LOG_FILE"
check_success "Failed to download Unifi's repository GPG key."

# Install MongoDB 4.4
log_message "Installing MongoDB 4.4..."
sudo apt install gnupg -y
wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/mongodb-org-4.4.gpg
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list | tee -a "$LOG_FILE"
sudo apt update
sudo apt install -y mongodb-org
check_success "Failed to install MongoDB 4.4."

# Install the Unifi package
log_message "Installing UniFi..."
sudo apt update &amp;&amp; sudo apt install unifi -y | tee -a "$LOG_FILE"
check_success "Failed to install Unifi."

# Start &amp; enable the Unifi service
log_message "Starting &amp; enabling Unifi service..."
sudo systemctl start unifi
check_success "Failed to start Unifi service."
sudo systemctl enable unifi
check_success "Failed to enable Unifi service."

# Check the status of the Unifi service
log_message "Checking the status of the Unifi service..."
if systemctl is-active --quiet unifi; then
log_message "Unifi service is running."
else
print_error_and_exit "Unifi service failed to start."
fi

# Output the web interface URL to the screen.
log_message "Unifi installation complete. Browse to https://localhost:8443/ to access the Unifi web interface."
</code></pre>
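<p>Beyond the script’s own status check, you can double-check the result on the server with a few commands. The <code>curl</code> probe uses <code>-k</code> because the controller ships with a self-signed certificate:</p>
<pre><code class="language-bash"># Confirm the service is active
systemctl is-active unifi

# Confirm something is listening on port 8443
sudo ss -tlnp | grep 8443

# Probe the web interface (skip certificate verification for the self-signed cert)
curl -sk -o /dev/null -w "%{http_code}\n" https://localhost:8443/
</code></pre>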
<h3>Output &amp; results:</h3>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772492627/a7313e84-d660-4af9-aabf-335c8f6ef4cb.png" alt="" />

<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772494888/e894a863-ccf2-41e1-8cd8-d289b9a7cd89.png" alt="" />

<p>After our script has finished, we should see a log file in our present working directory that provides transparency throughout the installation process:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772496270/8c0ac9ac-1b04-475b-b389-9e3f6d4f9b9d.png" alt="" />

<p>If the script executed successfully, browse to <code>https://&lt;your_server_ip&gt;:8443/</code> to access the Unifi web interface.</p>
<h2>Configuring the Unifi controller</h2>
<p>Since this is a Homelab, you might have your own requirements, so I won’t be diving too deep, but this will be enough to get you up &amp; running.</p>
<p>The first few on-screen prompts are self-explanatory, but I’ll add some screenshots as we go through the setup process.</p>
<hr />
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">Step 1 — Setting up your controller</mark></h3>
<p>Complete the fields below &amp; click Next.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772497617/190a2a6d-a595-436e-b75f-c105aa4392e2.png" alt="" />

<hr />
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">Step 2 — Sign-in options</mark></h3>
<p>Here I’ll be creating a local account but feel free to read up on the options listed below or check out: <a href="https://help.ui.com/hc/en-us/articles/11444786290071-Connecting-to-and-Managing-UniFi-Deployments">Connecting to and Managing Unifi Deployments</a>.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772498906/7306b977-db03-44eb-b612-dae00184d267.png" alt="" />

<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772499975/91a8c825-a5c0-4480-9f55-a4cb7e448484.png" alt="" />

<hr />
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">Step 3 — Set credentials</mark></h3>
<p>Click Finish after filling in your credentials.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772501524/e01bcf2c-4c82-464b-b68c-36fedb8d297b.png" alt="" />

<hr />
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">Step 4 — Configuring our network</mark></h3>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772502911/c7eb8311-360e-436b-acef-b1f4f9ed4668.png" alt="" />

<p>The steps below are where we’ll set our <a href="https://www.techtarget.com/searchmobilecomputing/definition/service-set-identifier">SSID</a> (Wifi Name) &amp; <a href="https://www.techopedia.com/definition/22921/wi-fi-protected-access-pre-shared-key-wpa-psk">Pre-Shared Key</a> (Wifi Password).</p>
<p>Under the <strong>Advanced</strong> options, the only change I’ve made was <em>deselecting</em> <strong>BandSteering</strong> as not all my devices are in close range to where my access point is placed.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772504815/f7423e60-f7ad-4d68-bfed-1bec68f022af.png" alt="" />

<p>As you scroll down, make sure to select <strong>WPA2</strong> as the <strong>security protocol</strong>, or <a href="https://www.pandasecurity.com/en/mediacenter/wpa-vs-wpa2/">WPA2/WPA3</a> depending on whether your clients support it.</p>
<p>There’s a list of options you can read up on &amp; experiment with to figure out what works best for you.</p>
<p>After successfully creating our Network, it should look like this:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772506373/9e86e01a-4767-4fd2-8aeb-34a4d9ac194c.png" alt="" />

<hr />
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">Step 5 — Adopting our unifi devices</mark></h3>
<p>In this example, I will be connecting a <a href="https://store.ui.com/us/en/collections/unifi-wifi-flagship-high-capacity/products/uap-ac-pro">UAP-AC-PRO</a>.</p>
<p>In order to <a href="https://help.ui.com/hc/en-us/articles/360012622613-UniFi-Device-Adoption">adopt</a> the device, we need to reset it by navigating to the left pane under <em><strong>Unifi Devices</strong></em> =&gt; <em><strong>Click to Learn More</strong></em>:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772507866/f4d352ed-b4b0-415f-a437-58062f3750a6.png" alt="" />

<p>After successfully resetting our Unifi device/s, proceed with the adoption process &amp; wait until it finishes.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772509566/13ef6179-0dec-458d-b9b3-740643b87a57.png" alt="" />

<p>Once done, you should see the following:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772511339/25bec62b-6c28-4683-bd0d-608e43ae19cf.png" alt="" />

<hr />
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">Step 6 — Testing our access point</mark></h3>
<p>Use your Mobile/PC to connect to your access point &amp; run a speed test:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772512913/020e1ea3-2acc-4c3e-bb35-f86b5efca1e1.png" alt="" />

<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772516517/27672907-60af-48d1-a801-e2a6c2dc76d8.png" alt="" />

<p>To view connected devices, click on <strong>Client Devices</strong>.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772518088/4e309ed2-bddc-4dd1-928f-151a506cd300.png" alt="" />

<p>Well done! You’ve successfully installed &amp; configured your Unifi controller and set up your Unifi device/s.</p>
<hr />
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">Step 7 — Assigning a static IP Address(Recommended)</mark></h3>
<p>If you don’t want dynamic IPs assigned to your Unifi device/s, you can set a static one, which is what I’ll demonstrate as an optional but recommended step. There are more options available in the <strong>Settings</strong> menu that you can adjust based on your specific requirements.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755772519268/0af5d6b4-9a5e-4ceb-96c4-e9558c5a8c67.png" alt="" />

<p>After applying the changes, it’s safe to say our Unifi controller now stands ready for deployment, offering a scalable solution to manage our network infrastructure.</p>
<hr />
<h3>Final thoughts</h3>
<p>As technology evolves &amp; software dependencies change, it’s essential to adapt &amp; find innovative solutions to navigate these challenges. If you’re looking to expand your horizons &amp; are dedicated to becoming the IT wizard you were destined to be, <em><strong>DON’T</strong></em> stop learning/exploring or back down from any challenges that could help further your career.</p>
<p>By sharing my experience and the challenges I encountered, I hope to empower other IT professionals, or anyone looking to break into the tech industry, who face similar obstacles in their IT endeavours.</p>
<p>With the proper tools, right attitude &amp; <strong>determination</strong>, any obstacle can be overcome, paving the way to success in the ever-evolving landscape of technology.</p>
]]></content:encoded></item></channel></rss>