<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Ateeq’s Substack]]></title><description><![CDATA[My personal Substack]]></description><link>https://ateeqend.com</link><image><url>https://substackcdn.com/image/fetch/$s_!UvOU!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20dc5918-0ce1-4471-8238-60858e8d008f_500x500.png</url><title>Ateeq’s Substack</title><link>https://ateeqend.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 06 Apr 2026 20:28:11 GMT</lastBuildDate><atom:link href="https://ateeqend.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Ateeq]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[ateeqend@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[ateeqend@substack.com]]></itunes:email><itunes:name><![CDATA[Ateeq]]></itunes:name></itunes:owner><itunes:author><![CDATA[Ateeq]]></itunes:author><googleplay:owner><![CDATA[ateeqend@substack.com]]></googleplay:owner><googleplay:email><![CDATA[ateeqend@substack.com]]></googleplay:email><googleplay:author><![CDATA[Ateeq]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The 3 Phases of My API Optimization]]></title><description><![CDATA[This post shares what I learned about speeding up an API by improving the database and the tradeoffs that came with it.]]></description><link>https://ateeqend.com/p/the-3-phases-of-my-api-optimization</link><guid isPermaLink="false">https://ateeqend.com/p/the-3-phases-of-my-api-optimization</guid><dc:creator><![CDATA[Ateeq]]></dc:creator><pubDate>Thu, 28 Aug 2025 07:41:32 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/296ab76e-34ab-4e9d-b4a3-d96d83d1d5a0_6000x4000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I work at a CRM company where one feature lets a user see their own details along with the details of everyone in their family: the users who created them, the users they have created, the users created by those people, and so on. In other words, it shows a full &#8220;family tree&#8221; of accounts: parents, grandparents, children, and siblings.</p><p>To build this, we use an API that runs a recursive query in PostgreSQL. It starts from a user, follows the <code>parent_id</code> links up until it reaches the very first ancestor, and then walks back down through all the children. With millions of records, this becomes heavy work for the database. Even on our powerful servers, the query was taking more than 1.5 seconds to run. The frontend team flagged this as a serious issue, because with thousands of users making the same request, the whole system could slow down.</p><p>Time to debug. I figured I must have missed something obvious. Maybe the <code>parent_id</code> column wasn&#8217;t indexed? But it was a foreign key, and the index was already there, so no luck. Next, maybe the ORM I was using wasn&#8217;t generating an efficient query. I rewrote it in raw SQL to see if that helped. It was faster, but only by about 72%, which still fell far short of my target of a 5&#215; speedup. At this point, it was clear I needed to change the way the data was structured. Since I was pulling user info by joining multiple tables, I thought of using a materialized view. A materialized view is basically a read-only table that stores the results of a complex query, joins and all, so everything I need is already in one place. The view then gets refreshed every 15 minutes with updated data. When I tried this, the difference was huge. The query dropped to around 200 ms, roughly a 7.5&#215; speedup. Great!
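</p><p>For reference, the traversal that makes the original query expensive can be sketched in-memory. This is a simplified, hypothetical model of the users table, not the production schema or the actual CTE:</p>

```typescript
// Simplified, hypothetical model of the users table: just ids and parent links.
interface User {
  id: number;
  parentId: number | null;
}

// Returns the ids of the whole "family tree" the start user belongs to:
// phase 1 climbs parent links to the root, phase 2 walks back down.
function bloodline(users: User[], startId: number): number[] {
  const byId = new Map(users.map((u) => [u.id, u]));
  const children = new Map<number, number[]>();
  for (const u of users) {
    if (u.parentId === null) continue;
    const siblings = children.get(u.parentId) ?? [];
    siblings.push(u.id);
    children.set(u.parentId, siblings);
  }

  // Phase 1: follow parent_id links up to the very first ancestor.
  let root = byId.get(startId)!;
  while (root.parentId !== null) root = byId.get(root.parentId)!;

  // Phase 2: walk back down through all the children.
  const result: number[] = [];
  const stack = [root.id];
  while (stack.length > 0) {
    const id = stack.pop()!;
    result.push(id);
    stack.push(...(children.get(id) ?? []));
  }
  return result;
}
```

<p>The real version is a single recursive CTE in PostgreSQL, but the shape of the work is the same: the cost grows with the size of the whole family tree, which is why millions of records hurt.</p><p>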
I hit my performance goal and then some. But then I started thinking: is this really the best approach? I was duplicating a lot of data, and every refresh put extra load on the database. It worked, but it didn&#8217;t feel clean or sustainable.</p><p>There had to be a simpler way. What if I just added a <code>root_parent_id</code> column to every record in the users table? That way, each user would always point to their top-level ancestor, no matter where they sit in the family tree. Then, whenever I needed the full bloodline, I could simply query all users with the same <code>root_parent_id</code>. I tried it out. After updating the database and running some tests, the response time averaged under 300 ms. That was fast enough, and it involved far less data duplication and no heavy refreshes on the server. But soon I hit problems. With this setup, I could get the full bloodline, but what if another feature later needed only parents, children, or siblings? For that, I&#8217;d still have to fall back on the same expensive recursive query I was trying to avoid. Keeping <code>root_parent_id</code> up to date was another issue. Every time a user was added or updated, the system would need to recalculate it. Doing that during a save would slow things down and could cause timeouts, so I&#8217;d have to push it into a job queue. It worked, but it felt like too much effort for something that still wasn&#8217;t flexible. I needed an even simpler fix.</p><p>What if I just let the API run through the same slow, complex recursive CTE as before, and cached the response per user and payload? The first API call took 1.6 seconds, and subsequent ones returned in under 300 milliseconds. Great, I just needed to make sure the cached response stayed accurate. I set the cache timeout to one hour. If a user gets updated, there is a chance their parent has changed, which could affect not just this user but a lot of others as well.
So in that case, I simply invalidate the entire user cache. Users are not updated very often, so this works fine.</p><p>I told the frontend team that the slow part had been fixed, and to this day I haven&#8217;t received a complaint, so I take it that it is working.</p>]]></content:encoded></item><item><title><![CDATA[How I Landed My 3 Jobs Over the Last 7 Years]]></title><description><![CDATA[Plus, what is the secret of a successful job hopper]]></description><link>https://ateeqend.com/p/how-i-landed-my-jobs-over-last-7-years</link><guid isPermaLink="false">https://ateeqend.com/p/how-i-landed-my-jobs-over-last-7-years</guid><dc:creator><![CDATA[Ateeq]]></dc:creator><pubDate>Thu, 18 Jul 2024 16:26:32 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f9bd4a06-2318-4613-bc3f-c336b133d96a_988x801.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Seven years ago, I emailed my resume to a company and received a call within five minutes. My interview was set for two days later, and I was hired on the spot. This was in 2017: interest rates were low, there were not enough CS graduates, and AI wasn&#8217;t mainstream.</p><p>Everything has changed today. Interest rates are far higher, there are plenty of CS graduates, and AI has already taken over some jobs.</p><p>How we apply also needs to change. There are thousands of applicants, and many more tools that can help us build a better application than there were in 2017.</p><p>After spending five years in PHP, I wanted a change; I wanted to move into a cutting-edge technology. AI seemed too expensive (this was before the ChatGPT API), so the only viable option was to move to blockchain. But there was a problem: who would hire me to do the same work as a fresher, but at the pay of a developer with five years of experience?
A few interviews gave me the answer: no one.</p><p>I had to build a portfolio to land a job, and I needed a job to build a portfolio. So I decided to document my learning. I kept a daily streak of GitHub commits to a README file: I would learn something new every day and write it down. After 4 months, I was hired at Tkxel. The team was 85 people, and I was the only blockchain developer among them. I carried a big responsibility, and the next year was as challenging as it was fulfilling. Jamal, the CEO of that project, later told me that he had interviewed 19-20 people and that I was the least experienced but the most motivated.</p><p>This made me realize that having a hunger to learn, and showing that hunger, improves the chances of getting a gig.</p><p>After a year, the project ended and I was transferred to Tkxel&#8217;s biggest project as a Node.js developer. I had no experience with Node.js, but that wasn&#8217;t a problem after a year in blockchain. The past year had given me the confidence that I could perform well in unknown territory, and the next one confirmed that I was not wrong.</p><p>During this time, a lot of recruiters reached out to me in my LinkedIn DMs, and after many interviews I had a couple of offers in hand. I couldn&#8217;t decide which one to pursue, so I looked for advice. I called my former manager, and she asked if I would be willing to work with her at her new company. I sent my resume over, had a few rounds of interviews, and accepted the position I was offered. Having a recommendation let me bypass all those ATS tools (resume scanners).</p><p>A former colleague of mine is an expert at finding remote jobs. I asked him to share his secrets, and he told me that all he does is mention the company&#8217;s mission and vision in his cover letter. The moral of the story: a personalized application has a better chance of standing out from the crowd.</p><p>But it does come with a cost.
You will dedicate a few hours a day to applying to a handful of jobs (because personalizing takes time), only to receive automated rejections. In the recruiters&#8217; defence, they are receiving far more applications than they used to. The key is to not be let down. Job applications are a form of sales, sales is a numbers game, and good salespeople are not bothered by rejections or even mean replies; they just move on to the next lead.</p><p>tldr;</p><ol><li><p>Just apply</p></li><li><p>Build a portfolio yourself if you don&#8217;t have one</p></li><li><p>Talk to former colleagues</p></li><li><p>Personalize each application</p></li><li><p>Don&#8217;t give up too soon</p></li></ol>]]></content:encoded></item><item><title><![CDATA[Stub APIs: A Tempting Shortcut with Hidden Costs]]></title><description><![CDATA[What to consider when providing stub APIs to frontend teams, and why you should (not) do it in the first place.]]></description><link>https://ateeqend.com/p/stub-backend-rest-apis-for-idle-frontend</link><guid isPermaLink="false">https://ateeqend.com/p/stub-backend-rest-apis-for-idle-frontend</guid><dc:creator><![CDATA[Ateeq]]></dc:creator><pubDate>Thu, 28 Dec 2023 04:10:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qleN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff08d5036-94bb-4996-a7bb-36fe3354405a_500x560.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The tldr is at the end.</p><p>A couple of months ago, I had a long list of APIs to provide to the mobile and web frontend teams. Nine people were depending on me to continue their work (and would often stare from across the table), and my part was expected to take about a month. So I chickened out and provided stub APIs so that they wouldn&#8217;t sit idle. It was a first for me, and I learned a lot in the process.
Here&#8217;s a summary of my experience.</p><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/f08d5036-94bb-4996-a7bb-36fe3354405a_500x560.jpeg" width="500" height="560" alt=""></figure><p>Stub APIs are temporary placeholders that provide mock data to frontend teams while backend development is still ongoing. While this technique might seem like a win-win at first glance, it comes with a hidden cost of increased complexity and potential technical debt.</p><h4>1. 
The URL Format</h4><p>I was working with NestJS, where circular dependencies between modules are a bit tricky. For example, if the "Employee Management" module relies on services from the "Leave Management" module, and there's an endpoint primarily associated with leave management, it might be tempting to place that endpoint in the Leave Management module. However, if that endpoint later requires access to employee information, it could lead to complications.</p><ul><li><p>Including references to the "Employee Management" module in the "Leave Management" module would create a circular dependency between the two.</p></li><li><p>Moving the endpoint to a different URL would require changes for the frontend teams.</p></li></ul><p>This brings me to my next point.</p><h4><strong>2. Ask Frontend Teams to Store URL Endpoints in an env File</strong></h4><p>All endpoint URLs should be stored in a dotenv file so that a later change to a URL does not require a code change.</p><h4>3. Use a Folder to Store All the Stub Responses and Use Them in Your Service File</h4><p>Even if many endpoints return only a simple response like the following:</p><p><code>{</code></p><p><code>&#160;&#160;"message": "success",</code></p><p><code>&#160;&#160;"status": 200</code></p><p><code>}</code></p><p>store them separately for each endpoint and require them in the service file (not your controller). This way, you will have a list of the APIs that still need to be implemented.</p><h4>4. Conclusion</h4><p>Don&#8217;t do it. I see little benefit in this technique. Plan ahead so that the backend and frontend are always in sync.</p><h4>tldr;</h4><ul><li><p>Think through all the services an endpoint might require later on, and place it in the module that is a superset of those services.</p></li><li><p>The frontend teams should be coached not to use endpoint URLs directly in their code. 
Instead, they should use environment variables that hold them, so that a change in an API URL does not mean a change in the React code.</p></li><li><p>Store all the stub responses in files in a dedicated folder. This will serve as a checklist of your technical debt.</p></li><li><p>Try as much as possible to avoid needing stubs at all.</p></li></ul>]]></content:encoded></item></channel></rss>