2023 Week 03 in retro
Wow, this week just flew by! It was a short week, so we got an extra day to organize the home. We also toured a couple of day cares nearby, only to find that one school already has a wait list into 2024. It's definitely a lot harder to find a day care than we thought!
I also had a productive week at work. By the end of the week, we had a draft implementation plan for the project that everyone seems happy with. The only thing I didn't manage to do is the annual performance review, which I imagine will be the only thing I do next Monday.
My side projects apparently made no progress this week. And honestly, at this point I have forgotten 90% of what I was working on. The good thing is that I started this weekly journal, so I can find some clues. The bad thing is that I am not sure I will be able to pick them all up where I left off.
Given the amount of work I have at hand, and considering that I will have even less time in a month, the only solution is to work on one thing at a time. With that in mind, I will continue goraytracing, implementing Ray Tracing in One Weekend in Go.
I have a bad habit of building a huge, impossible backlog, so this time I will follow one rule: only two items in the backlog. With that, the next project I want to pick up is Practical Deep Learning. I will keep the final slot open for now, and only fill it when I am committed to a project.
The economics of non-competes makes a good point that non-competes are not always bad, and is a good reminder that a change in the interconnections of a complex system can lead to unexpected results.
In the absence of non-compete agreements, firms would be more likely to “silo” information — becoming less efficient and less able to pay higher wages.
How does it know I want csv? — An HTTP trick is a reminder of the Accept header, which is not new to me. But I did learn about Parquet and JSON Lines from the post.
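The Accept-header trick boils down to content negotiation: the client states the formats it can consume, and the server picks one it can produce. Here is a minimal sketch in Go of that selection step (the `supported` list and the CSV/JSON choice are my own illustration, not from the post, and a real server should also honor q-values and wildcards per RFC 9110):

```go
package main

import (
	"fmt"
	"strings"
)

// supported lists the media types our hypothetical endpoint can
// produce, in order of preference.
var supported = []string{"text/csv", "application/json"}

// pickContentType scans the Accept header for the first media type we
// support, falling back to JSON. This ignores q-values and wildcards
// for brevity.
func pickContentType(accept string) string {
	for _, part := range strings.Split(accept, ",") {
		// Strip any parameters like ";q=0.9" before comparing.
		mediaType := strings.TrimSpace(strings.SplitN(part, ";", 2)[0])
		for _, s := range supported {
			if mediaType == s {
				return s
			}
		}
	}
	return "application/json" // default when nothing matches
}

func main() {
	fmt.Println(pickContentType("text/csv"))                    // text/csv
	fmt.Println(pickContentType("text/html, application/json")) // application/json
	fmt.Println(pickContentType("*/*"))                         // application/json (wildcard ignored in this sketch)
}
```

With this in place, a handler would set `Content-Type` accordingly and serialize the same rows as CSV or JSON.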
AI Homework perfectly describes how I normally use Copilot and ChatGPT in my daily work. It seems to me that there is plenty of room for improvement in the current AI-assisted workflow.
What’s common to all of these visions is something we call the “sandwich” workflow. This is a three-step process. First, a human has a creative impulse, and gives the AI a prompt. The AI then generates a menu of options. The human then chooses an option, edits it, and adds any touches they like.
In other words, the role of the human in terms of AI is not to be the interrogator, but rather the editor.
From Planet Money: Big Rigged (Classic) I learned that the trucking industry is very low margin, which is different from decades ago when it was one of the best blue-collar jobs. That leaves me wondering whether self-driving trucks will be a good thing for the industry - in theory they might free up the drivers to do other, higher-margin jobs and let the robots do the low-margin work. But at what cost would we be able to achieve that?
Planet Money SUMMER SCHOOL 1: Recessions & Rap Battles. Oh man, so good.
Small teams reminds us that adding more people is not the only way to build a successful business.
Deep work. Essentialism in asynchronous culture offers some useful techniques for doing deep work. I like the essentialism tip a lot.
It is the ability to say ‘no’ when we know that something is not worth our time, or we simply do not have enough time to do it in the right way, even if it is a tempting opportunity. “If something is not definitely ‘yes’, it is definitely ‘no’”.
The Myth of the Myth of the 10x Programmer is a good answer to the 10x programmer puzzle. Instead of raw speed, the productive developers are those who can navigate through ill-defined problems and find the optimal solution. This also gives me an answer on how to compete with the youngsters: I might not be as fast at pumping out code (I am not slow though), and I might not need to work crazy hours. My advantage is that I have more experience and I can navigate through the unknowns.
The most productive developers are solving big problems, and are bringing judgment and experience to bear on essential design, architecture, and “build vs use existing library” decisions.
Ways to stand out doesn't seem to apply to me, but I like the idea of getting (more) involved in the open-source community to learn new things.
How to Version an API: I like the list of considerations. And as someone who runs the API platform at work, I can say you should definitely consider these questions carefully. It's VERY HARD to change anything once your API is out there and used by clients!
- Will clients need to upgrade?
- Will changes be backward compatible? Will v2 endpoints accept v1 requests?
- Will the entire API be versioned or specific routes?
- What happens when clients send a v2 request to a v1 endpoint? Vice versa?
- Semantic versioning? Deprecation policy?
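The "will v2 endpoints accept v1 requests?" question often comes down to whether you can write an adapter from the old payload to the new one. A sketch in Go, with entirely hypothetical payload shapes (suppose v1 had a single `name` field and v2 split it into `first`/`last`):

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical request shapes, made up for illustration.
type UserV1 struct{ Name string }
type UserV2 struct{ First, Last string }

// upgradeV1 adapts a v1 request body into the v2 shape, so a single v2
// handler can keep serving v1 clients instead of breaking them.
func upgradeV1(u UserV1) UserV2 {
	first, last := u.Name, ""
	if i := strings.IndexByte(u.Name, ' '); i >= 0 {
		first, last = u.Name[:i], u.Name[i+1:]
	}
	return UserV2{First: first, Last: last}
}

func main() {
	fmt.Println(upgradeV1(UserV1{Name: "Ada Lovelace"})) // {Ada Lovelace}
}
```

When such a lossless translation exists, the change can be backward compatible; when it doesn't, that is usually the signal that a whole new versioned route is needed.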
Choosing a Postgres Primary Key summarizes the pros and cons of different primary key strategies, but I don't find it insightful, nor does it solve any design challenge I have faced. I did find a comment on Hacker News that is more helpful.
- auto increment id: leaks information about the system as a whole. Users can attack each other. Might be suitable for an internal-only application or an application that doesn't care about leaking this information and goes to great effort to be resilient to users attacking each other.
- timestamp + random id: leaks information about the time the individual record was created. An attacker can attempt to learn sensitive information about an individual. Suitable for a record that is already publicly shared with its time (e.g. a tweet). Might be suitable otherwise if ids are not public. That is, only the record creator can view the id and you don't send out links with the ids to the user (particularly over insecure channels such as email).
- random id: does not leak information. suitable for any use case that is okay with the performance implications (of a non-sortable fragmented index).