Learning every week – 10-Jan-2020

It’s the second week of the new decade, and the second time I’m writing this type of post this decade. There’s a nice ring to that, isn’t there? That can only be a good thing: any learning is good, and always a sign of growth.

I am also very happy to be sticking to my habit of writing this every Friday, or by Saturday at the latest. I write this for myself, but over the last couple of weeks many people have told me that they read it every week. I am deeply gratified and flattered that you find value in this. Thank you! Let’s jump into this week’s learnings.


Devilbox

As a programmer, I come across more developer tools than I care to count. Last week, I found Devilbox, a Docker-based development environment for PHP applications. It supports a massive list of related technologies (think databases, message queues, caching services, admin tools, and more). I like that it is simple: you just edit a .env file to configure your stack and run docker-compose.

This, of course, can come back to bite you, as you may not need everything in the stack and the configuration itself doesn’t let you disable services. There is a way to specify which containers to start, but it is clumsy enough that you will probably want a shell script or a Makefile to wrap it. I am also uncertain how to keep it cleanly in my project repository; the whole package adds about 100 MiB. I still have to try to trim it down and actually use it somewhere. My plan is to use it for DruStats, as I am not really happy with Vagrant there.
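To make that workflow concrete, here is a minimal sketch based on Devilbox’s documentation; the specific versions are illustrative, not a recommendation:

```shell
# .env — pick one version per service
PHP_SERVER=7.4
HTTPD_SERVER=nginx-stable
MYSQL_SERVER=mariadb-10.4

# Start only the containers you actually need, instead of the full stack
docker-compose up -d httpd php mysql

# Stop and remove container state when done
docker-compose stop
docker-compose rm -f
```

The "start only some services" part is exactly what I would wrap in a Makefile so I don’t have to remember the service names.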

Faster Docker for Mac

While looking into Devilbox, I found a reference to using NFS with Docker for Mac and quickly got lost down a rabbit-hole of posts about how to do that. At one end of the rabbit-hole, I found documentation on caching options for Docker for Mac. In a nutshell, there are different mechanisms by which Docker maintains consistency between files in a container and on the host. Like everything, each of these options is a tradeoff; the performance options are documented at https://docs.docker.com/docker-for-mac/osxfs-caching/. I am not sure at this point which consistency mechanism would give the best performance without hurting the development experience. The discussion started from this Medium post (also, for macOS Catalina).
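For reference, those consistency options are set per bind mount. A minimal docker-compose sketch (the service names and paths are my own, not from any particular project):

```yaml
# docker-compose.yml
services:
  php:
    image: php:7.4-fpm
    volumes:
      # cached: the host's view is authoritative; reads inside the
      # container may briefly lag host writes. Good for source code.
      - .:/var/www/html:cached
      # delegated: the container's view is authoritative; writes may
      # take a moment to appear on the host. Good for build artefacts.
      - ./vendor:/var/www/html/vendor:delegated
```

The default, `consistent`, guarantees both views are always identical, which is also the slowest option on macOS.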

The Value of Being Uncomfortable

This is from one of my favourite podcasts: Coaching For Leaders. In this episode, they discuss how being uncomfortable is necessary for growth. They share their own experiences where they describe their early struggles and how that shaped their thinking today. Some of the snippets (all paraphrased):

“I thought it was a PowerPoint job and it turned out to be an Excel job.”

“learning to be comfortable with being uncomfortable.”

“All our business courses and case studies show people who are most successful. But they’re the people who had most failures. In fact, we’re seeing the most resilient person.”

They also discussed the spotlight effect, where we feel that everything is about us because we are the centre of our own worlds. This makes us feel that the spotlight is always on us, even though that’s not the case. We tell ourselves a story that makes it many times worse than it actually is.

What are you thinking?

This is from another insightful podcast. In this episode, they discuss the ways our thoughts affect our everyday lives. The discussion is around four cognitive distortions: catastrophic thinking, using “should”, all-or-nothing mentality, and mental filters. One of the things that struck me was the power of the word “should” and how it can imply negativity. The mantra is: we shouldn’t “should”. Using “should” in an internal dialogue creates negative thoughts. It is rarely useful; instead, we should tether our thought process to efforts rather than outcomes.


Building chatbots

I took another look at Botman and Bot Framework for building chatbots. The use case right now is an internal project at work where I am interested in building one, but I also want to expand into ML and NLP on other projects I have in mind. A simple project to start with would give me enough hands-on experience to integrate NLP libraries down the line. I know Bot Framework would let me use Azure’s AI services, but I am not sold on that just yet; that is something I will reconsider when the time comes. For now, it will be Botman along with Drupal.


The value of documentation

This is from a podcast on Drupal. In this episode, they discuss the various types of documentation in an organisation and on a project. This particularly appealed to me as it is something I am actively focused on at work at the moment. I have my own ideas, but it was very useful to listen to theirs as well, which helped me structure my thoughts better. If you are on the fence about the value of documentation, do listen to this and see why you need it and how to go about it.

In researching around this, I also looked at what GitLab offers that is similar to GitHub’s issue templates (we use GitLab at work) and found the relevant documentation. It is not exactly the same, and it is better in some ways and worse in others, but it’s definitely a start.

A year of completed posts without misses

Okay, that title wasn’t really descriptive. Let’s fix that: Assumptive Goal Setting. This is from an episode of the Manager Tools podcast where they discuss the importance of goals and how to set goals that motivate completion rather than intimidate. In my opinion, the theme centres around this:

You don’t predict the future. You invent it.

Assumptive goals are phrased as things you have already done, not things you are planning to do. For example, rather than saying “We will increase the test coverage to 80%”, say “We have increased the test coverage to 80%.”

When setting assumptive goals, don’t worry about how to do it. Don’t put the “how” at the centre of your attention; essentially, separate the goal from the action. Far from limiting creativity, this helps you visualise strategies to fulfil the goal because, in your mind, you have already completed it. It is no longer a fantastical possibility but a fact of real life, something that has already happened. They also discuss other visualisation tricks, which sound really interesting: picture completing the goals, picture achieving the objectives, picture getting the raise, picture the look on your spouse’s face. Picture all of that, and build a firm picture of the future you want.

For some reason, the way the brain works, imagining the future is much harder than reliving the past. So, assume that you have already completed the goals, then ask: what did we do to get here?

Searching for a better search

In an internal webinar at work this week, we discussed Algolia and how to integrate it with Drupal. It covered a project where Algolia indexes product data from Drupal, and the front-end, built with React, talks to Algolia directly to show results rapidly. The webinar was moderately detailed, but the really impressive part was the search speed. It is encouraging enough that I will look at Algolia integration for future projects.
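As a rough sketch of what the indexing side looks like from PHP, using the official v2 PHP client; the app ID, API key, and index name are placeholders, and the product data here is invented:

```php
<?php
require 'vendor/autoload.php';

use Algolia\AlgoliaSearch\SearchClient;

// Credentials come from the Algolia dashboard; these are placeholders.
$client = SearchClient::create('YourAppID', 'YourAdminAPIKey');
$index  = $client->initIndex('products');

// Push records (e.g. built from Drupal product nodes) into the index.
$index->saveObjects(
    [
        ['name' => 'Espresso machine', 'price' => 199],
        ['name' => 'Coffee grinder',   'price' => 49],
    ],
    ['autoGenerateObjectIDIfNotExist' => true]
);

// The React front-end would query Algolia directly with a search-only
// key, but the same query works server-side too:
$results = $index->search('coffee');
```

The speed the webinar showed off comes from the front-end querying Algolia’s servers directly, skipping Drupal entirely on the read path.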

PHP 7.4

At a PHP meetup in Toronto yesterday, we looked at the new features in PHP 7.4. I knew some of them and learnt about others: numeric separators in literals (writing 100_000_000, which is the same as 100000000 but more readable), weak references (a niche use-case mostly for library authors, but one that helps everybody), arrow functions, and unpacking arrays inside arrays (no more array_merge, maybe 🙂). It was fun to meet new people, and the discussions were enlightening. I am looking forward to the next meetup there.
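A quick sketch of the features mentioned above, in PHP 7.4 syntax (the variable names are mine):

```php
<?php
// Numeric separator: underscores group digits; the value is unchanged.
$budget = 100_000_000;           // same as 100000000

// Arrow function: single-expression closure with implicit
// by-value capture of outer variables (no "use" clause needed).
$multiplier = 3;
$triple = fn($n) => $n * $multiplier;
echo $triple(7);                 // 21

// Spread operator inside array literals: no array_merge needed.
$first    = [1, 2, 3];
$second   = [4, 5, 6];
$combined = [...$first, ...$second];   // [1, 2, 3, 4, 5, 6]

// Weak reference: does not keep the object alive.
$obj = new stdClass();
$ref = WeakReference::create($obj);
var_dump($ref->get() === $obj);  // bool(true)
unset($obj);                     // object is collected
var_dump($ref->get());           // NULL
```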


Hugging Face and the cost of AI

In a Practical AI episode (essentially a round-up of the year’s notable AI news), I found out about the library of transformer models for NLP from Hugging Face. As I mentioned in the section on bots above, this is something I am interested in, and I am very excited to get started with either this or spaCy to do more of it. One of these days…

This also reminded me of an earlier episode where they discussed the environmental impact of deep learning. There was one example where they found that training a large model (was it BERT?) resulted in carbon emissions equal to those of five cars over their entire lifetimes. That’s a jaw-dropping amount of carbon, and it underscores why it is important to think about efficiency when training these models.


Pika

Last thing for this week: I just learnt about Pika from a Syntax episode. The episode covered it in more detail, but if I understood correctly, it is a modern package manager designed from the ground up around current web standards, namely native JavaScript modules. It addresses the complexity of bundlers and of Node’s module system by replacing them with what browsers support directly. Listen to the episode for more details, because I clearly didn’t do a good job of describing it; I am trying to understand it better myself.

That’s a wrap. If you read this far, thank you!




