It's all about science, Week 4
As that famous saying goes: “Yeah, science!”
Complex ideas, creativity, science, testing and development were the main takeaways from this week. To start, we continued diving into the world of quantum computing, focusing on one hand on its more technical aspects (the power of qubits in reducing complexity, the way difficult problems can be tackled through linear algebra and vectors), and on the other on its more metaphysical side (how complexity has evolved, from the Big Bang, to sexual variation, to the written word and speech, to modern computers). I still struggle a lot with these topics, but I believe that it is through difficulty and discomfort that we better ourselves, so that each time we return to a topic that once made us stumble, we’re a bit more prepared. It’s also great food for thought.
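To make the linear-algebra part a little more concrete for myself (this is my own recap, not something lifted from the lectures): a single qubit is just a unit vector in a two-dimensional complex space, and n qubits together live in a 2^n-dimensional space, which is roughly where that “reduction in complexity” for certain problems comes from.

```latex
% A qubit as a unit vector, and the dimension available to n qubits
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1,
  \qquad \dim\bigl( (\mathbb{C}^{2})^{\otimes n} \bigr) = 2^{n}
\]
```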
Next up, one of the two topics that consumed most of my time this week: pretotyping (not prototyping, pretotyping). Related to the idea of a prototype, a pretotype is an even simpler concept: before you invest any of your time and effort in building something, verify whether the idea has any traction at all. The idea is simple but brilliant, as it embraces the fact that most ideas never succeed and expedites the process of weeding out bad ones before we get too invested in them (which also keeps us from falling into the sunk-cost fallacy). After applying some of the principles discussed here, I’ll make an active effort to keep using this technique in my future endeavors.
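As a toy example of what a pretotype could look like in practice (my own hypothetical sketch, not something from the lectures): a “fake door” page that pretends the product already exists and simply counts how many visitors express interest. The click count is the only thing you build before deciding whether the idea deserves real effort.

```js
// Hypothetical "fake door" pretotype: a one-page site that pretends the
// product exists and counts how many visitors click "Sign me up".
// Nothing real gets built; the counter is the signal of interest.
const http = require('http');

let interestedVisitors = 0; // the only "metric" the pretotype cares about

const server = http.createServer((req, res) => {
  if (req.url === '/signup') {
    interestedVisitors += 1; // someone clicked: that is the signal
    console.log(`Interested visitors so far: ${interestedVisitors}`);
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Thanks! We will let you know when we launch.');
  } else {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<h1>MyNewIdea</h1><p>Coming soon.</p><a href="/signup">Sign me up</a>');
  }
});

server.listen(3000, () => console.log('Pretotype running at http://localhost:3000'));
```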
Within the science portion of the week, we also had a lecture that I would rather place in the human-development department of our education. It was about the black box: the idea that, much like airplanes, we should keep a black box of our own in which we record everything that leads to our failures. The point of keeping such a record isn’t to scold ourselves or seek punishment, but to understand what went wrong, where and why, and to use that knowledge in our favor so we don’t fail in the same way again. This way of looking at failure is tied to a growth mindset, where we recognize that we are capable of improving and changing through our own effort, rather than a fixed mindset, where we simply live with the cards we’ve been dealt.
To close the science section of lectures, we watched several videos examining the life of renowned scientist Richard Feynman, an extroverted, eccentric and brilliant physicist who made great contributions to a variety of fields during the last century. Rather than going over every single contribution Feynman made during his career, the main thread connecting the videos seemed to be an examination of his character, of what made him remarkable. He was a tenacious and curious scientist, but he didn’t exactly fit the picture of the reclusive academic in a lab. He spoke his mind, was incredibly creative and articulate in the way he presented his ideas, and even in his final days he remained in good spirits and inquisitive. Yes, his contributions to modern science and technology cannot be overstated, but the essence of his character is as impressive as the work itself.
Moving on, the next portion of lectures covered the second topic that took up most of my time this past week: testing. Testing seems quite simple, quite logical, quite straightforward, but as with most simple things in life, it gets complicated once you get down to it. On one hand, we watched several videos in which Google software engineers shared how they test, develop and integrate their code within the massive network of code and integrations that makes up Google. These videos contained rather useful insights into how the company manages and facilitates code deployment across its enormous infrastructure, as well as some of its limitations (flaky tests are a pain, and so are Apple’s restrictions on developers).
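To see for myself why flaky tests are such a pain, here is a small hypothetical Jest test of my own (fetchUser and the 100 ms budget are invented for illustration): it asserts on wall-clock timing, so it passes or fails depending on how loaded the machine happens to be, which is exactly the kind of test that erodes trust in a CI pipeline.

```js
// A deliberately flaky test: the assertion depends on real timing,
// so the same code sometimes passes and sometimes fails.
function fetchUser() {
  // Simulated network call with variable latency (50-150 ms)
  const latency = 50 + Math.random() * 100;
  return new Promise((resolve) => setTimeout(() => resolve({ id: 1 }), latency));
}

test('fetchUser responds within 100 ms', async () => {
  const start = Date.now();
  await fetchUser();
  // Sometimes latency is under 100 ms, sometimes it isn't: classic flakiness.
  expect(Date.now() - start).toBeLessThan(100);
});
```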
Another interesting topic regarding testing comes from Netflix and their Simian Army, a prime example of what is referred to as chaos engineering. The idea is to deliberately introduce failures into our system and see how it reacts. By doing so, we can see what broke and what survived, and be better prepared when the real failures come. It sounds insane in theory, but once you accept that, no matter how prepared we are, system failures will arise sooner or later, you might as well simulate them in environments and at hours when you know the negative impact will be minimized, so that when a failure happens without warning, either the system is already prepared or you can respond more readily to whatever problem arises.
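A tiny sketch of the idea in JavaScript (the function names and the fallback list are invented for illustration; the real Simian Army works at the infrastructure level, not inside application code): wrap a dependency so that a fraction of calls fail on purpose, and check that the caller degrades gracefully instead of crashing.

```js
// Toy chaos engineering: inject failures into a dependency on purpose
// to verify that the caller's fallback logic actually works.
function withChaos(fn, failureRate = 0.2) {
  return async (...args) => {
    if (Math.random() < failureRate) {
      throw new Error('Injected failure (chaos test)');
    }
    return fn(...args);
  };
}

// Hypothetical dependency that could fail in production
async function getRecommendations(userId) {
  return [`movie-for-${userId}-1`, `movie-for-${userId}-2`];
}

const chaoticRecommendations = withChaos(getRecommendations);

async function homePage(userId) {
  try {
    return await chaoticRecommendations(userId);
  } catch (err) {
    // Graceful degradation: fall back to a generic list instead of crashing.
    return ['popular-movie-1', 'popular-movie-2'];
  }
}

homePage(42).then((list) => console.log(list));
```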
Over the last couple of days I’ve been delving into testing with a slightly more hands-on approach, researching a few testing frameworks for JavaScript: Mocha and Chai, Jasmine and Karma, and Jest. I’ve never implemented testing in a real project, but I’m actually kind of excited to do so in an upcoming one.
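To get my feet wet, this is roughly the kind of first test I have in mind with Jest (the sum function is just a toy example of my own, not from any of the lectures):

```js
// sum.js -- the module under test (deliberately trivial)
function sum(a, b) {
  return a + b;
}
module.exports = sum;
```

```js
// sum.test.js -- Jest picks up files ending in .test.js by default
const sum = require('./sum');

test('adds 2 + 3 to equal 5', () => {
  expect(sum(2, 3)).toBe(5);
});

test('handles negative numbers', () => {
  expect(sum(-1, -2)).toBe(-3);
});
```

With jest installed as a dev dependency, running `npx jest` should be enough to execute both tests.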