The "Skillset"
March 2023
I've been writing code at CoinMetrics for about 8 months - feels like a good time to reflect a bit on what I've learned.
Of course there's technical knowledge - that events are fired at the time of render, not the time of DOM update. That browsers start painting pixels on the screen sooner than I sometimes want. Object.keys() vs Object.getOwnPropertyNames(). GitLab CI/CD configuration. The list goes on.
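(For what it's worth, the distinction there, as I understand it: Object.keys() returns only an object's own enumerable string keys, while Object.getOwnPropertyNames() also includes the non-enumerable ones. A quick sketch:)

```javascript
const obj = { visible: 1 }

// defineProperty creates a non-enumerable property by default
Object.defineProperty(obj, 'hidden', { value: 2 })

const keys = Object.keys(obj)                     // enumerable own keys only
const allNames = Object.getOwnPropertyNames(obj)  // includes non-enumerable keys

console.log(keys)     // ['visible']
console.log(allNames) // ['visible', 'hidden']
```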
But those resume-worthy knowledge-bites have been far less profound (from my perspective anyway) than the skill of navigating complex code. When I started I remember saying I felt like I was dropped into a very big maze with a very small headlamp. At this point, I don't necessarily feel like the headlamp is that much bigger, but I do feel more adept at navigating the maze.
A lot of this involves increased comfort with my text editor. I use Vim's code folding a lot. This helps me go from 10,000ft view of a file to individual lines of code, and back again. It helps mitigate the feeling of being "lost" in a large file.
Vim also helps me be more comfortable being "lost." I use marks to keep track of places I know I'm going to want to come back to in a few minutes. In the past I was very concerned about where I was in a file, whether I needed to scroll up or down to find some function. Now I'm more often thinking about "what" I need rather than "where" it is, and using search, the quickfix list, or marks to jump right there.
I'm learning - one painful Google search at a time - how to get RegEx to navigate the text like it's shapes of characters rather than written prose. It's just one of the ways I'm learning to tell my text editor to keep track of things so I don't have to.
As I continue to code, I find myself leaving little breadcrumbs for myself. I'll append console.log's with "----" so I can find log statements that are specific to development. I'm sure that practice will evolve, but it's cool to see how habits develop the more code I write.
Techniques and text-editor tricks aside, I can just keep more plates spinning now. Like I can feel that my brain is more able, less to fully comprehend complexity, more to navigate complexity. I can't remember everything, but I feel much more comfortable following crumbs around and around.
I also feel more comfortable reading code. In the past it was quite painstaking to translate code into English. In a way it feels not entirely dissimilar from learning to read music. More and more, the JavaScript itself is the best description of "what's going on?" This applies to documentation too - the ability to read a sentence, realize I understood none of it, zoom in, then back out, until I have the answer I need.
In the past, all of my projects were pretty much solo endeavors. They weren't that complex and mostly I was starting from scratch and adding shit. Sometimes at my job I have to do this - write a UI component from scratch - but most of the time I'm messing around in a massively complex set of pre-existing functionality that I did not write and mostly do not understand. Tighten a screw over here. Connect a hose to a nozzle over there. Open this latch, and replace this lever, without touching the wall behind you, all in the dark. I really underestimated how challenging this can be, no matter how well I know the features of JavaScript.
And let's not forget the emotional skills. Writing code is fucking frustrating and intellectually overwhelming. I have a bachelor's degree in abstract Mathematics, but that coursework could not replicate the challenge of keeping track of all the screws I've inadvertently loosened, all the plates I've dropped on the floor, all the minutiae, all the open tabs. It is absolutely rewarding when something works. And, it's hard as fuck, it can be tedious as fuck. I'm getting much better at staying sane, and a little better at being relaxed.
Writing code feels more like "bungee wilding". I'm getting more comfortable not having my feet on the ground.
Exploration of some async-ness
March 2023
I am rewriting the API for a toy Airbnb clone from Rails to Node.js. While working on seeding the database, I ended up having a good 1-on-1 with ChatGPT, and wanted to discuss some of the takeaways.
The situation: each seed user has some number of properties that they "manage." The code looks like this:
console.log('seeding user: ', usr.username)
const userId = await pgClient.query(`
  INSERT INTO users (id, username, email, password_hash, password_salt)
  VALUES (DEFAULT, $1, $2, $3, $4) RETURNING id
`, [
  usr.username,
  usr.email,
  ...hashAndSaltPassword(usr.password),
])
usr.properties.forEach(async prop => {
  const propId = await pgClient.query(`
    INSERT INTO properties (id, title, description, ...)
    VALUES (DEFAULT, $1, $2, ...) RETURNING id
  `, [
    prop.title,
    prop.description,
    ...
  ])
  // stuff with the propId
})
Despite it being the 2nd sentence of the docs, I did not realize that await can only be used inside an async function or at the top level of a module. The quick solution is to mark the forEach callback as async, but before I realized this I decided to refactor the function using promises:
console.log('seeding user: ', usr.username)
const userId = await pgClient.query(`
  INSERT INTO users (id, username, email, password_hash, password_salt)
  VALUES (DEFAULT, $1, $2, $3, $4) RETURNING id
`, [
  usr.username,
  usr.email,
  ...hashAndSaltPassword(usr.password),
])
usr.properties.forEach(prop => {
  pgClient.query(`
    INSERT INTO properties (id, title, description, ...)
    VALUES (DEFAULT, $1, $2, ...) RETURNING id
  `, [
    prop.title,
    prop.description,
    ...
  ]).then(propId => {
    // stuff with the propId
  })
})
I don't think there is anything wrong with this approach, but I began to wonder about subsequent iterations of the forEach loop - specifically in the context of executing concurrent INSERT queries on the same table in a database.
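(As an aside: if the goal were simply to kick off all the property inserts concurrently and then know when every one of them has finished, the usual tool is Promise.all over a map. Here's a sketch - fakeQuery and seedProperties are my own names, with pgClient.query stubbed out since there's no real database here:)

```javascript
// Sketch: run all property INSERTs concurrently, then wait for every one to settle.
// `fakeQuery` is a stand-in for pgClient.query - it just resolves with a fake id.
let nextId = 1
const fakeQuery = async (sql, params) => nextId++

const seedProperties = async properties => {
  const propIds = await Promise.all(
    properties.map(prop => fakeQuery('INSERT INTO properties ...', [prop.title]))
  )
  // by this point every insert has resolved (or Promise.all would have rejected)
  return propIds
}

const seeded = seedProperties([{ title: 'cabin' }, { title: 'loft' }])
```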
ChatGPT told me that await prevents the JS engine from even starting the subsequent iteration of the loop until the remainder of the loop's body code is executed. I don't think that's correct, but we'll come back to that. At this point, I was wondering if I could prevent a forEach loop's subsequent iteration without await. ChatGPT gave me 2 options:
One option, termed the "continuation-passing style," favored recursion over loops:
function fetchUrls(urls, callback) {
  let index = 0;
  function next() {
    if (index >= urls.length) {
      callback();
      return;
    }
    const url = urls[index++];
    fetch(url)
      .then(response => response.json())
      .then(data => {
        next();
      });
  }
  next();
}
Another option was to force the loop's body to be synchronous, such as xhr.open("GET", urls[i], false). I asked: "what is the difference between blocking execution of a for loop via const result = await fetch(...) and blocking execution of a for loop with a synchronous xmlhttp request xhr.open(..., false)?" - and the response was:
The main difference between blocking execution of a for loop via await fetch and blocking execution with a synchronous XHR request (xhr.open(..., false)) is that await fetch is non-blocking and does not block the main thread, while a synchronous XHR request is blocking and can cause the main thread to freeze.
When you use await fetch, the JavaScript engine can continue executing other code on the main thread while the fetch request is in progress. Once the response is available, the await keyword causes the JavaScript engine to pause execution of the current function until the fetch request completes and the promise resolves.
In contrast, a synchronous XHR request blocks the main thread until the response is received. This means that the browser cannot respond to user input or execute any other JavaScript code until the XHR request is complete. This can cause the user interface to become unresponsive and negatively impact the user experience.
Therefore, it's generally better to use asynchronous requests with callbacks or promises, or async/await if it's available, instead of synchronous XHR requests.
This is in a sense obvious - blocking the main thread vs blocking the execution of some subsequent lines of code in a particular function in a particular file - but I found that explanation insightful. Remember: I am trying to block the execution of the loop (and am still under the impression that await will do this for me). In any event - duh - one way to block execution of the next loop iteration is to literally block the entire engine.
Aside: one additional avenue provided by ChatGPT involved "tying" the next iteration's body to the resolution of the previous iteration's:
function runLoop(numIterations) {
  let promise = Promise.resolve();
  for (let i = 0; i < numIterations; i++) {
    promise = promise.then(() => {
      return myCustomFunction();
    });
  }
  return promise;
}
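To convince myself of the ordering, here's a sketch where myCustomFunction (a stand-in of my own - it just resolves on a short timeout) records when each call finishes:

```javascript
const order = []
let counter = 0

// Stand-in for myCustomFunction: resolves after 10ms and records its finish order
const myCustomFunction = () => new Promise(resolve => {
  const id = ++counter // ids are assigned when the call STARTS
  setTimeout(() => {
    order.push(id) // pushed when the call FINISHES
    resolve(id)
  }, 10)
})

function runLoop(numIterations) {
  let promise = Promise.resolve()
  for (let i = 0; i < numIterations; i++) {
    promise = promise.then(() => myCustomFunction())
  }
  return promise
}

const chained = runLoop(3).then(() => order) // resolves to [1, 2, 3] - strictly sequential
```

Note that the for loop itself finishes immediately - all three .then callbacks are registered up front - but each callback runs only after the previous promise resolves.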
Still though - this does not prevent the next iteration from starting; it just forces each iteration's functionality to wait for the resolution of the previous iteration's. So, still not able to block iteration. Time for an experiment:
[1, 2, 3, 4, 5].forEach(async el => {
  console.log('hello from the next iteration of the forEach loop, el: ', el)
  const res = await new Promise(resolve => {
    console.log('making a new promise!')
    return setTimeout(() => {
      console.log('timeout is finished')
      resolve(el)
    }, 1000)
  })
  console.log('done "await"ing, res: ', res)
})
... resulting in:
hello from the next iteration of the forEach loop, el: 1
making a new promise!
hello from the next iteration of the forEach loop, el: 2
making a new promise!
hello from the next iteration of the forEach loop, el: 3
making a new promise!
hello from the next iteration of the forEach loop, el: 4
making a new promise!
hello from the next iteration of the forEach loop, el: 5
making a new promise!
timeout is finished
done "await"ing, res: 1
timeout is finished
done "await"ing, res: 2
timeout is finished
done "await"ing, res: 3
timeout is finished
done "await"ing, res: 4
timeout is finished
done "await"ing, res: 5
Foiled again!! ChatGPT then told me to use a for...of loop:
async function example() {
  for (const el of [1, 2, 3, 4, 5]) {
    console.log('hello from the next iteration of the for loop, el: ', el);
    const res = await new Promise(resolve => {
      console.log('making a new promise!')
      return setTimeout(() => {
        console.log('timeout is finished')
        resolve(el)
      }, 1000)
    })
    console.log('done "await"ing, res: ', res) // res should === el
  }
}
// RESULTS:
hello from the next iteration of the for loop, el: 1
making a new promise!
timeout is finished
done "await"ing, res: 1
hello from the next iteration of the for loop, el: 2
making a new promise!
timeout is finished
done "await"ing, res: 2
hello from the next iteration of the for loop, el: 3
making a new promise!
timeout is finished
done "await"ing, res: 3
hello from the next iteration of the for loop, el: 4
making a new promise!
timeout is finished
done "await"ing, res: 4
hello from the next iteration of the for loop, el: 5
making a new promise!
timeout is finished
done "await"ing, res: 5
Apparently this works because for...of is built on the iterator protocol, where "moving on to the next iteration" relies on an explicit invocation of the iterator's next() method. Time for further reading.
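To see that next() pacing for myself, here's a sketch with a hand-rolled iterable that logs each next() call into a shared timeline (countingIterable and timeline are my own names). Because the async function suspends at the await, the next next() call can't happen until the loop body finishes:

```javascript
const timeline = []

// A hand-rolled iterable that records every invocation of next()
const countingIterable = {
  [Symbol.iterator]() {
    let i = 0
    return {
      next() {
        timeline.push(`next() #${i + 1}`)
        return i < 2 ? { value: ++i, done: false } : { value: undefined, done: true }
      }
    }
  }
}

async function demo() {
  for (const el of countingIterable) {
    timeline.push(`body start ${el}`)
    await new Promise(resolve => setTimeout(resolve, 10))
    timeline.push(`body end ${el}`)
  }
}

const finished = demo()
// timeline ends up:
// next() #1, body start 1, body end 1, next() #2, body start 2, body end 2, next() #3
```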
Is JavaScript pass-by-value or pass-by-reference?
April, 2023
This is a question I've asked and researched probably 3 times already. Maybe this time it will stick.
My understanding of the situation is this: that JavaScript is purely pass-by-value, with an odd caveat.
In the case of an object, say const obj = { prop1: 1, prop2: 2 }, the value assigned to the variable obj is not actually the object we instantiated - it is, under the hood, a reference to the memory location of the object. I think it would be as if we said struct MyObj *p = &someObj.
So when you pass obj to a function like foo(obj), we are passing "the actual value of the variable obj" - but it just so happens that this value, sneakily, is a reference.
So when we say obj['prop2'] = 'changed', we are really saying "change the value of prop2 of the object stored at the memory address stored in the obj variable."
The pass-by-value nature of JS is more apparent when passing a primitive, like const n = 5; foo(n). In this case, the value of the argument will be the value of n, which is simply 5.
I think it's worth pausing to point out that naming a function parameter the same as the variable passed to it can confuse the situation:
let x = 4
const foo = x => {
  x = 5
}
foo(x)
console.log(x) // 4 - the parameter x shadows the outer x
and that there's a difference between that example and:
let x = 4
const foo = () => {
  x = 5
}
foo()
console.log(x) // 5 - no parameter, so the outer x is reassigned
This clears up some confusion I had around the following example, modified from this SO post:
const foo = x => {
  x = { prop3: 3, prop4: 4 }
}

const x = { prop1: 1, prop2: 2 }
foo(x)
console.log(x) // { prop1: 1, prop2: 2 }
Here, we are passing by value, and that value is a reference to the object with prop1 and prop2. Reassigning x in the function body does NOT modify x outside the function. If instead we said x['prop3'] = 3, the JS engine would interpret that as an instruction to modify the prop3 value of the thing located where x points. In that case, it matters not whether the x inside the function is "the same" as the x outside the function - both variables have, as their value, the location of the same object, and modifying the properties of the object stored at that location will... well, it will modify that single object.
BUT, when we say, inside a function, x = {...}, we are saying "change the value of the variable x - which exists only within the context of the execution of this function, and which shadows the variable x from outside the function - to a reference to a new object." For example:
const foo = obj => {
  obj['prop3'] = 3
  const obj2 = Object.assign({}, obj)
  obj2['prop4'] = 4
  return obj2
}

let obj = { prop1: 1, prop2: 2 }
let objMod = foo(obj)
console.log(obj) // { prop1: 1, prop2: 2, prop3: 3 }
console.log(objMod) // { prop1: 1, prop2: 2, prop3: 3, prop4: 4 }
Now, FWIW, all of this exploration was inspired by curiosity about how best to modify an array parameter, as in const shuffledArr = shuffleArray(getSomeArray()). Here we don't have to worry so much about modifying the wrong array - there never is any variable outside the function. This means no need to make some copy like tempArray and modify that. But, for much the same reason, we do have to ensure shuffleArray() returns the modified array. This was a gotcha for me - another way would be:
const shuffleArray = array => {
  // stuff
  array[i] = array[j]
  // stuff
}

let array = getSomeArray()
shuffleArray(array)
In this example, the array declared outside the function is modified by the execution of the function.
Another illustration:
const modifyArray = array => {
  array.push('added')
}

let a1 = [1, 2, 3, 4, 5]
modifyArray(a1)
console.log(a1) // [1, 2, 3, 4, 5, 'added']
I tried this with like 100 modifyArray calls - for some reason I was under the impression that greatly increasing the size of the array would cause a reassignment of the function-scoped array.
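A sketch confirming that suspicion was wrong: no matter how much you push, the caller's variable still points at the same array object. The engine may reallocate the backing storage internally, but that's invisible at the JS level - the reference never changes:

```javascript
const modifyArray = array => {
  array.push('added')
}

const a1 = [1, 2, 3, 4, 5]
const sameObject = a1 // a second reference to the same array

for (let i = 0; i < 100; i++) modifyArray(a1)

console.log(a1.length) // 105
console.log(a1 === sameObject) // true - never reassigned, just mutated
```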
So then here's a question: we said the "value" of a variable that was assigned to an object is in fact a reference to that object. How can I see said reference?
This says it's not possible, as did ChatGPT:
Q: ok but how can i see the specific memory address stored as x's value
A: In JavaScript, there is no way to directly access the memory address of an object. This is because JavaScript uses a garbage collector to manage memory allocation, which abstracts away the underlying memory addresses from the developer.
This was also an interesting read.
What happens if you pass a string to server.listen()? - and other questions you probably never wanted to have to find the answers to in the first place.
May 9, 2023
This post is to summarize a rabbit hole that spanned 6 hours and these two StackOverflow posts.
The setup: I needed to spin up a node server. In a previous project, I had included the following line directly in the server code:
const PORT = process.env.PORT || "8080"
I have been moving toward putting environment variables in a .env file - remembering to add them all at the command line was becoming tedious. Without thinking too much, I threw an additional line in my .env to specify the port:
SOME_KEY=aaaaaaaaaaaaaaa
SOME_SECRET_KEY=bbbbbbbbbbbbbbb
SOME_OTHER_THING=ccccccccccccccccc
PORT="8080"
For context, here are some relevant parts of my server file. First, how I parse .env:
const dotEnv = fs.readFileSync(path.join(__dirname, '.env'), 'utf8')
dotEnv.split('\n').forEach(line => {
  const [ key, val ] = line.split('=')
  process.env[key] = val
})
And here is the server code itself:
const main = () => {
  log(`Starting api server`)
  const exit = signal => {
    server.close(() => process.exit())
    log('Server is closed.')
  }
  const server = http.createServer(handleRequest)
  server.listen(process.env.PORT, process.env.HOST, e => {
    if (e) {
      log(`server.listen() returned error: `, e, process.env.PORT)
      return
    }
    log(`Server is listening on ${process.env.HOST}:${process.env.PORT}`)
  })
  process.on('SIGING', exit)
  process.on('SIGTERM', exit)
  process.on('uncaughtException', (err, origin) => {
    log(`Process caught unhandled exception: ${err} ${origin}`, 'ERROR')
  })
}
I popped over to the command line to run the server:
$ node server.js
[ 2023-05-09T05:45:16.009Z | INFO ] Starting api server
[ 2023-05-09T05:45:16.012Z | INFO ] Server is listening on 127.0.0.1:"8080"
I thought this was a little odd... had I ever seen the port in quotes before? shrug
As one does, I ctrl-c'd, made some changes, and tried running the server again:
$ node server.js
[ 2023-05-09T05:45:16.009Z | INFO ] Starting api server
[ 2023-05-09T05:19:06.464Z | INFO ] Process caught unhandled exception:
Error: listen EADDRINUSE: address already in use "8080" uncaughtException ERROR
Huh... I've seen this type of thing before if I don't handle SIGINT properly - no big deal, quick google search, lsof -i :8080, should be good to go.
Except lsof didn't show anything. No. Matter. What. I tried a dozen different lsof invocations with different flags. I tried various netstat commands. fuser. ps, ps aux. I tried npx kill-port, and some utility called htop.
I restarted the computer.
Nothing was working. Node kept insisting something was using that port, yet I couldn't get anything to show me what was running on the port.
Thanks to this comment, I did a quick experiment - what would the server output be if I set process.env.PORT = "8080" directly in the server file itself?
$ node server.js
[ 2023-05-09T05:45:16.009Z | INFO ] Starting api server
[ 2023-05-09T05:45:16.012Z | INFO ] Server is listening on 127.0.0.1:8080
Well THAT'S interesting. So when I set process.env.PORT to a string (8080), then pass said string to server.listen(), everything works as expected. Running lsof -i :8080 shows the process.
Ok so at this point I'm kicking myself - duh, when I read the variables from .env, they are strings. BUT for PORT I read the string "8080" - with quotes embedded in the damn string.
At this point the fix is clear - don't put fucking quotes in the .env file. Like all the other variables in there that I failed to look at when adding PORT. Whatever.
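(Alternatively, if I wanted the parser to tolerate quoted values, a small tweak would strip one matching pair of wrapping quotes. A sketch - parseEnv is my own name, and real dotenv handles much more: comments, escapes, multiline values, etc.:)

```javascript
// Sketch: a .env parser that strips one pair of matching wrapping quotes
const parseEnv = text => {
  const env = {}
  for (const line of text.split('\n')) {
    const idx = line.indexOf('=')
    if (idx === -1) continue
    const key = line.slice(0, idx).trim()
    let val = line.slice(idx + 1).trim()
    const q = val[0]
    if ((q === '"' || q === "'") && val.length >= 2 && val[val.length - 1] === q) {
      val = val.slice(1, -1) // "8080" -> 8080, no embedded quote characters
    }
    env[key] = val
  }
  return env
}

const parsed = parseEnv('PORT="8080"\nHOST=127.0.0.1')
console.log(parsed.PORT) // 8080 - a plain string with no quote characters
```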
But now I'm scratching my head - so all this time I've been passing strings to server.listen()?? What happens when I pass it a string as the first param??
First things first: ask ChatGPT. Its answer was that server.listen() performs parseInt() on its inputs. Experiment time.
When setting the environment variable in the server file via process.env.PORT = "8080":
console.log(process.env.PORT, typeof process.env.PORT, parseInt(process.env.PORT))
// 8080 string 8080
When reading the erroneously quoted variable (PORT="8080") from .env:
console.log(process.env.PORT, typeof process.env.PORT, parseInt(process.env.PORT))
// "8080" string NaN
Hm ok interesting. So initially I'm thinking "when I pass a non-parsable string to server.listen, it should... not work." AND YET! It did "work". The server started listening, per the execution of my logging statements. Inspecting things further:
When setting in server file:
console.log(server.address(), server.listening)
// { address: '127.0.0.1', family: 'IPv4', port: 8080 }
// true
When reading from env file:
console.log(server.address(), server.listening)
// '"8080"'
// true
Dumbfounded at this point. I'm under the impression that when I pass the string '"8080"' (quotes included) to server.listen(), parseInt returns NaN, so there should be an error. I'm also under the impression that server.address() should be the object I see in the first case, not the string I see in the second.
What the fuck is port "8080" anyway???
This answer was very helpful - apparently Node.js has multiple signatures for server.listen(). Wow. I'm still not sure of the exact mechanisms, but I inadvertently instructed node to start an IPC server whose path is the string "8080", quotes included.
If I said server.listen('fake', ...), same situation - the server listens, with server.address() being a string 'fake'.
Woof.
So my next question was: ok how do I kill these damn things? I would start an IPC server with the string "8080", kill it, then try again, and get the EADDRINUSE error over and over again no matter what. Incrementing the port to "8081" again works only once, then EADDRINUSE.
ChatGPT suggested lsof -U, which lists UNIX domain socket files.
I saw one in there called 'fake' so I killed that process. But still couldn't restart the server on that "address" a 2nd time.
ChatGPT suggested running find to look for any files named fake, and lo and behold I found some - along with files named 8080, 8081, 3000, 3001, 3002. In my project directory. They were listed with a trailing =, so '"8080"'= and fake=. The permissions were new to me as well - the leading s marks a socket:
srwxr-xr-x 1 aaronparisi staff 0 May 8 21:45 fake=
Deleting these... erm... "entities" finally cleared up the EADDRINUSE error. I'll have to spend some time reading about IPC servers.
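In hindsight, a little defensive validation before calling server.listen() would have surfaced the problem immediately. A sketch (toPort is a helper name of my own invention):

```javascript
// Sketch: fail loudly on a garbled port value instead of silently starting an IPC server
const toPort = raw => {
  const port = Number(raw) // Number('"8080"') is NaN; Number('8080') is 8080
  if (!Number.isInteger(port) || port < 0 || port > 65535) {
    throw new Error(`PORT must be an integer between 0 and 65535, got: ${raw}`)
  }
  return port
}

console.log(toPort('8080')) // 8080

try {
  toPort('"8080"') // the quote-laden value from my .env
} catch (e) {
  console.log(e.message)
}
```

Then the server code would call server.listen(toPort(process.env.PORT), process.env.HOST, ...) and crash at startup with a readable message rather than creating a socket file named '"8080"'.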
The other issue was a fucking typo - I had SIGING instead of SIGINT. Lol.
Lessons learned: be careful with quotes in the .env file! Also that software development is hard for reasons I don't always expect.
Will this be on the test?
May 9, 2023
In this post, I discussed the technical details of a particularly aggravating debugging session. I also needed an emotional debrief.
For context, I had spent quite some time tinkering with node-pg and @aws-sdk/client-s3 to seed a PostgreSQL database with some fake data and dump some images into an S3 bucket. I really should just take some AWS courses. In any event, I was really excited to start thinking about API design - and to take a quick vacation from the land of "no idea what I'm doing."
People describe software development as a creative process - and I can unequivocally say that I haven't been this intellectually stimulated since college. It really is nice to have the opportunity to think creatively on a daily basis - I can see the results of my brain-work in the browser. Very satisfying.
But let's be clear - it's not a creative process like drawing is. Sure, drawing involves the development and honing of skills, and that takes time. But we never sit down to draw, only to spend 3 days learning about the manufacturing process of a fucking pencil. If the pencil doesn't work we sharpen it. If it still doesn't work we just get a new one.
I think on some level it's cool that there is so much to learn about computers, that every door opens up to a hallway of a thousand doors. Endless stuff to stimulate curiosity.
But let's be honest, sometimes that's fucking annoying. I didn't want to learn about IPC servers when I sat down to think about the api design for my toy project. It can be downright disheartening, after weeks of baby steps, to finally get to something I actually want to think about, something I'm anticipating is going to be relatively simple and familiar, only to stumble into a room I frankly didn't want to know about, that doesn't get me any closer to doing the thing I actually want to do - except, I suppose, that now I won't accidentally wander back into this room (which I still know next to nothing about).
In retrospect that was a major challenge for me - at some point, my enthusiasm to "learn shit that doesn't help me actually manifest my intention because for some random ass reason it's in my way right now" ran out and I began to perceive the challenges less as "learning opportunities" and more as simple obstacles, broken pencils that I unfortunately couldn't just discard. Sure, I learned something, and that in itself is valuable - anyone who knows me would tell you I am almost annoyingly driven to learn the minutiae and the details and the "why's". But... I really didn't want to learn that thing at that moment.
The skill: redirecting my creative brain juices from stuff I wanted to think about to stuff that I had no intention of learning about, and may not find interesting or valuable anyway - and doing this over and over and over, even when I'm not expecting to have to do it. I think it's important to be honest about how challenging this can be. Being consistently unable to express myself as I intend can be tiring, especially when I thought there were, finally, a few hours of smooth sailing ahead of me.
This made me think back to being a math teacher. It's the classic question - "Do I need to know this? Will this be on the test?" I would somewhat wryly allude to the value in the intellectual exercise, even if the specific knowledge is not particularly interesting or immediately useful. In my case, it's definitely not on the test, but not knowing it almost made me miss the test entirely because I spent 6 hours trying to figure out why the door to the custodial closet wasn't opening.
In any event, I think this is all part of the growing pains - not just increasing technical acumen, not just amassing knowledge, but also building up the endurance to keep chugging along after the thousandth surprise side-quest.
(meandering & disorganized) Reflections after 1 year of Vanilla JS
May 2023
I have been writing frontend code in vanilla JavaScript at CoinMetrics for about a year, and I want to spend some time reflecting on the approach.
I really enjoy using (verbose) vanilla JavaScript - I find syntactic sugar confusing and annoying a lot of the time. JSX tastes more salty than sweet to me - I don't find it helpful to have what is ultimately a function call look like quasi-html. Maybe at some point I'll get tired of typing the word document but it's fine for now.
I really enjoy the opportunity to learn lower level stuff. It's equal parts annoying and rewarding to deal with the intricacies of the cookie API. At this point in my career, I'm happy to learn anything.
I really enjoy using Node.js directly - I find it helpful to know what I'm actually instructing the computer to do. No I don't think that means "think about memory busses and microprocessors every time you write a helper function" but it does mean understanding how data moves through a filesystem or a network. Abstractions are cool, but often leave me scratching my head - app.get(...) - what did I just do? Having to manually parse URLs and write routing logic is informative - ohh, ok this is what we're doing here - makes sense. Ditching abstraction demystifies the process - it's just information being manipulated and passed around.
I appreciate, to some extent, the freedom to organize a UI as I please, although this is a double edged sword. I think mostly the educational benefit is to consider "how I would do this in React" vs how I am seeing it done in our code, and to, at the very least, think about it.
All that said, there are a few things I'm growing wary of. The first is our almost complete lack of dependencies. As much as I enjoy learning the minutiae of the cookie API... there may be knowledge that's, idk, more worth my time.
On that note, I'm starting to think it's impossible to "keep up" without at least dependencies, if not a full-on framework. I was tasked with creating a virtualized list à la react-virtualized. It works fine, but it was challenging to implement, and there's just no way I can compete with version 7 of an open source library with issue tracking and >2M weekly downloads.
One other reality is that, for better or worse, most people don't want to write web software without frameworks or dependencies. So it's hard to hire people. This is less a philosophical argument against vanilla, more a practical one. I get it - the number of people who do a thing doesn't necessarily imply its superiority. But sometimes I think it's tempting to take a contrarian point of view - that something's popularity implies its inferiority.
One of the strongest arguments I heard initially for vanilla: if you know what you're doing, you don't need a framework. I think there's 2 things to explore with this.
First is the feasibility of "knowing what you're doing" (aka knowing basically everything about web APIs, protocols, browser rendering engines, ...). I really, really do think there is tremendous value in deepening our knowledge, understanding more about what is actually happening, the where's and the why's of things. I love that type of learning.
But... idk, there is plenty to learn. Using a date library may mean I don't have to stumble through documentation (yet again) to half-solve a problem that someone was already kind enough to iron out. I suppose there's a missed opportunity for learning here. Or I could just... spend time learning something else.
The other half of the argument is "... then you don't need a framework". The tacit implication is that, equipped with sufficient knowledge, you wouldn't even want a framework because of how they can complicate things. Honestly I don't know if I agree with this. Knowing how to manually query and manage the DOM doesn't necessarily make me prefer doing it that way. Knowing more about CSS rules doesn't necessarily make me want to think about styles according to its cascading paradigm. Knowing the math behind spring-based animations doesn't mean I'm excited about doing it by hand every time. The jury is still out. (FWIW I understand there are distinctions, even within React, between libraries and frameworks - but my current professional work is largely devoid of both.)
Something I reflect on a lot: how deep ought my knowledge be? I don't know that memorizing bit transmission rates through various metals will be that helpful. I also think it's unhelpful to passionately give no shits about anything I don't explicitly "need" to know. But perhaps that's just more a question of time allocation - using React in my professional work doesn't prevent me from reading more about web security or operating system design.
Above I discussed app.get(), and how ditching server-side frameworks has helped me learn a lot more about what's going on in the request-response ballet. I think however there are 2 important caveats: 1. is a job really the place for classroom-style "do it the long way for a while, then I'll show you a shortcut once you understand it" methods? 2. is there not something extremely valuable about being able to look at a function or a tool or an API, understand its signature, and use it without understanding how it's working? Sometimes I think my desire to know how everything works stems from a lack of skillfulness with the craft - it's crucial to be able to operate without total knowledge, to be able to pick up a tool and use it as it is intended to be used despite the vague discomfort inherent in not knowing what's going on inside.
One additional argument about vanilla - "we write 100% of the code so we know what's in there." This point has become less convincing - we may have written all the code, but that doesn't mean it's good, secure, or memorable. I suppose it's true that npm packages can have malicious code or security bugs - but in all honesty they're probably much less likely to have them than our own code. And from a business perspective, sure, we may be the 1 website that doesn't experience problems emanating from a widely-used library, but we may not have any clients by that point anyway. Not to say we should be reckless with libraries.
All in all, seems like there's a healthy balance somewhere between "I don't know the first thing about the internet but I can make so much cool shit" and "I write websites in assembly." There may be serious issues with web frameworks and reckless, mindless use of dependencies - but perhaps it's not necessary to throw the baby out with the bath water.