Adding collaborators to a repository

Aside from forks and pull requests, another way to have someone contribute is by adding them as a collaborator and giving them write access to your repository. You can use organizations or just add someone individually. I am testing out code review, and it turns out that someone needs to be a collaborator with write access to a repo in order to participate in code reviews.

Developer Member Program on GitHub

I recently signed up for the developer member program on GitHub. I haven’t looked into every feature of it just yet, but it seems better than not signing up for it. Apparently GitHub has an API, which I might learn later. Yet another thing to add to my ever-increasing backlog of things to learn and play with.
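I haven’t touched the API yet, but just to get a feel for it, here’s a minimal sketch of what calling it could look like from Python – assuming the requests package, and using GitHub’s public octocat/Hello-World demo repo:

```python
# A minimal sketch of calling the GitHub REST API (read-only, no auth).
# Assumes the "requests" package; unauthenticated requests are rate-limited.
import requests

resp = requests.get("https://api.github.com/repos/octocat/Hello-World")
resp.raise_for_status()  # fail loudly on HTTP errors
repo = resp.json()
print(repo["full_name"], "-", repo["stargazers_count"], "stars")
```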

Code review on GitHub

I learned more about code reviews today, and even did one on my gitUdemyLearning repository.

Pair programming

A topic similar to code review is pair programming. I guess I did that for a hackathon, but it was purely accidental. I’ve also worked on group assignments with people and attended some workshops related to infosec and programming. Pair programming is good for many reasons – it’s like code review, but while the code is being written rather than after the fact. It can help you interact with other programmers more, learn to justify your own design decisions, and learn how other people think and code.

If you only ever look at your own code, you’ll assume your way is the normal way – but everyone has unique life experiences that lead to different coding styles. People go to different universities, each with classes and professors who teach slightly different things. People have varying levels of ability, different preferences, and different social norms in their groups. They will bring skills and experiences to the table that differ from yours. That’s perfectly fine, and seeing how people do things differently can help you grow as a developer.

Some techniques aren’t better or worse than others. Of course, big O notation can measure algorithm performance, but some things are not as easy to quantify. There are a lot of what I would call lateral choices, rather than one option being strictly better or worse. Maybe you could use switch/case, or perhaps if/elif/else. Maybe one person prefers camelCase while another prefers snake_case. One person might have favorite design patterns, while another might not be so keen on shoehorning design patterns everywhere.
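As a quick illustration of a lateral choice: Python has no switch/case, so you can branch with if/elif/else or with a dictionary dispatch. A toy sketch with made-up HTTP status descriptions – neither version is strictly better:

```python
# Two equally valid ("lateral") ways to branch in Python.
def describe_if(code):
    if code == 200:
        return "OK"
    elif code == 404:
        return "Not Found"
    else:
        return "Unknown"

DESCRIPTIONS = {200: "OK", 404: "Not Found"}

def describe_dict(code):
    return DESCRIPTIONS.get(code, "Unknown")

assert describe_if(404) == describe_dict(404) == "Not Found"
```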

Mixing programming language ideas

Just like how someone who speaks Spanish and English might occasionally speak Spanglish, a developer who knows Python, JavaScript, and C++ might occasionally think in PythonScript++. Each programming language has its own design philosophy, and it can be good to know multiple languages to see how things are done differently. Languages differ in things like strong vs. weak typing, dynamic typing, built-ins vs. third-party libraries, boilerplate, and verbosity vs. brevity vs. terseness.
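For example, here’s what thinking in one language while writing another can look like – the same made-up task written with a C++-style index loop versus idiomatic Python:

```python
# "PythonScript++" in action: C++ habits transplanted into Python,
# versus the idiomatic Python equivalent. Both work fine.
words = ["static", "site", "generator"]

# C++-style index loop:
upper = []
for i in range(len(words)):
    upper.append(words[i].upper())

# Idiomatic Python list comprehension:
upper_pythonic = [w.upper() for w in words]

assert upper == upper_pythonic
```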

Number of GitHub contributions isn’t the best way to measure the accomplishments of a developer

I have a pretty high number of GitHub contributions so far this year, and it’ll only increase as time goes on. But I’m making rather small apps with frequent commits and issues – and more recently, pull requests and code reviews. I’m starting to realize that, just like how lines of code isn’t a great measure for software, GitHub contributions aren’t a great way to measure the worth of a software developer. Quality over quantity.

Barrier to entry for contributing to other projects

It can be tough to start contributing to someone else’s project. It might use build tools you’re not familiar with, or multiple programming languages you’re not equally comfortable with, and the maintainers might have their own guidelines and stylistic preferences for outside contributions. Not only that, but even after reading the code, forking the project, learning the ins and outs of the repo, and submitting a pull request to address something posted in an issue, it might still get rejected. That’s a risk: you’re putting a lot of time and effort into something that might not pan out. This is one of the reasons why I am hesitant to contribute to other projects, but I think I’ll have to do it eventually to get out of my comfort zone, learn new things, and increase my programming repertoire.

Open source licenses

I release all my code as GNU GPLv3, at least for now (well, at least for my public projects). At some point, I’d like to learn more about the pros and cons of the Apache license, MIT license, BSD, GPLv2, GPLv3, and so on. Not all open source licenses are the same, so maybe there are reasons why you’d want to use one over the other.

Proprietary software and feeling self-conscious

When you don’t share your source code, you can get away with very poor design choices because no one will be critical of them. When something is open source, people can see your code – and assess how good or bad it is. If no one ever sees your source code, you can be doing things really wrong without even knowing about it. One important part of open source is code review, even if it can be harsh sometimes.

More SSG progress

I worked on my static site generator project more.

Naming software

Naming conventions are pretty straightforward for variables, functions, etc. But what about projects themselves? I don’t think I’m very good at naming things. I tend to name my projects based on what they are. How did the creators of Jekyll decide to call it Jekyll? I guess it’s a cool name, but it gives no indication of what the project actually does. Not only that, but the icon for Jekyll is a beaker with red liquid in it. What does that have to do with generating a static website? I decided to call my project “static site generator” because that’s exactly what it is. But maybe I should use cool-sounding words or make up names instead of being so literal.

PGP email encryption

Today I decided to learn more about mail encryption with PGP. In looking up PGP, I came across GPG (GNU Privacy Guard), a free software implementation of the OpenPGP standard.

GPG4win and Keybase

Today I set up GPG4win, which is GNU Privacy Guard for Windows, and Keybase, which lets you manage keys. GPG is used for things like email encryption.

I will have to get someone else I know to use GPG4win in order to actually exchange encrypted mail. There are public and private key pairs: you encrypt a message with the recipient’s public key, and only their private key can decrypt it.
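Once keys are exchanged, the whole flow can even be scripted. Here’s a rough sketch using the python-gnupg package – assuming the recipient’s public key is already imported, and with a made-up address and passphrase:

```python
# A rough sketch of PGP encryption from Python, assuming the
# python-gnupg package (pip install python-gnupg). The recipient
# address and passphrase below are made up for illustration.
import gnupg

gpg = gnupg.GPG()  # uses the default GnuPG home directory

# Encrypt with the recipient's PUBLIC key (must already be imported):
encrypted = gpg.encrypt("meet me at noon", "friend@example.com")
print(str(encrypted))  # ASCII-armored ciphertext

# Only the matching PRIVATE key plus its passphrase can decrypt
# (so this part would run on the recipient's machine):
decrypted = gpg.decrypt(str(encrypted), passphrase="my-secret-passphrase")
print(str(decrypted))
```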

People act like you’re dead if you don’t check your phone 24/7

I’ve found that the best way for me to be productive is to either turn off my phone, or at least not check it very often. I set it to silent or vibrate instead of having sounds on, and nowadays I keep it farther away from me instead of right next to me or in my pocket. I don’t check social media as often, and I don’t check the news as much either. Because of all these things, I can achieve longer uninterrupted periods of productivity. Small distractions here and there can ruin your workflow. But some people dislike that. They want you to be available 24/7, able to respond instantly, and if you don’t, some people will take it personally.

People think phones and apps are cheap, but they really aren’t. A free app demands your most valuable resource of all: your time. I know some people who practically live on Facebook and Twitter. I’d never get anything done if all I did was check app notifications all day. In fact, I’ve disabled a lot of notifications, or just flat out ignore some.

It’s not that I dislike people or don’t want to talk to them, but maybe we should rethink the idea of being instantly available for everything all of the time. I need fewer distractions in order to get more work done.

It feels liberating to spend less time checking your phone. I’m glad I’ve made this decision, but some people think it’s a bad thing.

Knowing and teaching are two very different things

I’ve had plenty of professors who were very smart, but some of them weren’t good at explaining the material to other people. Knowing how to do something and knowing how to teach it to other people are very separate skills. It’s great when an educator is good at both, but in higher education, especially in more technical fields, you often get people who teach because they have to (some professors want to do research but are forced to teach on the side). It’s clear that they know what they’re talking about, but transferring knowledge from their brain to yours isn’t their strong point.

That being said, I think professors are often smart because the act of teaching someone helps them to learn as well.

University vs. self-paced learning

I’ve taken my fair share of college courses, but I’ve noticed that many college classes seem useless. Why do I really need to know statistics, calculus, or philosophy? Why can’t I just take more programming classes? When you take a class, you worry about grades and deadlines, and less about the material itself. You’re also often learning things that won’t be useful in your life, and you’ll forget a lot of it after the class is over. By contrast, when you decide to learn things on your own, you’re doing it because you’re genuinely interested, which is very helpful for motivation and determination. I think someone who wants to learn to code on their own can have good success purely because of their interest, as opposed to someone just going through the motions in a class because they feel like they have to. Of course, with self-learning, you often miss out on important topics.

There’s also the issue of context switching, and I’m not talking about CPUs here. I mean switching from one class to another instead of being able to dedicate a bulk of your time to a single subject. When you have to juggle foreign language, math, computer science, and English, you’re less likely to get really good at any one of these topics. But when you concentrate on fewer subjects, you can learn them much better – not just learning for the short-term for an exam, but long-term learning that will stick with you.

I think that non-traditional learning is underrated, and coding bootcamps or online self-paced non-accredited classes have the potential to be really good. But there’s less quality control, which is why many people have their doubts. I don’t regret going to college, but you need to supplement a college education with self-learning. Technology always changes, so someone in tech always has to learn new things. If you don’t like the idea of learning new things for the rest of your career, even after college, then tech is not for you.

Is coding the new literacy?

I’ve heard some people say that coding is the new literacy. Is it really? I’m not so sure. It’s still important regardless. But tech isn’t the only field with its own jargon and technical literacy – I’m sure there’s a level of trade literacy in many fields.

Is web development replacing desktop software?

It seems like, these days, there are really three options for any “app” – the Android app, the iOS app, and the web app. A web app is different from a website in the sense that it’s more interactive and functional, whereas a website usually just presents information or lets people do basic stuff like log in and post messages. But in any case, these three things seem to be the main emphasis of modern software development – not desktop programs (I don’t count backend server development as the same thing), not embedded systems, nothing like that. Are web apps and web wrappers the only modern options for Windows/macOS/Linux? And is this a good or bad thing?

I’m thinking of how I used to primarily use a desktop email client, a desktop calendar app, LibreOffice, and things like that. Now? I can use webmail, Google Calendar, Google Docs, Office 365 on the web, and so on. You can do so many things in a browser now.

Uncool to like your field of work/studying

It seems like, whenever you meet another software developer, they might tell you how they’re not “like other software developers” or they tell you about their hobbies – aside from programming, that is. I think programming can be a hobby as well as something you study in college or do for a job. But many developers see that as being uncool. You need to tell people about your travels abroad, camping trips, and things like that – even if they’re things you only do every now and then. People don’t want to be seen as uncool dorks.

Gatekeeping and imposter syndrome

What does it mean to be a software developer? Do you have to be wildly successful? Do you need years and years of experience? Do you need a computer science degree? Are you not a real software developer if you don’t memorize answers to technical interview questions, like the ones in Cracking the Coding Interview by Gayle Laakmann McDowell? Are you a software developer if you only write short programs and have maybe 1-2 Git repos? Is someone who makes multiple open source projects more or less of a developer than an app developer who makes proprietary software for money? Is someone a developer if they only do it in their free time and have an unrelated job? Is someone a developer for customizing WordPress? Are frontend-only people developers?

Back when I first started making very bad software, I never would have called myself a developer. Now? Well, I know multiple programming languages and have a few years of education and experience, but I’d still say I’m a novice developer, or maybe a pseudo-developer. I think I’ll be able to call myself a developer once I have more experience with databases, full stack development, and also finish my degree. Then I think it’ll be safe to call myself a developer. But on the other hand, I know people who have no formal IT or CS education who still call themselves developers. Should I gatekeep them and tell them they’re not real developers? No, I think not. Gatekeeping is when someone says you’re not a real ABC unless you do XYZ. Maybe I should just change how I view myself and my skills. Easier said than done though.

Imposter syndrome is when you think you’re faking it and everyone else is better. This can be due to you not knowing everything someone else knows. But that’s the thing – everyone has a slightly different set of skills. Just because you’re not as good at a certain language as someone else doesn’t mean you’re less of a developer. It just means you don’t have the exact same knowledge. Maybe you’re better at Ruby and they’re better at PHP. Maybe you have a degree in biology and you’re just now learning coding, whereas someone else who seems to be a better coder has a computer science degree and nothing else. Everyone’s different. But the problem is when we compare ourselves to others in such a way where we’re comparing our flaws to someone else’s highlights. This is especially bad on social media.

A lot of people in tech mention imposter syndrome. It’s a common phenomenon. But sometimes I wonder if people mention it to sound more humble. Or maybe it’s genuine. I’m not really sure.

I don’t think I have imposter syndrome. I think I have a realistic idea of what my skills are like. I’m not brand new, but I’m far from an expert.

Just keep this in mind: some technology is so new that nobody can have a ton of experience with it. Everyone’s new, even the people who have been using it since it came out. Computers are such a new concept that it’s a field that doesn’t have the same lineage as other ones. People have been doing things like writing, painting, and cooking for a very long time. But it wasn’t that long ago that computers didn’t even exist. In a way, we’re all new to it.

Interdisciplinary software developers

It’s good to know software development, but it’s good to know other things too. Unless you only want to make programming tutorials, IDEs, compilers, and other dev tools (“systems programmer” stuff), you need to know another field too – maybe banking, or chemistry, or multimedia. People who write apps for companies that aren’t tech companies can’t just know tech and nothing else. Most software is developed in-house and never used outside of the company that produces it; commercial software like Windows 10 is not the norm. Ever seen the software running on a computer in a parking garage, or on point-of-sale equipment? That’s either developed in-house or sold by specialty developers.

When I was at SIUE, I lived in Edwardsville, IL, a suburb of St. Louis, MO (the metro area spans multiple states). I would sometimes drive past a company with a sign that said American Medical Software. A company like that has to know not only software development, but also medical-related information – what medical professionals do and what they want out of software. Computer science skills alone aren’t enough. Sometimes you might work at a tech company with lots of other tech professionals; in other cases, you might be the only developer there.

These days, all companies use software, even if they are not software companies. Some companies can get by using general-purpose software that another company developed, but in many cases, they need software that is custom-tailored to their exact needs. In cases like that, they will hire a developer (or contract a development company) to write software for their industry – so the people writing it need to learn about that industry. Even if junior-level programming positions might just have a tech lead or project manager telling people what to do, there are still plenty of times where you’re learning about an unrelated field in addition to software development.

I know about IT stuff due to my previous experience with computer repair, building computers, consulting services, and the IT classes I took in community college before transferring schools and majors. I know a lot of what IT people do, from Active Directory, to backup servers, to networking equipment, to SIEM and IDS/IPS stuff. That’s why I’ve decided to write an IT ticketing system after I finish my static site generator (and the new unexpected hearing aid project for my dad).

The goal of the IT ticketing system project isn’t to make the best IT ticketing system in the world – rather, it’s an educational project that will force me to get real-world experience with Python/Django/SQL, and beyond just web development, it will force me to think about the needs and wants of the IT professionals who use ticketing systems. I’m even planning on asking people on social media what they want in a ticketing system. I won’t have enough time to implement all the bells and whistles people want, but I can at least make a basic ticketing system with the most essential features: the ability to submit a ticket, log in as an admin or support tech, view and respond to tickets, create accounts, edit account info, delete accounts, delete tickets, etc.

When you create software that serves a certain purpose, it forces you to learn a lot – not just about the languages and tools you’re using, but also about what that kind of software is and what features it can have. Even if I don’t end up adding all of the optional features to my static site generator project in Python, it still forced me to learn about static site generators and Python. Writing a ticketing system will force me to learn about databases, login systems, and IT support tickets.
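To make the scope concrete, here’s a rough sketch of what the core data model might look like in Django – all names here are hypothetical, not from an actual codebase:

```python
# A hypothetical sketch of core models for a basic ticketing system.
from django.conf import settings
from django.db import models


class Ticket(models.Model):
    STATUS_CHOICES = [
        ("open", "Open"),
        ("in_progress", "In progress"),
        ("closed", "Closed"),
    ]
    title = models.CharField(max_length=200)
    description = models.TextField()
    status = models.CharField(max_length=20, choices=STATUS_CHOICES, default="open")
    submitted_by = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="tickets"
    )
    assigned_to = models.ForeignKey(  # e.g. an admin or support tech
        settings.AUTH_USER_MODEL, null=True, blank=True,
        on_delete=models.SET_NULL, related_name="assigned_tickets"
    )
    created_at = models.DateTimeField(auto_now_add=True)


class TicketReply(models.Model):
    ticket = models.ForeignKey(Ticket, on_delete=models.CASCADE, related_name="replies")
    author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    body = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)
```

Django’s built-in auth system would also cover a lot of the account creation and login features for free.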

Adding unnecessary tech to a project so you can learn it

I might have written about this in the past, but I will mention it again. Sometimes, you might want to add something to your project so that you can learn it. It’s something you have little or no experience with. Maybe I can add a REST API to my ticketing system so that I can get experience with designing APIs rather than just using other people’s APIs. Maybe I’ll add TensorFlow to a project in order to learn about neural networks, even if the product could technically be completed without it. Or in the case of my static site generator, I decided to write it in Python – not because Python is necessarily the best choice of a language to use here, but because I wanted to get better at programming in Python instead of sticking with something I’m more familiar with already, such as Java.

This strategy is good for personal projects, but only if you don’t overdo it. And this is not necessarily a good strategy to have on the job.

But when I came across the Quora account of someone who only programs in Java, because that’s all they use at their job, I thought to myself: I can’t be like that. I can’t afford to. I’m not anti-Java, I’m just saying I need to learn more than that. I’m already halfway decent at programming in Java, but I don’t want to plateau, so I need to learn other languages, tools, and areas of computer science.

The functional programmer’s dilemma: usefulness vs. personal preference

I currently only know procedural and object-oriented languages. Well, and markup languages, but that’s not really programming. Anyway, a lot of people in academia talk about how the functional programming paradigm is the best thing since sliced bread. But is it really? I’m having a hard time thinking of many big projects written in Lisp or Haskell. It’s the kind of thing that seems cool to people in academia, but it doesn’t seem all that useful in the real world. I came across an Erlang programmer’s blog once, and even though he said it’s a great language, I thought about all the time you need to invest in learning a language whose paradigm is different from the one you’re most familiar with. Going from Java to Python is relatively easy because they’re both object-oriented languages; moving from OOP to functional programming is a much bigger leap. This blogger was also the author of a very popular Erlang programming book, so of course he’s going to be a little biased – to him, Erlang is the language that gave him success in publishing. He found success in an under-served niche, where there was demand for better authors and learning material. But not everyone can do that. Is the time investment worth it?

For now, I think not. Maybe later in my career I can try different programming paradigms. And it’s true that many languages these days are multi-paradigm, though many “multi-paradigm” languages are object-oriented first and everything else second. I am aware of basic functional programming concepts, but not enough to really write any meaningful projects in a functional language.
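That said, the basic concepts can at least be played with in Python without switching languages. A small sketch contrasting an imperative loop with a functional-style pipeline:

```python
# Summing the squares of the even numbers, two ways.
from functools import reduce

numbers = [1, 2, 3, 4, 5]

# Imperative style: mutate an accumulator in a loop.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Functional style: compose filter/map/reduce, no mutation.
total_fp = reduce(
    lambda acc, n: acc + n,
    map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers)),
    0,
)

assert total == total_fp == 20
```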

Lisp and its derivatives (Scheme, Racket, Clojure) are bizarre languages to me. They have small cult followings, but I still fail to see how they’re good languages. Just for jobs alone, you don’t ever really see any listings for them. The most popular things you see in job listings are Java, PHP, MySQL, JavaScript, Python, Swift, Kotlin, and things like that.

MIT, arguably the best university in the world for computer science, used to teach introductory computer science classes with Lisp (well, Scheme). Maybe that elite-university prestige is what convinced some people to try Lisp. But now, even MIT has jumped ship, and they teach freshman CS classes in Python instead. So why are functional programmers so die-hard about their languages and paradigm? I really don’t get the appeal, unless you’re just curious or like the “we’re not like those OTHER programmers” angle.

With all that being said, speaking more generally, I’ve never understood tech tribalism, or tribalism in general. That might sound contradictory, since I just said I’m basically anti-functional-programming (at least for now), but I’m in favor of learning most languages, as long as they’re useful and widely used. Those two things often go hand in hand, since tech really needs inertia and community development. As Steve Ballmer once chanted, “developers developers developers developers developers developers developers developers.” Developers are the reason why iOS and Android succeeded while Windows Mobile failed. But I digress.

People have a favorite editor, OS, or programming language, and then they go online to disparage people who use other tools. I am a firm believer in using the right tool for the job and learning many different things. Just because I’m learning Python now doesn’t mean I’ll stop here. I want to learn many more programming languages in the future. I use many different editors, compiler suites, and operating systems. I don’t really have favorites. If they are widely used, I will use them, simply to make my skills more appealing to most employers. But I try to avoid really niche stuff that might sound cool but isn’t used much in the real world.

If I ever found a career opportunity that involved learning functional programming – for a job, not a personal project – I’d jump on the opportunity to do it. But because that’s not very likely, I’ll stick mostly to OOP, but without getting tribal about a particular language or set of tools. It’s good to have an open mind, but also be aware of which languages are the most desired for jobs.

Should we let bad developers do things incorrectly?

In many cases, you’re allowed to do the wrong thing. Browsers have “quirks mode” rendering, which will attempt to render semi-invalid HTML. It doesn’t just give you some error message – it lets it happen anyway, despite the flaws. Sure, the browser’s developer console will show you warnings and errors, but to the average person browsing the site, they won’t notice anything at all. And while some invalid HTML is bad, other times, it’s completely benign.

My PyCharm IDE will complain if I don’t conform to PEP 8 standards, even for seemingly silly things like the number of blank lines between functions, variable capitalization, or the number of spaces between code and an inline comment. It gives you little squiggly lines, almost like spell check, but for coding best practices. Red lines indicate errors, but gray lines are just suggestions – and you can ignore them. These are neither compile-time errors nor run-time errors. They are merely suggestions for best practices. The code will still run anyway, even if it’s not perfect.
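For example, here are two of the PEP 8 rules behind those gray squiggles, shown in a trivial made-up snippet:

```python
# Two PEP 8 conventions PyCharm nags about:
def first_function():
    x = 1  # inline comments need at least two spaces before the "#"
    return x


# ...and top-level definitions should be separated by two blank lines.
def second_function():
    return 2
```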

Many languages have multiple ways to do things. “goto” was an issue from before my time, but it remained a feature of languages even after pretty much everyone agreed – going back to Dijkstra’s famous “Go To Statement Considered Harmful” letter – that it shouldn’t be used. But you can use it anyway, despite all the complaints and warnings.

Is this a bad thing? Why do we let people write bad code? Is it for issues of legacy support? Or is permissiveness of bad code somehow better for novice developers? I really don’t know one way or the other here. It’s just something to think about.

Technology shelf-life: modern vs. timeless

When you come across a very new programming tutorial, you can be impressed with its quality. It’s so current. So up-to-date. But if you encountered that same exact tutorial years later, you’d think it’s bad. A lot of tech documentation (and tech in general) seems to age very poorly.

One example of this I ran into was a Python/Qt tutorial whose instructions only worked for old versions of Python and Qt. The syntax had changed dramatically in newer versions, and I was unable to get my project to work using the same code the author did. Even something as simple as where to download the dependencies had changed – the download links they provided no longer worked, though you could google the current ones.
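I don’t remember exactly which versions that tutorial targeted, but the PyQt4-to-PyQt5 transition is a classic example of this kind of breakage – the widget classes literally moved to a different module:

```python
# PyQt4-era tutorials showed imports like this, which fail on PyQt5:
#   from PyQt4.QtGui import QApplication, QWidget
# In PyQt5, the widget classes live in QtWidgets instead:
from PyQt5.QtWidgets import QApplication, QWidget

app = QApplication([])
window = QWidget()
window.setWindowTitle("Hello")
window.show()
app.exec_()  # start the event loop
```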

I am also reminded of Chuck Klosterman’s books about pop culture. They seemed topical when they were new, but now they just feel obsolete – relics of a bygone era. I wish more thought was put into making tech tutorials – and technology in general – more timeless.

Music videos and movies featuring old tech such as slider phones seem amusing these days, but ones that don’t feature era-specific technology seem good even decades after their release. One of the great benefits of studying computer science in college rather than going to a coding bootcamp is that you will learn more timeless tech concepts, as opposed to here-today-gone-tomorrow frameworks.

Making tech timeless is worse in the short-term, but better in the long run. The issue is that with modern planned obsolescence, agile development, and faster overall release cycles, we tend to favor short-term things. Social media gets people to value instant gratification instead of long-term choices.

I’m not really sure where I was going with this, but I think all aspects of technology should consider longer-term things.

YML/YAML

I am trying to get into CI eventually, using Travis CI. I haven’t started just yet – I like to get some background info on something before diving right in. Travis CI is configured with a .yml file, so today I learned more about YAML. I didn’t just look up stuff about YAML itself; I also looked up how to use it in Python.
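As a quick sketch, parsing YAML from Python can look like this – assuming the PyYAML package (pip install pyyaml) and a made-up config file:

```python
# A sketch of reading YAML in Python with PyYAML.
# Suppose config.yml (a made-up file) contains:
#   language: python
#   python:
#     - "3.6"
import yaml

with open("config.yml") as f:
    config = yaml.safe_load(f)  # safe_load won't execute arbitrary tags

print(config["language"])   # "python"
print(config["python"][0])  # "3.6"
```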

Software updates: will they ever be solved?

Software updates are important for security, stability, and more. But sometimes, software updates can cause new problems. I’ve heard of Windows 10 updates that left computers showing only a blank screen after rebooting. There was even a very bad update that caused data loss, and it made the rounds on tech news sites and social media for a while. At the same time, a lot of security breaches happen because of a lack of updates. Old and outdated software can have known security vulnerabilities. I know someone who recently got an old hand-me-down iPad, and they’re excited about finally having one – but it’s not secure to use old iPads that no longer get iOS updates. They don’t seem to want to pay for a new iPad though. These days, hardware can outlive software.

Android phone manufacturers often don’t give customers Android updates – OTA or otherwise – because it makes people less likely to buy a new one. Why support sold products when instead you can just tell people to buy new ones? So economics also plays into software updates.

Today, Ninite Updater is telling me there’s a new update for Chrome, but I really don’t want to close Chrome to update it right now. The Nvidia control panel is telling me there’s a new driver update for my graphics card, but the last time I installed a graphics driver update, it had to restart my computer, and only gave me 60 seconds’ worth of time to save and wrap things up.

Software updates are inconvenient. They can cause more problems than they solve. They can be time-consuming. Not everyone has a test environment where they can test updates before pushing them to production. Some updates can even be malicious, like that one time Notepad++ got hacked. But old and insecure software is bad too.

There are some interesting solutions for updates, such as WSUS, where only the WSUS server has to download an update once, and then it pushes the update out to all the workstations on the LAN. So if there’s 1GB of updates and there are 100 workstations, you don’t need to download the same thing from the WAN 100 times for a total of 100GB of remote downloads. Instead, it’s still only 1GB over the WAN, and the rest is just LAN traffic, which is cheaper and faster.

Linux distros have built-in package managers, and Android and iOS have app stores. Windows has Ninite Updater, which isn’t great, but it’s better than nothing. macOS has Homebrew. Python has pip, Ruby has gem, and Node has npm. There are lots of ways to keep track of updates and packages, but it’s still not a seamless experience.

It’s a very complicated issue. How can we solve it? Automatic updates? Maybe, maybe not.

The Erlang programmer I mentioned earlier blogged about a certain feature of his favorite language, which he described as self-healing – what Erlang calls hot code loading. It’s the ability for a program to install updates without even restarting the program or the computer. Is this the future? I’m not so sure, because it basically sounds like data execution to me, which is not good for security.

As I was writing this, Wordfence sent me an email notification saying there is a new update available for one of the WordPress plugins on one of my WP sites (this site doesn’t use WP, but some of my other ones do). If you don’t update WordPress and its plugins, people might hack your site and use it to distribute ransomware or other kinds of malware – maybe even cryptocurrency miners.

Desktop publishing

I’m starting to look into desktop publishing because I’m going to turn a lot of my software development essays into a free e-book once I’m done with #100daysofcode.

Valgrind

I’ve never used Valgrind before, and I don’t know if I ever will, but I looked it up anyway – it’s a tool for finding memory leaks and memory errors in compiled programs. I am trying to learn more about debugging-related tools, since a lot of the time, you’re fixing problems in existing code rather than adding new features. GDB is another important tool. I also like the built-in debuggers in JetBrains IDEs, such as PyCharm.

Things I should add for future GitHub projects

The community tab on a repo lists a bunch of things you should have, including a license, a readme, etc. I already do the basic stuff, but I don’t currently add the following to my repos:

  • Code of conduct
  • Contributing
  • Issue templates
  • Pull request template

More in-depth issues

So far, I’d been using GitHub issues in a pretty basic way. Now I add labels, lock issues, resolve them, add comments/updates, emoji reactions, etc. It seems like a lot of my issues are ‘wontfix’ or ‘enhancement’ (or both, for issues I decided against implementing due to time and effort constraints).

Fixing templates

I fixed some template stuff for the static site generator project. I also added a transparent 1x1 pixel PNG image called placeholder.png, which can be used if the user does not include a lead image for an article.
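For what it’s worth, a transparent 1x1 PNG like that can be generated in a couple of lines with the Pillow package (just a sketch – Pillow isn’t necessarily part of the project):

```python
# Generate a fully transparent 1x1 PNG, assuming Pillow (pip install Pillow).
from PIL import Image

Image.new("RGBA", (1, 1), (0, 0, 0, 0)).save("placeholder.png")
```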

Trello/kanban (project management)

I started using Trello again. It’s a kanban board tool that helps you organize tasks into columns: things to do (low priority), things to do (high priority), what you’re currently doing, and what’s done.

I’ve already added a lot of to-do cards to a new Trello board specifically for my #100daysofcode challenge.