
Information Currency

I recently found myself searching for a nugget of technical information related to Prometheus, though the specifics are rather pedestrian to this discussion. The function I needed to implement for a client didn't seem obscure, although it hadn't been available in early versions of the product: time-based alert routing options, such that certain alerts would be treated differently based purely on time of day. There were numerous articles, blogs and technical forums with a variety of solutions, including code snippets, to address this requirement. Thoughtful of technical folk to take their valuable time to post these ideas. The only trouble was: none of the code works with the current version (2.54.1), and so the various web pages were of little value in solving my requirement.
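For the curious, recent Alertmanager releases handle this with time intervals attached to routes. The sketch below is illustrative only, with invented names, and assumes a reasonably current Alertmanager (the `time_intervals` and `active_time_intervals` keys did not exist in early versions, which is precisely the versioning trap this post is about); check it against the documentation for your installed version before trusting it:

```yaml
# Hypothetical sketch: page on-call by default, but route warnings
# to a ticket queue during business hours. All names are made up.
time_intervals:
  - name: business-hours
    time_intervals:
      - weekdays: ['monday:friday']
        times:
          - start_time: '09:00'
            end_time: '17:00'

route:
  receiver: on-call-pager          # default: page someone
  routes:
    - matchers:
        - severity = "warning"
      receiver: ticket-queue       # warnings become tickets...
      active_time_intervals:
        - business-hours           # ...but only during business hours

receivers:
  - name: on-call-pager
  - name: ticket-queue
```

Your mileage may vary; I did say versions matter.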

Now I am in no way implying that these generous souls were/are under any social contract to keep revisiting their posts, testing each against every new release from the vendor. Nor am I insinuating that the vendor failed in some way. In the software world we use the word deprecated to describe a feature, function or method that was previously available but has been replaced or even eliminated altogether.

And this is the nature of the beast: progress means the supplier of software, be it a commercial enterprise or a band of open source developers, coming to some consensus on which features, APIs, etc. will survive and which will be removed as the software evolves. This is required of any who would be good stewards of the code base. These decisions, of course, come with ramifications for those who wish (or, in the case of commercial software, may be required) to use the updated version. The result tends toward requiring some change in the manner of interacting with the software: the way the payroll software presents its user interface has subtly (suddenly?) changed. The library a developer wrote her code against doesn't compile anymore. The systems engineer's Ansible playbook YAML fails to deploy following an Ansible release.
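To make the deprecation idea concrete, here is a generic Python sketch (not from any particular project; the function names are invented) of how a library author might keep an old function alive for one more release cycle while steering callers toward its replacement:

```python
import warnings

def new_sum(values):
    """The replacement API: same result, the name going forward."""
    return sum(values)

def old_sum(values):
    """Deprecated: still works for one release cycle, then gets removed."""
    warnings.warn(
        "old_sum() is deprecated and will be removed in v3.0; use new_sum()",
        DeprecationWarning,
        stacklevel=2,  # point the warning at the caller, not this wrapper
    )
    return new_sum(values)
```

Callers still get correct answers today, plus a nudge to migrate before the next major version deletes `old_sum()` entirely. That nudge is exactly the kind of version information the rest of this post wishes more documentation carried.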

Suppliers of software may provide release notes to indicate significant changes. Well funded suppliers may update entire document streams to account for these changes and, ideally, link the version of the documents to the version of the software.

In other cases, there may be other documentation that articulates the changes. Sometimes this is a single document "pack"; other times a version release is a collection of bug fixes, each with its own associated issue or ticket. Unfortunately, sometimes there is very little in the way of documentation released along with a version.

One result of the 'move fast and break things' methodology is that version releases arrive more quickly than some segment of the user base can consume the documentation and modify their usage of the code or system to keep pace with the release flow. In the traditional enterprise Information Technology department of yesteryear, the approach was simple: the release to production of any code (vendor or in-house) went through a "rigorous" test cycle. If the software interacted with any other software, all the permutations needed to be properly identified, documented and tested. A detailed and tested rollback plan might be developed alongside the deployment plan. And a version change to a major piece of software in the enterprise might only be considered a few times a year at most.

So on the one hand we have the rapid-fire, "small" release cycle from software vendors being received by traditional enterprises capable of precious few deployment cycles. Now, one could argue that the dinosaurs needed to be, shall we kindly say, 'culled from the herd.' And few could successfully argue that maintaining the IT practices of the eighties and nineties would serve the industry indefinitely.

Aside from the sheer duration that a deployment to production took, it meant Senior Managers/Associate VPs could explain away the cost to VPs as evidence of rigor. And the VPs could in turn speak reassuring words to the Ps who could salve the C suite, who did their job to powerpoint the Board into calm reassurance. Because no one wanted to be the one to have to quote the press release that the Marketing/Optics team wrote explaining why the critical customer facing system took an unplanned outage during business hours. Because Production was a sacred icon, to be revered and respected.

One could rightly infer that deploying a version upgrade to production was a messy, expensive proposal, fraught with many fatal pitfalls and dangers lurking around every bend. Many a decomposing body lay on the path, dead by either negligence or political malice.

This was not the path of younger enterprises driven by entrepreneurial chutzpah, where production was a malleable object, reincarnated as a new creation every week or even every day. In these orgs it was often a case of smaller releases and smaller deployments, even before the days of Continuous Integration.

So, yes, I just wrote paragraphs of text to poorly summarize agile versus traditional IT. And you may be of the opinion that the Internet is all the poorer for it. But the point (at least as it relates to this discussion) is: there are lots of docs describing past, present and future code releases of all kinds of software floating around the Internet, everywhere. Search engines will gladly provide pages of links to documents for all kinds of software. Some of those documents may be related to the version you are struggling to deploy, manage, patch, use or otherwise interact with in some manner.

And that's a problem. Some people may be running older versions and want to or even need to keep doing so. They may even have to deploy an old version because of some technical or political reason, like acquisitions or because Frank was the only guy who knew how this goofy thing worked and Frank left the company last year as part of that 'cost cutting' exercise.

Others have the current version and need to configure something a little outside the norm or just didn't catch early enough in their internal testing that this software needs to communicate with that software over there and the interaction between the two is a bit of a mystery at the moment. So now it is in production and everyone is standing around scratching their heads wondering what to do and Management is yelling to "Just fix the thing!"

And our third group (defining a fourth or more is left as an exercise to the reader) is just trying to stay current with a vendor release cycle to maintain their support contract.

All three groups are likely going to need to reference a document of some kind. Now, it would be ideal if the document(s) provided by the supplier covered everyone's needs and explained the correct method to do everything everyone is trying to achieve, on every version. In some cases, this is exactly what happens. Huzzah!! if you find yourself as one of these folks. In the land before time, excuse me, IT before the Internet, vendor documentation came in fancy binders - binders so updates to the documents could be swapped in a page at a time. Major releases usually came with whole new plastic shrink-wrapped document packs. Sometimes you could gauge a technical person's acumen by the quality of said binders on their desk or the little shelf above it.

But in the last 25 or more years, documents have been available on supplier web sites. BTW, I purposely chose to use the word supplier rather than vendor to include both commercial and non-commercial entities. And with user forums hosted in various and sundry locations, reddit, the *stack web sites, personal blogs and archived social media posts, there are many, many nooks and crannies where that nugget of knowledge may be hidden away for, and possibly from, you. You hope it is out there, somewhere, waiting to be discovered so you can [insert task here] and go home, or at least complete one of the 1,538 tasks in your queue. Search engines are happy to oblige and supply a slurry of results for you to manually sift through, panning for the nugget.

With each click of the mouse on a link with a promising summary, you alternate between optimism and negativity. As the list of links slowly shrinks, you struggle to keep your focus. Might this one have the answer? At first it looks promising, but is it applicable to your installed version? Nope. You attempt to fine-tune the search parameters to weed out the chaff for this particular query. Your heart quickens as you explore another discussion with potential in an obscure corner of the Internet. But no, yet another dead end. You look at the browser tabs that represent the futility of your search for the nugget. You feel a slight sense of shame. No trouble finding others who posted a similar query, but each one ends with unhelpful answers from no doubt well-meaning netizens (do we still say netizens?). Particularly troubling are the questions on the vendor forums with exactly zero responses. As if the question hangs on the event horizon of a black hole. Forever asked, forever silent. A question remarkably like your own.

What to do?

So what is the answer to the documentation dilemma? I believe that, like most complex problems, there are no easy answers. In other words, for every complex problem there is a simple answer... that is wrong. For suppliers, spending time on documentation should be given equal weight to enhancing the code itself. If "no one" can use your code because you haven't documented what "the people" need to know about it, it doesn't matter how "awesome" the code is; it is of little value. But more than prodigiously produced documents is the need to link docs to code versions, especially when functions are being deprecated.

Personal blogs, comments, etc. can be enhanced by specific references to software versions. I would put more emphasis here on blog posts, where someone has taken the time to produce a document for a piece of software they have some affinity to speak about publicly. If you supply installation, configuration or other technical information, try to include what version(s) the hints apply to; this may require a little more research and/or testing. Even a simple comment along the lines of "I did this on version 2.3.97. Your mileage may vary." goes a long way.

Supplier forums should have fewer questions with zero responses. Even if there isn't someone whose job it is to monitor the forum for these questions, the regular posters can add something to them, even if it is simply "I am not aware of any way to do this. I am running version 4.2.5.3." Now I will grant that there are, in fact, contrary to what presenters often make a point of telling their audience, "stupid questions." On a forum for cars: "I am currently trying to get a flux capacitor working in my Impala to get to Mars. I can't find proven plans for the correct OBD II interface. Can you help me?" Now, of course, someone would be well within the bounds of polite Internet banter to point out that flux capacitors may in fact be relevant at some point in the future. I will update the blog when that happens... because versions.

But these are just two small, rather insignificant ideas. I believe one of the other issues that needs addressing is deprecating documentation. What I mean by that is: just as software versions are sunset, either from a support contract or a "we ain't supporting that version no mo" point of view, there needs to be a corresponding archiving of all the comments and blogs that are only applicable to that version. A way for this information to remain available for those who really need it along with the archived software version, such that someone searching for the nugget that only applies to version > 5.2 isn't inundated with information that doesn't apply.

With each new day, new software releases are happening: GitHub, Docker and the app stores are just a few examples. And new docs, postings and blogs are being written too. Separating the nuggets from the mud needs to get easier. Just as there is no central body that controls software, there is no central body that controls documents. But maybe there needs to be an accepted set of documentation standards that people, bodies, groups and orgs can adopt that will have a meaningful impact on the slurry flow.

Of Grey Beards and Purple Hats

As I look in the mirror this morning, I see a grey-bearded man staring back. Of course, that isn't a recent phenomenon in this particular mirror of mine, but today it reminds me of how young UNIX admins spoke in hushed tones of the older admins who had been there 'since the beginning.' As did I for many years, because I was one of those young admins, a "Pimple Faced Youth" in The Register's nomenclature. 'Grey beard' was the term of respect for the guys (and occasionally gals) who had cut their teeth on early C and knew the esoteric ways of commands like ed, and never needed to use cursor keys and certainly not a mouse. It had little to do with a person's calendar age; it had everything to do with their level of experience and expertise. They could talk at length about PDP-11s and mastering the ways of the olden times. They spoke of tapes and punch cards. Of working late at night simply because that was the only time they could get time for a compilation on the system. They could tell stories of when Big Iron meant room-sized systems with jet-engine-like noise. Their fingers had individual minds of their own while on the keyboard, such was the power of their muscle memory. The particularly cantankerous ones would simply utter harrumphs when thorny subjects like vi versus emacs were tabled, and refuse to engage in such frivolous chatter. If you are unaware, this is the equivalent of GM versus Ford debates among the passionate car culture. Or perhaps Subaru versus Honda. Or Yamaha versus Ducati in the land of MotoGP. I digress.

But allow me to back up, unless of course you have already navigated away or closed this tab. My induction into UNIX really started with Linux. I remember installing Slackware on a cobbled-together system on a dining room table, feeding disk after disk into the floppy drive. Those diskettes came in a plastic envelope stuck to the cover of a magazine, as I remember it. I don't recall the exact year, but I would say around 1995. Once the base operating system was installed and the disks safely stored away, the system presented a login prompt. Now, I came from the DOS world. There is no concept of logging into a DOS machine. Already this was a wondrous place - a true multi-user system!

The dichotomy of systems

Later that year, the world was full of computers running Windows 95 endlessly playing Weezer's Buddy Holly music video. On the one hand, there was this multimedia powerhouse of an operating system, for the time remember, and everyone was busy with their SoundBlaster cards and IRQ settings, with configurations managed in a binary "registry." And on the other hand, this strange place with logins and commands that could be stitched together in a "shell" with pipes and redirects to manipulate text in complex ways. And everything was a text file.

Both were completely new ways of doing things to someone coming from a DOS world with a smattering of Windows for Workgroups. One was free in every sense of the word and the other was the very definition of commercial. One quiet and unassuming, the other boisterous and loud. One open ended with endless complexities to be discovered, the other very clicky (which my spell checker wants to change to colicky).

A few more years transpired and I continued learning as much as I could about both platforms, but was drawn more and more to the elegance of Linux. Like many at the time, particularly once Windows NT 3.5 was released, I sought and obtained Microsoft certifications. Because that was what companies were hiring. And Windows NT was a very different animal from 95, it was a serious operating system for serious work. So it was Windows all day and Linux at night.

Both presented their challenges. Windows applications, and to a degree the operating system itself, could be unpredictable (maybe the spell checker was right). Whereas Linux stubbornly did exactly what it was instructed to do, even if that wiped the system with a single command. There was no "Are you sure you'd like to do this incredibly boneheaded thing that you obviously don't really understand?" message because:

1) Linux = the operator/admin is in control and his/her commands are final, to be obeyed without question, and

2) there couldn't be any popup because I'd spent the last number of hours (OK, probably days) trying to get /etc/XF86Config properly configured to actually have any kind of graphical interface whatsoever, trying various options that weren't well documented for this particular graphics chipset and the CRT's capabilities.

Learning has a cost, as well it should. I am fond of the quote: "Experience is an excellent teacher. Unfortunately, it kills all of its students." Eventually, I was hired at a company that had real UNIX systems. Well, a couple of places had them, but I was never allowed to do any sysadmin work on them. I made friends with the admins, though, and eventually cajoled my way into having a login. A grey beard had taken pity on me. Getting a terminal emulator running on my PC was a feat all its own. This company did serious business in a serious industry where system stability had direct implications for loss of life. The level of rigor and discipline among the grey beards wasn't anything I had witnessed before: documenting every little change, using a version control system (SCCS and then RCS) to manage configuration files. Being able to see the record of system changes, even if another admin had performed them right on the command line, was a new thought for me. That is not to say that everything in the Windows admin world was cowboy style, but in comparison it kinda was.

When I left this company and that industry, I took the lessons I had learned from the grey beards to heart. I landed at a small company that was primarily a development shop. Many of the devs had Linux workstations and there were a couple of DEC Tru64 and Solaris servers. I was a sysadmin on a very small team of sysadmins.

There was one grey beard I will call Bob, primarily because his name was Bob and he sported a grey beard. Bob taught me many things about the ways of UNIX. He was generous with his knowledge and had a wicked sense of humor. The company was an Emacs stronghold, but Bob was equally fluent in the ways of vi and could espouse the virtues of both without resorting to attacks on the other camp. His measured, methodical manner made a bigger impression on me than I am sure he realized. Nothing ever seemed to faze Bob. Soon I was administering the Tru64 cluster and doing kernel upgrades on some of the Solaris systems, managing NFS automounts and writing small shell scripts to automate some simple tasks. Being a sysadmin in a company full of techies presents its own challenges, because almost everybody understands the systems quite well and could certainly administer them in a pinch, even though this was many years before the term DevOps was coined.

Y2K hysteria hits and I am working for a large corporation in the energy industry. I am responsible for duplicating the entire enterprise, at a small scale, so that each and every department can test their critical applications in anticipation of "the day when the clocks and systems all go crazy." I have an almost unlimited budget to requisition "whatever is needed." I have UNIX systems strewn around the lab along with x86 servers and an AS/400. It was the very definition of a hardware geek's dream job. I frequently joke that I can order a new mainframe with no questions asked but getting a number 2 pencil requires more approvals. As I set up each testing scenario, I get to meet business people from every business unit and hear their trials and travails with the various applications. We discover some systems/applications that misbehave with the century rollover, but either internal devs or external vendors make changes and Y2K itself is a non-event at the company. This job birthed a new appreciation in my mind. Computers had always been wondrous machines, just sitting there patiently waiting to bend to the operator's will, at least once the operator figured out the right set of instructions to provide. From the very first computer I used, a PCjr my father brought home one day early in the eighties, it was a wondrous thing to write a program and have that thing play a little tune or perform some math dynamically with numbers entered on the command line, even if it took untold hours to get to that point. But I was beginning to understand the symbiosis of business service, business data, business application, operating system, computer. It would become a go-to expression later in my career: computers are only there to run applications, but neither apps nor OSes matter; it is business value, customer value, that matters. Yes, an infrastructure person gets excited about infrastructure. A developer gets excited about applications. But a business person cares naught for either; they just want to get their job done for the customer. Or, more succinctly: systems don't matter, apps don't matter, business value matters. Show how what you are proposing adds business value.

Time passes and I find myself working at a financial institution (which I shall not name here) a couple of thousand miles away. Now the stakes are high, because money. Again some Solaris systems, some AS/400 and a mainframe, in addition to the usual Windows servers and workstations. The teams are quite a bit larger and the systems more diverse. There is a more centralized change and performance management process in place. Project delivery too has more rigor and process and, yes, bureaucracy. The IT department is the vast majority of the company. The sysadmin teams are mostly composed of PFYs when I join in the early 00s. But there are one or two grey beards.

A pivotal moment

Some team changes and some promotions later, and I find myself in a discussion regarding which Linux distribution we are going to adopt as the standard for the enterprise. The decision had already been made to migrate the main bread-and-butter software from the mainframe to Linux as part of a project, primarily because of cost - a recurring theme with Linux adoption across industries at the time. This would not be a port but a wholesale rewrite, with a very aggressive performance target.

So what to adopt as the standard for the organization? Although we discussed Debian, there really wasn't much in the way of commercial support available for it at that time. There were a few smaller contenders, but really only two distributions with support organizations behind them: Red Hat and SUSE.

The conversation proved to be brief: Red Hat had a North American presence and SUSE really did not. Decision made: we will work with Red Hat. No big RFP, no committee debating endlessly, no non-technical people weighing in. It really wasn't a complicated decision with a bunch of angles to consider, but in an organization of any size decisions of this nature are not usually quick nor easy. But this one was: Linux deployed on HP servers it is - RHEL 5 specifically. As an aside, or to make this post (essay?) even a sentence or two longer... this was about the time that blade servers in the x86/x64 world were becoming popular.

And Red Hat was great to work with. They brought senior technical people to the table as we formed the architecture, as we built deployment scenarios and scripts, as we struggled with some HP driver idiosyncrasies, because this was brand new hardware and there weren't many companies deploying RHEL on these systems at that time. Because of the objectives for the project, we also needed new networking standards across the critical part of the enterprise. So basically everything was changing, and changing all at once. There aren't too many opportunities in a large (or at least large-ish) organization to make this kind of wholesale, fundamental change to systems architecture.

Now the previous application running on the mainframe was not sexy; it was not modern by architectural standards of the day, or perhaps any day; some would call it clunky. But it was reliable. It was rock solid. Primarily it was reliable because the mainframe took care of the availability. The redundant systems had redundancy, so to speak. When the IT organization was getting all excited about deployment on x86 systems running Linux, there wasn't a whole lot of thought that went into the fact that high availability was moving from the vendor platform to the application itself. And that is not to imply negligence on anyone's part: the blade systems and RHEL were really an unknown in many ways. The infrastructure teams were developing build, deployment and management systems and processes. The development teams were tooling up and learning their way around Linux builds. It was exciting, but everything, everywhere was new. And that teacher Experience had much work to do.

I remember quite well the first full system outage in a test environment that paralleled production. The system had gone down and gone down hard, mere weeks before the production cutover that had been announced to customers. And the resulting hair pulling and "aggressive discussions" from the senior project management team, demanding that Red Hat and HP be dragged to the table by the nose to explain this egregious sin and what they were going to do so that IT NEVER HAPPENED AGAIN. In true "canary song into the hurricane" fashion (a more polite version of the usual expression), we tried explaining to the assembled lofty ones that we would be happy to escalate the issue, but also reminded everyone present that apps crash in the distributed compute world, especially brand new ones hot and fresh out of the coding, OS and hardware ovens, and that there was no high availability built into the platform the organization had chosen "to save the money from the mainframe." But Red Hat and HP were brought to the table and escalations were made and angry outbound phone calls lit up our PBX.

Once the dust settled and the organization was satiated and some changes on both the systems and the applications were implemented and tested, the roll out and cutover were successful (stressful as expected but successful). High availability was not yet particularly designed into the system in any traditional sense but the uptime was markedly better.

Over the years after that project, numerous other projects and migrations, particularly from Solaris, were implemented successfully. The discipline around managing RHEL systems grew and grew, and before long, as I looked around, I realized for the first time that there were Linux grey beards in our midst. Men and women who had earned their stripes in the trenches of deploying, managing and, yes, fixing Linux systems. Now there were Linux systems everywhere: throughout the data center, running on virtualization platforms, being hosted by AWS and Azure. We had come from a world where Linux was viewed as a second-class citizen, an anomaly, something to try if you couldn't afford "the real thing," to one where it was the standard. And I know that this story has been repeated in countless other organizations; I do not attempt to record what is unique, but for me this was my personal experience and, hey, this is my blog and no one is making you read this - assuming there is anyone who ever reads this besides a search engine robot.

Of Purple Hats

On my personal systems I have come from Slackware to Debian and its variants, with many stops along the way. I have always respected the enterprise value of Red Hat from a support perspective, and I never had any regrets that we chose it for the organization I worked for in that brief meeting in my office so many years ago, but I am more of a DIY guy and just like .deb more than .rpm. Yes, Ubuntu and friends have a place in my history and I still have some machines running Ubuntu at this moment, but I find myself veering more and more to Debian native, without getting into the thorny subject that is systemd. And closely related, I have been fascinated by the emergence of the Raspberry Pi, the plucky credit-card-sized computer that runs Linux on its ARM CPU. The hobbyists and serious industrial systems builders have done some amazing things with them. And now there are a number of competing SBCs (Single Board Computers) in that space. And that competition is healthy. It pushes everyone involved to do better.

The relatively recent acquisition of Red Hat by IBM has launched many posts across the Internet - go ahead and search reddit or The Register, I'll wait, if you haven't been bombarded with them in the last ten minutes. The big commercial names in UNIX (IBM, Sun, DEC, HP) were always viewed as diametrically opposed to everything that Linux stood for. One side staunchly corporate, proprietary and expensive. The other free and open. Historically, no one would be able to "play" on a UNIX machine in their own home, not in the 90s or early 00s, or at least until the dotcom meltdown when some hardware from defunct companies was available in dumpsters. Cue someone posting the link to the mainframe kid as a counterpoint. But in 2024, if you haven't personally run Linux in a VM or on bare metal, you probably know someone who has. If not, you or someone you love uses an Android phone or a set-top box. If you have ever visited a website owned by Google or Facebook, you were interacting with a Linux system. Linux has become part of the fabric of not just the "tech community" but typical, mainstream society, even if much of its use is hidden from view. And that level of penetration is something that the UNIX world never even aspired to nor was interested in pursuing. Can you imagine the licensing costs for every device? Talk about a non-starter. I leave it as an exercise for the reader to insert your own joke about Java here.

So now with IBM, aka "Big Blue," owning "Red Hat" ( = Purple Hat ), what does that say about the state of Linux in the enterprise? What does it say about the (relatively) eternal struggle between proprietary and open source? Particularly as companies are aggressively trying to redefine their software licenses and who may benefit from their 'open source' software. I can't say this better than Jeff Geerling did in his post. One facet of the acquisition I can't help but find humorous: there is an old joke in the industry, "No one ever got fired for picking IBM," although it is told with many variants. It harkens back to the time when IBM support contracts were solid, tangible things. When, if you had an outage, a technician would be dispatched to your site and that issue would be resolved. And their hardware and software had really smart engineers standing by to troubleshoot and resolve issues. Middle of the night and the system is in a shack in the middle of the desert? No problem, a technician will be dispatched and on site. By this, I do not imply that IBM's current support services are below industry standard; my point is those were different times. And in my experience, Red Hat had become that same place: the well known, the comfortable, the reliable. And there is absolutely no question of Red Hat's contributions back to the Linux community and the kernel and other software over the decades.

But the landscape has fundamentally changed with Red Hat's decision to terminate CentOS - which they vehemently insist had nothing to do with their new feudal lord. This has been a watershed moment in the industry or a clarion call if you will.

Now, if you aren't familiar with the details, CentOS was the binary-compatible distribution of RHEL. This is more thoroughly and eloquently explained on the [Wikipedia page]. The original use case for CentOS in practice often was: run CentOS if you or your company didn't need support from Red Hat. And it proved to be very effective in my experience. A sysadmin ran CentOS for his/her own needs at home or for side projects. Some companies ran CentOS in non-production environments, and then RHEL for production. All of this was in full compliance with licensing terms. And it was quite brilliant for Red Hat stickiness, because for all intents and purposes it was RHEL, just without the trademarked bits like the Red Hat logo. But you were on your own to find support, with your internal expertise and your judicious web and technical mailing list searches. You simply couldn't call 1-800-Red-Hat (and no, that is not the support line, don't call it) without purchasing a support agreement and running RHEL. And that was the real value proposition of RHEL: you got all the Linux goodness with a support agreement to get Red Hat's grey beards on the line if it went sideways.

And IBM obviously did their due diligence and determined that Red Hat was worth the $34 billion to add to the fold. And from an enterprise perspective, the one who controls Red Hat controls Linux and the revenue streams therefrom. Now I do not say this as an attack on IBM. Any corporation is driven by 'share-holder value.' And technology companies over at least the last 50-60 years are no different than oil and gas or transportation or big pharma: their very existence requires that they pursue tomorrow's dollar while collecting today's. And with roughly 300,000 employees, they no doubt have smart business and technology people.

With IBM's latest acquisition, HashiCorp (for a cool $6.4 billion), they are positioning themselves with a stronger cloud presence, not primarily in selling cloud services (which, I'm sure, they would be happy to sell you if you ask), but in controlling and selling some of the fundamental cloud building blocks: Red Hat at the Linux OS layer, HashiCorp at the software and services provisioning, deployment and management layers. They want to sell to the enterprises and the cloud companies, where the real money is, just like the mainframe days when a single system often ran into the millions of dollars with software, hardware and support contracts. And while commercial software raises the hair on the necks of some die-hard Linuxistas (yeah, we used to say things like that), there aren't many in the industry who begrudge a company making money. After all, many of the developers, sysadmins, architects, engineers and others enjoy eating, and being employed can be useful in achieving that goal. So it is not simply a matter of open source = good, commercial = bad. Part of the issue revolves around who is in control and what their motivation is to 'do the right thing.' To the community that springs up around a popular piece of software, "the right thing" rarely involves maximizing profit. To a corporation, that is always the prime concern.

Now comes the rub for the argument of proprietary versus open source. Both Red Hat's and HashiCorp's flagship software, CentOS (aka almost-RHEL) and Terraform (among others), were licensed as open source, and the reaction from the community has been to create forks: AlmaLinux, Rocky Linux and others as 'enterprise operating systems' compatible with RHEL. I get a kick out of the expression on the Rocky Linux website: "100% bug-for-bug compatible with Red Hat Enterprise Linux®". And the open source fork of Terraform, built from the last release prior to HashiCorp's license change, is OpenTofu.

So there is this pivotal, basic, foundational premise to 'open source' licenses like the GPL, which dates back to before Linux even existed: a community can take the code and start a new project if the current custodian is not behaving in the best interests of the community, or for any reason whatsoever. That is not to say the fork will be easy, or even successful. But it is permitted by the terms of the license to the code. And if many of the contributors to the original project leave to join the fork, at what point does the fork become the one true project while the original withers and dies?

At least from the time of the Robber Barons, this story has played out: those with a powerful profit motive will use their authority at the expense of the common people. The difference that open source and its licenses tried to address is this: the people can take their wheat and move. Now I will grant that this analogy is a bit of a stretch, because software in and of itself isn't as important as life's necessities. But with hundreds of thousands of people in the software industry needing an environment where they can buy food for their families, it doesn't seem like that far of a stretch.

So then what is the answer, as if there were a single answer? Is it simply a matter of everyone agreeing to get along? Is it the hope that corporations will find some enlightened new path that isn't purely driven by profit? Is it a sign of a healthy code base when the main corporate sponsor changes the license and everyone just moves to a fork with little hoopla?

How would I know? I'm just a grey beard still playing with Linux.

In case you are curious: yes, the purple-hatted, grey-bearded images in this post were generated with AI, specifically through Bing's interface to DALL-E. The one directly above is my favorite. The grey beard appears to be modifying or fixing the typewriter rather than simply typing on it. A fitting analogy to the core essence of Linux itself.

Now Microsoft's evolving role in Linux and the open source world is a fascinating history, and perhaps I will post on this topic in the future. It is most interesting that many Linux-adjacent things that Microsoft owns/controls (such as GitHub, right here) have arguably improved under their custodianship.

And another phenomenon: with Microsoft's Windows Subsystem for Linux, and with macOS being based on BSD, we are at the point where all the common platforms are POSIX compliant, or close enough. What does that mean for the future of applications? Something else to ponder in the mirror.

Using Logseq as a second brain

Where I came from and how I got here

I have been somewhat of a serial experimenter when it comes to personal note taking and the apps that support it. I have been interested in the commonplace books of yesteryear since first reading about them a number of years ago. What has been preserved from Leonardo da Vinci and Alexander Graham Bell, as just two examples, is amazing. I still use a fountain pen and physical notebook from time to time, but the majority of my notes are digital. I have fond memories of the Palm Pilot and that early feeling of digital note taking with a stylus: the strangely satisfying resistance of plastic on plastic on the screen.

My note taking sojourn made a significant stop at Evernote many years ago, which I quite enjoyed but became embittered against when features that had been included in the free tier were moved to a paid tier.

Side rant: when companies take free features and start charging for them, I have an automatic, visceral repulsion for said app or service. I have no objection to differentiating the free tier from the paid one, but be upfront about what is paid for and what is not. Every company that does this seems convinced that no one will re-consider their solution stack as a whole simply because of the vendor's financial mutability.

I started using OneNote because I already had a Microsoft subscription (eyes wide open on costs and features provided), and the experience of syncing via OneDrive between Android, iOS, Windows and Mac works quite well, except when it doesn't, which in my experience is rarely.

I had heard about people using Obsidian and other note taking apps that were more free-form or outliner-specific rather than using pre-structured folders, tags and pages, but I didn't pay much heed. I was reasonably happy with my ad hoc structure in OneNote and quite frankly didn't record enough on a daily or weekly basis to be fussed about any alternatives.

And that was kinda the problem: I wasn't using my system as a 'trusted system' in GTD parlance. I had data scattered haphazardly across lots of platforms and places:

  1. OneNote
  2. Paper notes
  3. Too many browser tabs because they had reference material I (may) want to revisit.
  4. Email
  5. Text files in GitHub repos

I came across some videos (thank you, YouTube algorithm? How did you know? Wait, don't answer that, it's better if I don't know.) about using Logseq as a second brain and how others were using it for personal knowledge management.

Three samples that got me thinking.

Now, I subscribe firmly to the idea that brains are best for thinking and problem solving and are far too valuable a resource to burden with trying to remember every trivial (at least at the time) matter. So having a trusted system appeals to me on many levels. The tough part is "trusted." By 2024 I think it is safe to say that most knowledge workers have been impacted by oh-so-cool-corp dropping/cancelling/selling/changing the pricing model of their whiz-bang widget/app/service overnight.

I tend to approach new systems with less excitement and more caution than I once did. But I was willing to give it a try. And so I set a goal of trying Logseq for three months to see if it was a system that would work for me. It just so happened to be the first week of January, so the three month trial would run until the end of March 2024. I was curious to see whether the "record things in the journal and don't worry about organizing the information up front" approach was going to work for me. The month went by, and looking back I had missed making notes on only one day.

In the second month, February, there were a few missed days, but I was on vacation for part of the month and wasn't fussed about recording in the journal while away. I didn't force myself to do it; I merely recorded anything I thought worth the effort. March came and went, and looking back I saw that I had put something in the journal each and every day. It was interesting to reflect on as the month ended: I was putting things into the journal regularly and had, on more than one occasion, referred back to something I had recorded. A few themes had emerged from my weeks of data capture, and I spent a few minutes organizing pages around them, but I never once worried about a rigid structure or whether I "had it right". I had almost sub-consciously applied tags to some of the journal entries, and when reviewing the resultant pages, I made some minor adjustments, adding headings or applying formatting. One such page was this very one, where I had tagged some Logseq-specific information captures; when I reviewed it, I was motivated to do a little organizing.

My use a.k.a. what's working for me and what ain't

Things I like

  • Text backend is huge. My data, I am in control - backups, replication, opening with other tools
  • My three main platforms for daily compute are supported - Windows, Android, iOS
  • Using the journal entries as the main entry point frees the mind from thinking about organization first. Content first approach appeals to me.
  • I think it is making me capture more "rough" content for my future self to access and maybe organize or maybe just have available via search.
  • Ease of creating tags

Things I don't like (so far)

  • Larger blocks of text I want to paste in can't easily be edited. This 'everything is an outline' approach is a double-edged sword.
  • I don't know enough about the upcoming "database version" but I am concerned that it may corrupt the beauty and simplicity of "text files are the backend" philosophy.
  • Too many orphaned files, seemingly from Android replication, are annoying.
  • Many of the articles highlighting what can be done with Logseq focus on plugins. My primary device is my iPad, and plugins only work on the desktop app.
  • The roadmap and updates from the dev team seem sporadic at best.

Conclusion

So what pithy conclusion shall I draw from my experiment of learning and using Logseq for nigh on four months? Overall, I am satisfied with the product and my results. Do I think it is the be-all, end-all of note taking? No. Do I think it is a system everyone should adopt? No. Although it is marketed as simple (because text), in reality there are many finer points to its use that may be off-putting to the average person, which is not to say you should avoid trying it.

For me, the real question is: will I continue using it now that the experimental period has ended? And for me, that answer is yes. The second brain concept clicks with me, and the "everything starts as a journal entry" approach works for my way of thinking too. And the fact that the effort I have put into learning a bit about Logseq has resulted in some simple text files that I have replicated, backed up, and kept accessible is valuable. If tomorrow I decide to abandon Logseq, my data is still in text files: files I can manipulate in vim or Python, with sed or awk or bash. Of all the things that Logseq is and isn't, that is the real advantage for me.
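To make that concrete, here is a minimal sketch of the kind of manipulation a plain-text backend allows: counting tag and page-reference occurrences across journal files. It assumes Logseq's default layout of daily journals as `YYYY_MM_DD.md` files under a `journals/` directory; the directory path and the simple regex are my own assumptions for illustration, not anything Logseq itself provides.

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical graph location; by default Logseq keeps daily journal
# files under <graph>/journals/ named YYYY_MM_DD.md. Adjust as needed.
JOURNALS = Path("journals")

# Naive match for #tags and [[page references]] in Logseq-flavored markdown.
TAG_RE = re.compile(r"#([\w-]+)|\[\[([^\]]+)\]\]")

def tag_counts(journal_dir: Path) -> Counter:
    """Count tag and page-reference occurrences across all journal files."""
    counts: Counter = Counter()
    for md_file in sorted(journal_dir.glob("*.md")):
        text = md_file.read_text(encoding="utf-8")
        # findall returns (hash_tag, page_ref) tuples; one side is empty.
        for hash_tag, page_ref in TAG_RE.findall(text):
            counts[hash_tag or page_ref] += 1
    return counts

if __name__ == "__main__":
    if JOURNALS.is_dir():
        # Print the ten most-used tags/page references.
        for tag, n in tag_counts(JOURNALS).most_common(10):
            print(f"{n:4d}  {tag}")
```

Nothing fancy, and the same job could be done with a grep/sort/uniq pipeline; the point is simply that the data stays open to whatever tool you reach for.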

This markdown file was created in Logseq.