Tuesday, March 31, 2015

"devops is not magic"

This is a great presentation for hiring your first ops engineer.

  • hard problems
  • keep it simple
and many more

Saturday, March 28, 2015

Advanced Golang Generate

The go authors describe the go build process as:
go generate
go vet
go test
go build
or something to that effect.

Well, this works great for standard applications, but what happens when a package your project depends on also requires a generate step?

I hope someone else has an idea while I investigate.

UPDATE: go build/generate does not do anything with dependencies. It seems they are expected to be complete in their published form. So if there is an opportunity to "generate" in a dependency, it should be done when the dependency is released. Anyone?

Tuesday, March 24, 2015

fishshell and the makefile

I had a number of problems getting my makefile to work with fishshell. The first challenge was making GOPATH make sense, but that was cleared up by an article by Andrew Gerrand. Basically it came down to two tips: (1) more than one item in the GOPATH is ok, and (2) the first item in the list is the vendor folder. Voila.

But if you are like me and you like fishshell, then the config file is bothering you. So let's start at the beginning.

I have a simple Makefile with one target:
    env:
    	@echo "GOPATH=$(GOPATH)"
In my fish config file (like .profile or .bashrc) I have the following:
set -g GOPATH $HOME/gorepo $HOME
The -g flag sets GOPATH in fish's "global" scope, meaning this session or terminal window. Notice that I'm assigning both $HOME/gorepo and $HOME to the GOPATH variable.

But something interesting happens when I execute the command make env: the GOPATH it prints is not the value I just set, and I know this is the wrong value.

After re-reading the man page for the set command I discovered the '-x' option. This tells fish to export the variable to child processes. This is odd behavior because it was not obvious that the options could be commingled, and the documentation never suggested it; -g alone should have been enough.

Anyway, after adding the -x to the -g and re-running my make env command, I got the right results.

This is good news for me as I was getting ready to cut bait.

Saturday, March 21, 2015

to Golang HOME or some place else?

I'm trying to clean up all the junk on my computer; and there is a lot of junk. The hard part is the quantity of files, not the size. Following the idiomatic Golang way there are at least 3 places where "we" store packages.

(a) GOROOT - which is typically reserved for the compiler. Go's get function may let you install there if you override GOPATH for that request, but it's probably not the best thing to do.

(b) setting GOPATH=$HOME - this is the usual way to do things; however, this means you are going to end up with a src, pkg, and bin folder in your $HOME. This is not a totally bad thing, just messy. Partly because you have 3 new folders in your HOME, but also because you could inadvertently shadow another executable with the same name in your bin folder... as this bin folder is shared between your normal apps and your apps built from source.

This gets messier because the order of the paths in the PATH environment variable is the order in which they are searched when looking for an executable... so you could end up executing the wrong one.

(c) setting GOPATH=$HOME/gorepo - this means that the 3 folders I mentioned would be located in the gorepo folder. This seems better because we have segregated the packages and binaries from the rest of my everyday stuff. Development is partitioned away from the everyday.

Now the only concern is that while I'm importing code from every corner of the github and bitbucket universe, I'm polluting my gorepo folder with code that I did not write, vendor code that others did. Frankly, there are times when I just like to clear the decks and delete the vendor code. Now what to do?

  • create a gorepo and put my code in there
  • inside gorepo create a vendor folder shared by all
  • implement proper project vendoring when applicable
  • make sure my projects use the proper folder structure that maps to the VCS path
  • fork projects to your repo when practical
  • "configuration as code" - use a Makefile with simple commands
that should do it... even though it still means having more than one path in your GOPATH.

QUESTION: Where do you put repos that you want to inspect, fiddle with, and possibly fork later on? Since the full path is part of the namespace, if your fiddling includes importing these projects you're going to be forced into changing the imports. So maybe "import early"?

UPDATE: Rats! Maybe GOPATH is just better being $HOME and sucking up the deps in the same folder?

UPDATE: Andrew Gerrand wrote a good article on the subject.

"unsubscribe from all in 10 days" is a lie

For the uninitiated: unsubscribing from a mailing list should not take 7-10 business days. It certainly does not take that long to add you to the mailing list, so why would it take that long to remove you? Certainly there are no humans making decisions or manual database deletions... in that case it would actually cost the marketer more to delete the eyeballs than to acquire them.

The fact is, they want 10 days to (a) sell your contact info to someone else, (b) spam the crap out of you over the next 10 days, and (c) hope that by day 10 you've forgotten and... maybe... you'll have to unsubscribe again.

It would be great if google could enhance their email product to do some inbox magic in this space.

Friday, March 20, 2015

venturehive spawns an idea

I gave a talk for Go-Miami at VentureHive in Miami last night, and given the way traffic can be in the area I left work with plenty of time to get there and find a parking space. Being there 60 minutes early, I tweaked a few slides and prepared for the talk. I could not say enough about the place.

This morning an idea hit me.

In the 1990s I was introduced to Hops International. While they currently have lush office space; when I met them they were working out of several apartments or condos on Miami beach. And so I asked myself if there was some value to converting some of these older buildings to a hybrid incubator/residence.

Wednesday, March 18, 2015

golang and vim where are you now

It appears that the Go team has removed the vim syntax highlighting files from their source. Subsequently a 3rd party took over... and then recently he gave up, and everyone seems to be moving over to vim-go (or maybe it's go-vim).

Upon reflection this project is a poison of a sort.

First and foremost, the project depends on a package manager for vim. Then there are the 5-10 packages you also need to install as dependencies; and depending on the specifics, you might be running the wrong version of vim, or the deps might require lua and/or ruby. And that's just for starters.

So simple syntax highlighting seems to be off the table (from the go authors). Luckily for me there are a number of other choices even though the terminal editor might be a thing of the past.

atom.io filetypes

I've been playing with a go package called safekeeper. The way it works is that it takes a xxxxxxx.go.safekeeper file as input, does a search/replace on the environment variables that you specify, and outputs a .go file which is then compiled into your project.

One of the things I don't like about the project is that since one of the input files is, in fact, a .go file with a different filename extension (.go.safekeeper) the atom.io editor does not know what syntax highlighting to apply.

Thankfully there is a package called file-mode, which you install in the usual atom way, so I'm not going to detail that here. But I had a number of challenges getting things to work once the package was installed... and while I do not have a specific recipe that made things work, in the end it did.

  • install atom.io if not already installed and launch it
  • open the settings tab
  • search for and install the file-mode package
  • it might be a good thing to restart the editor here
  • open the file you want to edit
  • add the sentinel string anywhere in the file. Some editors like it on line 1 and others at the EOF
  • For go, my tag looked like:
    • // -*- mode: Go -*-
    • I used uppercase 'Go' because when I opened a real Go file I noticed the file-mode indicator in the bottom right
The first few times I tried, it failed; it might work for you. The documentation is sketchy, but it recommended reloading the file after adding the sentinel. I tried that. I also tried restarting the editor. The very last thing I tried was ctrl+alt+l (that's a lowercase L), and that worked.

Atom.io is a fast-moving editor. There is a release every two days or so. At least 2 of the packages I depend on are updated even more frequently. I'm not a fan of javascript editors like brackets and atom for silly reasons, but since I'm trying to build my own web based IDE I might like to integrate with Atom. Brackets, on the other hand, appears to be a RAD tool.

Sunday, March 15, 2015

Knowing the numbers - A lesson from the 'Shark Tank'

I'm not a shark and I didn't stay at a Holiday Inn but I have watched enough 'Shark Tank' to have an opinion as to what it takes to get some cash from these guys and I suppose the criteria is the same everywhere. And if I were a shark I would probably do the same thing.

The Rules:

  • sweat equity is considered free. It's a required component but always free.
  • Know your numbers
  • Having an idea alone is not good enough
  • Protecting your investment with Patents or high cost of emulation or reverse engineering is mandatory
  • You gotta have sales and they must be trending up
  • Having backlogged orders is good but only a fraction of the value because it's execution not investment
So don't expect to come up with an idea and have the sharks fall over themselves without:
  • product
  • customers
  • orders
  • protection
Basically the perfect deal from the beggar's position is not needing the money at all. (somewhere in there I find myself recalling a quote from a Harvard MBA business class)

I remember when I was a "rocker", or wannabe rocker. The band had been together for a year and we thought we could get a record deal. First of all, the record companies would not accept demo tapes and such. But more importantly, the bands that had been signed had a lot more time on stage. They had the people and the product. They needed the record company for the process.

Saturday, March 14, 2015

Aggregated logging - the Google lesson

Whenever I build or deploy distributed systems the topic of log aggregation always pops up. I wish it were a difficult topic because then there would be some money to be made with a solution and with the number of times I've built these systems... I'd be very wealthy.

A friend of mine pointed me to a service/application that I was already familiar with. Bosun is a monitoring application from the makers of Stack Exchange. If my memory serves, this system is the first Go application written by Fog Creek/Stack Exchange. I do not remember my exact first impressions, but looking at the code today I see a number of questionable practices. However, in the end it comes down to answering a few brief questions which will all direct you to the same place.

- are you going to aggregate 100% of the messages from the system being monitored
- how big is that 100%
- if you have 100 or 20,000 systems being monitored will the log aggregator be able to hold all of that data
- what is the data retention policy and do you have enough storage
- how long is it going to take to aggregate the data
- when it's time to start deleting data, will the system be available (usually not)
- what sort of queries will need to be performed on the data
- will you map-reduce
- what happens when the primary aggregator fails
- replicate the primary DB to a hot backup
- how many users will query the data in real time
- what sort of monitoring and alerting dashboard is there
- clock drift, latency, and queuing cause event-ordering issues

And so on...

Basically, if you think you're going to aggregate the logs from 20K servers to some sort of logging/aggregation server like logstash, loggly, new-relic then you might not actually know your data or your systems.

Google has considerably more servers and yet they do not do log aggregation. Google performs event aggregation. When something happens that requires intervention then the system being monitored sends an alert to the monitor which then alerts the appropriate staff. This is an actionable event. If the event requires more information, as in the real logs, then the operator or SRE must log into or request the logs from the alerting system.

Thursday, March 12, 2015

golang project and package names

"Package names are central to good naming in Go programs. Take the time to choose good package names and organize your code well. This helps clients understand and use your packages and helps maintainers to grow them gracefully."
-- Sameer Ajmani (Package Names)
I started playing with gotcl and one of the first things I did was fork it and merge the project with gotclsh. I even added a few examples in order to make sure it worked; and it did.

Next, I started working on a framework where tcl was going to be an interpreted scripting language used as part of the "work" that needed to be accomplished. It's possible to call this work something like the node in a flow based program. In this case it was part of the details of the installinator/macroinator which I've written about.

When I initially merged the tcl projects I named the project 'tcl' and did not care a lick that I had changed the project name but not the package name. As I started practicing with my project and working out the basic kinks I realized that there was a problem with the repo name and the project name. In the end I renamed everything back to 'gotcl' and went to bed.

After a day of banging on the keyboard and a relaxing drive to the local toy store I realized that "we" had not read the book. Actually there is no book but there is plenty of common sense missing.

  • The project is written in go, so there's no need to name it gotcl
  • The project cannot be compiled unless you use the go tools
  • Go in the package name is redundant and repetitive
  • It's silly but that's two more characters to type... everywhere
  • Looking at the number of curated go projects with go in the name... shame and embarrassment
The go tcl package should be named tcl and only tcl. Nothing else. It's time to reread that article above.

Choosing a Tiny OS to run your Docker containers

The space is starting to get crowded:

  • CoreOS
  • Boot2docker
  • RancherOS
  • Project Atomic (Fedora, CentOS, RedHat)
  • Ubuntu Snappy
  • OpenStack
  • VMware

CoreOS is the most production ready of the group. The alpha channel supports the most modern versions of all of the tool chains except etcd (which is surprising).

Boot2docker is tuned to run docker but it's RAM only and is well documented as a development only platform. But it works well with the exception that it is not capable of sharing host folders as volumes on the container.

RancherOS is interesting in that it's a total immersion in the container ecosystem. Even PID-1 is a container. I imagine it's going to work because either it works or it doesn't and it's obvious. The authors are very clear that this project is VERY alpha.

Project Atomic is probably production ready. That it spans Fedora, CentOS and RedHat is interesting but not a make or break. The last time I tried to install the Fedora version it took several days to make my way through the documentation. I imagine the next time I go through this it's going to be easier, but there is something to hate about having to convert image formats before importing that makes this a bad experience. UPDATE: Atomic might be the most secure host OS due to the influence of SELinux.

Ubuntu is clearly one of the grandest Linux projects. They recently produced Snappy as a me-too in the tiny linux distro space. I have tried to deploy it a few times but with little success, as I refused to read the documentation. I will have to revisit that.

OpenStack and VMware are attempting to build Docker shims so that the container feels like it is running on a host. The details are sketchy, but the promise is to let devops leverage their existing tools and environments to run Docker containers as if they were VMs. I have not been able to compute the savings as yet, not even in bold strokes. In a proper Docker installation where the guest is running in a scratch container the benefit is clear. But when running on top of a full distro like ubuntu there is some OS overhead that is incurred. By inference, when a container is running on top of a dedicated kernel shim the costs may be no different, or only marginally better, than running in a proper VM.

As for Docker, I'm still on the fence between Docker and Rocket.  The CoreOS team clearly has a better handle on the security issues and yet the Docker team is trying to get marketshare. Unless you're running in a multi-tenant environment the rocket trust model might not be useful. Also, with Apcera Continuum the policy layer is implemented and appears to be much stronger than the Rocket trust. But we still need container standards!

Good luck to the teams. 

Wednesday, March 11, 2015

'go get' private libraries

It's early in my investigation; however, when collecting the packages my project depends on by calling the `go get` command in the go toolkit, I get an error because the package is private and is protected by an account username and password. Rats. The only way around this is going to be setting the github and bitbucket configuration with my credentials. Of course that means leaving plenty of breadcrumbs as I `go get` the public libraries.

The good news is that the private library in question is in my own private repo. It's just not any fun.

I think the fossil community needs two projects: (1) get the go build tools to work with fossil, whether that's implementing a shim so that fossil responds like a git repo, or (2) adding the native commands to the go tools so that they can work as-is.

The fossil scm is awesome. If paired with go we could see yet another renaissance. There is something to be said for "the unix way" but when combined in a "turbo" way it's just a new level of goodness.

Friday, March 6, 2015

tools for the installinator

I've found the first two extensions to the installinator/macroinator. lisp and tcl. One of the interesting things I noticed was that both languages, especially tcl, might work well on the command line for a REPL. For example in tcl
$ tclrepl fcopy "file1.txt" "file2.txt"
and then in lisp
$ lisprepl (fcopy "file1.txt" "file2.txt")
Inside the REPL I'm pretty certain all I need to do is join the args back together. The quoted params are already handled by the basic command line parsing. It's also interesting to note that a space is the separator character in both languages.
executeMe := strings.Join(flag.Args(), " ")
and then send the executeMe to the interpreter. I have not decided on the exact command line but it should be pretty simple. There are only a few things that the REPL actually needs.

  • optional configuration file (-c)
  • optional file to execute (-f) 
  • And the remaining args are what will be executed after the file(s) have been loaded.
First load and execute the configuration file. The config file might import or include other files of a similar type. They will all be executed as they would in their native language. Anything at layer 0 will be considered a global.

Next the executable (f) file will be loaded and executed. Once this file returns to the main() then the command line is checked for a possible command to execute.

Internally, when the macros are individually initialized they should be adding to the language reference instance and connecting to the installinator macros. The initial work will be started by wrapping the macroinator with some libraries. This way I can POC before migrating the remaining installinator functions.

Wednesday, March 4, 2015

I will be talking about Go at Go Miami

On March 19th I'll be speaking at Go Miami meet up. I'll be talking about Configuration as Code in Go projects. 
As I was investigating an implementation of flow based programming in Go, I realized that I needed an installation framework, and that there were some similarities to frameworks like Chef, Puppet, Ansible and SaltStack. Meet the “installinator”: a “configuration as code” implementation of a DSL framed in Go, using JSON for its configuration, and thinking deeply about “Compose with functions, not methods.”
I plan to talk about: Macroinator's design, tcl as an embedded DSL, a little flow based programming, and whatever I can do in the short time I have left.

We hope you'll attend. If not, it will be recorded and possibly live-casted.

Tuesday, March 3, 2015

When to reinstall your OS?

I remember back in my Windows days when I would reinstall Windows regularly. All it would take was a few too many BSODs, or perceived slowdowns, or a new release from Microsoft.

Several days ago I bought a 2TB replacement drive for my backup MacBook unibody. When I first installed it I used Carbon Copy Cloner to move the entire contents of my 256GB SSD onto the 2TB HDD. One thing I noticed was that things were really slow. Second thing I noticed was that not everything worked. Did I mention it was slow?

Since then I reinstalled OS X Yosemite from USB (no recovery partition on this machine). And now things are humming along nicely. The performance has been restored. I think there are a few explanations.

(1) the density of bits on the drive is higher; although it's a 5400 rpm drive, the bits are more closely packed and so they are probably faster in and out of the various stages of the hardware.

(2) The drive name changed from the SSD to HDD. That means that certain apps that knew the full partition path were failing. Those failures must have escalated into various parts of the OS, thus turning into perceived slowness.

The bottom line: OS X needs to be reinstalled to fix perceived performance issues, just like Windows. It looks like the distance between them is narrowing.

Monday, March 2, 2015

if you have etcd should everything use it?

For many years after Microsoft deployed the registry it was the hell of all hells. It was the one thing that could kill your windows server or desktop and could render your machine unusable. In some cases you could boot into single user mode and repair it; years later there was a snapshot tool; and a plethora of 3rd party tools that did the same.

The etcd project from CoreOS defines etcd as "A highly-available key value store for shared configuration and service discovery".

Talking about Service Discovery,

If you're deploying a gaggle of applications in your environment, they may need to discover each other in order to communicate on some level. This function is/was traditionally performed with a DNS server and/or configuration files. In that environment, systems and services were typically static; however, in the world of containers and virtualization anything can be anywhere, and worse yet the IP address can change frequently. High availability (HA), system hardware and container failures, and system/application upgrades (blue/green) can make the entire system unstable.

One challenge for DNS is that it has a TTL. It takes time for records to be updated and to pass through the system. On the other hand, the TTL can be set very low for frequent updates, and more importantly DNS is the authority on hostname-to-IP-address mappings. However, in the realm of service discovery DNS is useless. DNS is essentially a key/value store where the key is the fully qualified domain name (FQDN) and the value is the IP address. Of course there are other record types and other ways to search the DNS repository, but in the end this is what you get.

But what happens when you take etcd and put a DNS API wrapper around it? That's the premise behind SkyDNS. SkyDNS provides the same DNS APIs but uses etcd to store the data instead of other mediums like a traditional DNS server. This is ok for a number of reasons. First of all, having direct access to etcd means that the data can be updated using standard etcd queries. It also means that SkyDNS can implement a listener such that when records important to SkyDNS are updated, SkyDNS is given a poke so that it can figure out what happened and react.

The big benefit for this model is that the storage and replication is shared and can be easily backed up. Depending on what you know about DNS this could actually be good or bad. DNS replication may or may not be in realtime in the traditional sense. And in SixSigma/Root Cause analysis the notion that SkyDNS relies on etcd means that the aggregate system is inherently less reliable.

Now what is going to happen when you have 20 or 30 applications and services that use etcd in the same way you might previously have used postgres or some other replicated db? Here is the point where I leave you hanging. etcd is a replicated KV store, and there are many KV stores. I'm certain etcd is small (the rocket container is 3.5MB). Fleet and flannel and possibly a few other systems use etcd for configuration management, so it's clear that it works. But where is the line? Should it be limited to orchestration configuration management, or open to just any kind of configuration?

etcd does not support TLS yet, so security is going to be a hack for a while. What else is missing?

PS: If I had to deploy a DNS server right now I'd probably go with GeoDNS. It has a good reputation and is used by the NTP project. I'm pretty certain that it uses a config file instead of a database for the configuration. This also makes replication simple: it puts the burden on the application and the filesystem to replicate and slide the new configuration in. I do not see it offered, but I imagine that storing the configuration in a VCS repo and triggering changes would be very useful. As for services, it's time to start looking at the extended attributes.

Embedding TCL in my Go application

This has interesting potential as I consider the extensibility of both my Macroinator and Flow based programming projects. (flow is not published yet as it is in the middle of a complete refactoring)

The target OS, for this example, is OS X Yosemite. And the first dependency to install is TCL. I'm using homebrew to install tcl (alternatively Jim) instead of the Apple version or building from source.

It's important to note that while brew works in userspace and is highly curated this is still an attack vector for the bad guys.

1) install tcl with brew
brew update
brew install homebrew/dupes/tcl-tk
** note that tcl-tk is located in the homebrew/dupes folder. This is to indicate that the project tcl-tk duplicates some of the features in OSX.

** brew installs the proper tcl-tk, not jim, and does not offer Jim as an alternative. The tk portion of the install requires some legacy X11 libraries and that makes me very sad.

For the next step I need to install the tcl/go bindings. Looking at the landing page for gotcl I noticed that there are plenty of missing elements in the project, so I'm going to try gothic. (Gothic also supports tcl/tk 8.6, which is what we installed in step 1.)

2) install Gothic.go
env GOPATH=/Users/rbucker/gocode go get github.com/nsf/gothic
At this point unless you've previously installed X11 header files and libraries you're going to get a compile error about a missing header file. So at this point I abandoned all hope of the tcl-tk working.

3) uninstall tcl-tk
brew uninstall tcl-tk

4) install cask
brew install caskroom/cask/brew-cask
** cask is a set of edge case brew tools. I'm not sure if there is any additional risk loading from here but I'm trying anyway. My bunny sense is telling me I should have created an OS X VM and practiced there.

5) install tcl
brew cask install tcl
** This is a different version of tcl. This one happens to be from ActiveState. I like those guys. I'm not exactly sure how much is their code but if tcl is going to work these guys are awesome. If you're going to write proper tcl you might want to try their tools. I think they can produce proper cross platform executables but it has been a while since I read anything about them.

** one other note about installing this tcl.  I was prompted for my admin password. So my bunny sense was justified.  Onward we go.

6) install gothic

I tried the same command again and got the same response. So now I'm unwinding cask, just so I'm not giving up too much space, etc.

7) uninstall cask tcl
brew cask uninstall tcl
8) uninstall cask
brew uninstall caskroom/cask/brew-cask
** even though something was installed as an admin user; when I uninstalled I was not prompted for the same admin credentials to remove them. Hmmm...

Now that this was a FAIL... there is one other thing I'm thinking about. The piccolo project shows just how easy it is to implement a variation of the tcl language. In this case batteries are not included, but it feels like something I might want to do instead of building my own parser/compiler... although with my new-found experience with generate it might just be an option. But it's a topic for another conversation.

Another link to more tcl info.

UPDATE: This is an interesting implementation from the CoreOS team.  Mayday is a monitoring tool but their task design makes the whole thing interesting and more like I am intending except I still need a hybrid in order to be more general purpose.

Sunday, March 1, 2015

Broke the Ice with Java on OSX

My MacBook Air has been a java virgin since I bought the computer. After all that crap with Oracle and the bugs they let creep into Java, I had had enough of it and swore to never install it again. I had refused to install CrashPlan, GoToMyPC and IntelliJ because they used Java... regardless of whether it was embedded in the application or an external dependency.

Today I finally encountered an application that made it necessary.

When the application started it popped up a splash screen with a link to a website where I could download and install the JRE. It was pretty strange that the vendor did not mention Oracle by name but that is where I was directed to. I clicked the button to download and install my Java dependency and completed the installation task.  But when I started the application I received the same popup. I tried to install the JRE several times but nothing worked.

Finally I downloaded and installed Apple's version of the JRE. My guess is that Apple's version dates back to Jan 2014, but I cannot be sure. Obviously this creates some general concerns about versions, security, and Apple's deprecation when the time comes. (i.e., some MOV formats are no longer supported.)

Once I installed Apple's version of the JRE everything worked smoothly. Let's see what happens next.

safekeeper - go generate: substitute tokens with ENV variables

Safekeeper is a novel idea to prevent the need to put secrets directly in your code, where they might be stored in your version control system, thus exposing them. Instead, safekeeper uses go's generate functionality to process a template and replace the various tokens with their production values.

I like it but...

It would be easy enough to do; however, I'm wrestling with the idea at the moment. (a) How secure is it really if the build pipeline needs to keep this information in the environment? At some point it needs to be stored so that it can be restored. (b) Putting the credentials in the code makes the program the attack vector, not the environment. The application is going to leave echoes of itself as it's backed up, tested in staging, and so on. (c) With the discovery services associated with tools like etcd, this sort of substitution might be delayed until actual runtime instead of at rest.

So for now I'm trying it with one of my projects (macroinator). I might implement my own version using go's own template schema instead of their version. But that's for another day.

In conclusion I do not think it's a solution for secure access.

Nim namespacing

Reading this comment in a post:
Namespacing is very sloppy. Importing a module dumps the entirety of its contents into your namespace. Method calls are just syntactic sugar: a.b() is exactly the same as b(a), so methods are also in the globalish namespace. Seems to rely extremely heavily on overloading.
I applaud his commitment; the post was very long and addressed a number of topics. One thing that caught my eye was his criticism of nim namespacing, or lack thereof. I'm not sure I have a strong enough opinion either way, but one thing I have been noodling on is the notion of a monolithic codebase with a global namespace. Silly me, this is exactly how C does it, so it's only natural that Nim does too. Java and C# privacy modifiers are nonsensical since "we" typically have access to the source. The Go implementation uses case for privacy but implements package namespaces, which are easy to overlap, meaning having to use aliases.

In a related post: namespaces are supposed to provide some sort of orthogonal barrier between packages. One perfect example is the C compiler. Most C compilers prefix all user symbols (variables and functions) with a '_' (underscore). This way there is no possible way for user code to collide with the compiler-provided code and libraries.

If the compiler defined a variable SYS and the user defined a variable with the same name, the user's variable would be emitted as _SYS, so the two would not reference the same data or cause linker issues. 'C' does not address the issue between libraries: if you have two libraries that perform different functions but export the same name, you have a serious challenge ahead of you.

While I'm not invested in the subject one way or the other, I can see the merit, and making namespaces optional might create its own challenges. I think the answer is going to be some sort of meta approach that will satisfy both sides. Golang's generate subcommand seems to have some potential in this area, although current examples have been fairly trivial string replaces or Stringer code generators.

another bad day for open source

One of the hallmarks of a good open source project is just how complicated it is to install, configure and maintain. Happily gitlab and the ...