The Go authors describe the Go build process as:
or something to that effect.
Well, this works great for standard applications, but what happens when a package your project depends on also requires a generate step?
I hope someone else has an idea while I investigate.
UPDATE: go build/generate does not do anything with dependencies. It seems they are expected to be complete in their original form. So if there is an opportunity to "generate" in a dependency, it should be done when the dependency is released. Anyone?
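For context, here is what a generate step in one's own package looks like, using the canonical stringer directive. The key point from above: the directive only runs when the package's author runs `go generate`; it never runs automatically for a consumer of the package.

```go
// The canonical `go generate` example (stringer). The directive below is a
// comment the compiler ignores; only an explicit `go generate ./...` by the
// package author executes it - downstream consumers never trigger it.
package main

//go:generate stringer -type=Pill
type Pill int

const (
	Placebo Pill = iota
	Aspirin
	Ibuprofen
)

func main() {}
```

The file compiles fine even before stringer has ever run, which is exactly why a released dependency is expected to already contain its generated output.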
I had a number of problems getting my makefile to work with the fish shell. The first challenge was making GOPATH make sense, but that was cleared up by an article from Andrew Gerrand. Basically it came down to two tips: (1) more than one item in the GOPATH is OK, and (2) the first item in the set is the vendor folder. Voila.
But if you are like me and you like the fish shell, then the config file is bothering you. So let's start at the beginning.
I have a simple Makefile with one sub-command:
env:
	@echo "GOPATH=$(GOPATH)"
In my fish config file (like .profile or .bashrc) I have the following:
set -g GOPATH $HOME/gorepo $HOME
The -g indicates that I'm setting GOPATH in fish's "global" scope, meaning this session or terminal window (note that for a child process like make to actually see it as an environment variable, it also needs to be exported with -x). Notice that I'm setting two values, $HOME/gorepo and $HOME, on the GOPATH variable.
But something interesting happens when I execute the command make env. For some reason I get the following output:
I'm trying to clean up all the junk on my computer; and there is a lot of junk. The hard part is the quantity of files, not the size. As part of the idiomatic Go way, there are at least 3 places where "we" store packages.
(a) GOROOT - which is typically reserved for the compiler. Go's get function may let you install there if you override GOPATH for that request, but it's probably not the best thing to do.
(b) setting GOPATH=$HOME - this is the usual way to do things; however, it means you are going to end up with src, pkg, and bin folders in your $HOME. This is not a totally bad thing, just messy: partly because you have 3 new folders in your HOME, and partly because you could inadvertently shadow another executable with the same name in your bin folder... as this bin folder is shared between your normal apps and your apps built from source.
This is further messy because the order of the paths in the PATH environment variable is the order that they are evaluated when looking for an e…
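The layout described in (b), and the PATH ordering concern, can be sketched like this (an illustration of the Go 1.x workspace convention, not a recommendation):

```shell
# The three folders a GOPATH=$HOME setup creates:
#   $HOME/src  - your source plus everything `go get` fetches
#   $HOME/pkg  - compiled package objects
#   $HOME/bin  - installed binaries, shared with anything else on your PATH
GOPATH=$HOME
PATH=$GOPATH/bin:$PATH   # entries are searched left to right; first match wins
```

Putting $GOPATH/bin first means a Go-installed binary silently wins over a system binary of the same name, which is the shadowing risk described above.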
For the uninitiated: unsubscribing from a mailing list should not take 7-10 business days. It certainly does not take them that long to add you to the mailing list, so why would it take that long to remove you? Certainly there are no humans making decisions or manual database deletions... in that case it would actually cost the marketer more to delete you than to acquire your eyeballs.
The fact is, they want 10 days to (a) sell your contact info to someone else, (b) spam the crap out of you over the next 10 days, and (c) hope that by day 10 you've forgotten and... maybe... you'll try to unsubscribe again.
It would be great if google could enhance their email product to do some inbox magic in this space.
I gave a talk for Go-Miami at VentureHive in Miami last night, and given the way traffic can be in the area, I left work with plenty of time to get there and find a parking space. Being there 60 minutes early, I tweaked a few slides and prepared for the talk. I could not say enough about the place.
This morning an idea hit me.
In the 1990s I was introduced to Hops International. While they currently have lush office space, when I met them they were working out of several apartments or condos on Miami Beach. And so I asked myself if there was some value in converting some of these older buildings into a hybrid incubator/residence.
It appears that the Go team has removed the vim syntax highlighting files from their source. Subsequently a 3rd party took over... and then recently he gave up, and everyone seems to be moving over to vim-go (or maybe it's go-vim).
Upon reflection, this project is a poison of sorts.
First and foremost, the project depends on a package manager for vim. Then there are the 5-10 packages you also need to install as dependencies; and depending on the specifics, you might be running the wrong version of vim, or the deps might require lua and/or ruby. And that's just for starters.
So simple syntax highlighting seems to be off the table (from the go authors). Luckily for me there are a number of other choices even though the terminal editor might be a thing of the past.
I've been playing with a go package called safekeeper. The way it works is that it takes an xxxxxxx.go.safekeeper file as input, does a search/replace on the various environment variables that you specify, and outputs a .go file which is then compiled into your project.
One of the things I don't like about the project is that since one of the input files is, in fact, a .go file with a different filename extension (.go.safekeeper), the atom.io editor does not know what syntax highlighting to apply.
Thankfully there is a package called file-mode, which you install in the usual atom way, so I'm not going to detail that here. But I had a number of challenges getting things to work once the package was installed... and while I do not have a specific recipe that made things work, in the end it did.
1) install atom.io if not already installed and launch it
2) open the settings tab
3) search for and install the file-mode package
4) it might be a good thing to restart the editor here
5) open the file yo…
I'm not a shark and I didn't stay at a Holiday Inn but I have watched enough 'Shark Tank' to have an opinion as to what it takes to get some cash from these guys and I suppose the criteria is the same everywhere. And if I were a shark I would probably do the same thing.
- sweat equity is considered free. It's a required component but always free.
- Know your numbers
- Having an idea alone is not good enough
- Protecting your investment with patents or a high cost of emulation or reverse engineering is mandatory
- You gotta have sales and they must be trending up
- Having backlogged orders is good but only a fraction of the value because it's execution, not investment
So don't expect to come up with an idea and have the sharks fall over themselves without: product, customers, orders, protection.
Basically the perfect deal from the beggar's position is not needing the money at all. (somewhere in there I find myself recalling a quote from a Harvard MBA business class)
Whenever I build or deploy distributed systems the topic of log aggregation always pops up. I wish it were a difficult topic because then there would be some money to be made with a solution and with the number of times I've built these systems... I'd be very wealthy.
A friend of mine pointed me to a service/application that I was already familiar with. Bosun is a monitoring application from the makers of Stack Exchange. If my memory serves me, this system is the first Go application written by Fog Creek/Stack Exchange. I do not remember my exact first impressions, but looking at the code today I see a number of questionable practices. However, in the end it comes down to answering a few brief questions which will all direct you to the same place.
- are you going to aggregate 100% of the messages from the system being monitored?
- how big is that 100%?
- if you have 100 or 20,000 systems being monitored, will the log aggregator be able to hold all of that data?
- what is the dat…
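To make those sizing questions concrete, here is a back-of-envelope calculation; every number in it is a made-up assumption, not a measurement:

```go
// Back-of-envelope log volume estimate. All figures are assumptions
// chosen only to illustrate the sizing question above.
package main

import "fmt"

func main() {
	const (
		hosts         = 20000 // the larger figure from the question above
		mbPerHostDay  = 50.0  // assumed log volume per host per day (MB)
		retentionDays = 30    // assumed retention window
	)
	totalTB := hosts * mbPerHostDay * retentionDays / 1e6 // MB -> TB
	fmt.Printf("%.0f TB of raw logs to hold\n", totalTB)  // prints "30 TB of raw logs to hold"
}
```

Even with modest per-host volume, the fleet size dominates; that is usually the answer to "will the aggregator hold all of it".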
"Package names are central to good naming in Go programs. Take the time to choose good package names and organize your code well. This helps clients understand and use your packages and helps maintainers to grow them gracefully."
-- Sameer Ajmani (Package Names)
I started playing with gotcl and one of the first things I did was fork it and merge the project with gotclsh. I even added a few examples in order to make sure it worked; and it did.
Next, I started working on a framework where tcl would be an interpreted scripting language used as part of the "work" that needed to be accomplished; think of this work as a node in a flow-based program. In this case it was part of the details of the installinator/macroinator, which I've written about.
When I initially merged the tcl projects I named the project 'tcl' and did not care a lick that I had changed the project name but not the package name. As I started practicing …
- CoreOS
- Boot2docker
- RancherOS
- Project Atomic (Fedora, CentOS, RedHat)
- Ubuntu Snappy
- OpenStack
- VMware
CoreOS is the most production ready of the group. The alpha channel supports the most modern versions of all of the tool chains except etcd (which is surprising).
Boot2docker is tuned to run docker, but it's RAM-only and is well documented as a development-only platform. It works well, with the exception that it is not capable of sharing host folders as volumes in the container.
RancherOS is interesting in that it's a total immersion in the container ecosystem. Even PID-1 is a container. I imagine it's going to work because either it works or it doesn't and it's obvious. The authors are very clear that this project is VERY alpha.
Project Atomic is probably production ready. That it spans Fedora, CentOS and RedHat is interesting but not a make or break. The last time I tried to install the Fedora version it took several days t…
It's early in my investigation; however, when collecting packages that my project depends on by calling the `go get` command in the go toolkit... I get an error because the package is private and protected by an account username and password. Rats. The only way around this is going to be setting the github and bitbucket configuration with my credentials. Of course, that means leaving plenty of breadcrumbs as I `go get` the public libraries.
The good news is that the private library in question is in my own private repo. It's just not any fun.
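One common workaround (an assumption about the setup, since the right remedy depends on the host) is to tell git to rewrite the HTTPS URLs that `go get` uses into SSH URLs, so authentication happens via your SSH key rather than a username/password prompt:

```shell
# Rewrite HTTPS GitHub URLs to SSH so `go get` authenticates with the SSH key
# already on the machine. Sketch only; adjust the host for bitbucket.org.
git config --global url."git@github.com:".insteadOf "https://github.com/"
```

The rewrite is applied transparently to every git clone/fetch, public or private, so it leaves fewer credential breadcrumbs than storing a password.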
I think the fossil community needs one of two projects: (1) a shim so that fossil responds like a git repo, letting the go build tools work with fossil unchanged, or (2) native fossil commands added to the go tools so that they work as-is.
The fossil scm is awesome. If paired with go we could see yet another renaissance. There is something to be said for "the unix way" but when combined in a "tu…
I've found the first two extensions to the installinator/macroinator: lisp and tcl. One of the interesting things I noticed was that both languages, especially tcl, might work well on the command line as a REPL. For example, in tcl:
$ tclrepl fcopy "file1.txt" "file2.txt"
and then in lisp
$ lisprepl (fcopy "file1.txt" "file2.txt")
Inside the REPL I'm pretty certain all I need to do is join the args back together. The quoted params are already handled by the basic command line stuff. It's also interesting to note that a space is the separator character in both languages.
executeMe := strings.Join(flag.Args(), " ")
and then send executeMe to the interpreter. I have not decided on the exact command line but it should be pretty simple. There are only a few things that the REPL actually needs:
- optional configuration file (-c)
- optional file to execute (-f)

And the remaining args are what will be executed after the file(s) have bee…
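A minimal sketch of that command line; the -c and -f flag names come from the list above, while the program shape and names are my own assumptions:

```go
// Minimal REPL front end: parse -c/-f, then join the remaining
// args back into a single expression for the interpreter.
package main

import (
	"flag"
	"fmt"
	"os"
	"strings"
)

// buildExpr joins the non-flag args back together; the shell has
// already handled any quoting by the time we see them.
func buildExpr(args []string) string {
	return strings.Join(args, " ")
}

func main() {
	fs := flag.NewFlagSet("tclrepl", flag.ExitOnError)
	configFile := fs.String("c", "", "optional configuration file")
	scriptFile := fs.String("f", "", "optional file to execute")
	fs.Parse(os.Args[1:])

	// Hypothetical next steps: load *configFile, run *scriptFile,
	// then hand the joined expression to the interpreter.
	_, _ = configFile, scriptFile
	fmt.Println(buildExpr(fs.Args()))
}
```

Running `tclrepl fcopy file1.txt file2.txt` would hand the interpreter the single string `fcopy file1.txt file2.txt`, which works because space is the separator in both tcl and lisp.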
On March 19th I'll be speaking at the Go Miami meetup. I'll be talking about Configuration as Code in Go projects.
As I was investigating an implementation of flow-based programming in Go, I realized that I needed an installation framework, and that there were some similarities to frameworks like Chef, Puppet, Ansible and SaltStack. Meet the “installinator”: a “configuration as code” implementation of a DSL framed in Go, using JSON for its configuration and thinking deeply about “Compose with functions, not methods.”
I plan to talk about Macroinator's design, tcl as an embedded DSL, a little flow-based programming, and whatever I can do in the short time I have left.
We hope you'll attend. If not, it will be recorded and possibly live-casted.
I remember back in my Windows days when I would reinstall Windows regularly. All it would take was a few too many BSODs, perceived slowdowns, or a new release from Microsoft.
Several days ago I bought a 2TB replacement drive for my backup MacBook unibody. When I first installed it I used Carbon Copy Cloner to move the entire contents of my 256GB SSD onto the 2TB HDD. One thing I noticed was that things were really slow. Second thing I noticed was that not everything worked. Did I mention it was slow?
Since then I reinstalled OS X Yosemite from USB (no recovery partition on this machine). And now things are humming along nicely. The performance has been restored. I think there are a few explanations.
(1) The density of bits on the drive is higher; although it's a 5400 rpm drive, the bits are more closely packed, so they are probably faster in and out of the various stages of the hardware.
(2) The drive name changed from the SSD to HDD. That means that certain apps that know the fu…
For many years after Microsoft deployed the registry, it was the hell of all hells. It was the one thing that could kill your Windows server or desktop and render your machine unusable. In some cases you could boot into single-user mode and repair it; years later there was a snapshot tool, and a plethora of 3rd party tools that did the same.
The etcd project from CoreOS defines etcd as "A highly-available key value store for shared configuration and service discovery".
Speaking of service discovery:
If you're deploying a gaggle of applications in your environment, they may need to discover each other in order to communicate on some level. This function is (or was) traditionally performed with a DNS server and/or configuration files. In that environment, systems and services were typically static; in the world of containers and virtualization, however, anything can be anywhere. Worse yet, the IP address can change more frequently. High Availability (HA) and system hardwa…
This has interesting potential as I consider the extensibility of both my Macroinator and Flow based programming projects. (flow is not published yet as it is in the middle of a complete refactoring)
The target OS, for this example, is OS X Yosemite. The first dependency to install is TCL. I'm using homebrew to install tcl (alternatively Jim) instead of the Apple version or building from source.
It's important to note that while brew works in userspace and is highly curated, this is still an attack vector for the bad guys.
1) install tcl with brew
brew install homebrew/dupes/tcl-tk
** note that tcl-tk is located in the homebrew/dupes tap. This indicates that the project tcl-tk duplicates some of the features in OSX.
** brew installs the proper tcl-tk, not jim, and does not offer Jim as an alternative. The tk portion of the install requires some legacy X11 libraries and that makes me very sad.
For the next step I need to install the tcl/go bindings. …
My MacBook Air has been a Java virgin since I bought the computer. After all that crap with Oracle and the bugs they let creep into Java, I had just about had enough of it and swore to never install it again. I had refused to install CrashPlan, GoToMyPC and IntelliJ because they used Java... regardless of whether it was embedded in the application or an external dependency.
Today I finally encountered an application that made it necessary.
When the application started, it popped up a splash screen with a link to a website where I could download and install the JRE. It was pretty strange that the vendor did not mention Oracle by name, but that is where I was directed. I clicked the button to download and install my Java dependency and completed the installation. But when I started the application I received the same popup. I tried to install the JRE several times but nothing worked.
Finally I downloaded and installed Apple's version of the JRE. My guess is that Apple's version was dates ba…
Safekeeper is a novel idea to avoid putting secrets directly in your code, which might be stored in your version control system, thus exposing the secrets. Instead, safekeeper uses go's generate functionality to process a template and replace the various tokens with their production values.
I like it but...
It would be easy enough to do; however, I'm wrestling with the idea at the moment. (a) How secure is it really if the build pipeline needs to keep this information in the environment? At some point it needs to be stored so that it can be restored. (b) Putting the credentials in the code makes the attack vector the program and not the environment; the application is going to leave echoes of itself as it's backed up, tested in staging, and so on. (c) With the discovery services associated with tools like etcd, this sort of thing might be deferred until actual runtime instead of at rest.
So for now I'm trying it with one of my projects (macroinator). I m…
Reading this comment in a post:
Namespacing is very sloppy. Importing a module dumps the entirety of its contents into your namespace. Method calls are just syntactic sugar: a.b() is exactly the same as b(a), so methods are also in the globalish namespace. Seems to rely extremely heavily on overloading.
I applaud his commitment; the post was very long and addressed a number of topics. One thing that caught my eye was his criticism of nim namespacing, or the lack thereof. I'm not sure I have a strong enough opinion either way, but one thing I have been noodling on is the notion of a monolithic codebase with a global namespace. Silly me, this is exactly how C does it, so it's only natural that Nim does too. Java and C# privacy modifiers are nonsensical since "we" typically have access to the source. The Go implementation uses case for privacy but implements package namespaces, which are easy to overlap, meaning you have to use aliases.