
My requirements for the ideal DSL

I do not want to write a primer or answer the question of what a DSL (domain-specific language) is, but I recognize that the line between a successful programming language and a ubiquitous uber-language lies somewhere in the range of problems that the language can solve. For example, languages like C, C++, and assembler are excellent for low-level programming but one can get tangled at a high level. Languages like Go, C#, Erlang, and Java (CLR, Mono, JVM, BEAM/EVM) are better at medium-level challenges, and most dynamic languages like Perl, Ruby, JavaScript, and Python are better at high-level problems.

Going back to Dave Thomas's (pragprog) AU Ruby Conf keynote, paraphrasing: the power of any DSL is that it is ideally suited to the complexity of the challenge it solves, making the programmer productive.

A side note: I have purposefully left out languages like CoffeeScript, Elixir, and Clojure because they are more akin to translations than standalone languages; they simply do not solve any more problems than they inherit from their host environments.

Up until this point in the argument I have been examining these languages from the level at which the programmer interacts with or develops applications. From this perspective, languages break down into three components: (1) syntax; (2) APIs; (3) execution.

Syntax: the differences among most languages are most dramatic in the syntax. It's the place where most novice language designers think the definition of a DSL lives, yet almost all syntax looks the same. That's probably because today's language tools like yacc, lex, and BNF grammars help the language designer get to market quickly. So with the exception of concepts like object orientation, lambdas, and closures (and a few other concepts), I think it's safe to say that ALL modern programming languages are more similar than they are different.

APIs: unless the language provides direct access to system hardware like registers and memory, everything, if exposed, is going to be provided through the APIs. The difference between high- and low-level language APIs is the amount of protection the API provides for the user and the overall efficiency of the implementation. For example, C lets the operating system detect memory errors while Java and C# perform that task themselves (whether they defer to the OS and intercept, or preempt, is not important).
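To make that protection difference concrete, here is a minimal sketch, with Python standing in for any managed runtime: the runtime's own bounds check turns what C would leave to the OS (memory corruption or, at best, a segfault) into a recoverable exception.

```python
buf = [0] * 4

try:
    buf[10] = 1              # out-of-bounds write: in C this is undefined behavior
    caught = None
except IndexError as err:    # the managed runtime, not the OS, detects the error
    caught = err

print("caught:", caught)
```

The same write in a C program would compile cleanly and fail, if at all, only at run time when the OS notices the bad access.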

Execution: there are many variations on this model, from a binary with statically linked libraries, to dynamically linked libraries, to virtual machines like the CLR, Mono, the JVM, and BEAM/EVM (plus DartVM, LuaJIT, and more), to fully interpreted runtimes like Perl, Perl 6 (I think), Ruby, Python, and JavaScript. Execution is the biggest barrier to the DSL and the DSL environment, because it is easy to translate a DSL into a language that already has an execution path and hard to build a clean path of one's own (how many GCC preprocessors do we really need?).
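The easy road described above can be sketched in a few lines: a toy DSL whose statements, like `let total be price * qty`, are string-translated into Python and handed to the host runtime via `exec()`. The DSL syntax and the `run_dsl` helper are my own illustration, not any real tool; the point is how little work it takes to piggyback on an existing execution path instead of building one.

```python
def run_dsl(program: str) -> dict:
    """Translate each toy-DSL line into Python and run it on the host runtime."""
    env: dict = {}
    for line in program.strip().splitlines():
        # "let x be 3" -> "x = 3"  (trivial surface-syntax translation)
        translated = line.replace("let ", "").replace(" be ", " = ")
        exec(translated, {}, env)   # borrow Python's execution path
    return env

result = run_dsl("""
let price be 3
let qty be 4
let total be price * qty
""")
print(result["total"])  # 12
```

A "clean path" by contrast would mean writing a parser, a semantic layer, and an interpreter or code generator of one's own, which is exactly the barrier the paragraph describes.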

One interesting idea is some sort of mutation. The demo I watched was the conversion of the Unreal engine from C to JavaScript (and the demo rocks!). They accomplished the task in two steps: (a) they compiled the C code with LLVM and then converted the LLVM output to JavaScript; (b) the resulting JavaScript ran on asm.js, a stripped-down, performant subset of JavaScript (further optimized for Firefox). Unfortunately this moves in the opposite direction.

My requirements for the ideal DSL:

  1. easily digested idiomatic "one solution" syntax
  2. rich APIs partitioned vertically by task and horizontally by complexity
  3. execution as a binary or with an embedded interpreter or JIT
For me, the ideal DSL is one DSL that allows me to write a device driver, implement a dynamic website backend, or generate a PDF report from a CSV file.
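As a sketch of requirements (1) and (2) applied to the last task, here is what an idiomatic "one solution" API, partitioned vertically by task, might feel like. `Report` and its methods are hypothetical names of my own invention, not an existing library:

```python
import csv
import io

class Report:
    """Hypothetical task-oriented API: one obvious entry point per task,
    with finer-grained methods underneath for more complex needs."""

    def __init__(self, rows):
        self.rows = rows

    @classmethod
    def from_csv(cls, text: str) -> "Report":
        # high-level, "one solution" constructor for the common case
        return cls(list(csv.DictReader(io.StringIO(text))))

    def total(self, column: str) -> float:
        # lower-level primitive the simple path is built on
        return sum(float(row[column]) for row in self.rows)

data = "item,amount\nwidget,2.5\ngadget,4.0\n"
print(Report.from_csv(data).total("amount"))  # 6.5
```

The vertical partition is the task (`from_csv`, and one could imagine a sibling `to_pdf`); the horizontal one is complexity: a one-liner for the common case, with the underlying rows still exposed for anything unusual.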
