
Perl web frameworks: part 2

After I finished writing part 1 of this article I thought I was going to declare this subject dead, as Mojo was not behaving the way the install documentation suggested. And if you cannot get the simplest hello world application to execute properly, then all hope of doing something interesting is lost.

Alas, I heard from Mojo's author and all seems well. Sort of. First of all he said that I should read the perldoc (and I like perldoc, oh yes I do). I read the doc and it does correct the statement made in the install guide. I appreciate that; however, I was not given the sense that he feels the install doc is wrong. So we are at net-zero points here. We also skirted the issue of using the term 'daemon' as the parameter that launches the app, as described in the perldoc. I don't think this issue will be resolved either. I have, however, created a GitHub ticket in the hope that these will be corrected. Finally, I'm not sure I care what the parameter name is, or whether it actually runs as a detached or attached process. I plan to use daemontools to control and monitor the process(es), so it needs to run in the foreground anyway.
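For reference, the hello world in question is only a few lines. Here is a sketch of a minimal Mojolicious::Lite app; the file name is illustrative, and the `daemon` behavior described is as the perldoc had it for versions current at the time of writing.

```perl
#!/usr/bin/env perl
# hello.pl - minimal Mojolicious::Lite application (illustrative sketch)
use Mojolicious::Lite;

# A single route that renders plain text
get '/' => sub {
    my $self = shift;
    $self->render( text => 'Hello World!' );
};

app->start;    # hand control to Mojolicious's command system
```

Running `perl hello.pl daemon` starts a single-process HTTP server in the foreground, despite the name "daemon"; which, as noted above, is actually what a daemontools-supervised deployment wants.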

The last piece of advice that I was given was to read the FAQ. It's worth the read but there was nothing groundbreaking in there.

On to the rest of the article which has less to do with these frameworks and a little more to do with best practices.

Perl Concurrency

PerlDancer documents LWP as a dependency, whereas Mojolicious says it implements its own mechanism, modeled after LWPng; which is presumably better than LWP. (I'll leave that judgment to someone else.) On the other hand, Mojo supports libev through its IOLoop, which is a good thing; I happen to be using the same structure in both tornadoweb and zeromq.

[Update 2011.09.03] - I just did some reading on perl threads. It seems that threads became native sometime after perl 5.8.7 and that the old Thread.pm module was removed after 5.10.x. I had no idea so much had happened to perl since the last time I was in the middle of it all. Clearly they resolved some issues that Python has not. This does not really change the Mojolicious concurrency model: mojo still uses EV/libev, plus hypnotoad for "preforking non-blocking I/O". That does not mean much in terms of multithreading, since a single transaction/event can still starve a single process. Hypnotoad forks the main app into multiple evented application instances. (You can infer the performance issues here.)
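The starvation point can be sketched with Mojo::IOLoop directly. This assumes Mojo::IOLoop's timer API; the delays are arbitrary, and the blocking call stands in for any slow synchronous work inside a handler.

```perl
use Mojo::IOLoop;

# Two timers on the same single-threaded loop. If the first callback
# blocks, the second cannot fire on schedule -- one slow event starves
# every other event in that process.
Mojo::IOLoop->timer( 1 => sub {
    sleep 5;    # blocking call: the whole event loop stalls here
} );

Mojo::IOLoop->timer( 2 => sub {
    print "scheduled for t+2, but only runs after the blocker finishes\n";
} );

Mojo::IOLoop->start unless Mojo::IOLoop->is_running;
```

Hypnotoad mitigates this by preforking several such processes (`hypnotoad ./myapp.pl`), but each worker is still individually starvable.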

Performance

This is a slippery subject. There are so many benchmarks and many more ways to interpret the results. After eyeballing a few and trying to determine whether there was a bias on the part of the tester... it seems that perl and python are close enough on the performance curve for me not to care. At least there is no low-hanging fruit, except maybe Perl 6 and pypy. So let's keep this simple.

CPAN, CPANP & CPANMINUS

I previously wrote that perldoc was perl's killer app. I think I was wrong. As I recall the good ole days of perl hacking, I'm greeted with the warm and fuzzy memory of all the CPAN packages I've installed over the years. Sadly, not everything installs on the first try, but there are many instances where things just work. Unfortunately I think that CPAN is starting to show its age; perl does not have the following that it once had, and many packages are falling into disrepair.

Actor Model (aka Worker Model)

LWP, LWPng, EV, libev, etc. are only going to take your application so far. At some point the work has to be divided so that the bulk of the blocking work can a) be performed in the context of a meta-transaction or a literal transaction; and b) run at full speed for a prolonged period while the web app services or queues other requests. So if you have N cores, you would deploy N-1 workers, leaving one core dedicated to miscellaneous functions. (Not that you could enforce any sort of CPU affinity, but at least there would be a surplus of resources.)
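The N-1 worker idea can be sketched with plain `fork`, which is core perl. The core count here is an assumption (in practice you would detect it), and the job-pulling body is a placeholder:

```perl
use strict;
use warnings;

my $cores   = 4;             # assumed; detect the real count in production
my $workers = $cores - 1;    # leave one core for the web app and misc duties

my @pids;
for my $id ( 1 .. $workers ) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;

    if ( $pid == 0 ) {
        # Child process: one worker. Pull jobs from a queue and do the
        # blocking work here, leaving the web app free to service requests.
        exit 0;
    }
    push @pids, $pid;        # parent: remember the child for shutdown
}

waitpid( $_, 0 ) for @pids;  # reap the workers when they exit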

Message Queues - ZeroMQ

If you are going to implement an actor model then be sure to do so with an MQ. While languages like Erlang and Go have built-in messaging or IPC functionality, those mechanisms are not instrumented, not standardized, and certainly not cross-platform. But they tend to be fast and efficient. So it depends on your design objectives; and while I hate dependencies, I do like a good MQ.

One other benefit of a good MQ is that the work can be distributed across nodes... not unlike erlang and go.
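As a sketch of the web-app-to-worker hand-off, here is a ZeroMQ PUSH/PULL pair. This assumes the ZMQ::FFI binding from CPAN (there are several perl ZeroMQ bindings and their APIs differ); in a real deployment the two sockets would live in separate processes, possibly on separate nodes:

```perl
use strict;
use warnings;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PUSH ZMQ_PULL);

my $ctx = ZMQ::FFI->new;

# The web app side: push work onto the queue and return immediately
my $push = $ctx->socket(ZMQ_PUSH);
$push->bind('tcp://127.0.0.1:5557');

# The worker side: pull jobs and do the blocking work
my $pull = $ctx->socket(ZMQ_PULL);
$pull->connect('tcp://127.0.0.1:5557');

$push->send('resize-image job=42');
my $job = $pull->recv;      # blocks in the worker, not in the web app
```

Because the transport is TCP, moving the worker to another node is a matter of changing the endpoint address; which is the distribution benefit noted above.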

NoSQL - MongoDB

I like NoSQL but I have yet to find a use case that really demonstrates its value. Sure, there are BigTable, SimpleDB, and a few other implementations that are cool and interesting to study. My intuition tells me they make sense when the amount of data, the number of clients, and the number of servers are far out of proportion to one another. But as anyone who has developed for the cloud knows, even the simplest cloud storage solution gets really expensive, because you're charged for a) bandwidth, b) storage, and c) CPU. So if your modest application can saturate a modest hardware investment, what makes you think the big boys' numbers work out any better? I'm starting to think that the number of servers vastly outnumbers the number of requests.

NoSQL is just not a real answer for a modest website or application: a) there are no reporting tools like there are for SQL; b) while there are SMEs in the NoSQL field, they are currently far outnumbered by qualified SQL DBAs; c) on the SQL side there are plenty of ORM modules that make rapid application development easier, even though we all know the ORM is usually replaced with hand-coded SQL/classes.
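For completeness, a minimal document-store interaction looks like this. It assumes the CPAN MongoDB driver and a local mongod on the default port; class and method names have shifted between driver versions, so treat this as a sketch:

```perl
use strict;
use warnings;
use MongoDB;

# Connect to an assumed local mongod instance
my $client = MongoDB::MongoClient->new( host => 'localhost', port => 27017 );
my $posts  = $client->get_database('blog')->get_collection('posts');

# Schema-less insert and query: no DDL required, but also none of the
# SQL-side reporting tooling mentioned above
$posts->insert( { title => 'part 2', tags => [ 'perl', 'mojo' ] } );
my $doc = $posts->find_one( { title => 'part 2' } );
```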

Cache - Redis

Redis is considered a NoSQL database by many, but it is also a cache engine. It has much of the k/v functionality that memcache has, but it will also persist to disk and replicate. There are also a number of userspace features that make it a good general-purpose NoSQL database.

Redis is probably most commonly used in rails-type applications to cache DB results. This is a good use of the functionality; however, since everything needs to fit in RAM, it is only suited to some portion of the data as the application approaches capacity.
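The cache-the-DB-result pattern is a few lines with the CPAN Redis module. The key name, TTL, and the slow query function are all illustrative:

```perl
use strict;
use warnings;
use Redis;

my $redis = Redis->new( server => '127.0.0.1:6379' );   # assumed local instance

# Look in the cache first; fall back to the (hypothetical) slow DB query
my $key     = 'user:42:profile';
my $profile = $redis->get($key);

unless ( defined $profile ) {
    $profile = load_profile_from_db(42);    # hypothetical expensive query
    $redis->set( $key, $profile );
    $redis->expire( $key, 300 );            # let the entry age out after 5 minutes
}
```

The `expire` call is what keeps the RAM-resident working set bounded, which matters given the fit-in-RAM constraint above.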

In some web frameworks the caching is implemented as a plug-in.

Conclusion

I'm new to both mojolicious and perldancer, and I'm not sure if one is better than the other. Their hello world apps look very similar. perldancer takes its heritage from ruby's Sinatra, and mojolicious from Catalyst. Looking at their websites and documentation, neither stands out.

If I had some recommendations for Mojolicious: a) fix the doc as I recommended; it's a distraction to the noob. b) While I do not need a lesson in the fine art of installing CPAN modules, it would be nice if the cookbook were a little more complete, especially where it concerns EV and the other tightly coupled modules. Seeing as they are so closely coupled, there should be better doc; not just pretty doc.

perldancer depends on CPAN for its docs. I would have thought they would render their own and add some value where they could. I suppose there is no real reason to use anything but CPAN, especially since they depend on it so heavily. On the other hand, there is something nice about doc that seems to belong.

If I had to decide, Mojolicious would be my first choice, but only by a small margin. I like adding packages only when I have to; and I like that it is a second-generation design rather than a perl port of a ruby framework... where the original design warts might still be there.
