Monday, October 27, 2014

Things that you might think don't influence the buying decision - yet they do

Recently I've been reviewing APM products for the Java-based system I run for a living. After some research online I ended up with two choices: one based on the recorded demos I'd seen earlier, and another simply because it was very easy to get for a trial period. If one product requires me to expose all kinds of details about my company and my position before I can see what it really does, while the other is freely downloadable for a 30-day trial after entering just a name, email address and company name, I'll choose the latter.

The one with an easy self-service trial period (let's call it product A from now on) was a breeze to install, both the server and the agents, and the GUI looked very sleek and intuitive - it took me roughly an hour to get as far as looking at an analysis of slow transactions from a test server. They had clearly made an effort to polish the product so that one doesn't need to be an über geek to install and use it. I am rather a geek myself, but I still appreciate the effort.

The supplier of the other one (product B), of which I'd seen some really nice demos, had a peculiar requirement: I had to schedule a live demo before getting the download. Had I not already made up my mind that I wanted to try this product, I might have bailed out (the demo required much more of my time, and I'd already seen much of what they could demo for me). However, I made the personal sacrifice of attending a session outside office hours (due to the awkward eight-hour time difference). During the demo it became clear that the product actually didn't officially support the version of the monitoring software we had and to which product B was to be integrated, but they had a beta to which they immediately added the support and asked whether I'd like to wait for the release or have it immediately as a beta. The original estimate for the release was "a few weeks", but it was eventually changed to "next week". I waited, happy with such a swift response to the need.

When the promised time came, I received a download link. I grabbed the installation packages and went on with the instructions. However, I was stalled for roughly an hour trying to get the agent installed, as the only automated installation covered the case where the application server is started with a batch file, and there were only some notes about setting things up when it is installed as a service. I finally figured out the parameter I needed to add to the service wrapper to get it going, and so got everything installed. Then I went to the GUI to get some readings on how the system was running, just as I had with product A. This time it was no breeze, despite the demo I'd seen just a week earlier: I had to look things up in the manual to navigate the GUI and get what I wanted.

Needless to say, at this point I had a rather strong preference for product A for its ease of setup and use.

I emailed my experience with product B (with some very honest critique, also pointing out the contrast with their competitor's product) to the account manager I had been in contact with. I was fine with having the technical issues forwarded to their support team, and after two days I received an email from the product manager, who asked for some clarifications, mentioned that some of the issues I'd noted were known and gave workarounds, and promised to get back on the rest after they had looked into them more closely. The next day I noticed they had actually opened support cases for many of my issues, as I got a more detailed status report and some more assistance.

One and a half weeks later, after catching up with some other things in the meantime, I emailed some further info and questions on some of the issues, and again received a very prompt and helpful response. The next day - directly after installing the agent in production - I hit yet another issue (having messed up some config while trying to work around an agent installer issue on my own), and by the next morning I had enough info to remedy the situation.

At this point my preference had changed again - despite product B being hard to install (well, I'd already installed it on every server I needed), having some bugs and limitations (which were now known to me), and having an inferior GUI and overall user experience.

Why?

I had seen that they had a support team and an attitude that is not that common in the industry. I knew that if I ever encountered an issue with their product, I'd have a solution in my inbox promptly. Besides, a GUI with a steep learning curve is not an issue, as I'm used to things being that way (that's what I do here: figure things out), and I prefer a "professional" UI over an easy and simple one that gets restricting when you want something that is not a very common use case. Last but not least, the off-the-shelf license cost of product B was in our case lower than the price of product A even after applying all the possible discounts their sales person could come up with.

So, even if your product is polished and fancy, you might still get beaten by someone with the right attitude (not to forget a more realistic price tag). And if you don't offer an easy-to-start trial period, you might get ignored altogether, unless you've already succeeded earlier in making a very good impression.

Saturday, June 21, 2014

Sometimes it is better to have a fresh start than to work on what has been around for ages

I finally got around to upgrading my ancient desktop at home. The box was originally built some 7-8 years ago, with minor upgrades along the road, and by now it had an Athlon XP 2500+ with 1.5 GB of RAM. Seriously, it could still run a light-weight Linux desktop pretty fine, even with two user sessions. Much of the merit goes to the U160 SCSI server-grade disk system, which only became outdated in fairly recent years with the SATA architecture. The origins of that box go even further back, as it is basically the direct offspring of my first Linux box from the days of RH 4.2 (and I mean RH before the days of RHEL or even CentOS, literally in the last century).

I'd rather build new systems alongside the current ones, so that there's time to slowly work on the new box until everything is going fine and dandy, but this was to be an "old junk out, new junk in" style of operation. The HW part of it was pretty simple: only the main board with CPU and memory needed to be replaced and the SSD installed, since the PSU was fairly recent and had enough wattage for the new setup - mostly because I'm doing fine with the integrated GPU in the new i7, and even the CPU was a low-power model. Just plug the SCSI adapter and the 2nd NIC back in and power on. It was a no-brainer to get the system to boot back into Debian 7.5, and everything was already running fast and smoothly - but it was all running 32-bit.

Having done both fresh installs and major upgrades, I've noticed I prefer the upgrade path, since there's no need to rebuild configuration piece by piece afterwards; instead everything falls into place in one big but process-wise simple effort. First, I intended to upgrade to a 64-bit kernel with SMP support to get all CPUs and the full RAM into use. I hadn't done cross-compiling before, and after some futile efforts to get the Debian kernel build tools to produce a working 64-bit kernel (I might have succeeded without using a .deb build, but I've learnt to like the concept too much to diverge), I went with a stock kernel from the repository. Great, it supported all the important hardware out of the box (and yay, it was the first time in many years I was running a stock kernel). But what about the rest of the software? A 64-bit kernel can run it just fine with IA32 emulation, but isn't it a bit dumb to get only some of the performance of the shiny HW? That's what I thought, so I went googling about converting the system from 32-bit to 64-bit.

That's where the crux of the story comes in.

There are plenty of pages giving instructions and also some warnings (e.g. this askubuntu.com Q&A which says 'it is very complicated' and is absolutely correct about it). The Debian wiki has a great article on architecture migration, which I decided to follow. I got pretty far with it (that's why I call it a great article), but eventually I couldn't get all packages to reinstall, and aptitude was acting pretty confused: initially it didn't list any packages under Installed packages even though it did admit I had packages in the installed state, and even after I told it to rethink (with forget new packages + update) it was still in denial about the overall state of affairs. More worryingly, there were quite a lot of errors about libraries being of the wrong architecture. I got some individual packages fixed using the same manual procedure as for resolving conflicts during the mass upgrade/migration, but eventually I felt this was not going to end up as a wholly working system. Looking at it now, I made some mistakes in the process which at least made things harder, so I think it should still be possible to follow the article successfully, but it is definitely not an easy path and requires knowledge of resolving package conflicts (doing a couple of dist-upgrades when the Debian team releases a new stable release is a good prerequisite). After all, apt is a terrific package management system, IMO, and can do all kinds of stunts in the hands of an expert.

So, after wasting many hours down the upgrade path, I gave up, downloaded an install ISO and started over. After approximately the same amount of work I spent on the upgrade attempt, I now have pretty much all the major things in place and working, and if something is off, I just need to compare against the old state of affairs that can be found on the old disks and migrate the changes. It is also a good chance to re-learn some things, like setting up DAV on Apache for use with Subversion.

What is to be learnt from all this? At least for the likes of me, who prefer building on the old, it is good to consider the benefits of a fresh start, and the downsides of carrying the load of the past onto the new platform. What might help with this is some sort of configuration management system that could restore at least part of the customisation that has to be re-done. Having that at home might still be overkill, though...

Sunday, March 23, 2014

Mining is not for the average Joe (not for long, at least)

I'm putting an end to my adventures in cryptocoin mining. The most obvious reason is that my current hardware is not really good for it (and with summer coming there would be heating issues), but now that ASICs are coming also for Scrypt coins, the difficulty will keep rising, making it even worse for people like me. There are even cloud mining rigs one can rent (I'm waiting for someone to soon publish an article stating that "nn% of all computing power is used for cryptocurrencies")...

I've seen it claimed that Satoshi Nakamoto intended mining to be something that average folks could also profit from, but given the fierce competition to build ever more powerful mining rigs, that is not happening - and it likely never will at large, as long as mining doesn't include some sort of human component (without turning mining into a full-time job for the person), or unless new algorithms are introduced often enough to keep the slow ASIC development at bay (and even then, people who are both able and willing to buy $1,000 worth of gear every two years will gain the most).

Despite my decision, I'll be keeping my eye on the subject, after all I do have some fractions of LTC and some DOGE.

Edit (some 3 hours later): Yet it is not so simple. CPU-mined coins are vulnerable to botnets, and on the other hand ASICs are terrific in hashing vs. power efficiency (i.e. green mining). And yet, in the absence of clean, renewable and cheap electricity, it would not be a good idea globally if people started buying rigs that draw 1 kW constantly...

Sunday, March 16, 2014

Things to consider for profitable cryptocurrency mining

There is a looong discussion on Reddit on whether Dogecoin mining is profitable or not. I can't claim I had the stamina to read all the way through it, but one theme seems to get repeated ("yes it is" - "no it isn't").

The opinions also vary widely on the usefulness of so-called altcoins (meaning anything other than Bitcoin). Surely any coin (I think I'll use "coin" from now on instead of the more tedious "cryptocurrency") that is not widely accepted as payment isn't really useful as a token of exchange, but for both miners and traders they might prove useful. However, as these coins come and go, it's essential to assess whether the value of a given coin can be expected to hold or increase in the future, to prevent losses.

Trading aside (since that's not my cup of tea), once one has established an adequate level of trust in a given coin, there are things to consider (after considering the efficiency of your mining hardware):
  • Do you want to take the exchange risk involved in holding on to a coin for more than a day? If you do, do you see the coin increasing in exchange value in the near future?
  • If you're going for low exchange risk and will immediately sell what you mine, does the lower risk factor counter the daily transaction and exchange fees? Also, what coin is the most profitable today regarding difficulty, reward, network hash rate and the target coin/currency?
  • Even if you're willing to take the exchange risk, it's worth checking the profitability based on difficulty, reward and network hash rate.
There are a number of mining profitability calculators around the Net: CoinWarz, Dustcoin, CrabCoins and whatnot (please don't get offended if your favourite one is not listed; those are just random ones I ended up at). I got interested in how those calculators actually estimate the profitability (and an estimate it is, since random events are in play) - and I think everyone who's using the calculators should be interested as well. CoinWarz does the calculation on the server so I couldn't check their code, but both Dustcoin and CrabCoins reveal their formula in the page source. Both use pretty much the same formula:

 time [s] x hashrate [H/s] x reward
------------------------------------
         difficulty x A

where A = 0x100010001 at Dustcoin and A = 2^32 at CrabCoins. Since those values are nearly the same, the sites give almost exactly the same results.

At this point I started looking up more calculators. CoinSelect, Where to Mine and Criptovalute seemed to give the same results, so I guess they use the same formula, too. And hey, the formula does make sense: the longer the time, the greater your hash rate or the greater the reward, the greater the profit - and the greater the difficulty, the smaller the profit. What bugs me is the coefficient A, as I don't know where it is derived from. The actual code at Dustcoin uses two other constants in its place, but they are just as cryptic to me. I'd be glad if someone pointed me to an explanation for the coefficient.
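
To make the estimate concrete, here is the same formula as a minimal Python sketch (using 2^32 for A, as CrabCoins does; the numbers in the example call are made up). For what it's worth, mining difficulty is conventionally defined so that difficulty × 2^32 is roughly the expected number of hashes needed to find a block, which would explain a coefficient of that size - but take that as my reading, not something the calculators document.

def expected_coins(seconds, hashrate_hs, block_reward, difficulty):
    # difficulty * 2**32 is (roughly) the expected number of hashes per block,
    # so the quotient is the expected number of blocks found, times the reward.
    return seconds * hashrate_hs * block_reward / (difficulty * 2.0**32)

# Example: one day at 14.1 kH/s, with made-up reward and difficulty values
print(expected_coins(24 * 3600, 14100, block_reward=50, difficulty=1500))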

One thing to note about the calculators: always check whether they are using correct, up-to-date data. The difficulty might be off from the current one, as might the exchange rates, and even the reward (but that would mean their data is really stale). It makes sense to check at least two sources you trust, to eliminate the risk of deciding based on incorrect data.

It is very likely that an extremely favourable mining situation will not last long, as other miners will come in, raising the network hash rate, which in turn makes the difficulty rise. So there is a constant ebb and flow, which also means one should automate pool/coin switching based on estimated profit to continuously adapt to the changing situation. There is already software for that; a quick googling brought up CryptoSwitcher, and I remember having seen others as well. For example, cgminer has an API that allows centralised remote control of miners, and when you add automatic decision making based on network data, your miners should always be after the largest profits, or at least staying away from the least profitable coins.

It should be quite easy to implement home-brewed mining automation, since e.g. CoinWarz offers an API one could use to directly access their profitability data. The free version allows 25 calls in 24 hours, meaning the situation could be checked roughly once an hour, which should be quite enough when you're not doing this too seriously. The lack of real-time network data can be compensated for by steering away from the most volatile coins. There will still be the risk of sudden large exchange rate changes, but those should be rare enough to keep the risk relatively small.
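
As a rough illustration of such home-brewed automation, here is a minimal polling sketch in Python. The endpoint URL, API key and response fields below are hypothetical placeholders (not the actual CoinWarz API); the real URL, parameters and field names would come from the provider's documentation:

import json
import time
import urllib.request

# Hypothetical endpoint and response shape -- substitute the real
# profitability API URL, key and field names from the provider's docs.
API_URL = "https://api.example.com/v1/profitability?apikey=YOUR_KEY"

def most_profitable_coin():
    with urllib.request.urlopen(API_URL) as resp:
        coins = json.load(resp)  # assume a list of per-coin entries
    return max(coins, key=lambda c: c["profit_ratio"])["coin"]

while True:
    print("Most profitable coin right now:", most_profitable_coin())
    # ...here one would repoint the miners, e.g. via cgminer's remote API...
    time.sleep(3600)  # roughly 24 calls per day fits the free quota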

Finally, for those who want it to be extremely easy, there are mining pools that automatically do the switching for you. If you trust that their algorithm does a good job, one of those is the easiest way to get the benefits of coin switching.

Sunday, February 2, 2014

Adding sharing buttons for Pinterest, Reddit, Delicious, Stumbleupon and LinkedIn to a Blogger blog

My better half wanted to add a Pin It button to her blog and needed my help, so I did some googling to find out how to do it. As a result I figured I could add support for some other sharing sites to my own blog as well. It was not as straightforward as many blog posts and support docs on the subject claim, so I'll describe here what I did to get them all nicely lined up. I hope this makes life easier for somebody else.

(This was the resulting row of buttons, in case I end up changing the layout later on...)


Pinterest

There is a rather good blog post at bloggercentral.com on adding a Pin It button to Blogger, and it also covers the basics of the blog template.

However, I had very little luck with the embedded template editor (after trying two browsers, including Chromium, saving the template after changes didn't work), and found it is much better to take the XML backup of the template and modify that with a good text editor. Remember to save your changes under a different name! On Linux e.g. kedit and gedit work fine, but on Windows it is tougher, since Notepad doesn't understand Unix-style line breaks and Wordpad isn't really a text editor, so it is not guaranteed to preserve formatting. I downloaded Notepad++ Portable for the task (since it does not require a system-wide installation), but any decent text editor that supports UTF-8 encoding and Unix-style line breaks should do.

The other thing that didn't go as the instructions said was the placement of the button code: in both blogs the button code had to be added at the second occurrence of <data:blog.post/>. I should study the template structure more to figure out in what cases the other occurrences are used, but anyway I made the addition to all of them.

Reddit

The support doc on reddit.com shows many options for the button, all with sample HTML code, but there is a catch in using them with Blogger. All the samples would more or less work on a single blog post page, but they wouldn't work on the blog home page, which shows many posts. The advanced options further down the page show a way to go with some of the buttons (those that use a script tag), but in all cases the page reference needs to be modified to suit Blogger. Below is what I use.
<a expr:href='&quot;http://www.reddit.com/submit?url=&quot; + data:post.url'>
 <img src='http://www.reddit.com/static/spreddit7.gif' alt='submit to reddit' border='0'/> </a>

Noteworthy things:
  1. Using the expr prefix on the href attribute tells Blogger that it needs to interpret the attribute value, which contains references to layout data tags (e.g. data:post.url).
  2. The quoting also needs some tuning: the whole expression to be interpreted requires quotes around it, and the static text part within it also needs to be enclosed in quotes, hence the two occurrences of &quot;.
If you want to use the buttons that have just a script tag in the example, look at the example given under Interactive button advanced settings and change the values of reddit_url and reddit_title to point to data:post.url and data:post.title, respectively. However, reddit seems to figure out the title from the URL if the title is not given, which is nice, since I didn't find a way to make Blogger emit multiple URL parameters (although it should be possible, and I do know how to format a GET request with multiple parameters).

Delicious

delicious.com also shows a working sample of their button, but as mentioned above with reddit, it didn't work that well with multiple parameters on the URL in the Blogger template. However, it seems to be enough to pass just the url parameter. Also in this case I moved the request URL to the href attribute, even though it is not as neat as hiding it in the onClick handler. So, here's what I use:
<a expr:href='&quot;http://del.icio.us/post?url=&quot; + data:post.url' target='_blank'>
  <img border='0' alt='Delicious' title='Del.icio.us' src='https://delicious.com/img/logo.png' height='16' width='16' />
</a>

Stumbleupon

The badge creator at stumbleupon.com looks rather fancy, but by that point I had grown a bit tired of all the fancy things that are hard to put into the template, so I took the easy route of peeking at a page with a working badge and extracting the URL and the icon from there. Not quite as recommended, but it seems to work, too:
<a class='logo' target='_blank' expr:href='&quot;http://www.stumbleupon.com/submit?url=&quot; + data:post.url'>
  <img border='0' alt='Stumbleupon' src='http://cdn.stumble-upon.com/i/badges/badgeLogo18x18.png?v5' height='18' width='18' />
</a>

LinkedIn

There is a share plugin generator on developer.linkedin.com which gives the necessary code for the share button, and the data-url attribute with an added expr prefix gets its value from data:post.url just like all the above.

The final touch

Each of the above would work just fine alone, but putting them together took some additional effort to make the result look nicely aligned. From the Pinterest sample code I took the enclosing div, and put all the rest within it, too. However, the icons ended up pretty badly aligned, so I added vertical alignment.

That solved all but the Pin It and inShare buttons, which have styles enforced by the accompanying JavaScript code. For Pinterest it is possible to just ditch the JavaScript and go with a plain link + icon, but LinkedIn has made it more complex, so I ended up adjusting the styling of the enclosing div and adding some spacers to add space around the buttons.

So here is my template addition as a whole:
<style type='text/css'> 
  #sharing-wrapper {margin:10px 0 0 0; text-align:left; vertical-align:baseline !important; padding:0px !important;}
  #sharing-wrapper img {padding: 0px !important;}
  #sharing-wrapper .spacer {padding-left: 8px;}
</style> 

<div id='sharing-wrapper'>

<!-- pinterest start -->
  <a data-pin-config='none' data-pin-do='buttonPin' expr:href='&quot;http://pinterest.com/pin/create/button/?url=&quot; + data:post.url'>
    <img src='//assets.pinterest.com/images/pidgets/pin_it_button.png'/>
  </a>
  <span style='margin-left:-44px;'>
    <a data-pin-config='none' data-pin-do='buttonBookmark' href='//pinterest.com/pin/create/button/' style='outline:none;border:none;'/>
  </span>
  <script src='http://assets.pinterest.com/js/pinit.js' type='text/javascript'/> 
<!-- pinterest end -->

<span class='spacer'/>

<!-- reddit.com start -->
<a expr:href='&quot;http://www.reddit.com/submit?url=&quot; + data:post.url'>
  <img src='http://www.reddit.com/static/spreddit7.gif' alt='submit to reddit' border='0'/> </a>
<!-- reddit.com end -->

<span class='spacer'/>

<!-- del.icio.us start -->
<a expr:href='&quot;http://del.icio.us/post?url=&quot; + data:post.url' target='_blank'>
  <img border='0' alt='Delicious' title='Del.icio.us' src='https://delicious.com/img/logo.png' height='16' width='16' />
</a>
<!-- del.icio.us end -->

<span class='spacer'/>

<!-- stumbleupon start -->
<a class='logo' target='_blank' expr:href='&quot;http://www.stumbleupon.com/submit?url=&quot; + data:post.url'>
  <img border='0' alt='Stumbleupon' src='http://cdn.stumble-upon.com/i/badges/badgeLogo18x18.png?v5' height='18' width='18' />
</a>
<!-- stumbleupon end -->

<span class='spacer'/> 
<!-- linkedin start -->
<script src='//platform.linkedin.com/in.js' type='text/javascript'></script>
<script type='IN/Share' expr:data-url='data:post.url'></script>
<!-- linkedin end -->

</div>
I put that right after the <data:blog.post/> tag so that it appears after the post body text. It would be even nicer to have it on the same row as the built-in share buttons, but that would require figuring out the templating more deeply than I am willing to do right now.

In addition, put the following right before </body>, since repeating it for every post messes up the positioning of the Pin It button on the blog home page:
<script src='http://assets.pinterest.com/js/pinit.js' type='text/javascript'/>

(Oh, and since this blog is visually not that fancy, I think I'll drop Pinterest out; I doubt anyone would pin from this anyway...)

Addendum (5.2.2014): Digg and Tumblr

Later I also added Digg and Tumblr sharing.

Digg was simple, although its current incarnation doesn't seem to provide any official share button or widget. Thus I just made a link to their submit URL and used their favicon as the icon.

Tumblr was a bit harder, since the official JavaScript version only works on single post pages (not on the blog home page), and when using a simple GET request URL, the shared URL needs to be encoded - which the Blogger template API can't do (also, they don't seem to fetch the page title automatically, so that, too, must be encoded and included in the link). So I made my own inline JS to create the link the way I want it.

Here are the additions to the above, placed just before the closing </div>:
<span class='spacer'/>

<!-- digg start -->
<a class='logo' expr:href='&quot;http://digg.com/submit?url=&quot; + data:post.url' target='_blank'>
  <img alt='Digg' border='0' height='18' src='http://digg.com/static/images/digg_favicon.png' width='18'/>
</a>
<!-- digg end -->

<span class='spacer'/>

<!-- tumblr start -->
<script type='text/javascript'>
  var strPostUrl = "<data:post.url/>";
  var strPostTitle = "<data:post.title/>";
  document.write("&lt;a href='http://www.tumblr.com/share/link?url="
    +encodeURIComponent(strPostUrl)+"&amp;name="+encodeURIComponent(strPostTitle)
    +"' target='_blank' title='Share on Tumblr'&gt;&lt;img src='http://platform.tumblr.com/v1/share_3.png' width='129' height='20'/&gt;&lt;/a&gt;");
</script>
<!-- tumblr end -->

Monday, January 20, 2014

Green mining?

The green server room was a hot topic a couple of years ago. The cost of energy plays a large part in the total operating cost of any computer system today, and this might indeed have been the largest incentive for the industry to go green (in addition to the PR value). The energy requirements are also worth noticing in private computer use.

In the previous post I talked about cryptocurrencies. Mining is the most reliable way of gaining wealth with them; trading at an exchange is more risky (yet also has greater potential profits). Mining is computationally very intensive, though, and thus it is also energy intensive (quite like the traditional mining industry), and there are two main points about energy: the economic cost and the ecological cost. The economic cost is largely due to the price of energy, and the ecological cost comes from the production of the energy. The economic efficiency of a cryptocurrency miner depends - in addition to the price of energy - on the kilohash/s per kilowatt ratio of the mining equipment. The specialised mining rigs built for the single purpose of mining Bitcoins have a pretty good kH/s per kW ratio, and there is a constant race to enhance the efficiency even more. Still, I'd guess that most operators try to get their electricity at the lowest possible price, which often means power plants burning coal. Coal (or oil, for that matter) is not good for the ecological efficiency, which I'm going to measure in kH/s per (metric) ton of CO2. CO2 is used for simplicity, although it represents only a fraction of all ecological effects (e.g. the environmental effects of mining and refining the fuel, other emissions from the production of energy, and the building of the facility are somewhat harder to take into account).

My initial idea with the small project described in the previous post was to take advantage of my work laptop while I'm not working - more precisely, to borrow the hardware in a way no different from surfing YouTube over the weekend or trading stocks after office hours, using my own Internet connection and electricity I pay for myself (thus, I feel I'm not taking advantage of my employer). I chose a laptop not for the computing power (which is not that great) but for the energy efficiency of mobile equipment. The max output of the laptop's charger is 65 W - a desktop processor of roughly the same specs alone can have a max TDP of 95 W, not to mention the consumption of the rest of the hardware.


Calculating efficiency ratios

Let's toss some numbers around:

The laptop topped out at 14.1 khash/s - although that might not be the best that can be got out of the hardware, since only the CPU is utilised - with a measured power draw of 47 W, which means a kH/s per kW ratio of 300. With 3 threads the hash rate decreased to 11.9 kH/s with a power draw of 43 W, which means the efficiency ratio dropped too, to 277. With only 2 threads (one per physical core) the rate came down to 9.7 khash/s with a power draw of 40 W, an efficiency ratio of 243. A good example of why it is worth measuring and calculating these things, since at least to me the result was not evident from the raw numbers. I would also have expected the power consumption to come down a bit more. No wifi or bluetooth was enabled, but I haven't touched the power management settings either - there are probably some things to tweak.
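
For reference, the ratios above are simply the hash rate divided by the power draw; a quick Python check using the measured figures:

# kH/s per kW = (kH/s) / (W / 1000), using the measurements quoted above
measurements = [
    ("4 threads", 14.1, 47),  # kH/s, watts
    ("3 threads", 11.9, 43),
    ("2 threads",  9.7, 40),
]
for label, khs, watts in measurements:
    print("{}: {:.1f} kH/s per kW".format(label, khs / (watts / 1000.0)))
# prints roughly 300.0, 276.7 and 242.5 -- i.e. the 300, 277 and 243 above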

There are not many (if any) LTC mining rigs on the market (at least ones that would compare to the BTC rigs) due to the nature of the LTC hashing algorithm. However, I found a tutorial for building an LTC mining rig which quotes some specs for assessing the efficiency. The rig ought to reach 1940 kH/s with a power draw of 720 W when ideally tuned, which makes 2694 kH/s per kW - much better than the laptop.

LTC and BTC mining rates are not directly comparable, and since the exchange rates are also different and vary all the time, it is quite impossible to do a comparison with the BTC rigs. However, just out of curiosity, let's peek at the first ASIC-powered BTC mining rig I found with specs on power consumption. It also happens to be the best you can get on the market at the moment. The specs say it can do 2 Thash/s at a nominal power of 1650 W, so the efficiency ratio here is roughly 1.2 billion kH/s per kW. That is a mighty lot of hashing power (even though the figure is not comparable with the ones above). No wonder there is a market for such ultra-expensive gear that can do exactly one thing and, due to the continuous rise of the mining difficulty, will be obsolete in less than six months. With that kind of computing gear it starts to become necessary to have proper A/C in place even in the most arctic climate (since computer gear doesn't like to get too cool, either), and the rule-of-thumb figure for server rooms is that it takes at least as much power to get the heat out as to run the hardware that produced it, which would practically halve the mining-power efficiency ratio. Just one of those should be quite enough to keep part of a small house warm during the cold season.


The cost of electricity and return of investment

The total price of electricity where I live was last year on average 0.125 EUR/kWh (based on the pricing for a small single-family house with an estimated consumption of 18 MWh/year). Running the 2 TH/s BTC rig around the year (14.5 MWh) would cost roughly 1800 EUR. At the current obscenely high exchange rate of BTC, it would take 2.92 BTC to cover the electricity cost (and 7.15 BTC to pay back the rig with its 5999 USD list price).

At the same electricity price, the home-built LTC rig would drain 6.3 MWh in a year, costing roughly 788 EUR (currently the equivalent of 45.3 LTC), and paying back the price of the parts (1356 USD) would take an additional 57.6 LTC.
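
These electricity figures are straightforward to reproduce; here is a small Python check using the 0.125 EUR/kWh average price quoted above:

PRICE_EUR_PER_KWH = 0.125
HOURS_PER_YEAR = 24 * 365

for name, watts in [("2 TH/s BTC rig", 1650), ("home-built LTC rig", 720)]:
    kwh = watts * HOURS_PER_YEAR / 1000.0
    print("{}: {:.1f} MWh/year, about {:.0f} EUR".format(
        name, kwh / 1000.0, kwh * PRICE_EUR_PER_KWH))
# prints ~14.5 MWh / ~1807 EUR and ~6.3 MWh / ~788 EUR, matching the text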

According to a Bitcoin calculator, the above-mentioned BTC rig would have paid for itself by the third week, but assuming a mining difficulty increase of 30%, the electricity price as above and the current exchange rates and other constants as they are now, the profit would drop to a mere 100 EUR/week in just 190 days. At that point the return on investment would, however, be over 10,000 EUR, so it wouldn't be a bad investment as such. Adding some delivery and maintenance fees, the profit would likely be around 8000 EUR. It is worth noting that the calculator listed some recent rigs that will never break even (assuming the current situation and forecast), let alone gain any profit.

In contrast, it would take 200+ days for the LTC rig to pay for itself, even assuming the recent stability of the LTC mining difficulty. It would take roughly 560 days to get about the same profit relative to the price paid as above (and here I didn't even throw in any maintenance fees yet). As it is unlikely that the difficulty will stay at the current level that long, and the electricity price is also likely to go up, it might take a year just to break even. So, either the BTC value is vastly bloated, making BTC mining so profitable, or specialised HW just beats commodity gear in efficiency.


The carbon dioxide foot-print

On the ecological side, generating 14.5 MWh of electricity would produce something like 13.9 tons of CO2 if generated by coal-fired thermal power, 10.9 tons of CO2 with oil-fired thermal power, or 7.4 tons of CO2 with combined-cycle natural gas. Other sources of energy also cause some CO2 emissions indirectly (building and operation of the facilities etc.), from 0.5 tons (solar power) down to less than 0.2 tons (hydro power).
(Sources: Hitachi, U.S. Energy Information Administration. The values shown here are calculated as averages of the two sources.)
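
The CO2 figures follow from per-kWh emission factors; the ones in this Python snippet are back-calculated from the tonnages quoted above (roughly 0.96, 0.75 and 0.51 kg CO2 per kWh), so treat them as ballpark averages rather than exact data:

ENERGY_KWH = 14.5 * 1000
factors_kg_per_kwh = [("coal", 0.96), ("oil", 0.75), ("natural gas", 0.51)]
for fuel, factor in factors_kg_per_kwh:
    print("{}: {:.1f} t CO2".format(fuel, ENERGY_KWH * factor / 1000.0))
# prints about 13.9, 10.9 and 7.4 tons, as in the text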

Even though there are skeptics who claim that human-caused climate change is a myth, there is strong scientific evidence that CO2 emissions alter the climate. There's also no question about the adverse ecological effects of all sorts of large-scale industrial activity, or the fact that fossil fuels will run out some day (or more precisely, it will become too expensive to extract them from the ground). So, in the ever-continuing absence of fusion power plants, I think it is well worth thinking about where we use the energy that is produced, and what kinds of production methods should be preferred.


Summary

After such a rant it is always hard to draw things to a close, but I'll try...

Clearly it is best to have the equipment you're running match the requirements as closely as possible in order to be as efficient as possible. Home computers are bad at that, since they usually try to offer a balanced but wide set of features. Servers are built with a different mindset: they have a target task that needs to be fulfilled, with no extras.

Financially, the best you can probably do with a high-end gaming GPU (or an array of them) is to put it mining cryptocurrency when you're not gaming. That will pay for at least some of the next-generation gear you'll likely want to buy at some point. If possible, aim for renewable energy sources for the electricity you buy, to get some of that greenness into your personal IT, too. It can be argued that the heat dissipation of home electronics reduces the energy needed for heating, but that only applies to the cold season; having A/C to push out the excess heat in the summer would just put the profits down the drain and be a waste of energy.

Also, if you have boxes that are on 24/7, it's worth trying a CPU miner with a low priority (in Windows) / high nice value (in Linux). That way you'd get something out of those otherwise idle hours, to compensate for the base cost of having a box on (you have measured the power consumption of your computers when they idle, haven't you?).

However, one thing is sure: If you're going to make an investment, especially for financial purposes, be sure to calculate beforehand if the investment will pay itself back in a reasonable time.
(Uh, that might be the first financial statement I have ever made :D )

Sunday, January 12, 2014

Employing random hardware for Litecoin mining

Cryptocurrencies are a prominent trend at the moment, with Bitcoin (BTC) leading the way. In fact, BTC seems to have already passed the limits of hype and boomed financially to the extent that mining BTC is hardly profitable any more unless you have a state-of-the-art mining rig (and there is a chance the rig won't pay for itself as the mining difficulty rises). For those who want to try the ever-lucrative miracle of getting bucks out of thin air (or more precisely, out of CPU/GPU cycles), there are fortunately many alternatives left. Litecoin (LTC) is maybe currently the most prominent alternative to BTC (at least measured by market capitalisation) that can be mined (source: coinmarketcap.com). Ripple is ahead of LTC in market cap, but it works differently from the likes of BTC.

That's it for the financial part of this post. I'll give whatever I manage to mine to charity, and with the hardware I have there's no risk of getting rich in this business. Anyway, I got interested in the possibility of easily harnessing the processing power of unused hardware, along the lines of "plug it in and leave it there to crunch numbers". The same idea could be applied to anything that uses massively distributed processing.

I selected a live Linux distro on a USB stick as the OS for the experiment: Linux, since it doesn't require many resources, is easy to tweak and free to use, and I'm more familiar with it than with Windows. Grml Live Linux was the one I chose, mostly because of its small footprint (a good fit for old 512 MB sticks).

There were two things in this project that I hadn't done before: customising a live distro (to add the miner software) and re-producing a bootable ISO image. The first part turned out to be pretty trivial, even though Grml uses a squashed file system, so I had to unsquash it first and squash it up again after the modifications. Since I don't need to retain any run-time changes in the filesystem between reboots, squashfs is just fine for the purpose.


Customising of the live image

Mount the original Grml ISO and copy it somewhere for modifications:

> sudo mount /var/tmp/grml32-full_2013.09.iso -o loop /media/cdimage
> cp -a /media/cdimage /var/tmp

Unsquash the root filesystem, modify it and put it back together:

> mkdir /var/tmp/grml32-custom
> cd /var/tmp/grml32-custom
> unsquashfs /var/tmp/cdimage/live/grml32-full/grml32-full.squashfs 


[copy stuff and adjust to run on the target setup]

> cd ..
> mksquashfs grml32-custom/squashfs-root/ \
  /var/tmp/grml32-custom_2013.09.squashfs -b 262144
> cp grml32-custom_2013.09.squashfs /var/tmp/cdimage/live/grml32-full/

I ended up using the same block size for the squashfs as the original (hence "-b 262144"), but that likely doesn't matter much beyond reduced overhead for large files, since the default block size mksquashfs uses is pretty small.

Finally, to save space, I deleted the original .squashfs file from live/grml32-full/ and modified live/grml32-full/filesystem.module to point to the customised version.


Re-building the ISO

This took me a while to solve, since there were all kinds of instructions on the net on how to produce a bootable ISO, but none of them worked. With genisoimage I got as far as being able to boot the image file in QEMU, but after copying the image onto the USB stick it no longer worked. The issue seemed to be that the image didn't contain a proper partition table (checking with cfdisk showed no partitions). Of course, one could just have the contents of the root partition in the image and then install a boot loader manually after writing the image, but I preferred to produce a neat and simple image that does everything the original one does, too.

I finally ended up using xorriso as instructed somewhere:

> xorriso -as genisoimage -r -o grml32-custom_2013.09.iso \
  -b boot/isolinux/isolinux.bin -c boot/isolinux/boot.cat \
  -no-emul-boot -boot-info-table -boot-load-size 4 \
  -V "GRML_custom" -iso-level 3 -partition_offset 16 \
  -isohybrid-mbr /usr/lib/syslinux/isohdpfx.bin /var/tmp/cdimage/

Finally, I just cat'd the image onto the USB drive.

This required me to install xorriso and syslinux and their dependencies from the Debian repository, but that was all the extra that was needed.


The aftermath

I used cpuminer in this experiment because that is what I use on my boxes anyway (I couldn't get cgminer to compile, and I doubt my low-end GPUs would provide much extra power anyway). I had built an init.d script for minerd, which makes it rather easy to construct a plug-in-and-mine solution. The nice thing here was that Grml already had all the required libraries, so I just had to copy over the executable, scripts and config files. I don't know how much it matters that I am now running an executable built for an old AMD Athlon XP on an Intel Core i5, but unless there is good evidence that compiling for a newer target CPU would do any good, I won't bother. At least this way the executable should run on pretty much all hardware that is worth trying - the old Athlon XP here gets about 0.8 khash/s, which is pretty near the point of not being worth trying...

Interestingly, running 4 miner threads on the Core i5 chip (which advertises 4 cores due to hyper-threading) gained about 14.1 khash/s, but since that also meant having the laptop fan blow at top rpm constantly, I changed to running only 2 miner threads, still gaining about 9.7 khash/s. This is a good example of the limitations of shared-something CPU architectures, but probably also an example of how turbo boost technology can compensate by reaping the headroom left in the TDP when only part of the chip is in use.

(Edit: after measuring the actual power draw with both 4 and 2 mining threads, it was obvious that the power vs. hashing efficiency is much better with 4 threads.)