Friday, July 3, 2015

The joys and griefs of having a hardware keyboard with a smartphone

My long-awaited TOHKBD arrived in the first batch that was sent out, just before I started my early summer vacation in mid-June. The package actually arrived before the software had even passed QA at Jolla, so I had to wait for a couple of days before being able to actually use the keyboard - that certainly was a chilling factor. Another chill was the M key, which I noticed didn't have any tactile response, and I assumed it was faulty. That was confirmed when the SW was made officially available and I was able to test the keyboard for real: the keypress was registered only after applying a lot of force. I joined the few others with similar issues by whining about it on the TMO forum and also reporting it by email to FunkyOtherHalf. Dirk van L, the one responsible for coordinating the HW development, responded and asked me to wait and see if it settles with use - OK, fair enough, getting a replacement keypad won't be easy anyway for a gadget that has been manufactured in a small volume, strictly based on orders. As I'm now typing this post with my TOHKBD, it is obvious that things have improved a lot, although there's still a huge difference in the response and the required force. There's also another key which needs more force but has a normal response (if any Os are missing from this post, that's the reason), and that's perhaps even more distracting.

There are still SW issues open even after a couple of updates by Kimmo L, the most prominent being the lack of layouts other than the American one. Support for other layouts requires changes in Sailfish, and it'll take some more time until Jolla releases an update with those changes.


Despite all that, I'm happy I spent my money on the project. There's no way I could (or would) have written a text this long with the on-screen keyboard. I'm also confident that the SW issues will get resolved sooner or later. Even now it's a delight to use the terminal app, and writing in English is fine already. It seems that not all apps support all keys (tab, home, end and page up/down being the most prominent), but the community will fix that, too, eventually. Jolla, together with the community, will also eventually enable landscape mode everywhere (definitely something to wait for from Sailfish 2.0). The greatest remaining issue is the faulty keys. According to a Kickstarter project page update, all keypads should have been manually tested, but it is evident that if that really was done for all batches (or even for the batch mine came from), it was done badly. I'd hope that there will eventually be some sort of compensation, with replacement keypads for those who need them. Hoping for the best!

Oh, one more "grief": the originally slim phone now has a rather bulky look, which will certainly invite comments like "is that from the 90s or what?" (hey, it is still thinner than my previous trusted phone, the E90!). Because of that, my Mapbagdrag case doesn't fit anymore either. Well, maybe someone will, some day, make a nice case for the sturdier TOHs, too.



Friday, January 9, 2015

Really unlike - state of affairs with the Jolla phone 1.5 years after launch party

It's roughly 1.5 years after the event at which Jolla launched their Sailfish OS powered phone, and a bit over one year after the first devices were sent out to people who pre-ordered. Now that I've been using my Jolla for a couple of weeks, I want to write up something - not just to make noise about the company and their product, but hopefully also to give some useful points of view to anyone considering buying into the Sailfish ecosystem. After all, I did wait myself, too, since I was holding out for The Other Half physical keyboard - which I'm now eagerly waiting for...

To be honest, Jolla is still a bit of a geek phone, made by geeks, supported by a geek community. I switched from the old'n'trustworthy Nokia E90 and I'm in heaven, but coming from a Samsung Android or Apple iOS device might be quite different (I've been using Samsung at work for some years now). The basics are there (of course it depends on what is "basic" to any given user), but one might still miss some things. For one, the app market is still small, even though with Android app support there are potentially quite a few apps available. Edit: The most annoying thing about the Android support might actually be that Aptoide and Yandex don't carry the same apps that are available on Google Play, or you might for some other reason get a much older version - I have reddit sync v10 from Google Play on my Android 4.4 phone but got v5 from Aptoide for the Jolla (which has Android 4.1).
Of course, one must note that it is a product created by a very small team of people (compared to the teams working on mobile devices at Google, Samsung and Apple), and it takes some time to get everything built and fine-tuned. Personally, taking everything into account, I think the result has been awesome this far, and if they keep up the same pace at Jolla, this is going to become something remarkable.

There are, however, some things that are sub-optimal or downright missing at the moment, for example:
  • Android apps can't access low-level device data such as cellular and wi-fi stats, which stops certain geek apps from working. Edit: Cell ID data is missing at least, but then again it is hard to compare since you might not get the same app version installed on the Jolla.
  • Some Android apps also can't seem to access location data, even though at least GPS data is available. This might of course be due to a non-standard way of accessing location data in the app.
  • There are many things that cannot be done in the Sailfish UI, but require enabling developer mode and accessing the command line. For me, and the other geeks, this is just fine, but I can't imagine everyone thinking the same.
  • The out-of-memory killer is rather trigger-happy in the latest Sailfish version (1.1.1.27/Vaarainjärvi), which somewhat nullifies the benefits of multi-tasking.

Some other things I think are pretty remarkable:
  • There are many neat things that can be done via the developer mode CLI, like making certain apps start up automatically, customising virtual keyboard layout etc.
  • The multitasking really works. A non-geek example of this is that I can have my favourite music playing from Youtube in the background while doing something else on the phone - and I can even see the live thumbnail of the video on the home screen!
  • User interfaces are always subjective to judge, but coming from Android / Samsung TouchWiz I think the Sailfish UI works at least as well, and it didn't take too long to learn; in fact, I noticed this week that I was trying to swipe an app to the background on Android :)
  • There's pretty good support for running Android apps when there's no native app for a given purpose. One could possibly even install the whole Google Play stack on Sailfish. Now, I dare you to try the same in any other ecosystem!

One thing to note here is also that, unlike the big ones, Jolla does listen to their community and works together with it to make the product more pleasing. Working communication with a supplier's techies (and a good community overall) is something I've learnt to appreciate.

All in all, I think the Jolla phone is a good device, as long as one doesn't have too high expectations of getting a sleek and thoroughly finished device. The potential - in my opinion - is huge, so stay tuned...

Update (10.1.2015 9:30 GMT+2): added clarifications on the Android support.

Monday, October 27, 2014

Things that you might think don't influence the buying decision - yet they do

Recently I've been reviewing APM products for the Java-based system I run for a living. After some research online I ended up with two choices: one based on the recorded demos I'd seen earlier, and another simply because it was very easy to get for a trial period. If, out of two choices, seeing what one of them really does requires me to first expose all kinds of details about my company and my position, while the other is freely downloadable for a 30-day trial by just entering a name, email address and company name, I'll choose the latter.

The one with an easy self-service trial period (let's call it product A from now on) was a breeze to install, both the server and the agents, and the GUI looked very sleek and intuitive - it took me roughly an hour to get as far as looking at some analysis of slow transactions from a test server. They had clearly made an effort to polish the product and make it such that one doesn't need to be an über geek to install and use it. I am rather a geek myself, but I still appreciate the effort.

The supplier of the other one (product B), of which I'd seen some really nice demos, had a peculiar requirement: I had to schedule a live demo before getting the download. Had I not already made up my mind that I wanted to have this product for a trial run, I might have bailed out (as the demo required much more of my time and I'd already seen much of the stuff they could demo for me). However, I made the personal sacrifice of attending a session outside office hours (due to the awkward eight-hour time difference). During the demo it became clear that the product actually didn't officially support the version of the monitoring software that we had and to which product B was to be integrated, but they had a beta to which they immediately added the support, and they asked me if I'd like to wait for the release or have it immediately as a beta. The original estimate for the release was "a few weeks", but it eventually changed to "next week". I waited, happy about such a swift response to the need.

As the promised time came, I received a download link. I grabbed the installation packages and went on with the instructions. However, I got stalled for roughly an hour trying to get the agent installed successfully, as the only automated installation was for the case where the application server is started with a batch file, and there were only some notes about setting things up when it is installed as a service. I finally figured out the parameter I needed to add to the service wrapper to get it going, and so finally got everything installed. Then I went to the GUI to get some readings on how the system was running, just as I did with product A. This time it was not a breeze, either, despite the demo I'd seen just a week earlier. I had to look things up in the manual to navigate the GUI and get what I wanted.

Needless to say, at this point I had a rather strong preference for product A for its ease of setup and use.

I emailed my experience with product B (with some very honest critique, also pointing out the contrast with their competitor's product) to the account manager I had been in contact with. I was OK with having the technical issues forwarded to their support team, and after two days I received an email from the product manager, who asked for some clarifications, mentioned that some of the issues I noted were known and gave some workarounds, and promised to get back about the rest of the issues after they had looked into them more closely. The next day I noticed they had actually opened support cases for many of my issues, as I got a more detailed status report and some more assistance.

One and a half weeks later, after catching up with some other things in the meantime, I emailed some further info and questions on some of the issues, and again received a very prompt and helpful response. The next day - directly after installing the agent in production - I had yet another issue (due to having messed up some config while trying to work around an issue with the agent installer on my own), and by the next morning I had enough info to remedy the situation.

At this point my preference had changed again - despite product B being hard to install (well, I'd already installed it on every server I needed), having some bugs and limitations (that were now known to me), and having an inferior GUI and user experience overall.

Why?

I had seen that they have a support team and attitude that is not that common in the industry. I knew that if I ever encounter an issue with their product, I'll have a solution in my inbox promptly. Besides, having a GUI with a steep learning curve is not an issue, as I've got used to things being that way (that's what I do here, figure things out), and I prefer a "professional" UI over an easy and simple one that can get restricting when you want something that is not a very common use case. Last but not least, the off-the-shelf license cost of product B in our case was lower than the price of product A even after applying all the possible discounts their sales person could come up with.

So, even if your product is polished and fancy, you might still get beaten by someone who has the right attitude (not to forget a more realistic price tag). And if you don't offer an easy-to-start trial period, you might get ignored altogether, unless you've already succeeded in making a very good impression earlier.

Saturday, June 21, 2014

Sometimes it is better to have a fresh start than to work on what has been around for ages

I finally got to upgrade my ancient desktop at home. The box was originally built around 7-8 years ago, with some minor upgrades along the road, and by now it had an Athlon XP 2500+ with 1.5 GB of RAM. Seriously, it could still run a light-weight Linux desktop pretty well, even with two user sessions. Much of the merit goes to the U160 SCSI server-grade disk system, which only became outdated in fairly recent years with the SATA architecture. The origins of that box go even further back in history, as it is basically the direct offspring of my first Linux box from the days of RH 4.2 (and I mean RH before the days of RHEL or even CentOS, literally in the last century).

I'd rather build new systems alongside the current ones, so that there's time to slowly work on the new box to get everything going fine and dandy, but this was to be an "old junk out, new junk in" style of operation. The HW part of it was pretty simple: only the main board with CPU and memory needed to be replaced and the SSD installed, since the PSU was fairly recent and had enough wattage for the new setup - mostly because I'm doing fine with the integrated GPU in the new i7, and even the CPU is a low-power model. Just plug the SCSI adapter and the 2nd NIC back in and power back on. It was a no-brainer to get the system to boot back into Debian 7.5, and everything was already running fast and smoothly - but it was running all 32-bit.

Since I have experience of both doing fresh installs and major upgrades, I've noticed that I prefer the upgrade path, since there's no need to rebuild the configuration piece by piece afterwards; rather, everything falls into place in one big but process-wise simple effort. First, I intended to upgrade to a 64-bit kernel with SMP support to get all CPUs and the full RAM into use. I hadn't done cross-compiling before, and after some futile efforts to get the Debian kernel build tools to produce a working 64-bit kernel (I might have succeeded without using a .deb build, but I've learnt to like the concept too much to diverge), I went with a stock kernel from the repository. Great, it supported all the important hardware out of the box (and yay, it was the first time in many years that I'm running a stock kernel). But what about the rest of the software? A 64-bit kernel can run it just fine with IA32 emulation, but isn't it a bit dumb to only get some of the performance of the shiny new HW... That's what I thought, and went googling about converting the system from 32-bit to 64-bit.

That's where the crux of the story comes in.

There are plenty of pages giving some instructions and also some warnings (e.g. this askubuntu.com Q&A which says 'it is very complicated' and is absolutely correct about it). The Debian wiki has a great article on architecture migration, which I decided to follow. I got pretty far with it (that's why I called it a great article), but eventually I couldn't get all packages to reinstall, and aptitude was acting pretty confused: initially it didn't list any packages under Installed packages even though it did admit that I had packages in the installed state, and even after I told it to rethink (with forget new packages + update) it was still in denial about the overall state of affairs. What was more worrying, there were quite a lot of errors about libs being of the wrong architecture. I got some individual packages fixed using the same manual procedure as for resolving conflicts during the mass upgrade/migration, but eventually I felt that this was not going to end up as a wholly working system. Looking at it now, I made some mistakes in the process which at least made things harder, so I think it should still be possible to follow the article successfully, but it definitely is not an easy path and requires knowledge of resolving package conflicts (doing a couple of dist-upgrades when the Debian team releases a new stable release is a good prerequisite). After all, apt is a terrific package management system, IMO, and can do all kinds of stunts in the hands of an expert.

So, after wasting many hours on the upgrade path, I gave up, downloaded an install ISO and started over. After approximately the same amount of work as I spent on the upgrade attempt, I now have pretty much all the major things in place and working, and after all, now I know that if something is off, I just need to compare things with the old state of affairs that can be found on the old disks and migrate the changes. It is also a good chance to re-learn some things, like setting up DAV on Apache for use with Subversion.

What is to be learnt from all this? At least for people like me, who prefer building on the old, it is good to learn to consider the benefits of a fresh start, and the downsides of carrying the load of the past over to the new platform. What might help with this is having some sort of a configuration management system that could be used to restore at least part of the customisation that has to be re-done. Having that at home might still be overkill, though...

Sunday, March 23, 2014

Mining not for average Joe (for long at least)

I'll put an end to my adventures in cryptocoin mining. The most obvious reason is that my current hardware is not really good for it (and with summer coming there would be heating issues), but now that ASICs are coming in also for Scrypt coins, the difficulty will be on the rise, making it even worse for people like me. There are even cloud mining rigs that one could rent (I'm waiting for someone to soon publish an article stating that "nn% of all computing power is used for cryptocurrencies")...

I've seen it claimed that Satoshi Nakamoto had intended mining to be something that average folks could also profit from, but given the fierce competition to build ever more powerful mining rigs, that is not happening, and it likely never will at large - as long as mining doesn't include some sort of a human component (without making mining a full-time job for the person), or unless new algorithms are introduced often enough to keep the slow ASIC development at bay (but even then, people who are both able and willing to buy $1000 worth of gear every two years will gain the most).

Despite my decision, I'll be keeping my eye on the subject; after all, I do have some fractions of LTC and some DOGE.

Edit (some 3 hours later): Yet it is not that simple. CPU-mined coins are vulnerable to botnets, and on the other hand ASICs are terrific in hashing vs. power efficiency (i.e. green mining). And still, lacking clean, renewable and cheap sources of electricity, it's not a good idea globally if people start buying rigs that draw 1 kW constantly...

Sunday, March 16, 2014

Things to consider for profitable cryptocurrency mining

There is a looong discussion on Reddit on whether Dogecoin mining is profitable or not. I do not claim to have had the stamina to read all the way through it, but one theme seems to get repeated ("yes it is" - "no it isn't").

Opinions also vary widely on the usefulness of so-called altcoins (meaning anything other than Bitcoin). Surely any coin (I think from now on I'll use "coin" instead of the more tedious "cryptocurrency") that is not widely accepted as payment isn't really useful as a token of exchange, but for both miners and traders they might prove useful. However, as these coins come and go, it's essential to assess whether the value of a given coin is expected to hold or increase in the future, to prevent losses.

Trading aside (since that's not my cup of tea), once one has established an adequate level of trust in a given coin, there are things to consider (after considering the efficiency of your mining hardware):
  • Do you want to take the exchange risk involved in holding on a coin for more than a day? If you do, do you see a given coin increasing in exchange value in the near future?
  • If you're going for low exchange risk and will immediately sell what you mine, does the lower risk factor counter the daily transaction and exchange fees? Also, what coin is the most profitable today regarding difficulty, reward, network hash rate and the target coin/currency?
  • Even if you're willing to take the exchange risk, it's worth checking the profitability based on difficulty, reward and network hash rate.
There are a number of mining profitability calculators around the Net: CoinWarz, Dustcoin, CrabCoins and whatnot (please don't get offended if your favourite one is not listed, those are just random ones I ended up at). I got interested in how those calculators actually estimate the profitability (for that is what it is, an estimation, since random events are in play) - and I think everyone who uses the calculators should be interested as well. CoinWarz does the calculation on the server so I couldn't check their code, but both Dustcoin and CrabCoins reveal their formula in the page source. Both also use pretty much the same formula:

 time [s] x hashrate [H/s] x reward
------------------------------------
         difficulty x A

where A = 0x100010001 at Dustcoin and A = 2^32 at CrabCoins. Since those values are very close to each other, the sites give almost exactly the same results.

At this point I started looking up more calculators. CoinSelect, Where to Mine and Criptovalute seemed to give the same results, so I guess they use the same formula, too. And hey, the formula does make sense: the longer the time, the greater your hash rate or the greater the reward, the greater the profit - and the greater the difficulty, the smaller the profit. What bugs me is that coefficient A, as I don't know where it is derived from. The actual code at Dustcoin uses two other constants in its place, but they are just as cryptic to me. I'd be glad if someone pointed me to an explanation for the coefficient.
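As a sanity check, I put the formula into a few lines of Python. This is only a sketch of the calculation the sites seem to do - the coin parameters in the example are made-up values, not real network data:

# Rough mining income estimate, following the formula the calculator
# sites appear to use. The example numbers below are made up.

def expected_coins(hashrate_hps, time_s, difficulty, block_reward, a=2**32):
    """Expected number of coins mined in time_s seconds.

    'a' is the mysterious coefficient: 2^32 at CrabCoins,
    0x100010001 at Dustcoin - nearly the same number
    (possibly the average number of hashes needed per block
    at difficulty 1, but I haven't verified that).
    """
    return (time_s * hashrate_hps * block_reward) / (difficulty * a)

# Example: a 300 kH/s rig mining for 24 hours on a coin with
# difficulty 1500 and a block reward of 50 coins (made-up values).
coins = expected_coins(hashrate_hps=300e3, time_s=24 * 3600,
                       difficulty=1500, block_reward=50)
print(f"Estimated coins per day: {coins:.4f}")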

One thing to note about the calculators: always check whether they are using correct, up-to-date data. The difficulty might be off from the current one, as might the exchange rates, and even the reward (but that would mean their data is really stale). It makes sense to check at least two sources that you trust, to eliminate the risk of deciding based on incorrect data.

It is very likely that an extremely favourable mining situation will not last long, as other miners will come along too, raising the hash rate, which in turn makes the difficulty rise. So there will be a constant ebb and flow, which also means one should automate pool/coin switching based on the estimated profit, to continuously adapt to the changing situation. There is already software for that; quick googling brought up CryptoSwitcher, and I remember having seen others as well. For example, cgminer has an API that allows centralised remote control of miners (see the sketch below), and when you add automatic decision making based on network data, your miners should always be after the largest profits, or at least staying away from the least profitable coins.
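To illustrate the remote-control part, here is a minimal sketch of telling a cgminer instance to switch to another of its configured pools. It assumes cgminer is running with its API enabled on the default port 4028 and uses the plain-text "command|parameter" form of the API; check the cgminer API documentation for the exact details before relying on this.

import socket

def cgminer_command(host, command, parameter=None, port=4028):
    """Send one command to cgminer's text API and return the raw reply.

    Assumes the API is enabled (--api-listen) and that this host is
    allowed to issue privileged commands (--api-allow).
    """
    msg = command if parameter is None else f"{command}|{parameter}"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(msg.encode("ascii"))
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk
    return reply.decode("ascii", errors="replace")

# Example: make the miner on 192.168.1.10 prioritise pool number 1.
# print(cgminer_command("192.168.1.10", "switchpool", "1"))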

It should be quite easy to implement home-brewn mining automation, since e.g. CoinWarz offers an API that one could use to directly access their profitability data. The free version allows 25 calls in 24 hours, meaning the situation could be checked once an hour, which should be quite enough when you're not doing this too seriously. The lack of real-time network data can be compensated for by steering away from the most volatile coins. There will still be the risk of sudden large exchange rate changes, but those should be rare enough to keep the risk relatively small.
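To show how simple the decision part could be, below is a sketch of an hourly loop that picks the most profitable coin from whatever profitability feed one uses. The fetch_profitability and switch_miners_to functions are purely hypothetical placeholders - the actual CoinWarz API has its own endpoints and response format that I'm not reproducing here - and switch_miners_to would wrap something like the cgminer call above.

import time

def fetch_profitability():
    """Placeholder: return {coin: estimated_profit_per_day} from whatever
    profitability source you use (e.g. the CoinWarz API, whose actual
    request format is not shown here)."""
    raise NotImplementedError

def switch_miners_to(coin):
    """Placeholder: point the miners at the pool for the given coin,
    e.g. via cgminer's switchpool command."""
    raise NotImplementedError

current = None
while True:
    profits = fetch_profitability()
    best = max(profits, key=profits.get)
    # Require a clear margin before switching, to avoid flip-flopping
    # on small fluctuations (and wasting the limited API calls).
    if current is None or profits[best] > 1.1 * profits.get(current, 0):
        switch_miners_to(best)
        current = best
    time.sleep(3600)  # the free tier allows roughly one check per hour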

Finally, for those who want it to be extremely easy, there are mining pools that automatically do the switching for you. If you trust that their algorithm does a good job, one of those is the easiest way to get the benefits of coin switching.

Sunday, February 2, 2014

Adding sharing buttons for Pinterest, Reddit, Delicious, Stumbleupon and LinkedIn to a Blogger blog

My better half wanted to add a Pin It button to her blog and needed my help, so I did some googling around it to find out how to do it. As a result I figured I could add support for some other sharing sites in my own blog as well. It was not as straightforward as many blog posts and support docs on the subject claim, so I'll describe here what I did in order to get them all nicely lined up. I hope this makes life easier for somebody else.

This was the resulting row of buttons, in case I end up changing the layout later on...


Pinterest

There is a rather good blog post at bloggercentral.com on adding a Pin It button to Blogger, and it also covers the basics of the blog template.

However, I had very little luck with the embedded template editor (after trying two browsers, including Chromium, saving the template after changes didn't work), and found out it is way better to take the XML backup of the template and modify that with a good text editor. Remember to save your changes under a different name! On Linux e.g. kedit and gedit work fine, but on Windows it was tougher, since Notepad doesn't understand Unix-style line breaks and Wordpad isn't really a text editor, so it is not guaranteed to preserve the formatting. I downloaded Notepad++ Portable for the task (since it does not require a system-wide installation), but any decent text editor that supports UTF-8 encoding and Unix-style line breaks should do.

The other thing that didn't go as the instructions said was the placement of the button code - in both of the blogs it needed to be added at the second occurrence of <data:blog.post/>. I should study the template structure more to figure out in which cases the other occurrences are used, but anyway I made the addition to all of them.

Reddit

The support doc on reddit.com shows many options for the button, all with sample HTML code, but there is a catch in using them with Blogger. All the samples would more or less work on a single blog post, but they wouldn't work on the blog home page, which has many posts. The advanced options further down the page show a way to go with some of the buttons (those that use a script tag), but in all cases the page reference needs to be modified to suit Blogger. Below is what I use.
<a expr:href='&quot;http://www.reddit.com/submit?url=&quot; + data:post.url'>
 <img src='http://www.reddit.com/static/spreddit7.gif' alt='submit to reddit' border='0'/> </a>

Noteworthy things:
  1. Using the expr prefix on the href attribute tells Blogger that it needs to interpret the attribute value, which contains references to layout data tags (e.g. data:post.url).
  2. The quoting also needs some tuning, since the whole expression to be interpreted requires quotes around it, and the static text part in it also needs to be enclosed in quotes - thus the two occurrences of &quot;.
If you're using the buttons that have just a script tag in the example, look at the example given under Interactive button advanced settings and change the values of reddit_url and reddit_title to point to data:post.url and data:post.title, respectively. However, reddit seems to figure out the title from the URL if the title is not given, which is nice, since I didn't find a way to make Blogger understand multiple URL parameters (although it should be possible, and I do know how to format a GET request with multiple parameters).

Delicious

delicious.com also shows a working sample of their button, but as mentioned just above with reddit, it didn't work that well with multiple parameters on the URL in a Blogger template. However, it seems to be enough to pass just the url parameter. Also in this case I moved the request URL to the href attribute, even though it is not as neat as hiding it in an onClick handler. So, here's what I use:
<a expr:href='&quot;http://del.icio.us/post?url=&quot; + data:post.url' target='_blank'>
  <img border='0' alt='Delicious' title='Del.icio.us' src='https://delicious.com/img/logo.png' height='16' width='16' />
</a>

Stumbleupon

The badge creator at stumbleupon.com looks rather fancy, but at that point I had grown a bit tired of all the fancy things which are hard to put into the template, so I took the easy route of peeking at a page with a working badge and extracting the URL and the icon from there. Not quite as recommended, but it seems to work, too:
<a class='logo' target='_blank' expr:href='&quot;http://www.stumbleupon.com/submit?url=&quot; + data:post.url'>
  <img border='0' alt='Stumbleupon' src='http://cdn.stumble-upon.com/i/badges/badgeLogo18x18.png?v5' height='18' width='18' />
</a>

LinkedIn

There is a share plugin generator on developer.linkedin.com which gives the necessary code for the share button, and the data-url attribute with the added expr prefix gets its value from data:post.url just like all the above.

The final touch

All of the above would work just fine alone, but putting them together took some additional effort to make the result look nicely aligned. From the Pinterest sample code I took the enclosing div and put all the rest inside it, too. However, the icons ended up pretty badly aligned, so I added vertical alignment.

That solved all but the Pin It and inShare buttons, which have styles enforced by the accompanying Javascript code. For Pinterest it is possible to just ditch the Javascript and go with a plain link+icon, but LinkedIn has made it more complex, so I ended up adjusting the styling of the enclosing div and adding some spacers to add some space around the buttons.

So here is my template addition as a whole:
<style type='text/css'> 
  #sharing-wrapper {margin:10px 0 0 0; text-align:left; vertical-align:baseline !important; padding:0px !important;}
  #sharing-wrapper img {padding: 0px !important;}
  #sharing-wrapper .spacer {padding-left: 8px;}
</style> 

<div id='sharing-wrapper'>

<!-- pinterest start -->
  <a data-pin-config='none' data-pin-do='buttonPin' expr:href='&quot;http://pinterest.com/pin/create/button/?url=&quot; + data:post.url'>
    <img src='//assets.pinterest.com/images/pidgets/pin_it_button.png'/>
  </a>
  <span style='margin-left:-44px;'>
    <a data-pin-config='none' data-pin-do='buttonBookmark' href='//pinterest.com/pin/create/button/' style='outline:none;border:none;'/>
  </span>
  <script src='http://assets.pinterest.com/js/pinit.js' type='text/javascript'/> 
<!-- pinterest end -->

<span class='spacer'/>

<!-- reddit.com start -->
<a expr:href='&quot;http://www.reddit.com/submit?url=&quot; + data:post.url'>
  <img src='http://www.reddit.com/static/spreddit7.gif' alt='submit to reddit' border='0'/> </a>
<!-- reddit.com end -->

<span class='spacer'/>

<!-- del.icio.us start -->
<a expr:href='&quot;http://del.icio.us/post?url=&quot; + data:post.url' target='_blank'>
  <img border='0' alt='Delicious' title='Del.icio.us' src='https://delicious.com/img/logo.png' height='16' width='16' />
</a>
<!-- del.icio.us end -->

<span class='spacer'/>

<!-- stumbleupon start -->
<a class='logo' target='_blank' expr:href='&quot;http://www.stumbleupon.com/submit?url=&quot; + data:post.url'>
  <img border='0' alt='Stumbleupon' src='http://cdn.stumble-upon.com/i/badges/badgeLogo18x18.png?v5' height='18' width='18' />
</a>
<!-- stumbleupon end -->

<span class='spacer'/> 
<!-- linkedin start -->
<script src='//platform.linkedin.com/in.js' type='text/javascript'></script>
<script type='IN/Share' expr:data-url='data:post.url'></script>
<!-- linkedin end -->

</div>
I put that right after the <data:blog.post/> tag so that it appears after the post body text. It would be even nicer to have it on the same row as the built-in share buttons, but that would really require figuring out the templating more deeply than I am willing to do right now.

In addition, put the following right before </body>, since repeating it for every post messes up the positioning of the Pin It button on the blog home page:
<script src='http://assets.pinterest.com/js/pinit.js' type='text/javascript'/>

(oh, and since this blog is visually not that fancy, I think I'll drop Pinterest out, I doubt anyone would pin from this anyway...)

Addendum (5.2.2014): Digg and Tumblr

Later on I also added Digg and Tumblr sharing.

Digg was simple, although the current incarnation doesn't seem to provide any official share button or widget. Thus I just made a link to their submit URL and used their favicon for the icon.

Tumblr was a bit harder, since the official JavaScript version only works on single post pages (not on the blog home page), and when using a simple GET request URL, the shared URL needs to be encoded - which the Blogger template API can't handle (also, they don't seem to fetch the page title automatically, so that, too, must be encoded and included in the link). So I made my own inline JS to create the link the way I want it.

Here are the additions to the above, placed just before the closing </div>:
<span class='spacer'/>

<!-- digg start -->
<a class='logo' expr:href='&quot;http://digg.com/submit?url=&quot; + data:post.url' target='_blank'>
  <img alt='Digg' border='0' height='18' src='http://digg.com/static/images/digg_favicon.png' width='18'/>
</a>
<!-- digg end -->

<span class='spacer'/>

<!-- tumblr start -->
<script type='text/javascript'>
  var strPostUrl = "<data:post.url/>";
  var strPostTitle = "<data:post.title/>";
  document.write("&lt;a href='http://www.tumblr.com/share/link?url="
    +encodeURIComponent(strPostUrl)+"&amp;name="+encodeURIComponent(strPostTitle)
    +"' target='_blank' title='Share on Tumblr'&gt;&lt;img src='http://platform.tumblr.com/v1/share_3.png' width='129' height='20'/&gt;&lt;/a&gt;");
</script>
<!-- tumblr end -->