Signing an Android App

Before you release you’ll need to sign your .apk file. The guide from Google implies that you should make a single key for your identity, and sign all your apps with that key. I recommend against that: make a key per app. The reason is that one day you might want to hand over responsibility for an app to another company or developer – or, the lottery win, a company might want to buy the whole thing from you. In that case, if you have a company-wide key, you’re handing over the keys to all your apps, not just one.

For all of this I’ll assume you already have the Android SDK installed. You’ll also need a JDK installed, which supplies the keytool and jarsigner tools.

Let’s say then that you’re signing MegaApp.apk. First you should generate an individual key store just for this app.

$ keytool -genkey -v \
    -keyalg RSA -keysize 2048 -validity 10000 \
    -keystore megaapp.jks \
    -alias megaapp
Enter keystore password:  
Re-enter new password: 
What is your first and last name?
  [Unknown]:  MegaApp
What is the name of your organizational unit?
  [Unknown]:  Android Apps
What is the name of your organization?
  [Unknown]:  FussyLogic Ltd
What is the name of your City or Locality?
  [Unknown]:  Cheshire
What is the name of your State or Province?
  [Unknown]:  England
What is the two-letter country code for this unit?
  [Unknown]:  GB
Is CN=MegaApp, OU=Android Apps, O=FussyLogic Ltd, L=Cheshire, ST=England, C=GB correct?
  [no]:  yes

Generating 2,048 bit RSA key pair and self-signed certificate (SHA256withRSA) with a validity of 10,000 days
        for: CN=MegaApp, OU=Android Apps, O=FussyLogic Ltd, L=Cheshire, ST=England, C=GB
Enter key password for <megaapp>
        (RETURN if same as keystore password):  
[Storing megaapp.jks]

We’re going to store one key per keystore, so we name the key and the keystore both megaapp (this makes it easier to remember what alias you used :-)). You can, of course, pick your keystore password to match the security practices of your organisation – however, I suggest you treat the keystore file itself as the secure element, and make the password the same as the alias, “megaapp” (we’ll see why below). You can pick the DN (distinguished name) as you please; my recommendation is not to put your own name when asked, but to put the name of the app again.

We’ve got a 2048-bit key, valid for 10,000 days (27 years), stored in a keystore file under an alias and password the same as the keystore name. Obviously this file has very poor internal security – so you must be very circumspect about where you store it and who you give access to it.
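
You can sanity-check the keystore contents at any time with keytool’s list command (shown here with the values used above):

$ keytool -list -v -keystore megaapp.jks -storepass megaapp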

What I’d like to describe is using a continuous integration system (I like Jenkins) to build and sign a release-ready version of your apk. For that reason we’re going to do everything on the command line so that you can reproduce the process in your build server’s scripts.

Build your app. Here’s how I do it:

cd your/app/directory
rm -f build.xml
rm -f local.properties
android update project --name "MegaApp" --path .
ant clean
ant release

The ant release command builds you an unsigned .apk (unlike the ant debug command, which builds an .apk signed with the debug key, which is unsuitable for release to the Play store). If you have a more complicated app, perhaps with libraries, you may want to copy or symlink them in this sequence too.

We’re now in a position to produce a signed .apk. It’s all boilerplate, so you will substitute your app name where appropriate.

# input parameters
APKNAME=MegaApp
KEYALIAS=megaapp

# standard form of files and aliases
KEYSTORE=${KEYALIAS}.jks
STOREPASS=${KEYALIAS}
KEYPASS=${KEYALIAS}
UNSIGNED=bin/${APKNAME}-release-unsigned.apk
SIGNED=bin/${APKNAME}-release-signed.apk
ALIGNED=${APKNAME}.apk

# generate signed-but-unaligned apk
jarsigner -verbose \
    -sigalg SHA1withRSA -digestalg SHA1 \
    -keystore ${KEYSTORE} \
    -storepass ${STOREPASS} \
    -keypass ${KEYPASS} \
    -signedjar "${SIGNED}" \
    "${UNSIGNED}" ${KEYALIAS}
# check signing worked
jarsigner -verify -verbose -certs "${SIGNED}"
# create aligned apk from signed apk
zipalign -v 4 "${SIGNED}" "${ALIGNED}"

I’m presenting this in a form that cut-and-pastes into a script, with only a couple of name changes at the top needed to make it suit your app.

Hopefully you see the logic in using such an insecure password – if we had used a secure password, we would have had to write it, in plain text, in this script anyway so that the signing could be automated – so what security would we actually have gained? Someone with access to the keystore would have access to this script, and hence would have the password anyway.

Importantly: the keystore is now your signing security – if you lose control of that then others can tell your users that any arbitrary .apk they produce is actually an upgrade to your app. Keep this file safe.

Feel free to combine the build and sign snippets into a single file.
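
A combined version might look like this (a sketch built from the two snippets above; adjust the app name and project path to suit):

#!/bin/sh
# Build, sign, and align a release .apk in one go
set -e

# input parameters
APKNAME=MegaApp
KEYALIAS=megaapp

# standard form of files and aliases
KEYSTORE=${KEYALIAS}.jks
STOREPASS=${KEYALIAS}
KEYPASS=${KEYALIAS}
UNSIGNED=bin/${APKNAME}-release-unsigned.apk
SIGNED=bin/${APKNAME}-release-signed.apk
ALIGNED=${APKNAME}.apk

# build
cd your/app/directory
rm -f build.xml local.properties
android update project --name "${APKNAME}" --path .
ant clean
ant release

# sign, verify, align
jarsigner -verbose \
    -sigalg SHA1withRSA -digestalg SHA1 \
    -keystore ${KEYSTORE} \
    -storepass ${STOREPASS} \
    -keypass ${KEYPASS} \
    -signedjar "${SIGNED}" \
    "${UNSIGNED}" ${KEYALIAS}
jarsigner -verify -verbose -certs "${SIGNED}"
zipalign -v 4 "${SIGNED}" "${ALIGNED}"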

As I said, I have, essentially, this script on my Jenkins build server for my Android apps – that server is never exposed to the public internet. On completion the signed app is copied to a web server where my clients can download it at their leisure – the advantage of a signed app is that your users can rely on upgrades being a trusted chain. Of course you can also upload the .apk to the Play store.


Bitcoin Explained (VIII)

Last time we saw how once you are the “owner” of some bitcoins, that gives you the magical power of being able to specify what conditions the next claimant must meet before they become the “owner”.

As flexible as it is, there is a problem with this arrangement. Every client has to be taught how to manufacture every special condition script. Let’s say we want to do an M-of-N payout to implement an escrow transaction. I have my mobile wallet, and you have a web wallet, and we have some arbiter who is using a desktop client. We have to find some way of generating a script, then each of us providing signatures of it, and inputs to it with our coins. There is no standard protocol for doing that, and if there were, it would be a lot of work for each of the wallet developers to implement.

Then what if someone comes up with a new type of condition script: a time-locked M-of-N with trusted party abort?

Finally, this ever-more-complicated script has to go in the spending transaction – so the person doing the spending is faced with the miner’s per-byte cost of the transaction. But the spender doesn’t care what complicated measures the next owner wants to impose on their newly owned coins – why should they pay the higher fee?

The solution is pay to script hash or P2SH as it’s often abbreviated.

The idea is that instead of paying to a Bitcoin address, or rather, instead of specifying a condition in the transaction output, the spender specifies nothing other than the hash of the script that is allowed to make the claim. Now, think back to our previous standard claim script:

PUSH_TRANSACTION_SIGNATURE
PUSH_CLAIM_PUBLIC_KEY
OP_DUP
OP_HASH160
PUSH_NEW_OWNER_PUBLIC_KEY
OP_EQUALVERIFY
OP_CHECKSIG

Implementing this feature required a change to the validation rules in core Bitcoin – though, as we’ll see, it was done in a backward-compatible way.

Let’s have a look at a P2SH address in the wild. This is a P2SH address with one transaction paying it, and one transaction paying out from it. First the (abridged) transaction that funds it.

{
   "lock_time":0,
   "size":224,
   "version":1,
   "vin_sz":1,
   "hash":"ac13d28fd26c1b7e88083c63859d0010919c5b037676f1b424807674f6f54377",
   "vout_sz":2,
   "out":[
      {
         "script_string":"OP_HASH160 c57a2d34802679c0916cdb447d128c1708844354 OP_EQUAL",
         "address":"3KhBQK4BzGcYEyi6gxnumVYYDcr9EZY4Qj",
         "value":4673558,
         "script":"a914c57a2d34802679c0916cdb447d128c170884435487"
      },
   ]
}

Then the (abridged) transaction that spends from the P2SH address:

{
   "lock_time":0,
   "size":341,
   "inputs":[
      {
         "prev_out":{
            "index":0,
            "hash":"ac13d28fd26c1b7e88083c63859d0010919c5b037676f1b424807674f6f54377"
         },
         "script":"00483045022100c99f1a41b28693087617fb03c45e7504654f251423a4c5e2b5b9f4ee8ff04482022027b5f55c36507167ef24f109e5ff44e279447ec80641bdf21299d971035e2fa10148304502200acf7136240358e010f2b09a5818068a00169667218763ae698d4d118751a73c022100abe7a4984cc312ac28ea69c69636f4801b05bc952221342f875eeea237bca8d6014c6952210397c34bc26fd4dc5523c83b0027c18a7505c2668ff8d6ec8834b08074d80bd8f42102f8cc6389cbfac80ee9432a0e2e370b0c0f52b1359df3faddae266a966b11bb712103c072355c488cf8a064ba9d6c2255d7a4ac06b0de4d637870541a815e8f5ce7db53ae"
      }
   ],
   "version":1,
   "vin_sz":1,
   "hash":"f84007fb6b7faba3c5de7fb30526744f9b4e8e3d8841d5d4416c5c8a3d632648",
   "vout_sz":1,
}

My “abridgements” in these cases are to remove the inputs and outputs that aren’t under discussion – hopefully that lets you see the pieces that are relevant, without having so many impenetrable long hex numbers to sift through.

So you can see that the spending transaction’s input references the funding transaction’s output (funding_hash:index). Next notice that the condition script is smaller than the standard one we examined previously. The funding (output) script is:

OP_HASH160
PUSH_20 c57a2d34802679c0916cdb447d128c1708844354
OP_EQUAL

BIP16 describes how a P2SH output may be “spotted”, and it is this form of script that does it (OP_HASH160, PUSH_20, OP_EQUAL). It’s not just a script, it’s a marker telling Bitcoin core that this script is not, in fact, to be used as a script; rather, that this is a P2SH output. The fact that it is a P2SH-patterned output is also what allows anyone displaying it to show it as a version-5 address (which gives a leading “3” in base58).

I think, if the Bitcoin developers had known at the start what they know now, there would never have been anything but P2SH – the output “scripts” would have been only a hash, and nothing else. Still, the three extra bytes here are not the most painful of overheads, so it’s not the end of the world.

Running this as a script under the pre-P2SH rules doesn’t verify anything useful: OP_HASH160 hashes whatever is on top of the stack down to 20 bytes, and OP_EQUAL merely compares that against the stored hash – no signature would ever be checked. It only gains its real meaning once it is identified as a P2SH marker.

It’s important to remember that this is no longer a script. It is a backward-compatible way of switching bitcoin into P2SH mode and supplying the hash of the only claimant script that is allowed to claim the output. The claimant script must not only complete successfully, it must first match this hash before bitcoin will even attempt to execute it. Let’s look in more detail then at the claim (input) script:

00 (OP_FALSE)
48 (PUSH_72)
   30 45 02 21 00 c9 9f 1a   41 b2 86 93 08 76 17 fb
   03 c4 5e 75 04 65 4f 25   14 23 a4 c5 e2 b5 b9 f4
   ee 8f f0 44 82 02 20 27   b5 f5 5c 36 50 71 67 ef
   24 f1 09 e5 ff 44 e2 79   44 7e c8 06 41 bd f2 12
   99 d9 71 03 5e 2f a1 01
48 (PUSH_72)
   30 45 02 20 0a cf 71 36   24 03 58 e0 10 f2 b0 9a
   58 18 06 8a 00 16 96 67   21 87 63 ae 69 8d 4d 11
   87 51 a7 3c 02 21 00 ab   e7 a4 98 4c c3 12 ac 28
   ea 69 c6 96 36 f4 80 1b   05 bc 95 22 21 34 2f 87
   5e ee a2 37 bc a8 d6 01
4c 69 (OP_PUSHDATA1(105))
   52 21 03 97 c3 4b c2 6f   d4 dc 55 23 c8 3b 00 27
   c1 8a 75 05 c2 66 8f f8   d6 ec 88 34 b0 80 74 d8
   0b d8 f4 21 02 f8 cc 63   89 cb fa c8 0e e9 43 2a
   0e 2e 37 0b 0c 0f 52 b1   35 9d f3 fa dd ae 26 6a
   96 6b 11 bb 71 21 03 c0   72 35 5c 48 8c f8 a0 64
   ba 9d 6c 22 55 d7 a4 ac   06 b0 de 4d 63 78 70 54
   1a 81 5e 8f 5c e7 db 53   ae
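
As an aside, you can verify the hash relationship from the command line. This is a sketch – it assumes xxd is available and that your openssl build still offers RIPEMD-160 – with REDEEM being the OP_PUSHDATA1 payload above as a single hex string (HASH160(x) = RIPEMD160(SHA256(x))):

REDEEM=52210397c34bc26fd4dc5523c83b0027c18a7505c2668ff8d6ec8834b08074d80bd8f42102f8cc6389cbfac80ee9432a0e2e370b0c0f52b1359df3faddae266a966b11bb712103c072355c488cf8a064ba9d6c2255d7a4ac06b0de4d637870541a815e8f5ce7db53ae
printf '%s' ${REDEEM} | xxd -r -p | openssl dgst -sha256 -binary | openssl dgst -ripemd160

If all is well, the digest printed is c57a2d34802679c0916cdb447d128c1708844354 – the hash in the funding transaction’s output script.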

The claim script, again, has a P2SH-specific layout. Run directly, the leading OP_FALSE looks like junk – it’s there because OP_CHECKMULTISIG (which we’ll meet below) pops one more item off the stack than it strictly needs, a long-standing quirk of the implementation. Remember though that the funding “script” is known at this point, and we’ve already seen that we can detect the funding script as a P2SH hash, and hence know that the claiming “script” should be interpreted as a P2SH claim. These have some special rules:

  • Must begin with OP_FALSE.
  • Must then only contain PUSH operations.

Other than this the script is simply run. Then the magic of P2SH happens: the last item on the stack is popped and de-serialized to form the condition script, its hash is validated against the P2SH hash we got from the funding transaction, and then the remaining stack is used as the initial stack for that condition script. It’s the equivalent of running the following combined script:

00 (OP_FALSE)
48 (PUSH_72)
   30 45 02 21 00 c9 9f 1a   41 b2 86 93 08 76 17 fb
   03 c4 5e 75 04 65 4f 25   14 23 a4 c5 e2 b5 b9 f4
   ee 8f f0 44 82 02 20 27   b5 f5 5c 36 50 71 67 ef
   24 f1 09 e5 ff 44 e2 79   44 7e c8 06 41 bd f2 12
   99 d9 71 03 5e 2f a1 01
48 (PUSH_72)
   30 45 02 20 0a cf 71 36   24 03 58 e0 10 f2 b0 9a
   58 18 06 8a 00 16 96 67   21 87 63 ae 69 8d 4d 11
   87 51 a7 3c 02 21 00 ab   e7 a4 98 4c c3 12 ac 28
   ea 69 c6 96 36 f4 80 1b   05 bc 95 22 21 34 2f 87
   5e ee a2 37 bc a8 d6 01
52 (OP_2)
21 (PUSH_33)
   03 97 c3 4b c2 6f d4 dc   55 23 c8 3b 00 27 c1 8a
   75 05 c2 66 8f f8 d6 ec   88 34 b0 80 74 d8 0b d8
   f4
21 (PUSH_33)
   02 f8 cc 63 89 cb fa c8   0e e9 43 2a 0e 2e 37 0b
   0c 0f 52 b1 35 9d f3 fa   dd ae 26 6a 96 6b 11 bb
   71
21 (PUSH_33)
   03 c0 72 35 5c 48 8c f8   a0 64 ba 9d 6c 22 55 d7
   a4 ac 06 b0 de 4d 63 78   70 54 1a 81 5e 8f 5c e7
   db
53 (OP_3)
ae (OP_CHECKMULTISIG)

This (as we’d expect for Brawker’s usage) is a 2-of-3 multisig condition script. All the cryptographic work is done in OP_CHECKMULTISIG, which I won’t describe in detail here – suffice to say it’s even more complex than OP_CHECKSIG that we saw previously. Let me simplify this by making it more symbolic:

PUSH_TRANSACTION_SIGNATURE#1
PUSH_TRANSACTION_SIGNATURE#2
PUSH_2
PUSH_DESTINATION_PUBLIC_KEY#1
PUSH_DESTINATION_PUBLIC_KEY#2
PUSH_DESTINATION_PUBLIC_KEY#3
PUSH_3
OP_CHECKMULTISIG

In a higher level language, we might have written this as

OP_CHECKMULTISIG([KEY#1, KEY#2, KEY#3], [SIG#1, SIG#2])

Its job is to check that the signatures are valid for two of the three keys. The signatures, as with the OP_CHECKSIG we saw before, are signatures over the transaction outputs (which aren’t shown, as we’re not concerning ourselves with where the money goes next).

It’s important to note the similarity to the standard bitcoin transaction form.

PUSH_TRANSACTION_SIGNATURE      )  from input script in claiming
PUSH_CLAIM_PUBLIC_KEY           )  transaction

OP_DUP                          )
OP_HASH160                      )  from output script in
PUSH_NEW_OWNER_PUBLIC_KEY       )  funding transaction
OP_EQUALVERIFY                  )
OP_CHECKSIG                     )

And our P2SH script:

PUSH_TRANSACTION_SIGNATURE#1    )  from input script in claiming
PUSH_TRANSACTION_SIGNATURE#2    )  transaction

PUSH_2                          )
PUSH_DESTINATION_PUBLIC_KEY#1   )  from input script in claiming
PUSH_DESTINATION_PUBLIC_KEY#2   )  transaction, but must have a
PUSH_DESTINATION_PUBLIC_KEY#3   )  hash equal to the hash in the
PUSH_3                          )  funding transaction output
OP_CHECKMULTISIG                )

What’s important is not the change from OP_CHECKSIG to OP_CHECKMULTISIG (that’s just because multisig-in-claimer is one key use of P2SH, so that’s where we’re first seeing it used); rather it’s that the condition script and the claim script were both stored in the claiming transaction – and yet the funder was able to ensure that the claim script was the one they intended, by requiring a particular hash-of-condition-script.

Note that nothing would stop us supplying a single-pay transaction of exactly the same form in a P2SH script – making P2SH able to do everything the traditional form does. It’s this property that made me say “I think, if the Bitcoin developers had known what they know now, there would never have been anything but P2SH”.

A particularly interesting result is that there is a one-to-one correspondence between a particular bitcoin address and the standard-form script that pays it – meaning wallets can easily detect payments to your addresses regardless of how they are paid.

The advantages then:

  • It may seem convoluted, but a lot of that convolution is because of the need to remain backward compatible. Fundamentally, P2SH is simpler than traditional transactions because the script is no longer split across transactions.

  • It becomes possible to have many payments to a single P2SH hash, without every single one needing to contain a copy of the condition script. This is a far more satisfying arrangement; it’s the equivalent of identifying bank accounts by account number rather than by the name of the bank, the address of the bank, and the name of the account holder. It has the additional advantage that storage in the blockchain is not wasted on needless duplication.

  • Wallets do not need to implement a convoluted condition script negotiation protocol. The single hash, of the same form as any other Bitcoin address, is all that’s needed. Wallets need only be able to support pay-to-hash and any complexity of target is supported.

  • The receiver of the funds gets to choose the level of complexity of the usage conditions. The payer doesn’t need to be involved in whatever internal arrangements the receiver is using for managing their funds. For example, an e-commerce site might want to operate 2-of-2 validation on all their funds – why should every customer, when paying, have to go through the inconvenience of getting a copy of the script that implements that requirement? The customers don’t care at all, and P2SH means they don’t have to.

  • Bitcoin transaction fees contain a charge per byte of transaction. Since it is the receiver who is imposing complicated conditions on use of the funds, it is they who should pay the fee. P2SH funding transactions are always the same size, regardless of the complexity of the claiming conditions.


Bitcoin Explained (VII)

This series of articles is an extension to my earlier Bitcoin Explained series. It absolutely isn’t necessary for understanding bitcoin, the previous articles are still valid. However, I recently made use of the Brawker website, and wanted to understand their use of P2SH more fully. This new series of articles is the result of me trying to gain that understanding, and on the way explaining transactions in more detail.

We’ll begin with the idea of bitcoin scripts.

First let me repeat the diagram I gave in Part VI, as to the contents of a particular transaction (#5004 in this case):

TX#5000:out#1 = ...
TX#5000:out#2 = ...
TX#5000:out#3 =  50 BTC -
                         \
       ...                \
TX#5001:out#3 = ...        \             /-- 80 BTC -> PUBKEY#9292
TX#5001:out#5 = 150 BTC ----+- TX#5004 -+
TX#5001:out#6 = ...        /  (155 BTC)  \-- 70 BTC -> PUBKEY#8755
                          /
TX#5002:out#1 = ...      /
TX#5002:out#2 =   5 BTC -
       ...

Earlier we found that the inputs were really references to earlier outputs; and in fact were “claims” on those outputs. The outputs of the transaction aren’t really outputs, as they don’t go anywhere, rather they are “conditions of claim”. Bitcoin transactions have their own, highly specific, scripting language that lets us describe the output conditions, and then meet those conditions with claims.

Let me relabel the above diagram with that in mind:

TX#5000:CONDITION#3
CLAIM()  -> 50 BTC -
                     \
                      \
TX#5001:CONDITION#5    \             /-- 80 BTC -> CONDITION#0
CLAIM()  -> 150 BTC ----+- TX#5004 -+
                       /  (155 BTC)  \-- 70 BTC -> CONDITION#1
                      /
TX#5002:CONDITION#2  /
CLAIM()  ->   5 BTC -

Our “inputs” then, aren’t really inputs; each is a reference to an earlier transaction’s condition script, plus a claim script that validates the claim.

These “scripts” are a series of operators that perform actions on a stack – not unlike PostScript or Forth.

Let’s look first at a typical transaction; here’s one I picked at random with a few clicks on http://blockchain.info. We can get the raw hex of the transaction by appending “?format=hex” to the transaction display URI.

01 00 00 00 01 66 01 62   1c 41 a6 dc a0 38 1e 17
3b 9a 8d 89 35 64 e9 3d   ea 95 9b 51 04 16 4c 82
09 9e 34 d2 3c 06 00 00   00 6b 48 30 45 02 21 00
a7 55 54 4e 79 a1 d3 ff   9e 68 3e e5 34 cb a7 1e
2a 7d b5 78 dd 39 2f 6f   67 d1 03 e9 7e 40 80 3f
02 20 53 be 17 c6 dd f5   f7 b6 b1 64 44 a8 08 9f
ad 94 3d f0 1b 17 a3 08   89 96 c1 90 76 69 54 88
ba 4a 01 21 03 74 51 ce   c1 99 e4 6e cb 33 49 11
4d 67 48 cc 4f 12 b9 40   63 2c b2 a3 73 92 83 63
fe 1a fa 10 f8 ff ff ff   ff 02 10 ae 93 03 00 00
00 00 19 76 a9 14 d7 c5   3f 1e d6 9e 35 f2 4b 3a
c1 f0 22 7f 4f 17 73 dd   35 01 88 ac 80 d9 e9 02
00 00 00 00 19 76 a9 14   38 d4 68 61 53 49 86 69
4f b6 a0 b6 98 c0 88 a1   56 9d 44 15 88 ac 00 00
00 00 

Remember that the bitcoin formats are pretty dense binary, as every byte counts when we’re storing billions of these things. We could decode this by hand for our edification (I’d encourage you to do so if you are interested in programming – it’s good for the soul), but http://blockchain.info also has a very nice raw decoder interface we can make use of.
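
For example, you can fetch the raw hex of the transaction we’re about to decode like this (a sketch – the URL is the transaction display URI, per the description above, with the format parameter appended):

$ curl "https://blockchain.info/tx/85de8fa55135a4a0b126cafabe30b0b9b6034121cb7439740a30aaf34c25d127?format=hex"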

{
   "lock_time":0,
   "size":226,
   "inputs":[
      {
         "prev_out":{
            "index":6,
            "hash":"3cd2349e09824c1604519b95ea3de96435898d9a3b171e38a0dca6411c620166"
         },
         "script":"483045022100a755544e79a1d3ff9e683ee534cba71e2a7db578dd392f6f67d103e97e40803f022053be17c6ddf5f7b6b16444a8089fad943df01b17a3088996c19076695488ba4a0121037451cec199e46ecb3349114d6748cc4f12b940632cb2a373928363fe1afa10f8"
      }
   ],
   "version":1,
   "vin_sz":1,
   "hash":"85de8fa55135a4a0b126cafabe30b0b9b6034121cb7439740a30aaf34c25d127",
   "vout_sz":2,
   "out":[
      {
         "script_string":"OP_DUP OP_HASH160 d7c53f1ed69e35f24b3ac1f0227f4f1773dd3501 OP_EQUALVERIFY OP_CHECKSIG",
         "address":"1LftaBA1WVJCPKen9tmrb8g5DgXAVghZKy",
         "value":60010000,
         "script":"76a914d7c53f1ed69e35f24b3ac1f0227f4f1773dd350188ac"
      },
      {
         "script_string":"OP_DUP OP_HASH160 38d46861534986694fb6a0b698c088a1569d4415 OP_EQUALVERIFY OP_CHECKSIG",
         "address":"16BVJ6G1FdocrgQ5kAQHLwxrA9gxfftXRX",
         "value":48880000,
         "script":"76a91438d46861534986694fb6a0b698c088a1569d441588ac"
      }
   ]
}

This is the same information as was in the block of hex bytes, but decoded into more readable fields for us – there is actually a little more than that here, as some additional hashes are calculated for convenience, but I’ll not go into them now. Notice one input and two outputs. We’ll ignore the outputs of this transaction, and instead focus on this transaction’s single input.

   "inputs":[
      {
         "prev_out":{
            "index":6,
            "hash":"3cd2349e09824c1604519b95ea3de96435898d9a3b171e38a0dca6411c620166"
         },
         "script":"483045022100a755544e79a1d3ff9e683ee534cba71e2a7db578dd392f6f67d103e97e40803f022053be17c6ddf5f7b6b16444a8089fad943df01b17a3088996c19076695488ba4a0121037451cec199e46ecb3349114d6748cc4f12b940632cb2a373928363fe1afa10f8"
      }
   ],

The script itself needs a little bit of decoding; we will do this one by hand:

48 (PUSH_72)
   30 45 02 21 00 a7 55 54   4e 79 a1 d3 ff 9e 68 3e
   e5 34 cb a7 1e 2a 7d b5   78 dd 39 2f 6f 67 d1 03
   e9 7e 40 80 3f 02 20 53   be 17 c6 dd f5 f7 b6 b1
   64 44 a8 08 9f ad 94 3d   f0 1b 17 a3 08 89 96 c1
   90 76 69 54 88 ba 4a 01
21 (PUSH_33)
   03 74 51 ce c1 99 e4 6e   cb 33 49 11 4d 67 48 cc
   4f 12 b9 40 63 2c b2 a3   73 92 83 63 fe 1a fa 10
   f8 

The PUSH_nn bitcoin script operator simply pushes the following nn bytes of script onto the execution stack. To make this more straightforward to understand, I’m going to write this as:

PUSH_TRANSACTION_SIGNATURE
PUSH_CLAIM_PUBLIC_KEY

Moving on to the other fields in the input array, we can see that this script is claiming index 6 of previous transaction 3cd2349e09824c1604519b95ea3de96435898d9a3b171e38a0dca6411c620166. We can use http://blockchain.info to look that transaction up too. I won’t repeat the whole hex and decode, you can look at them yourself should you be interested. Instead, let’s just look at output 6, which is what our original transaction is claiming.

{
   "script_string":"OP_DUP OP_HASH160 b442e3f5440fa5858807b18a4c9aff6c773648df OP_EQUALVERIFY OP_CHECKSIG",
   "address":"1HS8mAHuiemhxNdw7gGFYJTY8ioa3MYL4N",
   "value":108900000,
   "script":"76a914b442e3f5440fa5858807b18a4c9aff6c773648df88ac"
},

We see then that our transaction is claiming 108900000 satoshis. The address and script_string fields have been added for convenience, consider them informational for now. What we’re interested in is the script.

OP_DUP
OP_HASH160
b442e3f5440fa5858807b18a4c9aff6c773648df
OP_EQUALVERIFY
OP_CHECKSIG

The vast majority (at present) of bitcoin outputs are condition scripts of this form – and in truth, Bitcoin core maintains a list of what it calls “standard” scripts, so entirely arbitrary scripts are not allowed, in the interests of not letting every full node’s CPU be abused. A claim is tested for validity by prepending the claim script to the condition script, then running the whole. The hex in the middle is the hex equivalent of the payee’s bitcoin address. Bitcoin addresses are usually presented in base-58 with a type code and checksum; this is the raw hex form. A bitcoin “address” is, in truth, the HASH160 (a SHA-256 followed by a RIPEMD-160) of the public key in an ECDSA public/private key pair.

Let’s combine the claim and the condition, and give the hex bytes symbolic names.

PUSH_TRANSACTION_SIGNATURE
PUSH_CLAIM_PUBLIC_KEY
OP_DUP
OP_HASH160
PUSH_NEW_OWNER_PUBLIC_KEY
OP_EQUALVERIFY
OP_CHECKSIG

Now we’re in a position to “run” this script. Remember that the script operates on a stack. That stack starts out empty, PUSH_TRANSACTION_SIGNATURE leaves it as:

TRANSACTION_SIGNATURE

Then PUSH_CLAIM_PUBLIC_KEY adds its data (I’ll draw the stack growing downwards):

TRANSACTION_SIGNATURE
CLAIM_PUBLIC_KEY

OP_DUP copies the last item on the stack.

TRANSACTION_SIGNATURE
CLAIM_PUBLIC_KEY
CLAIM_PUBLIC_KEY

OP_HASH160 calculates the HASH160 (a SHA-256 followed by a RIPEMD-160) of the last stack item. Recall that the HASH160 of a public key is also a bitcoin address. So this operation in fact converts a public key into a bitcoin address:

TRANSACTION_SIGNATURE
CLAIM_PUBLIC_KEY
CLAIM_BITCOIN_ADDRESS

PUSH_NEW_OWNER_PUBLIC_KEY pushes its data onto the stack:

TRANSACTION_SIGNATURE
CLAIM_PUBLIC_KEY
CLAIM_BITCOIN_ADDRESS
NEW_OWNER_PUBLIC_KEY

OP_EQUALVERIFY pops the last two items and compares them for equality, aborting in failure if they differ. Remember that NEW_OWNER_PUBLIC_KEY (despite the name, really the new owner’s address – the hash of their public key) was part of the condition script, and CLAIM_BITCOIN_ADDRESS was derived from the claim script’s CLAIM_PUBLIC_KEY. So this operation verifies that the person claiming this output owns the public key referenced in the claimed transaction’s condition script.

TRANSACTION_SIGNATURE
OWNER_PUBLIC_KEY

And we’re back to where we began, except that CLAIM_PUBLIC_KEY has now earned the name OWNER_PUBLIC_KEY. We’ve now got one operation left in the script, OP_CHECKSIG. This is a biggie. OP_CHECKSIG is a complex, compound operator, and its shortness is deceptive. (If I’m honest, I find OP_CHECKSIG one of the smelliest bits of bitcoin precisely because its operation is so complex; it relies on a very particular set of byte formats and memory operations, and you’d rightly be very worried about reimplementing it yourself in case you differed even slightly from the bitcoin-core implementation.)

What it does is take the rest of the claiming transaction and verify that the signature of it, TRANSACTION_SIGNATURE, was generated with the private key belonging to OWNER_PUBLIC_KEY. Phew. “The rest of the claiming transaction” is important – what’s being signed is the output condition section of the new transaction, i.e. the conditions that the next claimant must meet. That means that only the owner of OWNER_PUBLIC_KEY can specify how their property can next be claimed.

Let’s simplify this: if I want to pay you, I must claim some other coins that have previously specified that only I can claim them. I must then specify who the next claimant is – you. I do that by saying (in script form) “only a signature made with the private key behind PAYEE_ADDRESS can specify the next condition script for these coins”. Since I do not own the private key for PAYEE_ADDRESS, even I cannot meet my own condition – only you can.

That’s the current standard bitcoin transaction that every client and every node will use by default. It should be clear though, that the condition script we specified was entirely arbitrary. Nothing, in fact, stops us from specifying the following condition script:

OP_TRUE

Anyone would be able to meet this condition; all they need do is create a transaction that claims to meet it, and it’s theirs. It will become a “first to pick up the money I found lying around” situation. You can imagine then that the variety of possible condition scripts is infinite.

Next time, I’ll discuss the limitations of this arrangement, and how bitcoin has addressed those limitations with P2SH (pay to script hash).


Compile-Time Polymorphism

Let’s say we have a peripheral, like an accelerometer, connected via an SPI bus. Then let’s say we have two different embedded projects that make use of this same chip, implemented on two different microcontrollers. Wouldn’t it be nice to be able to write the accelerometer communication code once, keep the low runtime overhead that we are always seeking in embedded projects, and yet defer the low-level SPI bus writes to the appropriate code for each microcontroller?

If we had infinite resources we might do this:

class TAccelerometer {
  public:
    TAccelerometer() {}
    virtual ~TAccelerometer() {}

    bool setup();
    void shutdown();
    uint8_t read();

  protected:
    virtual uint8_t readWriteSPI(uint8_t out) = 0;
    virtual void select() = 0;
    virtual void deselect() = 0;
};

// in project #1
class TAccelerometerOnCPU1 : public TAccelerometer {
  protected:
    uint8_t readWriteSPI(uint8_t out) override;
    void select() override;
    void deselect() override;
};

// in project #2
class TAccelerometerOnCPU2 : public TAccelerometer {
  protected:
    uint8_t readWriteSPI(uint8_t out) override;
    void select() override;
    void deselect() override;
};

We put all the high-level accelerometer commands in the public member functions and override the virtuals to suit the hardware for CPU1 and CPU2. Great. Except on an embedded system we’ve paid for a whole set of virtual calls; and possibly (depending on how clever the compiler is) the code for a class we’re not using.

As an alternative, we can create a compile-time equivalent, using dependency injection via a template parameter.

template <typename TSPI>
class TAccelerometer {
  public:
    bool setup();
    void shutdown();
    uint8_t read();

  protected:
    uint8_t readWriteSPI(uint8_t out) { return TSPI::readWriteSPI(out); }
    void select() { return TSPI::select(); }
    void deselect() { return TSPI::deselect(); }
};

// in project #1
class TSPIForCPU1 {
  public:
    static uint8_t readWriteSPI(uint8_t out);
    static void select();
    static void deselect();
};

// in project #2
// ... same again ...

Now the compiler has enough information to inline and directly call all the hardware-specific functions, giving no runtime overhead. We can make this more pleasant by using our old friend the curiously recurring template pattern.

template <typename TSPI>
class TAccelerometer {
  private:
    TAccelerometer();

  public:
    static bool setup();
    static void shutdown();
    static uint8_t read();

  protected:
    static uint8_t readWriteSPI(uint8_t out) { return TSPI::readWriteSPI(out); }
    static void select() { return TSPI::select(); }
    static void deselect() { return TSPI::deselect(); }
};

// in project #1
class TSPIForCPU1 : public TAccelerometer<TSPIForCPU1> {
  public:
    static uint8_t readWriteSPI(uint8_t out);
    static void select();
    static void deselect();
};

In some ways this second version is more representative of the truth – with all the functions static, we’re saying that there is no such thing as a runtime instance of this class. That’s a sensible way of looking at it, as it’s not like we could ever instantiate two – there is one bit of hardware, and these calls always operate on it.
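
As a usage sketch (assuming the static members above are given hardware implementations somewhere in project #1), the call site is plain and everything resolves at compile time:

#include <stdint.h>

void sampleAccelerometerOnce() {
    // setup()/read()/shutdown() are inherited from
    // TAccelerometer<TSPIForCPU1>; no vtable, no instance
    if (TSPIForCPU1::setup()) {
        uint8_t value = TSPIForCPU1::read();
        (void)value;  // ...do something with the sample...
        TSPIForCPU1::shutdown();
    }
}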

There is still an annoyance though, which I’ve ignored in the above examples: the implementation has to be entirely in the header for this to work. We can’t get away from the fact that both the “base” and the “override” have to be available at compile time when we’re dealing with templates, but we can perhaps improve on keeping the whole implementation inline in headers.
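
One partial improvement – a sketch, workable only because each project uses a single, known TSPI – is explicit instantiation: keep the member definitions in one translation unit and instantiate the template there (the header names here are assumptions):

// accelerometer.cpp
#include "accelerometer.h"  // the TAccelerometer<> template above
#include "spi_cpu1.h"       // TSPIForCPU1

template <typename TSPI>
bool TAccelerometer<TSPI>::setup() {
    select();
    // ... chip-specific initialisation via readWriteSPI() ...
    deselect();
    return true;
}

// One explicit instantiation for the TSPI this project uses; other
// translation units can now call setup() without seeing its body.
template class TAccelerometer<TSPIForCPU1>;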


Basic Pointers

I saw this question on reddit, and decided to help. It’s a little more fundamental than I usually blog about, but I thought it might help someone one day who has only seen the abstractions that high-level languages present to us, never the nitty-gritty underneath.

So as a beginner programmer the line of code:

int *pApples = &apples;

means that I am creating “something” “somewhere” and this “something” points a finger at the memory address of apples. When I output the “somewhere” of pApples by typing in &pApples I get the same address as &apples. To me, it looks like two things are occupying the same space at the same time. I’m just fiddling with pointers at the moment. This is way beyond what I have been taught in class so far so sorry if this is a stupid question.

Literally “where” is an unanswerable question. The compiler, CPU and OS all contribute to that.

However, broadly, the pointer is, like every other variable, a block of memory, allocated either on the stack, the heap, or in a global memory space.

The particular CPU (and compiler) you are using determines how big a block of memory is allocated for any particular variable type. For example, an unsigned int on my system (and probably yours) is 32-bits wide, so takes up four bytes. Let’s say it happens, in one particular run of a program, to be allocated at 0x40000000 in memory, and is equal to 0xaabbccdd.

0x40000000 0x40000001 0x40000002 0x40000003
[0xdd]     [0xcc]     [0xbb]     [0xaa]

This particular CPU is little-endian (hence the least significant byte of the 32-bit unsigned int is stored first in memory); on big-endian CPUs, the opposite would be true. Big-endian is more friendly for humans to read, but it’s often easier for a CPU and whatever algorithm you’re writing to start at the least significant end, so little-endian has ended up more common.
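
If you’re curious, you can observe your machine’s byte order directly – a little sketch (it peeks at memory through a pointer, which is exactly where this post is heading):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t i = 0xaabbccdd;
    unsigned char *b = (unsigned char *)&i;  /* view the same four bytes */

    printf("first byte in memory: 0x%02x (%s-endian)\n",
           b[0], b[0] == 0xdd ? "little" : "big");
    return 0;
}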

It’s actually better (and one of the advantages of using a higher-level language) not to write code that knows the endian-ness of its target, and to let the compiler sort out those details. We’ll do the same by just labelling the whole memory block as one entity with our unsigned int in it.

0x40000000
[0xaabbccdd]

The compiler makes this bit of memory available to us, labelled and typed, so we’ll treat it as:

(unsigned int i)
@ 0x40000000
[0xaabbccdd]

With this information the compiler knows enough about i to be able to perform all your integer operations on it. So when we load i with something:

unsigned int i;  // as the programmer we don't know that
                 // this is stored at 0x40000000
i = 0xaabbccdd;

The compiler then knows to write 0xdd into 0x40000000, 0xcc into 0x40000001, etc. (actually on a 32-bit system it leaves it to the CPU to write it all in one operation, but imagine it’s done byte-by-byte for now). Similarly, if you do

i = i / 10;

It knows to translate this to (making up some assembler language):

mov  [0x40000000], R1   ; put contents of 0x40000000 in register 1
mov  0x000000a, R2      ; put 10 in register 2
call _unsigned_int_long_division  ; divide R1 by R2, answer in R3
mov  R3, [0x40000000]   ; answer goes from register back to memory

Importantly, if it was a signed int, then it would know to call a different subroutine, but your i = i / 10 line would be unchanged. Similarly, for float or long long, or even uint8_t. Also note that the division operation is really just another function call, it’s just so fundamental that languages tend to give it a concise symbol.

Now, we’ve already seen pointers in this little bit of pseudo assembly as we copied the number from memory to a CPU register so the CPU could work on it, then back into the memory location representing the variable. We can do similar things in C though.

unsigned int i;   // an integer, stored at say 0x40000000
unsigned int *p;  // a pointer to an integer, stored at say 0x40000004

i = 0xaabbccdd;

// Point p at i (i lives at 0x40000000)
p = 0x40000000;

You would never do this. You can’t, in fact, because it’s only because this isn’t a real example that we even know that the compiler/linker decided to store i at 0x40000000. Fortunately we don’t need to know; the compiler comes with a handy operator which lets us query its knowledge of where it chose to store this variable – the “address-of” operator.

p = &i;

Now, note that we’re not doing anything different from when we assigned 0xaabbccdd to i earlier – we’re just putting a number in a variable. It’s just that this time the number has an additional meaning.

Let’s look at the memory:

(unsigned int i)   (unsigned int *p)
@ 0x40000000       @ 0x40000004
[0xaabbccdd] <---- [0x40000000]

Both 32-bits of storage, but different types.

Remember how the compiler knew when we performed operations with i which subroutines to call because it also knew what type the variable was. Exactly the same applies to pointers – the compiler knows that the number stored in p can have some “pointery” operations performed on it that can’t be done on an unsigned int (and vice versa in fact). In particular, the inverse operation of “address-of”, “dereference”, which is the unary operator symbol “*”.

i = i / 10;   // legal, division is defined for `unsigned int`
p = p / 10;   // illegal, division is meaningless for a pointer

i = *p;       // legal, *p is an unsigned int
p = *i;       // illegal, we can't dereference an integer

The compiler is protecting us – even though p and i are both 32-bit numbers, dereferencing i is not possible because the compiler has been told that i doesn’t point at anything. Note that if we were using assembly language there would be no such protection; the CPU doesn’t know anything about data types (for the most part). So, we can dereference a pointer, but what type does the dereferenced number have? We’ve already told the compiler that.

unsigned int *p;

We’re saying that “*p” will be an unsigned int. Hence while we can’t perform division on a pointer, we can perform it on a dereferenced integer pointer.

i = *p / 10;

Let’s look at the pseudo-assembly:

mov  [0x40000004], R4   ; put the contents of p into a register
mov  [R4], R1           ; put the value R4 points at (i, at
                        ; 0x40000000, since p = &i) into register 1
mov  0x000000a, R2      ; put 10 in register 2
call _unsigned_int_long_division  ; divide R1 by R2, answer in R3
mov  R3, [R4]           ; answer goes from the register back to the
                        ; memory pointed to by R4 -- i.e. back into i

The middle of this is the same as before, but we’ve added some indirection around the outside.

Nothing stops you continuing this idea. Since we can point at anything, and we have a way of telling the compiler the type of the thing we’re pointing at, we can go again:

unsigned int i;
unsigned int *p;
unsigned int **pp;

p = &i;
pp = &p;

pp is now a pointer-to-a-pointer-to-an-unsigned int. Same as before: we’ve told the compiler that **pp is an unsigned int; or, put another way, that *pp is something that points at an unsigned int – an unsigned int *.

To go back to your question directly.

When I output the “somewhere” of pApples by typing in &pApples I get the same address as &apples.

Hopefully now you can see that &pApples is not equal to &apples. &pApples is a pointer to a pointer to an integer; and &apples is a pointer to an integer. To modify my last example a little to use your naming:

int apples;          // let's say this is at 0x40000000
int *pApples;        // let's say this is at 0x40000004
int **ppApples;      // let's say this is at 0x40000008

pApples = &apples;   // 0x40000004 now holds 0x40000000
ppApples = &pApples; // 0x40000008 now holds 0x40000004

// Note because the pointers point at apples regardless of what it
// holds, it doesn't matter that we assign this after we assign to
// the pointers
apples = 10;         // 0x40000000 now holds 10

Let’s fill in our last pretend memory map:

(int apples)     (int *pApples)   (int **ppApples)
@ 0x40000000     @ 0x40000004     @ 0x40000008
[0x0000000a] <-- [0x40000000] <-- [0x40000004]

Now we can use those pointers…

// Dereferencing the pointer-to-the-integer gets us an integer
assert(*pApples == apples);
// Dereferencing the pointer-to-the-pointer once gets us a pointer to
// apples, pApples
assert(*ppApples == pApples);
// Since *ppApples equals pApples, dereferencing *ppApples is the same
// as dereferencing pApples... i.e. apples
assert(**ppApples == apples);

The compiler picked the storage locations of these variables; and we fetched those locations with the address-of operator and wrote them to other variables. Those other variables have to be typed such that the compiler will allow assignment of pointers to them, but a pointer is just another variable once you scratch the surface.
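
Putting the whole thing together as a compilable program (a sketch – the printed addresses will differ on your machine, because the compiler and OS choose them):

#include <stdio.h>
#include <assert.h>

int main(void) {
    int apples = 10;
    int *pApples = &apples;
    int **ppApples = &pApples;

    assert(*pApples == apples);
    assert(*ppApples == pApples);
    assert(**ppApples == apples);

    /* Different variables, different locations... */
    printf("&apples  = %p\n", (void *)&apples);
    printf("&pApples = %p\n", (void *)&pApples);
    /* ...but the value *in* pApples is the address of apples */
    printf("pApples  = %p\n", (void *)pApples);
    return 0;
}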


Dell XPS 15 and Linux

I’ve got an XPS15 – Dell’s flagship laptop. This article is my notes on how I set it up to run Linux.

I had considerably harder times installing Linux on laptops in the past than I had with this one. Everything “Just Works” once you install the necessary drivers. All in all, very positive for both Linux and this laptop.

Windows

You’ll want rid of it, but I’d suggest checking your hardware with Windows before you start wiping hard disks and installing Linux. Quick checks:

  • Webcam
  • SD Card
  • Touch screen
  • Touch pad
  • WiFi

Firmware Upgrade

Then, since we’ll want a way of doing firmware updates without Windows, we’ll make sure we can by using a FreeDOS USB stick.

  • Latest firmware from Dell. A06 at time of writing.
  • FreeDOS image. I got the 250MB image. It’s a tiny download when zipped though because it’s all empty space.

The firmware is a DOS executable, and we want it on the FreeDOS image. The image is a disk image though, and not a partition image:

$ fdisk -l freedos.img 

Disk freedos.img: 250 MB, 250000384 bytes
101 heads, 32 sectors/track, 151 cylinders, total 488282 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00052fbf

      Device Boot      Start         End      Blocks   Id  System
freedos.img1   *           1      488281      244140+   e  W95 FAT16 (LBA)

This tells us that the first partition starts at “sector” 1, which is, from this information, 512 bytes in. We can mount this image then with:

$ mount -o loop,offset=512 freedos.img /mnt/

Then copy the Dell firmware executable into this mounted directory, and write the image to a USB stick (remembering that this is a disk image, not a partition image, so it goes to a whole disk):

$ dd if=freedos.img of=/dev/sdb bs=4M

Be careful you get the right output device – it would be a disaster to write this to your system hard disk.

Now you can reboot. Push F2 to enter the BIOS setup; then disable Secure Boot, which will in turn enable Legacy Boot; save the new settings; then F12 to choose the USB stick as the boot drive.

C:\> cd XPS15
C:\XPS15> 9530a06.exe

And follow instructions. As it happens mine was already loaded with A06, so I didn’t need to do anything. However, this procedure will work for any future updates Dell releases.

Disk partitioning

Initial SSD partitioning (as reported by Windows)

  • 500MB EFI System Partition
  • 40MB OEM Partition
  • 750MB Recovery Partition
  • 7.97GB Recovery Partition
  • 8GB OEM Partition
  • 7.37GB OEM Partition
  • 459.58GB Boot, Page File, Crash Dump, Primary Partition

The Debian installer disagrees with what Windows reports:

  • 1MB Free space
  • 524.3MB (fat32) EFI system partition
  • 41.9MB (fat32) Basic data partition
  • 134.2MB Microsoft recovery partition
  • 786.3MB (ntfs) Basic data partition
  • 493.5GB (ntfs) Basic data partition
  • 8.6GB (ntfs) Microsoft recovery partition
  • 8.6GB Basic data partition (assume this is Rapid Start partition)
  • 335.4kB Free space

This, actually, seems wrong. There is certainly a recovery partition from Dell – where’s that gone? Seems like a Debian bug, but I’ve got no way of confirming.

500MB EFI System Partition (fat32)

This partition is what a UEFI bootloader will look in for its managed boot. A quick peek at it in the Debian installer’s console shows:

  • EFI/
    • Boot/bootx64.efi
    • Microsoft/Boot/ contains lots of Microsoft “stuff”
  • en-us/
    • bootmgr.efi.mui

40MB OEM Partition (fat32)

Empty.

134.2MB Microsoft recovery partition

?

786.3MB (ntfs) Basic data partition

?

493.5GB (ntfs) Basic data partition

Main Windows partition.

8.6GB (ntfs) Microsoft recovery partition

Not sure, but this could be the Intel Rapid Storage partition.

Intel Rapid Storage uses an SSD partition to speed up HDD access, acting as a cache. That’s pretty useless here because there is no HDD.

8.6GB Basic data partition

Presume this to be the Intel Rapid Start partition.

Intel Rapid Start takes a copy of RAM and stores it to the SSD; that’s not going to work very well on a 16GB machine which has only 8GB partitions. This seems like a standard mistake by Dell.

Debian

BIOS

You’ll need to disable Secure Boot, which will auto-enable Legacy Boot; however, it seems Debian’s installation images are happy with UEFI these days, so you can disable Legacy Boot again. This has the advantage that the Debian installer will then install the UEFI version of grub in /boot/efi/EFI/debian/grubx64.efi. UEFI seems a much less terrifying way of controlling early boot than rewriting MBRs – the EFI partition is a standard fat32 partition, and so is pretty easy to alter.

Partitioning

The first decision is the new partition table. There are quite a few of the above partitions that we really don’t need. Here’s my suggested new schema:

$ sudo gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.10

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 1000215216 sectors, 476.9 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): A8F5A3A5-C068-4E51-AD31-51AF34E944F8
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1000215182
Partitions will be aligned on 2048-sector boundaries
Total free space is 2669 sectors (1.3 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1026047   500.0 MiB   EF00  EFI system partition
   2         1026048         1107967   40.0 MiB    FFFF  Basic data partition
   3         1107968        34662399   16.0 GiB    8400  Intel Rapid Start
   4        34662400        71419903   17.5 GiB    8200  Linux Swap
   5        71419904      1000214527   442.9 GiB   8300  Linux ext4

I’ve left the two fat32 partitions alone, resized the IRST partition to match the memory and added a swap partition for Linux, which should be bigger than main memory for suspend-to-disk purposes (although with IRST, that should never be necessary).

During install, you need only delete all but the first two partitions, then create three appropriately-sized partitions ready for Rapid Start and Linux. It’s important to make sure the Rapid Start partition is exactly 16 GiB (if you’ve got 16 GiB of memory); note that that is gibibytes, not gigabytes. If you specify it as 16 GB it will end up too small.
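
To see the arithmetic: 16 GiB is 16 × 1024³ = 17,179,869,184 bytes, or 33,554,432 of the 512-byte sectors – exactly the partition size in the gdisk listing above. 16 GB would be only 16 × 10⁹ bytes, roughly 14.9 GiB, leaving the partition too small to take a full copy of RAM.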

You won’t need much swap, so I abandoned the “twice main memory” rule of thumb and just gave “main memory plus a bit”.

Other than the ext4 partition, I suggest leaving all the type-code setting and GUID changes for the Rapid Start partition until after you have a working system. When you do, you can use gdisk to set the partition’s GUID type code (not the “unique GUID”, confusingly):

Partition number (1-5): 3
Partition GUID code: D3BFE2DE-3DAF-11DF-BA40-E3A556D89593 (Intel Rapid Start)
Partition unique GUID: D3BFE2DE-3DAF-11DF-BAFC-F23A556D8959
First sector: 1107968 (at 541.0 MiB)
Last sector: 34662399 (at 16.5 GiB)
Partition size: 33554432 sectors (16.0 GiB)
Attribute flags: 0000000000000000
Partition name: 'Intel Rapid Start'

After this, you’ll need a reboot, and a check in the BIOS setup that Intel Rapid Start is enabled. It’s worked without trouble for me with the above in place.

In case you’re unaware, Intel Rapid Start works on a computer in sleep (i.e. suspend to RAM). It waits for a period (default 120 minutes, but I altered mine to 60 minutes with the Windows tool before I wiped it) then does a sort of half-wake, copies the contents of RAM to the Rapid Start partition, which it finds by virtue of the magic GUID, and then does a full power off. On next power up, the BIOS sees that there is data in that partition, restores it to RAM, then continues as if it were doing a restart from sleep. This means you can always just sleep the laptop, with no worries that the small power drain from keeping the RAM refreshed will drain the battery – it won’t once Rapid Start has kicked in. You can tell it’s working because once it’s in Rapid Start mode you’ll need the power switch (and a slightly longer start time) to get the laptop back on, as opposed to the simple lid opening that wakes a normal suspend-to-RAM. Personally I think this is a really good feature – the convenience of suspend-to-RAM with the safety of a non-volatile suspend-to-disk.

WiFi

There are a few difficulties around the WiFi firmware with Debian. We can work around them though.

There is, supposedly, a Debian netinst image that includes the non-free firmware. I’ve checked, and the non-free firmware blobs are certainly in the image. Unfortunately, they don’t seem to do you much good. They are stored as .debs, which don’t help you at all during installation, as there doesn’t seem to be a way of extracting a deb from the command line in the installer – dpkg isn’t available. All the documentation suggests that it should work; it didn’t for me though.

The alternative is to use the full Debian CD1. This is sufficient to install a non-graphical Debian system – just as with a netinst image, only without needing a network connection. Then you are in a position to extract the firmware deb using your fully operational system. Just store the firmware-iwlwifi package, in this case, on a (separate) USB stick.

Once you’ve got a Debian installation, mount this second stick and run

$ dpkg -i firmware-iwlwifi-WHATEVER.deb
$ modprobe -r iwlwifi
$ modprobe iwlwifi

You should now have a wlan0 when you run ifconfig -a. Next you’ll need a way of connecting to a wifi network, and network-manager is the easiest way: apt-get install network-manager and it will pull in all the other necessary packages. Then you can run network-manager’s new command line UI.

$ nmtui

Now you’ve got a network connection you can install a proper /etc/apt/sources.list, and start copying your configuration over as you would any other install.

Graphics

The XPS 15 has an integrated Intel graphics card, and a discrete nVidia graphics card. For the most part, the Intel graphics will be fine (and lower power) for us. All we really will want is the nVidia card powered down. The bumblebee project lets us do that, but it needs a kernel module to help it, so you’ll need the headers for your kernel too.

$ apt-get install linux-headers-3.14-2-amd64 build-essential
$ apt-get install bumblebee

Then a reboot to let the bbswitch driver and new config kick in.

This seemed to take about 1W off my power usage (measured by powertop) in command line mode.

X

Installed. Worked instantly with no need to fiddle with any drivers.

The screen DPI, while detected and listed in the xorg log file, seemed to be ignored when viewing the output of xdpyinfo | grep -B 2 resolution. Rather than create a load of xorg.conf sections (which ruins all the auto-detect work), I just made a little script in /etc/X11/Xsession.d/90-xps-15:

xrandr --fbmm 346x194

This should give KDE enough to work with to make its fonts the right height on screen.

Qt/KDE

With the DPI set correctly as above, you will still probably want to alter your task bar height, window decoration sizes, and icon sizes using KDE’s systemsettings. For the most part I just doubled everything.

There are still a few areas where HiDPI causes KDE problems, particularly with Qt-supplied widgets. Qt seems to draw widgets at fixed pixel sizes; hopefully they’ll get around to drawing them at fixed point sizes. Checkboxes and radio buttons are particularly awful.

You can tweak Qt a little with the qt4-qtconfig package.

Skype, being a Qt and Microsoft app, has some horrible hard-coded size problems; I think we just have to live with them for now.

Firefox

Go to about:config and change layout.css.devPixelsPerPx to (I suggest) 2.0.

Gtk

Install the kde-config-gtk-style package, and use KDE’s systemsettings to set your Gtk applications to use the same font as your KDE applications.

Touch

Touchpad

After a bit of tweaking, here’s my /etc/X11/xorg.conf.d/10-synaptics.conf:

Section "InputClass"
        MatchDriver "synaptics"
        Identifier "BuiltIn ClickPad"
        # Overall
        Option "ClickPad" "true"
        #                         RBL RBR RBT RBB MBL MBR MBT MBB
        Option "SoftButtonAreas" "60% 0   85% 0   41% 59% 85% 0"
        # Palm
        Option "PalmDetect" "true"
        Option "PalmMinWidth" "2"
        Option "PalmMinZ" "2"
        # Features
        Option "VertEdgeScroll" "false"
        Option "HorizEdgeScroll" "false"
        Option "VertTwoFingerScroll" "true"
        # Buttons
        Option "TapButton1" "1"
        Option "TapButton2" "2"
        Option "TapButton3" "3"
        Option "ClickButton1" "1"
        Option "ClickButton2" "3"
        Option "ClickButton3" "2"
EndSection

Palm detection seems broken at present (the problem seems to be in the kernel). Until it’s fixed you’ll get annoying jumps while you’re typing. A good workaround is to use syndaemon to disable the pad whenever the keyboard is in use, as shown below.
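
For example, started from your X session (the timing here is just a starting point to tune):

$ syndaemon -i 1 -K -d -t

The flags: -i 1 re-enables the pad one second after the last key press, -K ignores modifier-plus-key combinations, -d runs it as a daemon, and -t disables only tapping and scrolling rather than all pointer movement.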

For everything else, set the ‘Features’ and ‘Buttons’ section to suit your own tastes.

Touchscreen

Personally, I’d like to disable this the vast majority of the time; I’ve got no real use for it on a laptop. I’ve not found a way yet, unfortunately. Worse, it seems to use about a Watt of power.

Update: you can disable it with xinput.

$ xinput
⎡ Virtual core pointer                          id=2    [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer                id=4    [slave  pointer  (2)]
⎜   ↳ SynPS/2 Synaptics TouchPad                id=13   [slave  pointer  (2)]
⎜   ↳ SYNAPTICS Synaptics Large Touch Screen    id=10   [slave  pointer  (2)]
⎣ Virtual core keyboard                         id=3    [master keyboard (2)]
    ↳ Virtual core XTEST keyboard               id=5    [slave  keyboard (3)]
    ↳ Power Button                              id=6    [slave  keyboard (3)]
    ↳ Video Bus                                 id=7    [slave  keyboard (3)]
    ↳ Video Bus                                 id=8    [slave  keyboard (3)]
    ↳ Power Button                              id=9    [slave  keyboard (3)]
    ↳ AT Translated Set 2 keyboard              id=12   [slave  keyboard (3)]
    ↳ Dell WMI hotkeys                          id=14   [slave  keyboard (3)]
$ xinput disable 10

Re-enable it in the same way for those times you do want it, but with xinput enable.
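If you do this often, a little toggle script saves remembering the id. This one is just a sketch; it matches the device by name (taken from the listing above), since the numeric id can change between boots:

#!/bin/sh
# toggle-touchscreen: flip the touchscreen on or off by device name;
# names are stabler than the numeric ids xinput assigns at boot
DEV="SYNAPTICS Synaptics Large Touch Screen"
if [ "$(xinput list-props "$DEV" | awk '/Device Enabled/ {print $NF}')" = "1" ]; then
    xinput disable "$DEV"
else
    xinput enable "$DEV"
fi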

Power

Install the laptop-mode-tools and powertop packages.

The screen seems particularly power hungry. When the screen is off, even with the system busy, power drops to about 8W. With it on and busy, 15W.

With non-intense activity (just typing or browsing, nothing going on in the background), mine is sitting at about 12-15W and lasts a solid 6 to 7 hours.

SD Card

Works without fuss.

USB

The XHCI USB 3.0 host controller driver seems not quite up to scratch yet on Linux. It has occasionally stopped me suspending the system, and locked up USB when I plug a Nexus phone into the high speed port. I’ve got no real desire for USB 3.0 at present, so I worked around this by removing the module.

$ cat /etc/modprobe.d/adp-blacklist.conf 
blacklist xhci_hcd

This might improve with kernels later than the one I have at the time of writing (3.16-2).

A better solution is to use systemd to remove the module on suspend, and add it back on resume. This means you can have all the USB goodness without suspend being affected.

#!/bin/sh
# /lib/systemd/system-sleep/xhci_hcd
# systemd calls this hook with $1 = "pre" just before sleeping and
# $1 = "post" just after resuming.

MODPROBE=/sbin/modprobe

# reload the module on resume; remove it before suspend
[ "$1" = "post" ] && exec ${MODPROBE} xhci_hcd
[ "$1" = "pre" ] && exec ${MODPROBE} -r xhci_hcd

exit 0
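systemd only runs hooks in /lib/systemd/system-sleep/ that are executable, so remember to mark it as such:

$ chmod +x /lib/systemd/system-sleep/xhci_hcd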

Replacement Parts

The battery will be the obvious first thing to start degrading. After four months of pretty much daily use I’ve already experienced this:

$ grep "" /sys/class/power_supply/BAT1/energy_full*
/sys/class/power_supply/BAT1/energy_full:84920000
/sys/class/power_supply/BAT1/energy_full_design:91000000

That is to say: about 7% capacity loss, or roughly 1.67% per month. Extrapolating, that’s about 30 months until half the battery’s capacity is gone. The battery can be replaced; it is a 7D1WJ and seems to cost between £160 and £320. Sigh.
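If you don’t fancy the mental arithmetic, a quick awk one-liner reading the same two sysfs files prints remaining capacity as a percentage of design capacity:

$ awk 'NR==1 {full = $1}
       NR==2 {printf "%.1f%%\n", 100 * full / $1}' \
      /sys/class/power_supply/BAT1/energy_full \
      /sys/class/power_supply/BAT1/energy_full_design
93.3%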


Git Alternates And Sparse Trees

I was working on a project that was kept in a large repository. Unfortunately I was working on two parts of it, each on a different branch. I wanted, therefore, two working directories, but I didn’t really want to pay the disk space cost of two checkouts, nor to have the entire project checked out in each directory. The answer:

  • Shared repositories
  • Sparse checkouts

You begin, then, by making a repository that tracks upstream. It’s better not to run git clone to do this; instead, just point an empty repository at upstream:

$ mkdir project-branch1; cd project-branch1
$ git init
$ git remote add origin UPSTREAM_URL

Now we’ll configure a sparse checkout of selected directories (note that these paths are anchored at the repository root; without the leading / they would match at any depth, just like .gitignore patterns):

$ echo "/sub/one" > .git/info/sparse-checkout
$ echo "/sub/two" >> .git/info/sparse-checkout
$ git config core.sparseCheckout true

Now suppose this isn’t your only copy: you already have a full local clone of the project (say, in a sibling directory project-local), and you don’t want to pay for the repository objects twice.

We can make this new checkout reference that existing clone, saving us the need to hold the same set of objects twice. Note carefully the number of parent-directory markers used; alternates are resolved relative to the .git/objects directory, so you need at least three ../ markers to reach another repository.

$ echo "../../../project-local/.git/objects" > .git/objects/info/alternates

The magic sauce here is the use of .git/objects/info/alternates to set up the reference. You have to be a little careful when using alternates, because an object that is unreachable – and therefore prunable – in the repository that owns it may still be needed by the repository borrowing it. However, provided you make sure you copy all the branch references between the two, you should be okay.
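One way to do that copying, under the directory layout used above, is to fetch the sparse checkout’s branch heads into a backup namespace inside project-local, so a git gc there never prunes anything the other repository still depends on:

$ cd ../project-local
$ git fetch ../project-branch1 "refs/heads/*:refs/backup/project-branch1/*"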

$ git fetch
$ git checkout -b sparse-branch origin/branch

In my case, I saved about 670M on the clone-cost and 1.7G on the working-directory-checkout-cost. Not to mention keeping the working directories nicely focussed on their purposes.

It’s very easy to forget you’re working in a sparse tree though. If you add a new directory in that sparse tree, but forget to add it to sparse-checkout, then the next time git tries to check out that directory (probably on a rebase or similar), it will actually remove the directory. It will never destroy anything permanently, since it only removes what has been checked in, but it can give you a nasty surprise.

Once you’ve made a sparse tree, you might find it hard to work out what directories are actually available. To help with that you’ll want the commands

$ git ls-tree HEAD
$ git ls-tree HEAD path/to/sub

Then just pick and choose which paths you want from those lists and write them in .git/info/sparse-checkout.
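Note that editing .git/info/sparse-checkout doesn’t change your working tree by itself; re-apply the new patterns with a merging read-tree:

$ git read-tree -mu HEAD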


Renaming The Past With Git

I added a directory to a repository I was working on; made some commits to the files in that directory, added some files, removed some files, then decided that I didn’t like what I’d named it.

When you make a mistake like this in a file’s contents, you can easily use git rebase -i to move a fixup commit into the past so that your mistake appears never to have happened (assuming you haven’t pushed your branch yet). Renaming a file works okay too; git’s pretty good at tracking that change. Renaming a whole directory, especially one whose history includes adds and removals, is harder.

The solution is git filter-branch, which lets you apply alterations to all commits in a sequence of revisions. It’s particularly versatile, but also fairly complicated to use.

I’m going to use the --index-filter mode of git filter-branch as it’s fast and leaves your working directory out of it.

First we’ll prepare our filter; we can do this mostly non-destructively.

$ git ls-files -s
100644 afe54f7da2cd579c8ef46abf90a2e9223e637baf 0       OLDDIR/.gitignore
100644 aa343b709899207effae27e9ce1b877de3ae2aec 0       OLDDIR/Makefile
100644 c9943eee519617204de5c156ea62266e18188b84 0       OLDDIR/README.md

The filenames here are prefixed with a tab, which makes it easy to write a sed command that alters them.

$ git ls-files -s | sed "s|\tOLDDIR\(.*\)|\tNEWDIR\1|"
100644 afe54f7da2cd579c8ef46abf90a2e9223e637baf 0       NEWDIR/.gitignore
100644 aa343b709899207effae27e9ce1b877de3ae2aec 0       NEWDIR/Makefile
100644 c9943eee519617204de5c156ea62266e18188b84 0       NEWDIR/README.md

What we’ve done here is make a list that gives new names to particular contents (remember that git identifies content by its hash, not by its name). This list is exactly the right format to feed into git update-index.

$ git ls-files -s | sed 's|\tOLDDIR\(.*\)|\tNEWDIR\1|' \
    | git update-index --index-info

If you run git status after running this command you’ll see that git has listed all the renames as if they were new files. That’s because we only updated the existing index, leaving all the current OLDDIR/ entries in place. What we need to do is create a new index and then replace the old one with it – the new index will not include any of the old entries, so git will see the change as a rename rather than an add.

$ git ls-files -s | sed 's|\tOLDDIR\(.*\)|\tNEWDIR\1|' \
    | GIT_INDEX_FILE=.git/tempnewindex git update-index --index-info \
    && mv .git/tempnewindex .git/index

After this you’ll see:

$ git status
# On branch temp
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#       renamed:    OLDDIR/.gitignore -> NEWDIR/.gitignore
#       renamed:    OLDDIR/Makefile -> NEWDIR/Makefile
#       renamed:    OLDDIR/README.md -> NEWDIR/README.md
#
# Changes not staged for commit:
#   (use "git add/rm <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#       deleted:    NEWDIR/.gitignore
#       deleted:    NEWDIR/Makefile
#       deleted:    NEWDIR/README.md

The “not staged” deletions are because we didn’t change the working directory. Don’t worry about that; remember, it’s only the index that gets committed.

We now use this command with git filter-branch to do the same thing for a series of commits. We need a slight modification to cope with git filter-branch running its filters in a temporary directory: instead of hard-coding an index path, we use the environment variable it sets for us, $GIT_INDEX_FILE.

$ git filter-branch --index-filter 'git ls-files -s \
    | sed "s|\tOLDDIR\(.*\)|\tNEWDIR\1|" \
    | GIT_INDEX_FILE=$GIT_INDEX_FILE.new git update-index --index-info \
        && mv "$GIT_INDEX_FILE.new" "$GIT_INDEX_FILE"' \
        HEAD^^^^..HEAD

This does the rename on the last four revisions.

Blink and you’ll miss it. I’d advise doing this on a temporary branch until you’re happy that it’s worked, then git reset your master branch to the temporary to activate it (although git filter-branch keeps a backup under refs/original/, so it’s not the end of the world if you don’t).
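A sketch of that workflow, with temp as the scratch branch name:

$ git checkout -b temp master
$ git filter-branch --index-filter '...' HEAD^^^^..HEAD   # the filter above
$ git log --stat temp          # check the renames look right
$ git checkout master
$ git reset --hard temp
$ git branch -d temp
$ git update-ref -d refs/original/refs/heads/temp   # discard the backup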

You’ll find almost exactly this final command written on the man page for git-filter-branch; but hopefully the above will let you build up to it gradually so you can test out the change (in particular the sed command line) before doing anything.

Be careful though – it’s not hard to lose any non-revision-controlled files you keep in these directories when you’re experimenting with this.


PostgreSQL and Row IDs

From a blog post

[PostgreSQL] allows for treating INSERT and UPDATE statements as table references, when used with the RETURNING clause. This is quite fancy, even if not very useful in everyday SQL.

“not useful”? Madness, I tell you. Or, to put it more diplomatically: I respectfully disagree.

I only discovered RETURNING in the last year, and it’s incredibly useful in everyday SQL. I’ll come back to what it does later.

Let’s say you write a web app that edits a single table. One page lets you add a new row. After the new row is added, you want to redirect the browser to the edit page for that row. Your URIs would probably go like this in a RESTful style:

  • http://yourapp.com/rows/new shows the editor and submits to…
  • http://yourapp.com/rows/insert reads the POSTed form, writes the record to the database, and redirects to…
  • http://yourapp.com/rows/edit/2029 shows the editor and submits to…
  • http://yourapp.com/rows/update/2029 reads the POSTed form, writes the record to the database, and redirects to…
  • http://yourapp.com/rows/edit/2029 shows the editor and submits to…
  • and so on

Or similar.

I’m interested in the /rows/insert step. We’re going to run a query like this in that step (with appropriate escaping to prevent SQL injection):

INSERT INTO Rows
    (Field1, Field2, Field3)
    VALUES ('data#1', 'data#2', 'data#3');

Now we’ll redirect to /rows/edit/2029… except… oops, where do we get the 2029 from? This is the problem I’ll discuss here.

Let me first show the schema for the table (PostgreSQL style):

CREATE TABLE Rows (
    ID serial PRIMARY KEY,
    Field1 text,
    Field2 text,
    Field3 text
);

Importantly, PostgreSQL automatically creates a sequence for the ID field and uses it as the default value for that field (sequences are atomic auto-incrementers, ensuring that each value is handed out exactly once – simultaneous INSERTs are therefore guaranteed to get unique IDs). PRIMARY KEY is really just a short way of saying “UNIQUE NOT NULL”, but it’s also a way of indicating to the reader that this field is the… primary key. It additionally has the effect of automatically creating an index on that field.

We can see what’s happened in reality if we do a \d rows from the psql prompt.

database=> \d rows
                                   Table "public.database"
  Column   |           Type           |                     Modifiers                      
-----------+--------------------------+----------------------------------------------------
 id        | integer                  | not null default nextval('rows_id_seq'::regclass)
 field1    | text                     |
 field2    | text                     | 
 field3    | text                     | 
Indexes:
    "rows_pkey" PRIMARY KEY, btree (id)

The id column defaults to nextval('rows_id_seq'), being the next atomic value from the named sequence. The sequence might look like this:

database=> \d rows_id_seq
        Sequence "public.rows_id_seq"
    Column     |  Type   |        Value        
---------------+---------+---------------------
 sequence_name | name    | rows_id_seq
 last_value    | bigint  | 9
 start_value   | bigint  | 1
 increment_by  | bigint  | 1
 max_value     | bigint  | 9223372036854775807
 min_value     | bigint  | 1
 cache_value   | bigint  | 1
 log_cnt       | bigint  | 24
 is_cycled     | boolean | f
 is_called     | boolean | t
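You can exercise the sequence directly from psql. Note that nextval consumes the value it returns (given the listing above, where last_value is 9 and is_called is true, the next call hands out 10), and values are never reused, so anything you fetch and don’t use becomes a gap in the IDs:

database=> SELECT nextval('rows_id_seq');
 nextval
---------
      10
(1 row)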

Before I knew about RETURNING, I solved the problem by directly querying the sequence: noting the value and using it in both the INSERT and the redirect. That meant I completely bypassed the default modifier for the id column when I needed the ID; when I didn’t, a normal insert worked fine.

import psycopg2

# You will need a particularly configured pg_hba.conf for this to
# work; along with, of course, the database and user already created.
dbh = psycopg2.connect("dbname='database' user='yourdbuser'")

# get next sequence value
cur = dbh.cursor()
cur.execute("SELECT nextval('rows_id_seq')")
nextid = cur.fetchone()[0]

cur = dbh.cursor()
cur.execute("""INSERT INTO rows (id, field1, field2, field3)
    VALUES (%s,%s,%s,%s)""", (nextid, 'data#1', 'data#2', 'data#3'))

redirectToEditPage(nextid)

This is horrible. Imagine we want to make a cascade of records in various intra-referenced tables and have to go through this every time. That’s what I used to do; and I looked enviously at the one feature of MySQL that I wished I had… mysql_insert_id(). I needn’t have been envious: PostgreSQL has had a better solution for years (since 8.2), I just didn’t know about it – RETURNING. Its use is simple:

import psycopg2

# You will need a particularly configured pg_hba.conf for this to
# work; along with, of course, the database and user already created.
dbh = psycopg2.connect("dbname='database' user='yourdbuser'")

cur = dbh.cursor()
cur.execute("""INSERT INTO rows (field1, field2, field3)
    VALUES (%s,%s,%s) RETURNING ID""", ('data#1', 'data#2', 'data#3'))
nextid = cur.fetchone()[0]

redirectToEditPage(nextid)

RETURNING makes INSERT, which normally returns nothing, behave a little like a SELECT on the table being altered. Here, RETURNING ID runs the INSERT and then returns the ID column – as if you’d run SELECT ID FROM rows, but restricted to the rows just inserted.
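Nor is it limited to INSERT; as the quoted post says, UPDATE takes RETURNING too (and so does DELETE). A quick illustration at the psql prompt, assuming a row with id 9 exists:

database=> UPDATE rows SET field3 = 'updated' WHERE id = 9 RETURNING id, field3;
 id | field3
----+---------
  9 | updated
(1 row)

UPDATE 1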

(My defence is that I learned PostgreSQL when it was at version 7.1, which didn’t support RETURNING.)

For me, that’s about as useful as it gets in “everyday SQL”.


ChibiOS/RT

ChibiOS is an open source RTOS for embedded systems. It’s comparable with FreeRTOS, but for my money it’s a nicer project.

  • Comes with a hardware abstraction layer – i.e. device drivers (kind of). That means we target the ChibiOS API rather than a particular platform. A lot of manufacturers supply a device layer specific to their chip; ChibiOS goes a step further – including a platform-neutral device layer. FreeRTOS has no HAL.
  • Static memory allocation. FreeRTOS supports dynamic allocation only; ChibiOS supports both static and dynamic. Static allocation can be important if you’re really memory constrained, and it also ensures you don’t fragment your heap storage.
  • I get the feeling there is a better community around ChibiOS; it’s more inclusive and faster moving. It’s certainly got less of a corporate feel, concentrating a lot more on the code.

Begin then by downloading the ChibiOS source. You can get it from the ChibiOS website, or use the following git mirror.

$ git clone https://github.com/ChibiOS-Upstream/ChibiOS-RT

ChibiOS is convenient because it comes with startup code and a build environment that’s pretty easy to customise. We’ll begin by building one of the demo applications.

$ cd demos/ARMCM4-STM32F407-DISCOVERY

Now edit the Makefile to point at your compiler (I’d suggest the ARM-maintained gcc). Here’s my modification:

-TRGT = arm-none-eabi-
+TRGT = /opt/gcc-arm-none-eabi-4_7-2013q2/bin/arm-none-eabi-

A simple make should then create you a binary in build/ch.bin.

Now, we’ll want to start our own project. Copy the contents of this demo directory to some empty directory outside of the ChibiOS tree (your own projects aren’t part of ChibiOS so don’t pollute it with your code). Then edit the Makefile again.

 # Define project name here
-PROJECT = ch
+PROJECT = chibios-test
 
 # Imported source files and paths
-CHIBIOS = ../..
+CHIBIOS = ../../../YOUR/PATH/TO/ChibiOS/

Before we even start using the multitasking facilities of ChibiOS, its HAL lets us easily write portable programs. Delete the main.c that you copied from the demo and we’ll start with a clean slate.

#include <ch.h>
#include <hal.h>

#define GPIOD_LED_ORANGE      GPIOD_LED3

int main(void)
{
    halInit();
    chSysInit();

    palSetPadMode(GPIOD, GPIOD_LED_ORANGE, PAL_MODE_OUTPUT_PUSHPULL);

    while(1){
        palSetPad(GPIOD, GPIOD_LED_ORANGE);
        chThdSleepMilliseconds(500);
        palClearPad(GPIOD, GPIOD_LED_ORANGE);
        chThdSleepMilliseconds(500);
    }
}

Note that you will still need the ChibiOS configuration headers, chconf.h, halconf.h, and mcuconf.h. These are pretty self-explanatory and simply enable and disable various ChibiOS components.

Build the binary and then flash it with openocd (available in Debian).

openocd \
    -f /usr/share/openocd/scripts/board/stm32f4discovery.cfg \
    -c "init" \
    -c "reset halt" \
    -c "sleep 100" \
    -c "wait_halt 2" \
    -c "echo \"--- Writing build/chibios-test.bin\"" \
    -c "flash write_image erase build/chibios-test.bin 0x08000000" \
    -c "sleep 100" \
    -c "echo \"--- Verifying\"" \
    -c "verify_image build/chibios-test.bin 0x08000000" \
    -c "sleep 100" \
    -c "echo \"--- Done\"" \
    -c "resume" \
    -c "shutdown"
Open On-Chip Debugger 0.7.0 (2013-08-04-10:13)
Licensed under GNU GPL v2
For bug reports, read
    http://openocd.sourceforge.net/doc/doxygen/bugs.html
srst_only separate srst_nogate srst_open_drain connect_deassert_srst
Info : This adapter doesn't support configurable speed
Info : STLINK v2 JTAG v14 API v2 SWIM v0 VID 0x0483 PID 0x3748
Info : Target voltage: 2.876056
Info : stm32f4x.cpu: hardware has 6 breakpoints, 4 watchpoints
target state: halted
target halted due to debug-request, current mode: Thread 
xPSR: 0x01000000 pc: 0x080001f0 msp: 0x20000400
--- Writing build/chibios-test.bin
auto erase enabled
Info : device id = 0x10016413
Info : flash size = 1024kbytes
wrote 16384 bytes from file build/chibios-test.bin in 1.023979s (15.625 KiB/s)
--- Verifying
verified 7580 bytes in 0.142001s (52.129 KiB/s)
--- Done
shutdown command invoked

It might seem large at 7580 bytes for a simple LED flasher, but remember that those 7580 bytes include ChibiOS itself – of which we’re using very little. For interest, you will find the assembly listing (which includes a lot of debug sections) in build/lst/main.lst.
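If you’re curious where those bytes go, binutils’ size tool works on the .elf that the build leaves alongside the .bin (the toolchain path being the one configured in the Makefile above):

$ /opt/gcc-arm-none-eabi-4_7-2013q2/bin/arm-none-eabi-size build/chibios-test.elf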

It’s perhaps not clear why the above is so magical. Let’s look again:

    while(1){
        palSetPad(GPIOD, GPIOD_LED_ORANGE);
        chThdSleepMilliseconds(500);
        palClearPad(GPIOD, GPIOD_LED_ORANGE);
        chThdSleepMilliseconds(500);
    }

Think about this in the context of our normal way of writing embedded software. If we wanted to flash an LED in a larger application, we would have to keep a timer, store separate state, and ensure we always got back to the flasher code in our main loop so we could decide whether it was time to change state. With an RTOS, we can write routines without regard (within reason) to what else needs doing. This LED flasher code would be the same whether flashing is all the program does, or it has ten other things to do.

Let’s see that facility in action.

#include <ch.h>
#include <hal.h>

#define UNUSED(x) (void)(x)

#define GPIOD_LED_ORANGE      GPIOD_LED3
#define GPIOD_LED_GREEN       GPIOD_LED4
#define GPIOD_LED_BLUE        GPIOD_LED6

static WORKING_AREA(waThread1, 128);
static msg_t Thread1(void *arg)
{
    UNUSED(arg);

    chRegSetThreadName("blinker");

    while (TRUE) {
        palSetPad(GPIOD, GPIOD_LED_BLUE);
        chThdSleepMilliseconds(333);
        palClearPad(GPIOD, GPIOD_LED_BLUE);
        chThdSleepMilliseconds(333);
    }

    return 0;
}

int main(void)
{
    halInit();
    chSysInit();

    palSetPadMode(GPIOD, GPIOD_LED_ORANGE, PAL_MODE_OUTPUT_PUSHPULL);
    palSetPadMode(GPIOD, GPIOD_LED_BLUE, PAL_MODE_OUTPUT_PUSHPULL);

    chThdCreateStatic(waThread1, sizeof(waThread1), NORMALPRIO, Thread1, NULL);

    while(1){
        palSetPad(GPIOD, GPIOD_LED_ORANGE);
        chThdSleepMilliseconds(500);
        palClearPad(GPIOD, GPIOD_LED_ORANGE);
        chThdSleepMilliseconds(500);
    }
}

This is exactly as before, but we’ve added an additional thread on top of the so-called main thread, flashing the blue LED at a slightly different rate. Both flashers are simple, linear code. Have a think about how much infrastructure we would have to implement without ChibiOS just to do this simple task.
