this is aaronland

Bake me a painting

Because I am a cheap bastard

Against my better judgement, I have started poking at Twitter. I do not have an unlimited SMS plan on my phone so I have been relying on the Jabber endpoint which is sometimes on and sometimes not. Or my IM client is wigging out and it just seems that way. Did I mention I am impatient?

Then it occurred to me that I do have an unlimited data plan and use a phone with a Python interpreter and Twitter has an API. Then I remembered that I am using a freedom-hating Series60 3rd Edition phone. Then I decided to write something (in pure Python) to fetch and display my contacts' recent mutterings, anyway.

What follows is not something you can download to your phone and just start using. It is only the guts of the application divorced of all the usual Series60 setup, GUI bindings, polling and error handling. I suppose I will get to that over the weekend. Otherwise, you can run it from the command line.

# -*-python-*-

import anydbm
import re
import base64
import httplib

user = "twitter@example.com"
pswd = "s33kret"

seen  = anydbm.open("c:\\twitter", "c")

auth    = base64.encodestring(user + ":" + pswd)
headers =  {"Authorization":"Basic %s" % auth} 

conn = httplib.HTTPConnection("twitter.com")
conn.request('GET', "/statuses/friends.xml", None, headers)

res = conn.getresponse()
xml = res.read()


# BLARGH! What fucking part of making "Internet"
# devices without a built-in XML parser doesn't
# Nokia understand...

pattern  = "<name>(.*?)</name>.*?"
pattern += "<id>(.*?)</id>.*?"
pattern += "<text>(.*?)</text>.*?"
pattern += "<relative_created_at>(.*?)</relative_created_at>" 

for m in re.findall(pattern, xml, re.DOTALL) :

    uid = m[1]

    if seen.has_key(uid) :
       continue

    # As in :
    # Bob said : 'OMGWTF - the lettuce is pink!', about 5 hours ago

    msg = "%s said : '%s', %s" % (m[0], m[2], m[3])
    print msg

    # But wait! There's more!!
    # Other possibilities include :

    # import audio
    # audio.say(msg)

    # import misty
    # misty.vibrate(500, 100)

    # and so on...

    seen[uid] = "1"

seen.close()    
                    

Obviously, the best thing would be to create a new inbox message and let the phone take over at that point, beeping and blooping according to whatever rules a person has set up. This does not appear to be part of any of the APIs yet.

If I had to pick, though, I'd still choose a built-in XML parser first....
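For what it's worth, this is roughly what the same extraction looks like when you *do* have an XML parser: a desktop-Python sketch using xml.etree, which is exactly what the phone lacks. The sample document is my guess at the shape of the friends timeline, for illustration only.

```python
import xml.etree.ElementTree as ET

# A guess at the friends timeline layout, not the real wire format
sample = """<users>
  <user>
    <name>Bob</name>
    <status>
      <id>12345</id>
      <text>OMGWTF - the lettuce is pink!</text>
      <relative_created_at>about 5 hours ago</relative_created_at>
    </status>
  </user>
</users>"""

root = ET.fromstring(sample)

# Walk the tree instead of squinting at regular expressions
for user in root.findall("user"):
    status = user.find("status")
    msg = "%s said : '%s', %s" % (
        user.findtext("name"),
        status.findtext("text"),
        status.findtext("relative_created_at"),
    )
    print(msg)
```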

Update:

Oh, and it still gets better. If by better you mean I should want to write my own XML parser this weekend. It turns out that Series60 Python does not support the re.DOTALL flag for pattern matching. The result of which is that the following piece of nastiness :

res = conn.getresponse()
xml = res.read()
matches = re.findall(pattern, xml, re.DOTALL)
                    

Becomes the jaw-droppingly stupid :

res = conn.getresponse()
xml = res.read()
xml = xml.split("\n")
xml = "".join(xml)
matches = re.findall(pattern, xml)
                    

Which is really salt on the still sore wound created by the goofy pattern nonsense, but at least it works. Until it doesn't. When I tested this on my phone, xml ended up being a string 7144 characters long. Somewhere between 0 and 7144 is the tripwire that will trigger SRE_ERROR_RECURSION_LIMIT in your phone's regular expression library :

    # Careful readers will note that the USE_RECURSION_LIMIT
    # stuff is absent from the standard Python _sre.c code

    96 |#if defined(SYMBIAN)
    97 |#define USE_RECURSION_LIMIT 5000
    98 |#endif
                    

Or, put another way : E_EXCESSIVE_FOAD_LIMIT
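If rewriting the regex library isn't on the menu, one workaround is to never hand the regex engine a big string in the first place: split the response into per-status chunks before matching, so each findall only ever sees a small string. This is a sketch under the assumption that each status sits inside a &lt;user&gt; element; I haven't verified that against the wire format.

```python
import re

pattern  = "<name>(.*?)</name>.*?"
pattern += "<id>(.*?)</id>.*?"
pattern += "<text>(.*?)</text>.*?"
pattern += "<relative_created_at>(.*?)</relative_created_at>"

def match_in_chunks(xml):
    matches = []

    # one chunk per status, assuming a </user> delimiter,
    # so the regex engine never recurses very deep
    for chunk in xml.split("</user>"):

        # the same newline-stripping trick as above, since
        # there is no re.DOTALL to lean on
        chunk = "".join(chunk.split("\n"))
        matches.extend(re.findall(pattern, chunk))

    return matches
```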

Update my update :

I have a working version for a Series 60 2nd edition phone that uses the PDIS Element Tree XML magic. I wrote it largely out of stubbornness since my phone's lack of vibrate-ability makes the whole thing sort of pointless. Some day the stars will align themselves in a way to make sense of all this crap and when they do I will be ready.

I have also chunked out the guts of the program into a base library from which I've written a tool for displaying notifications using Growl and its Windows equivalent Snarl. It would be reasonable to ask why I don't just continue to use the Jabberbot to display messages. All I can tell you is that I like eye-candy as much as the next person. I was also thinking of writing a proxy API which would keep track of which messages I had already seen regardless of the device I was using but that seems like something better done at the service layer.

(And I wanted to play with Growl some more now that we've released the recent activity methods for Flickr.)

Ladies and gentlemen, twnotifier.py 1.0
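The notification end amounts to something like the following sketch, shelling out to Growl's growlnotify command-line tool (-t for the title, -m for the message). This is an illustration, not the actual twnotifier.py code, and it assumes growlnotify is installed and on your path.

```python
import subprocess

def growl_args(title, message):
    # argv for Growl's growlnotify CLI : -t sets the title, -m the message
    return ["growlnotify", "-t", title, "-m", message]

def notify(title, message):
    # fire the notification; Snarl's command-line equivalent
    # would stand in on Windows
    subprocess.call(growl_args(title, message))
```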

Tommi asks Symbian Signed - should it be changed somehow?

I've tried to post a reply to this post three times now, but Movable Type has cranked the suck knob and keeps asking me to re-enter my security code without providing any input field for doing so. By the time you read this my comments may have been added thrice-fold but on the assumption that they haven't, this is what I said :

In so far as S60 Python development is concerned, the 3rd edition signing process has effectively poisoned the well.

None of the scripts I've written for 2nd edition phones work due to permissions issues despite the fact that the (3rd edition) interpreter itself is signed with access to all of the APIs in question. Even then, my understanding is that access to the calendar API has been removed from the spec altogether.

I may get around to requesting a certificate and going through the process of having each app signed but I might as well also start writing everything in C++ or Java. The "killer app" of Python on the phone is the ability to quickly brainstorm and prototype tools that can actually do something *with* the phone : the calendar, the address book, the camera, cell tower information, etc.

I haven't really tested Raccoon for the 3rd edition yet but I can only assume that the same (signing) restrictions apply to mod_python scripts. I love the fact that I can run a web server locally on my phone (BTW, browser-app people : requests to http://127.0.0.1 do not need to connect to the Internets). The ability to use the browser's rendering engine to handle all the boring UI stuff combined with JavaScript and Python to manage and manipulate the information on my phone is in a word : Awesome!

Sadly, I don't think it's a realistic prospect on 3rd edition phones given the burdensome nature of the signing process. There are some people for whom the cost of signing their scripts won't be an issue. For some people it will be. For most people, it will just be a barrier too high for them to want to care.

Perhaps a signed certificate for SIS files is prudent (I remain unconvinced) but to extend the same policy to scripts whose interpreter already has a complete permissions set shuts down an avenue of possibilities that was only just beginning to open up.

locatr-er

BUT WAIT! THERE'S MORE!!

As of version 0.2 locatr is also able to determine your location by passing your device's cell tower information to the ZoneTag celltower-based geocoder API and then again through Yahoo Local's geocoder API.

locatr 0.2

The Illusion of Easy

Prologue

A:
also, just in case you don't have enough fear in your life : 
A:
{{ #if: {{{2}}} | [[{{{1}}}:={{{2}}}|[[{{{2}}}|{{ #if: {{{3}}} | {{{3}}} | {{{2}}} }}]]]] | [[{{{1}}}:={{{3}}}|[[{{{3}}} {{ #if: {{{4}}} | {{{4}}} | {{{3}}} }}]]]] }}
D:
wtf is that?!?
A:
wikipedia templating nonsense
A:
awesome, huh?
D:
gah, looks like lisp
D:
intuitive 
A:
yes, but lisp has the advantage of actually being powerful
A:
the above is just a pain in the ass
D:
haha
D:
So wikipedia is a real programming language now then, huh?
A:
hahaha
A:
hardly
A:
the {{ #if: }} stuff is a whole other extension / set of terrifying code
A:
I am experimenting with the semediawiki extensions and trying to get stuff working with as little "looking under the hood" as possible
D:
Bah, no entry for semediawiki on wikipedia.
A:
semantic mediawiki
D:
Heh, I guessed that part, was looking for juicy details.
D:
I can't help but feel that semediawiki looks a lot like building an internet within an internet, is this what the semantic web turned into?
A:
it's still an experiment but the idea is that :
A:
* wikis are easy to use
A:
* and easy to read
A:
* and provide some measure of "search"-iness
A:
* and with judicious use of templates
A:
* you can convert {{author|rdc|Reverand Dan Catt}} in to [[dc:creator:=rdc]]
A:
* and then use semediawiki to export the RDF-iness of the dc:creator-iness
A:
or something like that
A:
I've only been poking at it for a day or so and the documentation for wiki* really kind of sucks
D:
sounds genius! ... kinda a grownup academic version of myspace.
D:
Yes, I thought it'd be leading to there.
A:
the problem with the wiki stuff being : 
A:
* it is easier than writing N3
D:
heh ... you should get together with C. ... we've been plotting a recipe engine thing with flickr as a back end...
A:
* but its easiness is illusory because of the massive amounts of wikipedia-related infrastructure needed to make it useful
D:
... so you get to build a new "thing" out of other "things" (ingredients) ... and each thing contains calories/protein/fat etc :)
A:
semediawiki is really terrible at importing third party namespaces like...say....dublin core
D:
Easiness is illusory... I'm going to print that and stick it above the desk.
A:
and you have to do a lot of insanely boring leg work
A:
well, the original idea for the wiki stuff was to use it for ingredients and measures
A:
so, you could write I:beer where I: was just a prefix to some.wiki.org?title=
D:
Ah see, it's a perennial good idea
A:
and then you could add metadata about "beer" to your heart's content
A:
and if you used the semediawiki stuff it would automagically get exported in the happy RDF
A:
when the wifi on the bus dies I work on the python libraries to convert a subset of wiki text to/from N3 (eatdrinkfeelgood 2.0)
D:
Heh, bus hacking, when-ever will you find the time when the office moves to SF?
A:
I just won't have to leave the house so early ;-)
D:
You'll miss it
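The {{author|rdc|Reverand Dan Catt}} to [[dc:creator:=rdc]] conversion mentioned in the chat above can be sketched as a single regex substitution. The template and property names are just the ones from the example, nothing more.

```python
import re

def expand_author(wikitext):
    # rewrite {{author|id|Display Name}} as an SMW annotation on
    # dc:creator, per the example in the chat above
    return re.sub(
        r"\{\{author\|([^|}]+)\|([^|}]+)\}\}",
        r"[[dc:creator:=\1]]",
        wikitext,
    )
```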

Easy is a state of mind

This is easier than writing N3 :

= Name =

{{title|Baked Beans|Beaners and weiners}}

= Categories = 

{{tags|beans|dottie|comfortfood}}

= Ingredients =
 
* 1 {{qt}} {{ing|navy bean}}s
* 3/4 {{lb}} {{ing|salt pork}} 
* 2 {{ing|onion}}s  
* 1 {{tbsp}} {{ing|salt}} 
* 2 {{tsp}} {{ing|mustard}} (hot)
* 4 {{tbsp}} {{ing|molasses}}
* 5 {{tbsp}} {{ing|dark brown sugar}}
* 1/2 {{cup}} {{ing|dark rum}}
* 2 {{cup}} boiling water from beans
* 2 {{tbsp}} {{ing|bacon}} drippings
* 1 scoop {{ing|pepper}}

<!-- and so on -->
                    

It yields (hands wave frantically) the following warm, fuzzy RDF goodness :

<smw:Thing rdf:about="http://eat.spum.org/index.php/_Baked_Beans">
        <rdfs:label>Baked Beans</rdfs:label>
        <smw:hasArticle rdf:resource="http://eat.spum.org/index.php/Baked_Beans"/>
        <rdfs:isDefinedBy rdf:resource="http://eat.spum.org/index.php/Special:ExportRDF/Baked_Beans"/>
        <dc:title rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Baked Beans</dc:title>
        <dc:alternative rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Beaners and weiners</dc:alternative>
        <dc:subject rdf:resource="http://eat.spum.org/index.php?title=Beans"/>
        <dc:subject rdf:resource="http://eat.spum.org/index.php?title=Dottie"/>
        <dc:subject rdf:resource="http://eat.spum.org/index.php?title=Comfortfood"/>
        <e:foodstuff rdf:resource="http://eat.spum.org/index.php?title=Navy_bean"/>
        <e:foodstuff rdf:resource="http://eat.spum.org/index.php?title=Salt_pork"/>
        <e:foodstuff rdf:resource="http://eat.spum.org/index.php?title=Onion"/>
        <e:foodstuff rdf:resource="http://eat.spum.org/index.php?title=Salt"/>
        <e:foodstuff rdf:resource="http://eat.spum.org/index.php?title=Mustard"/>
        <e:foodstuff rdf:resource="http://eat.spum.org/index.php?title=Molasses"/>
        <e:foodstuff rdf:resource="http://eat.spum.org/index.php?title=Dark_brown_sugar"/>
        <e:foodstuff rdf:resource="http://eat.spum.org/index.php?title=Dark_rum"/>
        <e:foodstuff rdf:resource="http://eat.spum.org/index.php?title=Bacon"/>

        <!-- and so on -->

</smw:Thing>
                    

Which is kind of awesome, really.

Easy as a means of control

It is hardly easy, however, when you consider all the stuff that is hidden behind the curtain :

  1. MediaWiki (the software behind Wikipedia), and all its dependencies including PHP and setting up MySQL.
  2. Installing and configuring the Semantic Mediawiki (SMW) extension.
  3. Installing the ParserFunctions extension.
  4. 18 (and counting) custom templates plus 2 others not part of the standard Wikipedia distribution — for a single recipe.
  5. A special wiki page for every schema used, plus individual pages for all the properties defined. Why is there no way to batch-import an RDF Schema file in SMW?
  6. A cheat sheet.
  7. The burden of any one of those pieces breaking.

Compared to e(r)dfg, in its current iteration :

  1. A text editor.
  2. A cheat sheet.
  3. Your programming language of choice and an RDF parser.

To be fair, I have never planned on using a wiki to store recipes. Now that I've played with it a bit I definitely like the idea of using the SMW tools, as a collaborative space, to reference and define ingredients and measures.

Say hello (in a couple days) to : http://eatdrinkfeelgood.org/terms#.

{{Easy|like pie}}

Recipes are really tempting, though, because it's hard not to love the Wikipedia syntax. Its ruleset is easy to memorize and, better yet, lends itself to wild guesses. It can be written offline in nothing more than a text editor. The wiki-ness of the format makes it more forgiving of stupid mistakes and the wiki-ness of the storage makes it easier to fix those mistakes.

But : Exporting anything besides wiki-text out of Wikipedia remains a daunting prospect, at best. Daunting is probably unfair, given that any recipe markup will be a limited subset of all the possible syntax and easy enough to parse, although there appears to be a deficit of tools for creating custom dumps of Wikipedia data.

I think I can tolerate using wiki-text as the principal storage medium for ingredients and measures. The problem with using it for recipes is that the markup is basically useless for doing anything with it outside of Wikipedia. Doing stuff in Wikipedia is also pretty limited but that's mostly made up for by the fact that what little it does do it does very well. Ultimately, I am not willing to pour all these stories into a twisty maze of database dumps and lost, or forgotten, backups. (Never mind upgrades to the underlying PHP code and interpreter.)

I have started adding hooks to read and write semediawikitext in erdfg.py. If I can make it do reliable round-trips (import, export, edits, etc.) between a set of static (N3) files and a site running the SMW stuff I will bundle up all the various templates, extensions and config tweaks and post them for your pleasure.
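As a toy illustration of the wiki-text to N3 direction (a guess at the mapping, not the actual erdfg.py code), pulling the {{ing|...}} templates out of a recipe and emitting foodstuff triples might look like :

```python
import re

def ingredients_to_n3(wikitext, subject="<Baked_Beans>"):
    # collect every {{ing|...}} template and emit one e:foodstuff
    # triple per ingredient, mirroring the RDF export shown earlier
    triples = []
    for ing in re.findall(r"\{\{ing\|([^|}]+)\}\}", wikitext):
        slug = ing.strip().capitalize().replace(" ", "_")
        triples.append("%s e:foodstuff <%s> ." % (subject, slug))
    return "\n".join(triples)
```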

Easy.