Because I am a cheap bastard
Against my better judgement, I have started poking at Twitter. I do not have an unlimited SMS plan on my phone, so I have been relying on the Jabber endpoint, which is sometimes on and sometimes not. Or my IM client is wigging out and it just seems that way. Did I mention I am impatient?
Then it occurred to me that I do have an unlimited data plan, that I use a phone with a Python interpreter, and that Twitter has an API. Then I remembered that I am using a freedom-hating Series60 3rd Edition phone. Then I decided to write something (in pure Python) to fetch and display my contacts' recent mutterings, anyway.
What follows is not something you can download to your phone and just start using. It is only the guts of the application divorced of all the usual Series60 setup, GUI bindings, polling and error handling. I suppose I will get to that over the weekend. Otherwise, you can run it from the command line.
# -*-python-*-

import anydbm
import re
import base64
import httplib

user = "twitter@example.com"
pswd = "s33kret"

seen = anydbm.open("c:\\twitter", "c")

# encodestring appends a trailing newline, which will
# mangle the Authorization header if you don't strip it
auth = base64.encodestring(user + ":" + pswd).strip()
headers = {"Authorization" : "Basic %s" % auth}

conn = httplib.HTTPConnection("twitter.com")
conn.request('GET', "/statuses/friends.xml", None, headers)

res = conn.getresponse()
xml = res.read()

# BLARGH! What fucking part of making "Internet"
# devices without a built-in XML parser doesn't
# Nokia understand...

pattern = "<name>(.*?)</name>.*?"
pattern += "<id>(.*?)</id>.*?"
pattern += "<text>(.*?)</text>.*?"
pattern += "<relative_created_at>(.*?)</relative_created_at>"

for m in re.findall(pattern, xml, re.DOTALL) :

    uid = m[1]

    if seen.has_key(uid) :
        continue

    # As in :
    # Bob said : 'OMGWTF - the lettuce is pink!', about 5 hours ago

    msg = "%s said : '%s', %s" % (m[0], m[2], m[3])
    print msg

    # But wait! There's more!!
    # Other possibilities include :
    #
    # import audio
    # audio.say(msg)
    #
    # import misty
    # misty.vibrate(500, 100)
    #
    # and so on...

    seen[uid] = "1"

seen.close()
Obviously, the best thing would be to create a new inbox message and let the phone take over at that point, beeping and blooping according to whatever rules a person has set up. This does not appear to be part of any of the APIs yet.
If I had to pick, though, I'd still choose a built-in XML parser first....
Update:
Oh, and it still gets better. If by better you mean I should want to write my own XML parser this weekend. It turns out that Series60 Python does not support the re.DOTALL flag for pattern matching. The result of which is that the following piece of nastiness :
res = conn.getresponse()
xml = res.read()

matches = re.findall(pattern, xml, re.DOTALL)
Becomes the jaw-droppingly stupid :
res = conn.getresponse()
xml = res.read()

xml = xml.split("\n")
xml = "".join(xml)

matches = re.findall(pattern, xml)
Which is really salt on the still-sore wound created by the goofy pattern nonsense, but at least it works. Until it doesn't. When I tested this on my phone, xml ended up being a string 7144 characters long. Somewhere between 0 and 7144 is the tripwire that triggers SRE_ERROR_RECURSION_LIMIT in your phone's regular expression library :
# Careful readers will note that the USE_RECURSION_LIMIT
# stuff is absent from the standard Python _sre.c code

#if defined(SYMBIAN)
#define USE_RECURSION_LIMIT 5000
#endif
Or, put another way : E_EXCESSIVE_FOAD_LIMIT
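One way to dodge the recursion limit (a sketch, not what the code above does) is to never hand the regex engine the whole payload at once : carve the response into one small string per status first, so no single match ever recurses over the full 7000-odd characters. The XML below is a toy stand-in for the real Twitter output :

```python
import re

# Toy stand-in for the friends-timeline payload; the real
# response is one long XML document fetched over httplib.
xml = """<statuses>
<status><user><name>Bob</name></user><id>101</id>
<text>OMGWTF - the lettuce is pink!</text>
<relative_created_at>about 5 hours ago</relative_created_at></status>
<status><user><name>Alice</name></user><id>102</id>
<text>hello</text>
<relative_created_at>about 1 hour ago</relative_created_at></status>
</statuses>"""

# Flatten the newlines first (no re.DOTALL on the phone),
# then split the document into per-status chunks.
flat = "".join(xml.split("\n"))
chunks = re.findall("<status>(.*?)</status>", flat)

pattern = "<name>(.*?)</name>.*?<id>(.*?)</id>.*?"
pattern += "<text>(.*?)</text>.*?"
pattern += "<relative_created_at>(.*?)</relative_created_at>"

msgs = []
for chunk in chunks:
    # Each chunk is a few hundred characters at most, so the
    # backtracking stays well under any recursion limit.
    for m in re.findall(pattern, chunk):
        msgs.append("%s said : '%s', %s" % (m[0], m[2], m[3]))
```

Same goofy pattern, but applied to strings short enough that the phone's regex engine never gets near its limit.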
Update my update :
I have a working version for a Series 60 2nd edition phone that uses the PDIS Element Tree XML magic. I wrote it largely out of stubbornness since my phone's lack of vibrate-ability makes the whole thing sort of pointless. Some day the stars will align themselves in a way to make sense of all this crap and when they do I will be ready.
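For flavour, this is roughly what the ElementTree version looks like. PDIS bundles its own port of ElementTree for Series 60; the sketch below uses the desktop-Python equivalent and a toy payload, but the traversal is the same idea :

```python
import xml.etree.ElementTree as ET  # PDIS ships its own ElementTree port

# Toy stand-in for the Twitter friends-timeline response.
doc = """<statuses>
  <status>
    <id>101</id>
    <text>OMGWTF - the lettuce is pink!</text>
    <user><name>Bob</name></user>
  </status>
</statuses>"""

root = ET.fromstring(doc)
msgs = []
for status in root.findall("status"):
    # find() takes a simple path expression, so no regex
    # pattern nonsense and no recursion limit to trip.
    name = status.find("user/name").text
    text = status.find("text").text
    msgs.append("%s said : '%s'" % (name, text))
```

Which is exactly the sort of thing a built-in XML parser is for.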
I have also chunked out the guts of the program into a base library, from which I've written a tool for displaying notifications using Growl and its Windows equivalent, Snarl. It would be reasonable to ask why I don't just continue to use the Jabberbot to display messages. All I can tell you is that I like eye-candy as much as the next person. I was also thinking of writing a proxy API which would keep track of which messages I had already seen regardless of the device I was using, but that seems like something better done at the service layer.
(And I wanted to play with Growl some more now that we've released the recent activity methods for Flickr.)
Ladies and gentlemen, twnotifier.py 1.0
This blog post is full of links.
Tommi asks Symbian Signed - should it be changed somehow?
I've tried to post a reply to this post three times now, but Movable Type has cranked the suck knob and keeps asking me to re-enter my security code without providing any input field for doing so. By the time you read this my comments may have been added thrice-fold, but on the assumption that they haven't, this is what I said :
In so far as S60 Python development is concerned, the 3rd edition signing process has effectively poisoned the well.
None of the scripts I've written for 2nd edition phones work on a 3rd edition phone, due to permissions issues, despite the fact that the (3rd edition) interpreter itself is signed with access to all of the APIs in question. Even then, my understanding is that access to the calendar API has been removed from the spec altogether.
I may get around to requesting a certificate and going through the process of having each app signed, but then I might as well also start writing everything in C++ or Java. The "killer app" of S60 Python is the ability to quickly brainstorm and prototype tools that can actually do something *with* the phone : the calendar, the address book, the camera, cell tower information, etc.
I haven't really tested Raccoon for the 3rd edition yet but I can only assume that the same (signing) restrictions apply to mod_python scripts. I love the fact that I can run a web browser locally on my phone (BTW, browser-app people : requests to http://127.0.0.1 do not need to connect to the Internets). The ability to use the browser's rendering engine to handle all the boring UI stuff combined with JavaScript and Python to manage and manipulate the information on my phone is in a word : Awesome!
Sadly, I don't think it's a realistic prospect on 3rd edition phones given the burdensome nature of the signing process. There are some people for whom the cost of signing their scripts won't be an issue. For some people it will be. For most people, it will just be a barrier too high for them to want to care.
Perhaps a signed certificate for SIS files is prudent (I remain unconvinced) but to extend the same policy to scripts whose interpreter already has a complete permissions set shuts down an avenue of possibilities that was only just beginning to open up.
locatr-er
BUT WAIT! THERE'S MORE!! As of version 0.2, locatr is also able to determine your location by passing your device's cell tower information to the ZoneTag cell tower-based geocoder API and then running the result through Yahoo Local's geocoder API.
The Illusion of Easy
Prologue
Easy is a state of mind
This is easier than writing N3 :
= Name =
{{title|Baked Beans|Beaners and weiners}}

= Categories =
{{tags|beans|dottie|comfortd}}

= Ingredients =
* 1 {{qt}} {{ing|navy bean}}s
* 3/4 {{lb}} {{ing|salt pork}}
* 2 {{ing|onion}}s
* 1 {{tbsp}} {{ing|salt}}
* 2 {{tsp}} {{ing|mustard}} (hot)
* 4 {{tbsp}} {{ing|molasses}}
* 5 {{tbsp}} {{ing|dark brown sugar}}
* 1/2 {{cup}} {{ing|dark rum}}
* 2 {{cup}} boiling water from beans
* 2 {{tbsp}} {{ing|bacon}} drippings
* 1 scoop {{ing|pepper}}

<!-- and so on -->
It yields (hands wave frantically) the following warm, fuzzy RDF goodness :
<smw:Thing rdf:about="http://eat.spum.org/index.php/_Baked_Beans">
  <rdfs:label>Baked Beans</rdfs:label>
  <smw:hasArticle rdf:resource="http://eat.spum.org/index.php/Baked_Beans"/>
  <rdfs:isDefinedBy rdf:resource="http://eat.spum.org/index.php/Special:ExportRDF/Baked_Beans"/>
  <dc:title rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Baked Beans</dc:title>
  <dc:alternative rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Beaners and weiners</dc:alternative>
  <dc:subject rdf:resource="http://eat.spum.org/index.php?title=Beans"/>
  <dc:subject rdf:resource="http://eat.spum.org/index.php?title=Dottie"/>
  <dc:subject rdf:resource="http://eat.spum.org/index.php?title=Comfortd"/>
  <e:dstuff rdf:resource="http://eat.spum.org/index.php?title=Navy_bean"/>
  <e:dstuff rdf:resource="http://eat.spum.org/index.php?title=Salt_pork"/>
  <e:dstuff rdf:resource="http://eat.spum.org/index.php?title=Onion"/>
  <e:dstuff rdf:resource="http://eat.spum.org/index.php?title=Salt"/>
  <e:dstuff rdf:resource="http://eat.spum.org/index.php?title=Mustard"/>
  <e:dstuff rdf:resource="http://eat.spum.org/index.php?title=Molasses"/>
  <e:dstuff rdf:resource="http://eat.spum.org/index.php?title=Dark_brown_sugar"/>
  <e:dstuff rdf:resource="http://eat.spum.org/index.php?title=Dark_rum"/>
  <e:dstuff rdf:resource="http://eat.spum.org/index.php?title=Bacon"/>
  <!-- and so on -->
</smw:Thing>
Which is kind of awesome, really.
Easy as a means of control
It is hardly easy, however, when you consider all the stuff that is hidden behind the curtain :
- MediaWiki, and all its dependencies, including PHP and setting up MySQL.
- Installing and configuring the Semantic Mediawiki (SMW) extension.
- Installing the ParserFunctions extension.
- 18 (and counting) custom templates, plus 2 others not part of the standard MediaWiki distribution, for a single recipe.
- A special wiki page for every schema used, plus individual pages for all the properties defined. (Why is there no "batch import an RDF Schema file" functionality in SMW?)
- A cheat sheet.
- The burden of any one of those pieces breaking.
Compared to e(r)dfg, in its current iteration :
- A text editor.
- A cheat sheet.
- Your programming language of choice and an RDF parser.
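To make the "text editor plus RDF parser" point concrete, here is a sketch that pulls the ingredient resources out of an RDF export like the one above using nothing but ElementTree. (A real RDF parser would treat this as a graph; this just walks the known e:dstuff properties, and binding the e: prefix to the eatdrinkfeelgood terms URI is an assumption here.)

```python
import xml.etree.ElementTree as ET

# Trimmed-down version of the SMW export shown earlier.
rdf = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:e="http://eatdrinkfeelgood.org/terms#">
  <rdf:Description rdf:about="http://eat.spum.org/index.php/_Baked_Beans">
    <e:dstuff rdf:resource="http://eat.spum.org/index.php?title=Navy_bean"/>
    <e:dstuff rdf:resource="http://eat.spum.org/index.php?title=Salt_pork"/>
  </rdf:Description>
</rdf:RDF>"""

# ElementTree spells namespaces as {uri}localname.
RDF_NS = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
E_NS = "{http://eatdrinkfeelgood.org/terms#}"

root = ET.fromstring(rdf)
stuff = [el.get(RDF_NS + "resource") for el in root.iter(E_NS + "dstuff")]
# stuff now holds the two ingredient URLs
```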
To be fair, I have never planned on using a wiki to store recipes. Now that I've played with it a bit I definitely like the idea of using the SMW tools, as a collaborative space, to reference and define ingredients and measures.
Say hello (in a couple days) to : http://eatdrinkfeelgood.org/terms#
{{Easy|like pie}}
Recipes are really tempting, though, because it's hard not to love the Wikipedia syntax. Its ruleset is easy to memorize and, better yet, lends itself to wild guesses. It can be written offline in nothing more than a text editor. The wiki-ness of the format makes it more forgiving of stupid mistakes and the wiki-ness of the storage makes it easier to fix those mistakes.
But : Exporting anything besides wiki-text out of Wikipedia remains a daunting prospect, at best. Daunting is probably unfair, given that any recipe markup will be a limited subset of all the possible syntax and easy enough to parse, although there appears to be a deficit of tools for creating custom dumps of data in Wikipedia.
I think I can tolerate using wiki-text as the principal storage medium for ingredients and measures. The problem with using it for recipes is that the markup is basically useless for doing anything with it outside of Wikipedia. Doing stuff in Wikipedia is also pretty limited, but that's mostly made up for by the fact that what little it does do, it does very well. Ultimately, I am not willing to pour all these stories into a twisty maze of database dumps and lost, or forgotten, backups. (Never mind upgrades to the underlying PHP code and interpreter.)
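As a sketch of just how parseable the recipe subset is (the {{ing|...}} template is the one from the example above; everything here runs on a hard-coded snippet of that markup), pulling the ingredient list out of the wiki-text is a one-liner :

```python
import re

# A fragment of the recipe markup from earlier in the post.
wikitext = """= Ingredients =
* 1 {{qt}} {{ing|navy bean}}s
* 3/4 {{lb}} {{ing|salt pork}}
* 2 {{ing|onion}}s
* 1 {{tbsp}} {{ing|salt}}"""

# Every ingredient is wrapped in an {{ing|...}} template, so a
# single non-greedy regex recovers the list regardless of the
# surrounding measures and quantities.
ingredients = re.findall(r"\{\{ing\|(.*?)\}\}", wikitext)
```

Which is the sense in which any recipe markup is "a limited subset of all the possible syntax" : you never need a full wiki-text parser, just the handful of templates you defined.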
I have started adding hooks to read and write semediawikitext in erdfg.py. If I can make it do reliable round-trips (import, export, edits, etc.) between a set of static (N3) files and a site running the SMW stuff, I will bundle up all the various templates, extensions and config tweaks and post them for your pleasure.
Easy.