Sunday, September 30, 2012

Pinned

I'm currently pinned.

There's a proposal draft I need to read and mark notes on, but I don't have a pen and the nearest one is three feet above me on a shelf.  I've read as far as I could, but then got to a section I really need to mark up, and I just can't go forward without making the marks I need to make, or I'll lose all the value of the read-through.

I could try to annotate the electronic copy on my laptop, but I'm typing this one-handed and it's full of typos, which you won't see because I'll clean them up before posting.  A MacBook Air, by the way, is a wonderful machine for a parent since it is so light and can be balanced on your chest, arm, whatever, without any problem.  But although I can use the machine, I sure can't produce content on it right now given my hilariously egregious current typo rate and terribly low words-per-minute while doing low-light upside-down one-awkward-finger hunt-and-peck.

Pinned, flat on my back, able to move legs and arms, but only so far.  Not the easiest position in the world, and if I could still sleep, it would be a fine time for it, but it's morning enough that I'm fully awake and no longer able to drowse.

Drowned my sorrows in blogs for a bit, but now my creative side is itching to get started with the day and do some science.  That's the thing, you know---I think you can't be a scientist without some level of obsession and inability to just let things alone and be content like a normal person. I certainly can't, and given a long enough time of stillness, I'm always going to start trying to create something---bring something of value into the world that wasn't there before, my own little strike against entropy and time. Not necessarily science, maybe just cooking dinner or organizing our room or watering my plants.

Not that I can do any of that right now.

But at least I can do something meaningful with my brain, more than playing zombie content consumer on my favorite blogs and web-comics.  And now finally I think I have a solution: this is an excellent time to try to catch up on the literature a bit---or at least plug my fingers in the dike. Not that it will be particularly easy to read, lying here upside down and holding my laptop up above me.

But hell if I'm going to wake the baby when she's decided to sleep sprawled on my chest like a cat.
Even if it does mean I'm pinned.

Sunday, September 23, 2012

Raising kids with science

Last week, I took my daughter Harriet to the IgNobel awards.  This was a terrible idea, of course, since she's only two months old, but in the spirit of the ceremony I figured that a terrible idea might just turn out to be great and went for it.  Fortunately, my decidedly risky reasoning turned out to be correct---she enjoyed some parts of the raucous performance (especially the opera), ignored most of the rest, drank quite a bit of formula, squawked loudly only once or twice, had one fast and discreet diaper change, and slept through the last fifteen minutes or so.  During the paper airplane barrage, the nice folks sitting near me formed a missile shield and deflected wayward planes that might have otherwise hit her.  But beyond all the parenting and silliness, I had some serious thoughts as well: sitting there in that theatre, listening to a celebration of the strangeness in science, made me think a lot about the question of raising kids with science.

I'm a scientist---as well you know from the tagline on this blog.  So's my wife, and our shared love of inquiry is one of the standing waves of our relationship.  So I have a feeling that my daughter is likely to either embrace science from the start, or to end up running screaming away as fast as she can.  So, how should I think about this as a parent?  What's the responsible way to approach this whole area of life?

Well, an important place to start is getting a clearer idea of what I mean by "science" in the first place.  The obvious starting point is that, yes, I do SCIENCE! for a living, and write papers and grants and take data and stuff.  But my study of the more obscure types of questions that makes up my career has had a backward effect on the rest of my life as well.  During graduate school, one of the most liberating lessons that I learned was that "I don't know" is a totally respectable answer---quite a relief for somebody who used to be an obnoxious know-it-all have-to-be-right kid in grade school.  You mean I don't have to stake my ego on having answers?  Later, one of the hardest struggles toward my thesis was staring at the pile of conjecture and mechanisms I was working with and asking how I could really justify what I thought I knew.

Those lessons I learned in graduate school boil down to two simple questions that I believe are the root of science, and they are eminently applicable to everyday life:
1) What do you actually know, and what do you not know?
2) How do you know what you know?
Once you know where you stand, the obvious and tempting extensions are "Let's go find out..." and "Would this help?"

Science, the profession, is simply about answering those questions in places where other people are also interested, where the answers are not yet known, and where finding the answers typically requires rare knowledge or equipment.  But you can practice the same things anywhere.  Does taking this shortcut actually help me get home faster at rush hour?  Should I pack my lunch or eat at the cafeteria?  Day care or nanny or stay-at-home parent? Get the baby her vaccinations on schedule?  None of this requires the trappings and ceremonial indicators of science, just a willingness to recognize that you may have bias in your preferences, to ask how you can test what makes sense, and then to go get the information.

Let me give an illustrative example---good scientific practice, giving the reader a cross-check of what's been said so far.  When Harriet was but a young fetus, we faced a common modern pregnancy dilemma: an expecting mother is supposed to eat lots of fish because it's jam-packed with Omega-3s and other Good Nouns, yet must limit her fish intake to one to two servings per week to avoid mercury.  What's a loving parent to do?  So as the designated reader of medical horror material, I went digging around to try to understand where the one to two servings limit was actually coming from, since it's cited everywhere but typically doesn't actually come with hard numbers about how much mercury is the actual recommended dose limit (unlike, say, caffeine, where the recommendations almost always come with milligram dose numbers).  It turns out, though, that with a little bit of Googling you can actually get hard per-species numbers directly from the FDA.  Taking canned tuna as a reference point (recommended 1 serving per week, 0.128 mean ppm), it quickly becomes obvious that the species lumped together into "two servings per week" vary wildly in their typical mercury load.  Herring is clearly at the right level (0.084 ppm), as is mackerel (0.050 ppm for Atlantic), but you can safely eat an order of magnitude more sardines or tilapia (both 0.013 ppm) and scallops until you're sick (0.003 ppm).  Moreover, since mercury is an accumulative toxin that is flushed out of your system over a long period of time, it's the mean rate of consumption that matters rather than the particular time period (again, unlike caffeine). That means that if you've had a week where you didn't eat fish, you can eat twice as much the next week with little worry.  So science showed us a clear way out of the dilemma: we just wrote down a list of all the seafood where appetite was a bigger limiter than mercury, taped the list to the fridge, and had our seafood without fear.  Science to the rescue, needing just a little math.
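In case you want to redo that little bit of math yourself, here's a quick back-of-the-envelope sketch in Python.  The ppm values are the FDA means quoted above; treating a serving of every species as the same size is my own simplifying assumption:

# How many servings/week of each species carry the same mercury dose
# as the 1 serving/week of canned tuna used as the reference point?
# ppm values are the FDA mean concentrations quoted above.
TUNA_PPM = 0.128  # canned tuna: the 1-serving/week baseline

species_ppm = {
    "herring": 0.084,
    "mackerel (Atlantic)": 0.050,
    "sardines": 0.013,
    "tilapia": 0.013,
    "scallops": 0.003,
}

for name, ppm in species_ppm.items():
    # Equal mercury dose, assuming equal serving sizes (an approximation).
    equivalent_servings = TUNA_PPM / ppm
    print(f"{name}: ~{equivalent_servings:.1f} servings/week")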

I give that example to show how thinking scientifically can help us sort through the blizzard of information that makes up normal daily life.  The science in that story is not about how the FDA got those numbers to put on its website, but the fact that we realized we didn't know why "two servings" was given as the magic number, went and found a reliable source of information (the FDA), and then solved our real problem ("What should we cook for dinner?") by turning that complex information into a simple list on the fridge.  So, in the putative words of Socrates,  "[I am wise] because I do not fancy I know what I do not know," or to quote another more recent philosopher: Science. It works, bitches.

Coming back to the original point...
Do I care if my daughter becomes a Scientist?
Not in the least.  But I care deeply that she understands the process of science, that for her it's as natural as breathing and as basic as talking.

What exactly that translates to in terms of actionable policy recommendations for parenting a particular sample size of one (viz: Harriet) is a subject of ongoing study, but at least I know what I'm trying to do as a parent...

Monday, September 17, 2012

A modest proposal for reviewers

Peer review is a necessary evil in the life of every scientist.  On the one hand, pretty much every meaningful paper you ever publish will go to a bunch of peer reviewers, including the infamous Reviewer #3, who always suggests more experiments.  On the other hand, a significant chunk of your professional service will be reviewing on program committees, reading some good papers and a lot of others that are painful messes where the authors clearly need to do more experiments.  On the third hand, sometimes you'll find yourself wrangling reviewers yourself, and trying to get the damned procrastinators to actually turn in their reviews so you can let the authors know whether their paper is being blessed with publication or cursed with rejection or a request for more experiments.

Journal papers are particularly bad in this regard, since there's no particular schedule on which the paper has to be accepted or rejected, and sometimes a paper can languish for more than a year in limbo, unable to be cited or even submitted elsewhere.  And what's happening during that time? Well, from my own experiences wrangling reviewers, half the time the editor is waiting to see whether the reviewers will actually do the reviews they promised or not.  See, as a reviewer you get told, "We'd like to have you review this paper, and you've got six weeks to do it," or some similarly long time.

Six weeks?  No problem!  There's got to be a time in the next six weeks when you'll be able to read this paper... and then other projects and deadlines intervene, and the time slips away, and you end up at the end of six weeks scrambling to find a time to actually give the paper its fair shot.  I'm pretty faithful about turning my reviews in on time, but some people definitely aren't.  So when I'm acting as an editor or program chair, I spend a lot of time cajoling and tearing my hair and trying to get somebody else to review at the last moment when a reviewer fails.  And as an author waiting for a response, I'm always wondering whether the reviewers are doing anything or not...

So here's my modest proposal for fixing peer review timing: if the reviewers are going to review at the last moment, why not bring that last moment much closer?  Why not give reviewers only a single week, no matter how massive a paper they're going to review?  Then the process of negotiating back and forth can start much earlier, and we can much more quickly toss out the reviewers who aren't going to review at all.  Everybody wins: authors get responses quickly, editors get their reviews back faster, and it will even lower the load on reviewers, since the editor no longer needs to recruit extra reviewers in case some fail.

So, dear reader, would you be in favor of such a fast-tracked world?

Thursday, September 13, 2012

Au Revoir, SASO


Today is the last day of SASO, the IEEE International Conference on Self-Adaptive and Self-Organizing Systems.  There are more workshops tomorrow, but I've been away long enough and I'm hopping on an early morning plane to go home to my wife and daughter.

It's been a good conference, not just for my personal aggrandizement as a scientist, but also for learning things and having good conversations with colleagues.  Overall, I had about a 30% hit rate on talks I was interested in---pretty high for a conference---and I'm taking away a couple of things I need to look into more, and some possible collaborations to continue.

Right at the end, I had the privilege of sitting on this conference's panel discussion, which focused on the topic of "New Research Directions."  My own slides were a subject of much discussion and debate, as they challenged people to spend more time focusing on the refinement of SASO material into reliable and reusable engineering building blocks.

The bit from the whole discussion that sticks best in my mind, however, was Mark Jelasity's declaration that SASO is "a place to send your rejected papers---but only the odd ones, not the bad ones."  I found that an apt description, and based on the discussion, I think a lot of other people did too.  The same ideas were reflected somewhat in my own slides---SASO, I find, is a place filled with people wrestling with exceedingly difficult problems, and looking outside of their own domains to find solutions.  It's hard and slow and produces a lot of false starts, but damn it's an interesting breed of science.

It's been a good conference, and now it's time to go home.  You may now expect this blog to go back to its normal weekly posting schedule.

From Lyon, good night.

Wednesday, September 12, 2012

Yesterday WAS a Good Day to Demo!


Just back from the SASO conference banquet, at which awards were handed out... and we won one!  Our Proto demo of self-stabilizing robot team formation won the "Best Demonstration" award.  We were cited, among other things, for being:
  • simple and easy to understand,
  • an excellent example of self-adaptation and self-organization, and
  • freely available for anybody to download and play with themselves.

Major kudos to Jeff Cleveland and Kyle Usbeck for the work they contributed to building the demo and also to ensuring we had a nice webpage and movie to show it off to best advantage.  Go and check it out for yourself!

Fast Demand Response: ColorPower 2.0


I'm quite happy to take the cap off of this one: at long last, the ColorPower 2.0 paper, Fast Precise Distributed Control for Energy Demand Management, is officially published, and I can put it up as well.  The pictures have been up before, in these two posts, and now you can learn all the key ideas about how we're doing our distributed energy management.

This builds on the prior work from Vinayak Ranade's thesis and paper in SASO 2010, where we showed that fast distributed control of energy demand was possible.  The controller we used in that paper was terrible, though, and we acknowledged that right there in the paper---it just wasn't the focus then, and we hadn't had a chance to study that aspect of the problem well.

Over the last year, however, first my colleague Jeff Berliner at BBN figured out the right representation for understanding the control problem, and then I was able to turn that into an algorithm to actually do the control correctly.  Together, and with the help of Kevin Hunter, we refined it into the shining gem presented in this paper: the ColorPower 2.0 algorithm (can you tell I'm excited about it?).  We simulated it at all sorts of scales, with all sorts of problems, and it always stands up well---and better, matches our theoretical predictions nicely too.  Plus the experiments produce beautiful-looking figures like these, showing the convergence and quiescence times of the algorithm for abrupt changes of target:
The bottom line: we've got a system that should be able to shape the energy consumption of millions of consumer devices in only a few dozen seconds.  Now we just have to get it out of the lab and into the field. Come talk to us if you want to use it, though, since it's also protected by patents...

Tuesday, September 11, 2012

Today is a Good Day to Demo


One of the things I always feel indebted to Jonathan Bachrach for is how pretty Proto looks.  When we were first developing the language together, he was the one who hacked together the original simulator with OpenGL, based on the previous work he'd done with multimedia processing languages.  So Proto's simulator got built by somebody to whom appearance really mattered, with an artist's touch and attention to detail.  And so, to me at least, the simulations we make look gorgeous, and I just love playing with them.

Well, today I got to show off my toys in the SASO demo session.  This spring, I put together a set of self-stabilizing algorithms for robot team formation. Kyle Usbeck and Jeff Cleveland then helped turn these into a nice demonstration---well, it was a contest entry originally, but SASO didn't get enough entries, so they just rolled us into the demo session.  In any case, the robots form up into little "snakes" for each team, and go crawling randomly around in 2D or 3D in wonderfully distracting colorful patterns.

Kyle and Jeff made a nice movie showing off the algorithms, and how they're resilient to pretty much any way you can think of to break them---adding robots, destroying robots, moving robots, changing goals and communication properties, etc.:


The upshot of all of this today is that I got to talk myself hoarse in front of a projector for two hours while lots of folks enjoyed the beauty of our simulation, and hopefully even got to understand a bit about Proto and the continuous space abstractions that made it all so easy to do.

If you want to play with this stuff too, feel free: you can read about it and download it all here.

Monday, September 10, 2012

How resilient is it anyway?

As engineers and scientists, we worry a lot about how well the things we build hold up.  Anything that goes out into the real world will suffer all sorts of buffets from unexpected interactions with its environment, strange behaviors by its users, idiosyncratic failures of components, and myriad other differences between theory and reality.  So we care a lot about knowing how resilient a system is, but don't currently have any particularly good way of measuring it.

Oh, there's lots of ways to measure resilience in particular aspects of particular systems.  Like if I'm building a phone network, I might want to know how frequently a call fails---either by getting dropped or failing to connect in the first place.  I might also measure how call failures increase when too many people crowd into one place (like a soccer match) or when atmospheric conditions degrade (like a thunderstorm) or when a phone goes haywire and starts broadcasting all the time.

But these sorts of measures leave a lot to be desired, since they only look at particular aspects of a system's behavior and don't have anything to say about what happens when we link systems together to form a bigger system.  That's why I'm interested in generic ways to measure the resilience of a system.  My hope is that if we can design highly resilient components, then when they're connected together to form bigger components, we will be more easily able to ensure that those larger components are resilient as well.

Even better is if we can get compositional proofs, so that we know that certain types of composition are guaranteed to produce resilient systems---just as there are compositions of linear systems that produce linear systems and digital systems that produce digital systems, etc.  This is the type of foundation that lays the groundwork  for explosions in the complexity and variety of artifacts that we can engineer, just like we've seen previously in digital computers or clockwork mechanical systems.  I want to see the same thing happen for systems that live in more open worlds, so that we can have an infrastructure for our civilization that helps to maintain itself and that can tolerate more of the insults that we crazy humans throw at it.

But first, small and humble steps.  In order to be able to even formulate these problems of resilience sanely, we need to better quantify what this "resilience" thing might mean.  In my paper in the Workshop on Evaluation for SASO at IEEE SASO, I take a crack at the problem, proposing a way to quantify "graceful degradation" using dimensionless numbers.  The notion of graceful degradation is an important one for understanding resilience, because it gets at the notion of margins of error in the operation of a system.  When you push a system that degrades gracefully, you start seeing problems in its behavior long before it collapses.  For example, on an overloaded internet connection that shows graceful degradation, things start going slower and slower, rather than going directly from fast communication to none at all.

In my paper, I propose that we can measure how gracefully a system degrades in a relatively simple manner.  Consider the space formed by all the parameters describing the structure of a system and of the environment in which it operates.  We break that space into three parts: the acceptable region where things are going well, the failing region where things have collapsed entirely, and the degraded region in between.


If we draw a line slicing through this space, then we get a sequence of intervals of acceptable, degraded, and failing behavior.  We can then compare the lengths of the acceptable intervals with those of the degraded intervals on their borders.  The longer the degraded intervals that separate acceptable from failing intervals, the better the system is.  So to find the weakest point of a system, we just look for the lowest ratio of degraded to acceptable length on any line through the space.
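To make that concrete, here's a minimal sketch of the computation along one sampled line through the parameter space.  Everything here is a hypothetical stand-in: classify() represents whatever model or experiment labels a point as acceptable, degraded, or failing, and uniform sampling is assumed so that run lengths are proportional to interval lengths:

from itertools import groupby

def weakest_ratio(points, classify):
    labels = [classify(p) for p in points]
    # Collapse the sampled line into runs (intervals) of identical labels.
    runs = [(label, len(list(group))) for label, group in groupby(labels)]
    worst = float("inf")  # stays inf if no degraded interval borders one
    for i, (label, length) in enumerate(runs):
        if label != "acceptable":
            continue
        # Compare this acceptable interval against its degraded neighbors.
        for j in (i - 1, i + 1):
            if 0 <= j < len(runs) and runs[j][0] == "degraded":
                worst = min(worst, runs[j][1] / length)
    return worst  # lower = sharper cliff between working and failing

# Toy example: a system that is fine below load 0.7, collapses above 0.9.
def classify(load):
    return "acceptable" if load < 0.7 else "degraded" if load < 0.9 else "failing"

print(weakest_ratio([i / 100 for i in range(101)], classify))  # ~0.29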

What this metric really tells us is how painful the tradeoff is between speed of adaptation and safety of adaptation.  The lower the number, the easier it is for changes to drive the system into failure before it can effectively react, or for the system to accidentally drive itself off the cliff.  The higher the number, the more margin for error there is.

So, here's a start.  There are scads of open questions about how to apply this metric, how to understand what it's telling us, etc., but it may be a good point to start from, since it can pull out the weak points of a system and tell us what they are...

Sunday, September 09, 2012

Enter SASO 2012


This week is the 6th annual SASO: the IEEE Conference on Self-Adaptive and Self-Organizing Systems.  If any conference is my "home conference" at the moment, this is probably it.  The attendees of this moderate-size (~100 people), single-track conference tend to be all over the map in terms of interests and applications, but there is one thing that clearly unites us: a dissatisfaction with the brittleness of ordinary complex systems engineering, and a desire to address it by making the systems smarter in some way.

It makes for a very diverse and rather messy conference, with a lot more proof-of-concept and early work than finished systems or grand results.  There's also a lot of folks out there in the greater scientific world with really flaky ideas about how to go about this type of resilient engineering---lots of magical thinking of the form "it smells kinda like Nature, and Nature is awesome, so it must be awesome too!"  The conference has tightened itself up quite a bit over time, though, and by now the quality of the papers is generally pretty good.  My Ph.D. advisor, Gerry Sussman, once told me that he judges the quality of a conference by how long he remembers something that he learned there, and in that sense SASO does quite well for me.

I'm much looking forward to SASO this year, the more so because I'm going to be quite busy talking about interesting things.  In particular, the highlights of the conference for me are:
  1. Monday, I will be talking about metrics for graceful degradation in the Workshop on Evaluation of SASO Systems.
  2. Tuesday, I will be chairing a session in the morning and giving a demo of our self-stabilizing robot team formation algorithms in the afternoon.
  3. Wednesday, I will be presenting my work on the ColorPower algorithm for distributed energy demand management.
  4. Finally, on Thursday, I'll be sitting on a panel on New Research Directions, talking about my views on the need for nature-inspired systems to move beyond one-off applications and toward the extraction of principles and laws.

And then there's colleagues to catch up with and other talks to see... it will be a busy week, and also my first time away from home without Harriet since she was born.  But fear not, dear reader, I shall salve my wounds and fatigue by writing some extra blog posts about all of the cool things I get up to this week.

Monday, September 03, 2012

Pretty BioCompiler GRN Diagrams

One of the pieces of work I'm rather proud of is my Proto BioCompiler.  Back in 2008, as I was hanging around at the synthetic biology lunches at MIT, I realized that there was a nice tight mapping between the genetic regulatory network diagrams that the biologists were drawing and the dataflow computation graphs that I was using to express the semantics of Proto.  Basically, in biology you can represent a computation with a bunch of reactions evolving in parallel over continuous time, with the products of one reaction being used as the inputs for others.  Similarly, in Proto my model of computing had a bunch of operations evolving in parallel over continuous space and time, with the outputs of one operation being used as the inputs for others.
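To make that parallel concrete, here's a toy sketch of two "reactions" evolving in parallel in continuous time, with the output of one feeding the input of the other---exactly the dataflow picture.  The Hill-function model of repression is the standard textbook one, but all the constants here are made up for illustration:

def repression(input_level, k=1.0, n=2):
    # Hill-function repression: output falls as the repressor rises.
    return 1.0 / (1.0 + (input_level / k) ** n)

def step(levels, dt=0.01):
    a, b = levels
    # Both levels evolve in parallel; each decays at unit rate.
    da = 1.0 - a            # constitutive production of A
    db = repression(a) - b  # A's level is the input repressing B
    return a + dt * da, b + dt * db

levels = (0.0, 0.0)
for _ in range(1000):  # Euler-integrate out to t = 10
    levels = step(levels)
print(levels)  # A settles near 1.0, B near repression(1.0) = 0.5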

So after giving a couple of exploratory talks in the lunch meetings, I wrote up the ideas in this paper in the first spatial computing workshop, and then went ahead and actually built the thing a couple years later.  We've been refining it ever since, and the lovely thing about the BioCompiler is that it takes all of the guesswork out of designing biological computations.  Most of the really effective genetic regulation that we can engineer with right now is repression, which means that designs are typically implemented in negative logic.  For example, "A and B" might be implemented as "not either not A or not B", which is just much harder for us poor humans to think about, and means that, for me at least, anything over a few regulatory elements is almost certain to contain mistakes in the first design.  When you're dealing with biological experiments, where it can be weeks from design to the very first test, you really don't want to get it wrong the first time.
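To see why negative logic is such a headache, here's a toy sketch in plain Python, with repressors standing in as NOT and NOR gates (the mapping to actual genetic parts is, of course, loose):

# Toy illustration of why repression-only (negative logic) designs are
# hard to think about: a repressor acts as NOT, and two repressors
# converging on one promoter act as NOR.
def NOT(a):
    return not a

def NOR(a, b):
    return not (a or b)

def and_positive(a, b):         # what we actually want to say
    return a and b

def and_negative(a, b):         # what a repressor-only design forces on us:
    return NOR(NOT(a), NOT(b))  # "not either not A or not B"

# Sanity check over all inputs: the two designs compute the same function.
for a in (False, True):
    for b in (False, True):
        assert and_positive(a, b) == and_negative(a, b)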

So the BioCompiler lets us just sidestep that whole mess: you write the program in normal red-blooded American positive logic, and it turns it all inside out on its own and then optimizes.  It won't work for every program, but for the range that it knows how to work in, it's beautiful.  In an instant, out of the BioCompiler comes the new design and also a simulation file for testing it in silico.

The only problem is that the output of the BioCompiler looks like this:

BioCompiler's supposedly "human-readable" output for an XOR circuit


or, worse, like this:

Just the top bit of the machine-readable XML generated by BioCompiler for an SR-latch circuit


Bleah.  Sure, there's a genetic regulatory network buried in there, but to actually communicate it to another human we need to turn that soup into a diagram, and that's not only a pain but another really good way to introduce errors into the works.

No more.  I recently sat down with the manual for GraphViz and figured out how to build complicated custom nodes, with which a proper diagram can be drawn.  Then I hacked a new output option into the BioCompiler for creating GraphViz diagrams.  The result is still a bit rough on the artistic front, but it's intelligible and can be turned into an SVG file for editing.  Booyah!
Diagram of a circuit for computing ordering relations
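For the curious, here's a hypothetical sketch of the flavor of that output option---emitting GraphViz DOT with a tee arrowhead for repression, the usual convention in regulatory network diagrams.  This is illustrative only, not the actual BioCompiler code:

def grn_to_dot(regulations):
    """regulations: list of (regulator, target, kind),
    with kind in {"represses", "activates"}."""
    lines = ["digraph grn {",
             "  rankdir=LR;",
             "  node [shape=box, style=rounded];"]
    for regulator, target, kind in regulations:
        # Repression is conventionally drawn with a tee arrowhead.
        arrow = "tee" if kind == "represses" else "normal"
        lines.append(f'  "{regulator}" -> "{target}" [arrowhead={arrow}];')
    lines.append("}")
    return "\n".join(lines)

# Example: A and B via repressors, as in the negative-logic discussion above.
print(grn_to_dot([
    ("A", "notA", "represses"),
    ("B", "notB", "represses"),
    ("notA", "output", "represses"),
    ("notB", "output", "represses"),
]))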