
ToDo or ToodleDo, that is the question

Two "todo" apps are vying for my iPhone's heart. Here's how I decided on a winner.

I like PDAs because they help me manage the things I have to do - and I'm all about the "todo" lists. I don't know if I've become dependent on lists because I have a bad memory, or if my memory is failing because I use lists for everything.  Still, it is what it is.

Over the past year or so, a number of todo apps have come out for my beloved iPhone, and I've been trying most of them. It's surprising how I keep coming back to the same two apps, and equally surprising (to me) that after months of playing around with them, I still can't quite decide which one I prefer.

The two apps are Appigo's ToDo and ToodleDo for the iPhone. Both cost only a few dollars, and both are very well-rated by the public at large.

So, I figured, let's use some design analysis tools to evaluate the two apps, and see what the numbers say.

I'm going to use two tools: pairwise comparison, and a weighted decision matrix. These tools aren't only useful for analyzing designs - they're basic decision-making tools, and they've always done right by me when evaluating designs, conceptual or otherwise.

Both tools depend on having a good set of criteria against which the two apps will be compared. You might not know what decision to make, but you need to know how you'll know that you've made the right one. In our case here: How do I know when I've found a good todo app?

The formal term for what I'm doing here is qualitative, multi-criterion decision-making. It generally involves four tasks, which in my case are:
  1. Figure out criteria that apply to any "best" todo app.
  2. Rank the criteria by importance, because the most important criterion will affect my decision more than the others.
  3. Develop a rating scale to rate each app.
  4. Rate the apps with the rating scale and the weights.
Here are my criteria, in no particular order of importance, based on years of using other task management tools:
  • Fast. No long delays when telling the app to do something.
  • Easy. Minimal tapping (e.g. no hitting "accept" for everything and no burrowing into deeply nested forms and subforms).
  • Repeats. Repeating items at regular intervals.
  • Priorities. At least three levels of priority for tasks.
  • Checkoff. One-touch checking off of done items.
  • Backup. Easy backup (or sync) to some remote server that is fairly robust, using standard formats.
  • Groups. Group items by tag or folder or project or whatever.
  • Sorting. Multiple ways to sort items.
  • Hotlist. Some overview page showing only near-term, important items.
  • Restart. Picks up next time I run it where I left off last time (oddly, not every iPhone app does this).
  • Recovery. Uncheck items that were accidentally checked off.
  • Conditional deadlines. Due dates based on due dates of other items (e.g. task B is due two weeks after task A is completed).
  • Links. Link an item to a folder of other items.
Oddly, not a single iPhone app I've checked out so far meets all my requirements.  In particular, I've not found any apps that even try to meet the last two requirements. I say "oddly" because I don't think these requirements are excessive. Still, there it is.

Next, we have to develop weights to assign relative importance to the criteria. The word relative is key here; we're not going to say that one criterion is certainly and universally more important than any other. What I want to know is how important each criterion is relative to the others, given my own experience. Remember, one size never fits all.

This is where pairwise comparison comes in. Details on how this works are given in another web page (it ain't hard).  The chart below shows just the end results.  In each cell is the criterion that I thought was the more important of the pair given by that cell's row and column. Since it doesn't make sense to compare something to itself, and since these comparisons are symmetric (comparing A and B is the same as comparing B and A), I only need to fill in a little less than half of the whole chart.  If you're thinking this took a long time, you'd be wrong. It took me about 15 minutes to fill in the whole thing.

                 | Fast | Easy | Repeats | Priorities | Checkoff | Backup | Groups | Sorting | Hotlist | Restart | Recovery | Cond. Deadlines | Links
Fast             |  -   | Easy | Repeats | Priorities | Fast     | Fast   | Groups | Sorting | Hotlist | Fast    | Fast     | Cond. Deadlines | Fast
Easy             |      |  -   | Repeats | Priorities | Easy     | Easy   | Groups | Sorting | Easy    | Restart | Easy     | Easy            | Easy
Repeats          |      |      |    -    | Repeats    | Repeats  | Repeats | Repeats | Sorting | Repeats | Repeats | Repeats | Cond. Deadlines | Repeats
Priorities       |      |      |         |     -      | Priorities | Backup | Groups | Sorting | Priorities | Priorities | Recovery | Priorities | Links
Checkoff         |      |      |         |            |    -     | Backup | Groups | Sorting | Hotlist | Checkoff | Checkoff | Cond. Deadlines | Links
Backup           |      |      |         |            |          |   -    | Backup | Sorting | Backup  | Backup  | Backup   | Backup          | Backup
Groups           |      |      |         |            |          |        |   -    | Sorting | Hotlist | Groups  | Groups   | Groups          | Groups
Sorting          |      |      |         |            |          |        |        |    -    | Sorting | Restart | Sorting  | Sorting         | Links
Hotlist          |      |      |         |            |          |        |        |         |    -    | Hotlist | Hotlist  | Hotlist         | Hotlist
Restart          |      |      |         |            |          |        |        |         |         |    -    | Restart  | Cond. Deadlines | Links
Recovery         |      |      |         |            |          |        |        |         |         |         |    -     | Cond. Deadlines | Links
Cond. Deadlines  |      |      |         |            |          |        |        |         |         |         |          |        -        | Cond. Deadlines
Links            |      |      |         |            |          |        |        |         |         |         |          |                 |   -
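
In case you're wondering how a chart like this becomes a set of weights, here's a minimal Python sketch of one common way to do it (this is my own sketch, not code from that other page; the criteria list and the winners dictionary are just how I chose to represent the chart): tally how many of its pairs each criterion wins, then take each tally as a share of all the comparisons made. With the full chart entered, this reproduces the percentages listed below, after rounding.

```python
from itertools import combinations

criteria = ["Fast", "Easy", "Repeats", "Priorities", "Checkoff", "Backup",
            "Groups", "Sorting", "Hotlist", "Restart", "Recovery",
            "Cond. Deadlines", "Links"]

# winners[(a, b)] is whichever of a and b I judged more important.
# Only a few pairs are shown here; the full chart above has 78 of them.
winners = {
    ("Fast", "Easy"): "Easy",
    ("Fast", "Repeats"): "Repeats",
    ("Fast", "Priorities"): "Priorities",
    ("Fast", "Checkoff"): "Fast",
    # ... and so on for the rest of the upper triangle ...
}

# One point per pairwise win.
wins = {c: 0 for c in criteria}
for pair in combinations(criteria, 2):
    if pair in winners:
        wins[winners[pair]] += 1

# A criterion's weight is its share of all the comparisons actually made.
total = sum(wins.values()) or 1
for c in criteria:
    print(f"{c:16s} {wins[c] / total:5.1%}")
```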

This leads to the following weights:

Fast: 6%
Easy: 9%
Repeats: 13%
Priorities: 8%
Checkoff: 3%
Backup: 10%
Groups: 10%
Sorting: 13%
Hotlist: 9%
Restart: 4%
Recovery: 1%
Cond. Deadlines: 8%
Links: 6%

So this tells me that I think having repeating tasks and good sorting of items are the two most important criteria.

The point of this process is that the human mind is not good at juggling a bunch of variables, but it is very good at comparing one thing against another. Take the trivial case of choosing between three alternatives, A, B, and C. If you prefer A to B, and B to C, then you should accept the logic that A is the most preferred item.  To do otherwise just isn't rational.  That's exactly what pairwise comparison does. And there's good evidence that this technique actually works.

The next step is to choose a rating scale.  This scale will be used to rate each app with respect to each criterion.

There's a variety of scales I could use, and a great deal of research into qualitative measurement scales has been done.  The scale that works best for me - and seems to be the most general - is a five-point scale from -2 to +2, where 0 means "neutral," -2 means "horrible," +2 means "excellent," and -1 and +1 are in-between values.  If you prefer something a little finer, you can use a 7-point scale from -3 to +3.  I think it's important to have  a zero value to indicate neutrality, and I find it meaningful to have negative numbers stand for bad things and positive numbers for good things.

It's interesting to note that in some industries (e.g. aerospace), I've noticed a tendency to use an exponential scale - something like (0, 1, 3, 9).  This is because aerospace people tend to be extremely conservative (for reasons both technical and otherwise), so they tend to underrate the goodness of things.  This scale inflates any reasonable rating to make up for that conservatism.

But I'm neither an aerospace engineer nor particularly conservative, so I'll use the -2 to +2 scale.

Now we can do the weighted decision matrix. The gory details are given elsewhere.  The weights come from the pairwise comparison above.  In a decision matrix, we rate each alternative against some well-defined reference or base item.  We need a reference because we need a fixed point against which to measure things.  If we were evaluating design concepts, none of them would be suitable as a reference, since a "concept" design is not well-defined.  In this case, we're evaluating two existing apps, so we can choose either one of them as the reference.  For no particular reason, I'll use ToDo.

I worked up a weighted decision matrix comparing ToodleDo to ToDo.  Here it is:

Criterion        | Weight | ToDo (ref.) rating | ToDo (ref.) score | ToodleDo rating | ToodleDo score
Fast             | 0.06   |  0 |  0 |  0 |  0
Easy             | 0.09   |  0 |  0 | -1 | -0.09
Repeats          | 0.13   |  0 |  0 |  0 |  0
Priorities       | 0.08   |  0 |  0 |  0 |  0
Checkoff         | 0.03   |  0 |  0 |  0 |  0
Backup           | 0.10   |  0 |  0 | -1 | -0.10
Groups           | 0.10   |  0 |  0 |  0 |  0
Sorting          | 0.13   |  0 |  0 |  1 |  0.13
Hotlist          | 0.09   |  0 |  0 |  1 |  0.09
Restart          | 0.04   |  0 |  0 |  0 |  0
Recovery         | 0.01   |  0 |  0 |  0 |  0
Cond. Deadlines  | 0.08   |  0 |  0 |  1 |  0.08
Links            | 0.06   |  0 |  0 |  0 |  0
Total            |        |    |  0 |    |  0.11

This table might not look like much, but it tells a bit of a story.  ToDo is the reference, so I've given it zeros in every category.  That way, when I compare ToodleDo to it, a positive number means it beats ToDo and a negative number means it's worse than ToDo.  Obviously, they're very close to one another.

If you look at the ratings for ToodleDo, you see that it's a bit better than ToDo on some points, and a bit worse on others.  But the +1's don't actually cancel out the -1's because of the weights.  The criteria on which ToodleDo beat ToDo are more important to me than the others, because the weights are higher.  That makes ToodleDo just a little bit better than ToDo.
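
For anyone who'd rather see the arithmetic spelled out, here's a minimal Python sketch of the weighted sum behind the table above. The weights and ratings are the ones from the tables; the dictionary and function names are just mine.

```python
# Weights from the pairwise comparison, and my ratings of ToodleDo
# relative to the reference (ToDo) on the -2..+2 scale.
weights = {
    "Fast": 0.06, "Easy": 0.09, "Repeats": 0.13, "Priorities": 0.08,
    "Checkoff": 0.03, "Backup": 0.10, "Groups": 0.10, "Sorting": 0.13,
    "Hotlist": 0.09, "Restart": 0.04, "Recovery": 0.01,
    "Cond. Deadlines": 0.08, "Links": 0.06,
}
toodledo = {
    "Fast": 0, "Easy": -1, "Repeats": 0, "Priorities": 0, "Checkoff": 0,
    "Backup": -1, "Groups": 0, "Sorting": 1, "Hotlist": 1, "Restart": 0,
    "Recovery": 0, "Cond. Deadlines": 1, "Links": 0,
}

def weighted_score(ratings, weights):
    # Each rating is scaled by its criterion's weight, then everything is summed.
    return sum(weights[c] * ratings[c] for c in weights)

print(round(weighted_score(toodledo, weights), 2))  # 0.11: slightly better than the reference
```

The reference app always scores zero by construction, so the single number that comes out is ToodleDo's margin over ToDo.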

And that jibes nicely with my intuition.  I got ToDo first, and enjoyed it.  But ever since I got ToodleDo, I've preferred it.  Every once in a while, I switch back to ToDo, but it never lasts very long.  And up until I did this decision matrix, all I had was a vague intuition that ToodleDo was better for me; now, I actually have an explanation.

But there's a problem.  ToDo handles repeating events internally; that is, when I check off the current instance of a repeating event, ToDo immediately creates the next one in the series.  ToodleDo, on the other hand, generates subsequent repeating events only when you sync the app with the ToodleDo website.

This is a problem for me when I travel.  I was in Berlin recently, for a conference.  And I don't have a data plan for my iPhone (that's a whole separate story), so I couldn't sync either app.  But that means ToodleDo  couldn't roll repeating items over properly.  So before I went to Berlin, I sync'd up ToDo and used it while I was gone.  When I came back, though, I switched back to ToodleDo.  When I go to Sweden at the end of March, I'll be using ToDo again.

Does the evaluation consider that?  No, it doesn't, because I didn't.  The evaluation is only as good as the evaluator.  When I evaluated the two apps, I was nestled snugly at home, WiFi at the ready - and sync'ing either ToDo or ToodleDo is a non-issue there.  If I'd done the evaluation in Berlin, I'm sure I'd have gotten different numbers, because the repeating events problem would have been right there in my face.

So this underscores a limit of the evaluation method - indeed, a limit of any method: it's only as good as the situation you're in when you use it.  Some people might say a method is only as good as the information you use, but it's more than that.  My situation, in this case, includes me, my goals (at the time), my experiences, all the information I have handy, constraints, and anything else that could possibly influence my decisions at the time.

The problem, then, is that a method depends on the situation in which it's used.  But that situation may be different for the person doing the evaluation than for the person(s) who will have to live with the decision being made.  Indeed, it's virtually guaranteed that the situations will be different, if for no other reason than that the implications of a decision only play out later.

Does this put the kibosh on these kinds of methods?

Not at all.  It just means that we must be vigilant and diligent in their application.  If I had done the evaluation in Berlin, ToDo would have won, because in that situation, ToodleDo would have scored poorly on repeating events.  This is as it should be.  It means that in each of the two situations, the method worked.  The problem is that in any one given situation, there's no way to take into account any other situations.

Happily, there is fruitful and vigorous research concerned exactly with this.  Some people call it situated cognition; others call it situated reasoning.  We've not yet figured out how to treat situations reliably, but I think it's only a matter of time before we do.

In the meantime, there is at least one other possible way to treat other situations.  A popular technique to help set up a design problem is the use case (or what I call a usage scenario).  These are either textual or visual descriptions of the interactions involved in using the thing you'll design.  They can be quite complex and detailed.  Usage scenarios try to capture a specific situation other than the one that includes the designers during the design process.  So it's at least possible that usage scenarios could help designers evaluate designs and products better.

One final caveat: this evaluation is particular to me.  It is unlikely that anyone will agree completely with my evaluation, because their situations are different from mine.  So I'm not saying ToodleDo "is better" than ToDo.  I'm just saying it seems to be better for me.

As they say: your mileage may vary.

COMMENTS

  1. Very nice post, albeit more of a tutorial on how to properly set up and conduct a comparative review than an actual review of the two productivity apps :o)

  2. True. But I wanted to give readers the chance to run their own "analysis" taking into account their own interests and personal characteristics. While I came to one conclusion, there's no reason why others would reach the same conclusion running the same type of analysis.

  3. I just read your evaluation of ToDo and ToodleDo, because I was wondering about the same things. I appreciate how you articulated and expressed the subject. I feel like I am not alone anymore; there is someone else in the world jumping back and forth between these two apps.
    How do you feel about the new ToDo online sync? I love it, and I now prefer ToDo because of the online sync. It works pretty well.

  4. I haven't tried the new ToDo sync service. No need. I only use it as a backup medium, and the free Toodledo service is enough for me. I know Toodledo's free service doesn't capture some relationships that ToDo can support (like checklists) but it does capture all the tasks - even tasks in checklists. And that's enough for me.
    Remember, I'm a minimalist about these things.
    You might enjoy other posts of mine about productivity at my dedicated blog: http://dofastandwell.blogspot.com/

    Cheers.
    Fil

  5. A very good explanation. Congratulations!!!

