Proudly South African, Proudly Mobenzi

September 15, 2010  |  by Mark  |  Features, Pilot Project  |  8 comments

Travelling to KwaNyuswa through the Valley of a Thousand Hills was a journey that taught me a vital life lesson: there is a solution to almost every problem, and finding that solution can make you proud.

The reason for my expedition into this rural community was to meet the local Mobenzi agents who participated in the application’s pilot launch, and to find out more about the impact it has made on their lives.

I expected to hear how happy these 20 agents were to have found employment, how delighted they were to work from home without having to struggle for transport, and how excited they were to be surrounded by media. But what they shared surprised me.

All 20 agents were filled with resounding pride.

They were proud to be involved with pioneering technology like Mobenzi, proud that they not only had jobs but were their own bosses, and proud to be learning about things they had never dreamed of having the opportunity to learn.

Data analysis, critical thinking, working with internet applications like Twitter, learning new business terms and phrases, and being the brains behind tasks are just a few of the ‘business life skills’ these trailblazers now boast.

Going back to the lesson I learnt about how finding solutions makes you proud: Mobenzi itself was born out of the search for a solution to unemployment. Mark Fowles, a director and partner of Clyral, explained how.

‘We started by looking at our country’s horrific unemployment rate as an opportunity, not only to make a difference socially, but also to create a valuable business. The driving idea was that there must be certain types of business problems that normal South African people could solve using their cell phones as tools. To prove the concept, we started building the software and the result was Mobenzi. Agents can do the tasks in their spare time, using their own phones, without the need for transport. And on the other side of the coin, Mobenzi is providing exciting opportunities for businesses in need of real human input,’ said Fowles.

In explaining this, Mark’s smile was just as broad as those of the Mobenzi agents.

And then, the second solution came in the form of the agents themselves. Their problem was unemployment: being part of the devastating 65 per cent of South Africans under the age of 35 who are out of work. They took on something new and foreign to them and grabbed the opportunity to learn. They found their solution to fighting poverty.

All involved have every reason to be proud. Kudos to the entire Mobenzi team.

Janay Manning
Proud Mobenzi Supporter

Unexpected insight from Mobenzi agents

June 29, 2010  |  by Mark  |  Features, Pilot Project, Press  |  4 comments

During a press event we held recently, we had a chance to gather qualitative feedback from some of our agents about Mobenzi and what it means to them. At the time, the majority of the tasks they were completing involved analysing the sentiment of tweets about a few prominent South African brands. We were quite surprised to get so much feedback about how the nature of the work itself seems to have a positive impact on agents’ lives.

Nokhuthula Njoko said that she feels empowered by being involved with Mobenzi.

“It helps me focus on the work I’m doing, knowing that for each task I complete I will be paid. I’ve learnt to be a critical thinker and enjoy the challenge of learning new words and abbreviations in different languages and in business terms. I’ve even made myself a book where I write down all the words I don’t understand, then later find their meanings. Mobenzi has given me employment, empowerment and an education in the business world”.

Trevor Ngcobo said being a Mobenzi agent enables him to study and still work at his own leisure.

“I never have to worry about transport problems, being late for work or not having time to attend college. I can make money, study and even do my Mobenzi tasks in a taxi on my way to lectures. It’s helped me in more ways than I thought when I first started”.

Msizi Phewa relishes the fact that he can tell his peers that he ‘works on the internet and analyses data’.

“It makes me feel so important when I tell people that I work with analysing information from social networking sites like Twitter and Facebook. And because it’s something you can do in your spare time and you are paid for it, your mind is not focused on distractions of drugs, alcohol and hanging out with unproductive and negative people on the streets. I want the world to eventually be plugged into Mobenzi so that we can have an entire planet of productive people”.

It was great to hear such positive and interesting feedback.

Launching the next phase of the Mobenzi pilot

June 18, 2010  |  by Mark  |  Features, Homepage, Pilot Project, Press  |  12 comments

On May 26th we invited some representatives from the press to the launch of the next phase of our Mobenzi pilot project in KwaNyuswa (Valley of a Thousand Hills, KZN).

During our two-week trial run in December 2009, agents completed Mobenzi tasks using our company-owned phones, under supervision and together at a central location. Since May 26th, however, a group of agents has been working independently as private contractors to Mobenzi.

These are some of the major factors that make the launch of this phase of the pilot a significant step forward:

  1. Agents are now working in their own time, requesting batches of tasks whenever they have a few minutes spare.
  2. They complete tasks while at home, travelling on public transport or even between lectures at college.
  3. Most of the agents are using their own mobile phones after having installed the Mobenzi application from a link we sent to them.
  4. With each task that agents complete, associated credit is built up in their account. Once credit reaches a certain threshold, funds are disbursed electronically to their phones using FNB’s SendMoney platform (a rough sketch of this flow follows the list). Although some agents had to borrow our company phones, many have already earned enough income from Mobenzi to purchase their own brand-new compatible Nokia phones.
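
To make the credit-and-disbursement flow in point 4 concrete, here is a minimal sketch in Python. The threshold, per-task reward and send_money() stub are hypothetical placeholders; the real integration with FNB’s SendMoney platform is not shown.

```python
# Illustrative sketch of the credit/disbursement flow described above.
# The threshold, task reward and send_money() stub are hypothetical and
# do not reflect Mobenzi's actual values or FNB's SendMoney API.

PAYOUT_THRESHOLD_ZAR = 50.00  # hypothetical minimum balance before a payout


class AgentAccount:
    def __init__(self, phone_number: str):
        self.phone_number = phone_number
        self.balance = 0.0

    def credit_task(self, reward: float) -> None:
        """Add the reward for one completed task to the agent's balance."""
        self.balance += reward

    def maybe_disburse(self) -> float:
        """Pay out the full balance once it reaches the threshold."""
        if self.balance >= PAYOUT_THRESHOLD_ZAR:
            amount = self.balance
            send_money(self.phone_number, amount)  # placeholder for the real transfer
            self.balance = 0.0
            return amount
        return 0.0


def send_money(phone_number: str, amount: float) -> None:
    # Placeholder: in production this would call the mobile-money provider.
    print(f"Disbursing R{amount:.2f} to {phone_number}")


# Example: an agent completes tasks worth R2 each until a payout is triggered.
account = AgentAccount("+27 82 000 0000")
for _ in range(30):
    account.credit_task(2.00)
    account.maybe_disburse()
```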

These changes in the way the pilot is being run are allowing us to test the scalability of the concept. We can now manage recruitment of new agents, assignment of tasks, monitoring of quality and disbursement of funds all from our central office.

With this platform in place, the only factor that can limit Mobenzi’s growth is the demand from businesses for the services of our agents.

About the participants and what they thought of Mobenzi

January 5, 2010  |  by Mark  |  Features, Pilot Project  |  4 comments

Our first pilot project for Mobenzi ended on December 4th 2009 and on the final afternoon we assigned a survey to the participants’ phones to find out information about them as well as their thoughts on the pilot.

Although we had 25 participants in the pilot, 2 members of the team were not present on the Friday afternoon. The following statistics are therefore based on the remaining 23 team members who completed the self-administered survey using the Nokia 3120 mobile phones we provided for the pilot.

Age, Gender and Language

The 25 pilot participants were all from the local community of KwaNyuswa. The average age of the team members was 24 and there was an even gender split. Each of the participants had completed grade 12 and could speak fairly good English. Their first language is isiZulu, but each of them had studied English as a second language.

Employment history

17 of the participants (70%) had never had a full-time job at the time of running the pilot. A few participants had part-time jobs but were able to make the 5-hour sessions each morning.

Household information

The 17 participants who were willing to answer questions about their households had, on average, 7 people living permanently at home. 16 homes had stoves (94%), 14 had running water (82%), 14 had a television (82%), only 10 owned a fridge (60%) and none of the households owned a motor vehicle.

Mobile phone usage

19 of the 23 participants (82%) owned their own mobile phone (53% Nokia, 21% Samsung, 16% LG). Most participants (60%) had used MXIT (a mobile instant messaging client) in the month preceding the pilot. 9 team members (40%) had used their phones within the last month to browse the web and download pictures, music or games. The average airtime expenditure per person over the preceding 3 months was R100 per month.

Demand for mobile tasks

If employed full time in another position, the participants said on average that they would probably like to do Mobenzi tasks for about 3.5 hours per weekday to subsidise their other income. If working only part time in another position, the desired commitment increased to 5.5 hours. Over weekends the average expected commitment was 10 hours (including Saturday and Sunday). This works out at between roughly 27 and 37 hours per week (3.5 × 5 + 10 = 27.5 hours, up to 5.5 × 5 + 10 = 37.5 hours). 5.5 hours of concentrated work is probably the ceiling for how much time someone could spend doing Mobenzi tasks in a single day.

Everyone agreed that most Mobenzi tasks would be completed at their homes, but most participants also mentioned they would probably complete tasks while on public transport (buses and taxis) and while walking around the local community.

Thoughts on Mobenzi

The main thing participants said they liked about Mobenzi was that the work was interesting and entertaining. Only one person answered that the work was boring. The biggest challenge the team raised was that some classification tasks were ambiguous, and deciding on the most appropriate answer was sometimes very difficult.

Fatigue was a problem for some participants, who mentioned that their hands started hurting by the end of the day or that they battled to concentrate for so long (we ran the pilot for about 5 hours each day, with short breaks every hour and a longer break for lunch).

The participants were generally very excited about Mobenzi. Some of their comments are included in a related article: Feedback from pilot participants about mobile tasks

Pilot Project Summary: Creating jobs using mobile phones in an African township

January 5, 2010  |  by Mark  |  Features, Homepage, Pilot Project  |  14 comments

For two weeks, from November 20th to December 4th 2009, we conducted a pilot project in the Valley of a Thousand Hills in South Africa. We hope that this project will lead to a revolutionary new service that will create a new type of job for thousands of underprivileged people.

About Mobenzi

Mobenzi is a software service that enables people to earn rewards for completing simple tasks on their mobile phones. These tasks involve certain types of problems that are difficult for a computer to solve without assistance from a real person, even someone without expert knowledge of the problem.

Find out more about how Mobenzi works

Purpose of the pilot project

For two weeks we equipped pilot participants with the Mobenzi software application installed on standard mobile phones to assess whether they could effectively complete simple business tasks using only their phones.

These were some of the guiding questions we were attempting to answer during the pilot:

  • Is the concept easy to understand?
  • Is the technology easy to use?
  • What types of tasks are feasible?
  • What types of people are most suitable for doing Mobenzi tasks?
  • What is the best way to present a given task to an agent?
  • How long does it take to complete different types of tasks?
  • What quality should be expected in the results of completed tasks?
  • What issues are involved that may affect attrition rates (fatigue, boredom, etc.)?
  • Could the service grow through viral expansion (can participants teach each other)?
  • Based on other findings, what are the financial implications with regard to agent remuneration and the cost of the service to organisations?

Project location and venue

A view of the Valley of a Thousand Hills, and the view from the Light Providers community centre.

We ran the pilot project from the Light Providers community centre in KwaNyuswa. The area lies on the outskirts of urban development, west of the Inanda Dam, about 40 minutes outside of Durban in KwaZulu-Natal, South Africa. It is one of the largest of the various tribal authorities that make up the Valley of a Thousand Hills.

Due to the severe unemployment in the region and our close proximity to the area (only 14km from our office), we selected KwaNyuswa as the location for our pilot project.

Format of the pilot

We started the first week of the pilot with 5 participants who would later act as mentors when 20 new recruits joined them for the second week. We spent the first week testing various types of human intelligence tasks and discussing how well participants understood both the mobile application and the various types of tasks themselves.

During the second week we had more participants to help work through large sets of tasks. We assigned various types of tasks and recorded completion times and responses for everyone so that we could crunch the data to assess which factors affect quality and efficiency.

We focused on text-based human intelligence tasks

We decided to focus on “Text to Form” tasks for the pilot project. These types of tasks involve extracting structured data from free text.

Some examples of this type of task include:

An example of a simple task to assist in sorting sms survey responses.

For all of these tasks, we displayed a short instruction for the task, followed by the content (such as an SMS or a tweet) and then a series of questions about the content (such as whether the SMS included a person’s name). The participant worked through each task one step at a time.
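
As a rough illustration of this structure, a “Text to Form” task could be represented as an instruction, a piece of content and an ordered list of questions, as in the minimal Python sketch below. The field names and the sample SMS are illustrative only, not Mobenzi’s actual schema.

```python
# Minimal sketch of a "Text to Form" task: an instruction, a piece of free
# text, and a series of questions the agent answers one step at a time.
# Field names and the sample SMS are illustrative, not Mobenzi's schema.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Question:
    prompt: str
    options: List[str]  # e.g. ["Yes", "No"] for a classification step


@dataclass
class TextToFormTask:
    instruction: str
    content: str  # the SMS, tweet, etc. being structured
    questions: List[Question] = field(default_factory=list)


task = TextToFormTask(
    instruction="Read the SMS below and answer the questions that follow.",
    content="Hi, my name is Thandi, please call me back about the survey.",
    questions=[
        Question("Does the SMS include a person's name?", ["Yes", "No"]),
        Question("Is the sender asking to be contacted?", ["Yes", "No"]),
    ],
)

# The agent steps through the task one question at a time.
answers = []
for q in task.questions:
    print(task.instruction)
    print(task.content)
    print(q.prompt, q.options)
    answers.append(q.options[0])  # stand-in for the agent's chosen answer
```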

Find out about other types of human intelligence tasks

Results of the first phase of our pilot project

One of the critical factors affecting the feasibility of Mobenzi is whether or not the mobile application is easy to use for people who have had little exposure to the internet and other software applications. A quote from the summary of the first day of the pilot shows how easily the participants understood both the concept of doing work on their phones and how to use the application itself:

Without any instruction, most of the participants had the application open and simply started completing tasks. Although I had high expectations, I still thought there would be many questions and a fairly slow start. But within half an hour of me arriving at the venue, the participants had their heads down and were completing tasks. A few questions popped up during the day, but none that the other participants couldn’t answer themselves.

Using the software to complete tasks came very naturally and required almost zero training. From the participant comments, it is also clear that there would be a huge demand for Mobenzi tasks. I believe we could easily find thousands of Mobenzi agents who already own compatible phones within just half an hour’s drive of our offices in Hillcrest, let alone the rest of South Africa and the world.

We have not yet done much analysis on the quality or efficiency of the completed tasks, but initial assessments are very positive. Over the next few weeks we will be crunching the data to help answer some more of the questions we outlined at the start of the pilot.

The results so far have exceeded our expectations and at this stage I would guess that our biggest challenge in moving forward will be to generate a sufficient supply of tasks to keep Mobenzi agents busy.

Scaling up the pilot in April 2010

This pilot was a short two-week project to get an early feel for what to expect. In April next year we will scale our efforts up and take on a much larger group of participants to pilot the concept further. Until then we will be tweaking the software and preparing the systems to handle the logistics of a much larger project.

We are very open to suggestions if you have any ideas for types of tasks or even real world data that we could get Mobenzi agents to process during our pilot later this year.

Feedback from pilot participants about mobile tasks

December 13, 2009  |  by Mark  |  Features, Pilot Project  |  9 comments

One of the tasks we assigned participants (or so-called Mobenzi Agents) on the final day of the pilot was a simple text form where we asked them to submit any comments they had regarding Mobenzi. These were some of their responses.

Workin wit mobenzi ws great n hp w’l start soon. Al d best:-)

This is a great example of Textese (‘SMS language’ involving abbreviations and slang). This comment translates to regular English as ‘Working with Mobenzi was great and I hope we’ll start soon. All the best’ (said with a smile).
[61 vs 93 characters = 34% compression].

Mobenzi is a good program/organisation which will bring many job opportunities to people, its interesting and entertaining and at the same time its challenging you to think before answering each question. Last but not least it will improve English language for many people who work with mobenzi because most of time it all about English

This was a very positive comment from one of the participants. Internally, we had discussed the potential impact Mobenzi work could have on education (such as English comprehension), but we certainly never expected participants to pick that up as a benefit during a short pilot project (it’s becoming very clear that we should stop underestimating participants).

Establish marketing strategies for mobenzi to ensure availability of tasks and more employment.

This participant seemed eager to see us succeed and offered some business advice.

I would like to work for Mobenzi.!!!

No comments, it will be a previlage working at mobenzi.

It was fun ,challenging and informative about the world that we live in.

No comment everything is new and perfect I enjoy mobenzi.

Mobenzi is very interesting and it challenges my knowledge in English and makes you think. But mostly it’s going to give us some sort of employment. THUMBS UP MOBENZI!!

This final comment sums up the sentiment of the team. I don’t think we could have expected a more positive reception to the project from the participants themselves.

Twitter sentiment analysis using mobile phones in South Africa

November 29, 2009  |  by Mark  |  Features, Pilot Project, Task Types  |  58 comments

Yesterday I aggregated some data from Twitter that referenced KFC, Nandos, Debonairs or McDonalds and sat with the Mobenzi pilot participants as we answered two simple questions about each tweet.

  1. Was the message positive, negative or neutral in reference to the brand?
  2. If it was negative, was it due to customer service, taste, health or some other reason?

The participants found the work entertaining, they completed tasks efficiently, and the results appear to be very accurate.

About sentiment analysis

With the growing use of online services like Twitter, blogs and forums, there is a vast amount of publicly available information generated by everyday people about millions of different topics (companies, products, movies etc.). Knowing the sentiment of messages (e.g. whether they are positive or negative) can be extremely valuable to the people or organisations involved, especially when monitoring trends over time.

Sentiment analysis or opinion mining refers to a broad area of natural language processing, computational linguistics and text mining. Generally speaking, it aims to determine the attitude of a speaker or a writer with respect to some topic.

The rise of social media such as blogs and social networks has fuelled interest in sentiment analysis. With the proliferation of reviews, ratings, recommendations and other forms of online expression, online opinion has turned into a kind of virtual currency for businesses looking to market their products, identify new opportunities and manage their reputations.

Find out more about sentiment analysis on Wikipedia

This kind of work is well suited to Mobenzi agents

Sentiment analysis seemed like a very appropriate type of task for processing by Mobenzi agents on their phones, as tweets are very short (at most 140 characters). We also felt that there would be demand for an efficient human sentiment-rating service, since computer algorithms face many difficulties in trying to understand the tone of human messages.

Twitter includes a lot of slang, humour, Textese and other informal language that makes automated analysis especially difficult. In a multi-cultural country like South Africa, many tweets also combine words from a variety of local languages which would make analysis very challenging to a computer.

Example Tweets that reference take-out brands

These were some of the messages included in our sample set of data from Twitter.

"Damn you debonairs" in this context is not negative.

"Damn you debonairs" in this context is meant in jest and is not negative.


The sentiment is not obvious in this message.


The tone in this tweet changes totally at the end of the message.


The results were ‘positive’

The focus of this study was to assess issues relating to the completion of tasks. We only looked at a small sample of tweets, and could have been a lot more scientific in our approach, so the sentiment results themselves should not be taken too seriously.

There were six participants (including myself) and we each stepped through the analysis of Twitter messages that mentioned KFC, Nandos, Debonairs or McDonalds. Each task took only a few seconds to complete and the team found the work interesting and engaging. None of the participants (except me) uses Twitter, but they were all very familiar with the concept and frequently use Mxit, which is similar in some respects.

One of the measurements we look at to gauge the accuracy of results is the agreement between different participants on the same task. All six participants rated the sentiment of each tweet, so we were able to look at where our answers differed. It was very encouraging to see that most answers had 100% agreement (especially if we exclude tweets where participants stated that they were unsure of the sentiment). There were only a few cases where we disagreed on whether a particular tweet was positive or negative. In these cases the majority was correct, and in some cases the disagreement actually helped to balance the rating where the sentiment was ambiguous.
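
The agreement check itself is simple to illustrate: for each tweet, compare the labels given by the raters and record whether they all match. The sketch below is a plain percent-agreement calculation on made-up labels, not our actual analysis code.

```python
# Sketch of the percent-agreement check described above, using made-up labels.
from collections import Counter

# One label per participant ("positive", "negative" or "neutral") for each tweet.
ratings = {
    "tweet_1": ["positive"] * 6,
    "tweet_2": ["negative"] * 5 + ["positive"],
    "tweet_3": ["neutral", "positive", "neutral", "neutral", "neutral", "neutral"],
}

full_agreement = 0
for tweet_id, labels in ratings.items():
    majority_label, majority_count = Counter(labels).most_common(1)[0]
    if majority_count == len(labels):
        full_agreement += 1
    print(f"{tweet_id}: majority = {majority_label} ({majority_count}/{len(labels)})")

print(f"Tweets with 100% agreement: {full_agreement}/{len(ratings)}")
```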

The summary across all brands came out at 48% positive, 35% negative and 17% neutral or unclear. Of the negative tweets, 29% were service related, 16% to do with taste, 9% health related and the rest for other reasons.

These charts show the summary of the sentiment analysis for all four brands.

In the following results, we excluded tweets that were either neutral or unclear with regard to sentiment. Out of the four brands, Nandos was clearly the favourite with 80% of tweets being rated as positive.

Breakdown of Positive versus Negative sentiment for each brand

To have a look at what people are saying right now about these brands, simply go to www.twitter.com and search for #Nandos, #Debonairs, #KFC or #McDonalds.

Interestingly, a quick analysis of these keywords on Tweetsentiments.com (a service that attempts to automate the analysis of tweets) returns fairly similar results in terms of rank, but with some significant variations in the actual sentiment rating: Nandos 68% positive, Debonairs 59% positive, McDonalds 56% positive and KFC 52% positive. The ranking of the brands is the same as our result, except that McDonalds moved ahead of KFC in the automated analysis. This may have to do with the fact that we only looked at tweets in English and other South African dialects. Perhaps English-speaking people are the least positive about McDonalds? Looking at some of the tweets in their data sets, I would trust our result over the automated one. Try the service out yourself at http://tweetsentiments.com/analyze


Yesterday’s Twitter sentiment analysis pilot was a huge success and we are excited to continue testing next week. I am confident that we will take this idea further in the coming months.


It is no coincidence that we ended up having Nandos and Debonairs for lunch.

Initial thoughts on mobile crowdsourced translation

November 26, 2009  |  by Andi  |  Pilot Project, Task Types  |  3 comments

The possibility of having Mobenzi agents convert English content into local languages (and vice versa) was one which really excited us.

As with all Mobenzi tasks, we split up a large, complex challenge into small, discrete tasks which can be performed in a few minutes.

Obviously, there are countless reasons people require translation services. One of our own immediate applications for translation is for surveys conducted using our Mobile Researcher platform. We frequently have clients embarking on multilingual studies who need questions to be available in several local languages (in South Africa alone we have 11 official languages), and who need qualitative responses given in other languages to be available in English for analysis. Normally, surveys are designed, reviewed, debated and eventually finalised in English before they are sent for translation and back-translation. Our objective is to crowdsource this activity instead, making near real-time translations available to survey designers in Mobile Researcher at the click of a button.

In our first trial we took a client’s survey questions, which we already had in both English and professionally translated Zulu, and converted them into two sets of translation tasks: English to Zulu and Zulu to English.

As most people don’t speak Zulu (and mine isn’t exactly fluent either), I’ll use a Zulu-to-English example taken from the actual exercise:

Original English (not visible to Mobenzi agent): What is the household’s average monthly income?
Professionally Translated Zulu (shown to Mobenzi agent): Ungayilinganisela kumalini isiyonke imali engenayo ngenyanga?
Mobenzi Agent Translation (from Zulu to English): How much is the monthly income for the household?

Now, clearly the ideal would be for the original phrase to match the version coming back from the Mobenzi agent (and this did happen frequently), but even when it didn’t, the results showed that, for the most part, crowdsourced translation works to the extent that the concept is sufficiently conveyed.
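
One simple way to flag back-translations that drift from the original is a rough word-overlap check, sketched below. This is a crude heuristic we did not actually run during the trial; the 0.3 threshold is an arbitrary illustration.

```python
# Rough sketch: flag back-translations whose word overlap with the original
# English falls below a (hypothetical) threshold. A crude heuristic only.

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the lower-cased word sets of two sentences."""
    words_a = set(a.lower().replace("?", "").split())
    words_b = set(b.lower().replace("?", "").split())
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

original = "What is the household's average monthly income?"
back_translations = [
    "How much is the monthly income for the household?",
    "Estimate how much money do you earn a month?",
]

for bt in back_translations:
    score = token_overlap(original, bt)
    flag = "review" if score < 0.3 else "ok"
    print(f"{score:.2f}  {flag}  {bt}")
```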

Some of the key challenges and limitations of crowdsourced translation tasks which our agents helped us identify were:

  • Lack of context. Just as computer translation algorithms struggle without context, so do people. We noticed that, on several occasions, what agents had responded with made sense if read in isolation, but not within the broader context. It has to be said that this problem is exacerbated by our approach of segmenting text for translation.
  • Language formality. The mobile communication culture is almost exclusively an informal one. Some agents seemed to struggle to snap out of this mode and became frustrated having to use formal language.
  • Input mode. Although an obvious observation, translation requires a lot more typing than other types of tasks. Somewhat foolishly, we didn’t enable predictive text during the first day of our translation trial, which didn’t help matters.
  • Multiple alternatives. Just as there are a number of ways to phrase a question in English, translation is not a one-to-one mapping. This is not a problem unique to mobile crowdsourced translation, however; two professional translators may also come up with different ways of phrasing something based on experience and personal preference.

An interesting observation is that the value of crowdsourced translation lies not only in reducing the need for traditional translation services, but also in the insight it gives into how “normal” people understand a question in their mother tongue.

Another example:

Original English (not visible to Mobenzi agent): What is the household’s average monthly income?
Professionally Translated Zulu (shown to Mobenzi agent): Ungayilinganisela kumalini isiyonke imali engenayo ngenyanga?
Mobenzi Agent Translation (from Zulu to English): Estimate how much money do you earn a month?

In this case, the agent has incorrectly translated the question back into English by missing that the question asks for the cumulative income of the household, not just that of the respondent. Now, this might be a mistake by the agent, but a mistake is not, in itself, valueless. In this case, it can help a survey designer identify where confusion or potential misunderstanding might occur in the real study. The response from the agent can guide the designer, in collaboration with their translator (professional or otherwise), in refining the question to ensure optimal wording that will make sense to the actual respondents.

We were rather pleased by the quality of translation provided by agents without any kind of moderation, statistical checks or other methods being applied, but clearly there is still a lot of room for improvement in this area. We’ll continue working on some ideas before our main pilot takes place next year.

Using mobile phones to support people doing Human Character Recognition (HCR)

November 24, 2009  |  by Mark  |  Pilot Project, Task Types  |  6 comments

I thought it would be pretty interesting to handwrite a blog article and get the team of pilot participants to type it up on their phones.

Like many other tech enthusiasts, I am a big fan of Moleskine notebooks and diaries. I decided to write each sentence of this article on a new page of my pocket-size yellow notebook. For the sake of saving some very small trees, I will try to keep this article short. I think it will be very interesting to see how long it takes to type up each page. We will calculate the average time taken, per word and per character, to type (capture) this content. Each participant will type up each handwritten page so that we have a bit more data to assess time and quality.

Each sentence was hand-written on a separate page.

Results

The above text was typed up into a form on a mobile phone by three of the participants. There were a few minor errors (double spaces and some mistyped characters) but the quality was decent. At least one of the three participants got each of the sentences correct without errors.

There were 8 sentences including the title with a total of 145 words or 777 characters (with spaces). It took participants just over 12 minutes on average to complete the 8 tasks. The average speed for the tasks was therefore between 12 and 13 words per minute (WPM).
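
For reference, the speed can be computed either from the raw word count or from the standard five-characters-per-word convention. A quick sketch using the figures above (taking the average time as roughly 12 minutes):

```python
# Quick check of the typing-speed figures quoted above.
words = 145
characters = 777           # including spaces
minutes = 12.0             # average time to complete the 8 tasks (approx.)

wpm_raw = words / minutes                  # ~12.1 wpm from the raw word count
wpm_standard = (characters / 5) / minutes  # ~12.9 wpm using 5 characters per "word"

print(f"Raw WPM: {wpm_raw:.1f}")
print(f"Standard WPM: {wpm_standard:.1f}")
```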

In one study of average computer users, the average rate for transcription was 33 words per minute, and only 19 words per minute for composition. In the same study, when the group was divided into “fast”, “moderate” and “slow” groups, the average speeds were 40 wpm, 35 wpm, and 23 wpm respectively. (Wikipedia)

This result (in terms of typing speed) is not very impressive (almost half the speed of a ‘slow’ typist) but we have to consider a few points.

  • Participants had to capture metadata about the task (the task number) and read an instruction about the task for each sentence. These steps could be avoided.
  • The form used to capture the text did not make use of predictive text. Predictive text could definitely improve typing speed.
  • The convenience of being able to type on your phone while you are travelling on public transport, walking, lying in bed etc. must definitely be taken into account in comparing typing on a personal computer to typing on a mobile phone.
  • The types of HCR tasks we have been discussing do not involve typing up long documents, but rather short strings of text like those used by reCAPTCHA.

In summary, the accuracy was pretty good but I had hoped the speed would be better. We will have to conduct some further testing in this regard if we seriously consider targeting Human Character Recognition (HCR) as a business service to offer through Mobenzi. This is just one of many possible applications.

Great first day for the Mobenzi pilot

November 24, 2009  |  by Mark  |  Pilot Project  |  No Comments

Today was the first day of piloting Mobenzi in the field. The pilot is being hosted at a local community centre called Light Providers, which is situated in the Valley of a Thousand Hills. Siyanda, general manager of the community centre, helped recruit the initial five participants from the local community who will assist with supporting the pilot as we scale up next week.

Siyanda brought together an excellent group of people who I really enjoyed working with today. Mbongwa (featured in the title image, who calls himself Kingdom), Ayanda, Nobuhle, Nieh and Bonga are all between the ages of 20 and 26 and are all currently seeking part-time or full-time employment. Their first language is isiZulu but they all speak English fluently.

The team of mentors who will help support the extra 20 pilot participants who will join us next week.

I had fairly high expectations for how easily the participants would pick up the concept and would be able to process tasks. We had discussed last week whether an introductory training session was necessary – to explain how the mobile application works, how to skip between questions and complete the various question types etc. But based on my interactions with youths from the area, I decided to try and see what progress the participants could make without any training at all.

I started the session by introducing myself to the team and giving them a brief overview of Mobile Researcher and how we created Mobenzi to try and leverage the platform for completing tasks. This took about 15 minutes and the team really picked the idea up quickly and were eager to get started. I handed out the 5 new Nokia 3120 classic phones and told them there was a shortcut to the application on the main screen.

The Nokia 3120 classic that we installed the Mobenzi application on for the pilot.

I was very encouraged to hear discussion about the make and model of the phone and its various features without me saying anything about it. In their community a person’s phone is a hugely significant status symbol and everyone seems to know about each other’s phones (it took some convincing to get them to agree to hand the phones back after each session).

Without any instruction, most of the participants had the application open and simply started completing tasks. Although I had high expectations, I still thought there would be many questions and a fairly slow start. But within half an hour of me arriving at the venue, the participants had their heads down and were completing tasks. A few questions popped up during the day, but none that the other participants couldn’t answer themselves.

It is difficult for us to understand how central a phone is to youth in communities like KwaNyuswa. Their familiarity with the technology made the transition to ‘working’ on their phones completely natural.

I jotted down some notes from our discussions that illustrate how important phones are to them.

Everyone uses Mxit around here. Even our parents.

Mxit is a South African instant messaging system that millions of people use for cheap, quick communication on their phones. One of the guys said that he installed Mxit on his mom’s phone so that he could chat to her from home about what to buy when she goes shopping in town.

I installed Opera mini on my phone and at one stage used to spend over 8 hours a day browsing the internet and using applications. I used to spend at least R100 per week on airtime, but it was still cheaper than the internet cafes. I would only go to the internet cafes if I needed to print.

This was a quote from Kingdom who really flew through tasks today. I am definitely expecting experience with services like Mxit to play a huge role in how easily new Mobenzi agents can get started and how productive they are in their work.

The tasks themselves involved structuring free-text SMS messages by answering a series of questions about each SMS. The participants varied in the time taken to complete each task, averaging around 2 minutes (for about 5 questions per task). I did a brief analysis of the quality and I was very pleased to see almost 100% accuracy on the small set of tasks that I looked at.

I found the first day of the pilot incredibly interesting and I am now even more excited for the future of Mobenzi.