Evaluating the Usability of PowerMeeting

ALEX TSE, KYRIAKI TSIAPARA, VASITPOL VORAVITVET, XUEBIN XU, GANG YANG, DENISSE ZAVALA, QI ZHANG

The University of Manchester

[email protected], kyriaki.tsiapara@postgrad.mbs.ac.uk, vasitpol.voravitvet@postgrad.mbs.ac.uk, xuebin.xu@cs.man.ac.uk, [email protected], [email protected], qi.zhang@postgrad.mbs.ac.uk

ABSTRACT (Gang Yang)

Nowadays, Human Computer Interaction (HCI) usability testing is widely adopted during the design, implementation and evaluation of software interfaces, in order to improve their performance and enhance user satisfaction. PowerMeeting is a real-time, web-based collaborative system that provides a range of collaborative tools, including a microblog, a brainstorming tool, a text editor and a voting tool. To investigate the interface and functionality of the PowerMeeting platform, an HCI usability test was conducted using a series of instruments, including scenario design, open-ended interviews and heuristic questionnaires. The main purpose of this report is to describe the test, analyse its results and provide suggestions for the future design.

  KEYWORDS

  PowerMeeting, scenario, usability testing, heuristics, collaborative work

INTRODUCTION (GANG YANG & XUEBIN XU)

Background Research (Gang Yang)

In modern society, teamwork is becoming more and more popular in universities, corporations and research institutions, and it tends to be more effective than individual work if the team is cooperative. To achieve a common goal, teamwork often involves a series of activities, such as discussing problems, distributing assignments, making decisions, voting and brainstorming. In the past, efficient teamwork could only be conducted face to face in a meeting, and team members sometimes had to spend large amounts of money and time on travel just to hold one. Along with the development of computer and telecommunication technology, web-based collaborative software tools (also called groupware) have emerged to help people work together over the Internet [4]. The common features of these collaborative tools include wikis, online chat, blogs, teleconferencing, videoconferencing and document sharing. Since a significant number of multinational corporations, such as Google, Microsoft and Apple, promote their online groupware year after year, it is certain that such products are being applied more and more in people's daily work and life.

  (Xuebin Xu)

Groupware includes functions that enable people to chat, share resources and collaborate with each other, and even to hold videoconferences or play multi-user games. This kind of interaction requires a design that supports efficient collaboration. As Hiltz and Turoff [6] mention, groupware has existed since the 1970s in the form of inter-organizational computer-mediated communication systems, such as within-company electronic mail systems. Since then, both the quality and the quantity of groupware have improved considerably to meet collaboration needs all over the world. Information and communication technologies account for a large share of companies' capital expenditure, especially in international companies and information technology companies, which rely strongly on groupware to perform a range of activities. On the other hand, groupware demonstrates its value not only through the convenience it provides but also through the profit it can bring to companies, as demonstrated by Brézillon et al. [2] in their case study of a news organization in one of Ireland's major cities, where the investment in groupware plays a core role among all IT investments. As Bhatt et al. [1] specify, groupware allows its users to express and share their feelings and knowledge, and to construct an efficient working environment and a peaceful virtual community, without the constraints of time and location. Furthermore, as Wang [10] notes, "Synchronous groupware is software that enables real-time collaboration among collocated or geographically distributed group members".

Noticing the importance of groupware, many organisations gradually released their own groupware applications. For instance, Cornell University launched CU-SeeMe, an Internet video conferencing client, in 1992, and Microsoft developed SharePoint, a web platform that marries content management with document management. There are many other well-known collaborative applications, as mentioned before; however, these groupware tools share a common issue: they require costly desktop applications and are restricted to a particular operating system (OS). For example, applications built for Windows cannot be used


on Mac OS. One way around this issue is to develop web-based groupware applications. The advantages and disadvantages of this approach are obvious: web-based applications overcome the OS limitations, and user licences can be significantly less costly than for desktop applications. In addition, web-based applications are easier to maintain and update. However, the web introduces a set of limitations that stem from the technology available in browsers. PowerMeeting is a web-based synchronous groupware framework developed using features provided by Web 2.0. It is built on the Google Web Toolkit (GWT) [5], which offers a rich user experience by providing a rich graphical user interface and concurrent client-server connections, as mentioned by Dewsbury [5, also cited by 10]. The aim of PowerMeeting is to provide a framework for real-time client-to-client collaborative applications. Thanks to continually improving technologies such as Web 2.0 and AJAX, PowerMeeting is able to supply functions such as online document sharing and co-editing, online chatting (both text and voice), online conferencing, and group activities such as brainstorming and voting. Compared with current groupware applications, PowerMeeting demonstrates some benefits, which are summarised in Table 1.

Feature                       1    2    3    4    5    6
Cost                          No   Yes  Yes  No   No   Yes
Operation system dependency   No   Yes  Yes  No   Yes  Yes
Brainstorming                 Yes  No   No   No   No   No
Voting                        Yes  No   No   No   No   No
E-mail                        No   Yes  No   No   Yes  No
Instant messaging             Yes  Yes  No   Yes  Yes  ?
Open office                   Yes  Yes  Yes  Yes  Yes  ?

Table 1. Comparison between PowerMeeting and other well-known groupware: 1. PowerMeeting, 2. Novell Groupwise, 3. Microsoft Sharepoint, 4. Google Docs, 5. Oracle Beehive, 6. Lotus Notes [7].

It can be seen from the table that, apart from web conferencing, the common feature, PowerMeeting beats the other best-known groupware tools in the areas of cost, operating system dependency and group activities such as brainstorming and voting.

Usability evaluation (Denisse Zavala)

By providing a set of functionality that a group of distributed users can find useful and appealing, PowerMeeting tries to "make synchronous collaboration an integral part of collaboration support on the Web" [11]. Any system that tries to achieve this purpose should not only satisfy the users' needs; it should also be designed so that it is easy to understand and easy to use. There are different definitions of usability, but all of them involve a user, a task and a product [9]; usability is concerned with how well a user can perform a task with a product. For these reasons, a usability evaluation is needed to identify ways in which PowerMeeting can be enhanced to adequately support users in performing collaborative tasks, and thus make it an integral part of collaboration support on the Web.

Report Organisation (Gang Yang)

The rest of this report is organised in the following manner: the methods used in the evaluation are described in the next section. Then, the results obtained from the evaluation are presented, along with tables and diagrams. A section discussing and analysing the results follows, in which suggestions for improvement are included. Finally, the conclusion summarises the findings of the evaluation.

METHODS (DENISSE ZAVALA & VASITPOL VORAVITVET)

Demographics (Vasitpol Voravitvet)

The experienced users were assigned at random; they were 7 students from the HCI class. From this group we expected more objective feedback, which would help in identifying issues with the system. On the other hand, 9 inexperienced users volunteered to take the test. From this group we expected to uncover issues that might be overlooked by the experienced users, as well as suggestions based on their first-time experience. During the group usability testing, one of the students was selected as session chair; he was responsible for controlling the meeting.


Participants came from different academic backgrounds, but the majority came from computer-oriented and business-oriented backgrounds. The participants were grouped in two clusters depending on their experience with PowerMeeting: experienced and inexperienced. 7 out of the 16 participants were students from the "Human Computer Interaction and Web User Interface" module in Manchester Business School and already had experience using PowerMeeting; they represent the "experienced" group. The rest of the participants were students from other courses who had never used PowerMeeting before; they form the "inexperienced" group.

The 16 participants were aged from 20 to 35 years old, although the majority were in the 20-25 and 25-30 ranges. All the experienced participants stated that they had used groupware systems before and that they use this kind of software often.

The inexperienced group was included in the evaluation in order to gain a clear understanding of how experience affects the interaction with PowerMeeting; in other words, we wanted to know how the interaction with the system differs between inexperienced and experienced users.

Sample Size (Vasitpol Voravitvet)

Given the size of the project and the time available to complete the evaluation study, a convenience sample was used. Although we decided to test the software with two groups of people, the number of participants in both groups was kept small.

Tasks (Denisse Zavala)

The tasks designed for the evaluation aimed at testing the following features of PowerMeeting:

- Brainstorming, including the categorisation, voting and reporting sub-features
- Richtext
- Microblog (or blog)
- Chat

Although not a particular feature of PowerMeeting, the evaluation also covered the general function of logging in.

Individual tasks

The tasks in this category were designed to evaluate each feature on its own; they are independent and the order of execution is not important.

1. Log in: Participants were asked to create a new session and then log in. They were provided with a session name and username. They were also asked to log out of the system and log in again.

2. Brainstorming: This task required participants to create a brainstorming agenda item and then add the ideas provided. They were also asked to create three categories and to organise the ideas into these categories. Deleting and editing ideas were also included in this task. After the ideas were categorised, participants were asked to vote on them and analyse the report.

3. Richtext: For this task, participants needed to create a new richtext agenda item, then type a given text and try the formatting capabilities, such as making the text bold and changing the font size, colour and style. Users were asked to add an image to the text and then save their changes.

4. Microblog: For this task, users were asked to create a new microblog agenda item. After that, they were asked to add certain text in a certain order, and then to delete and edit some of the text. They were asked to report the time of their first blogpost.

5. Agenda items: Users were asked to reorder the agenda items, moving the last item in the list to the top.

Group tasks

In order to evaluate the collaborative nature of the system, a set of group tasks was designed. A meeting chair guided these tasks, which required a single item to be used by all the participants at once. The tasks were the following:

1. Log in to the system: Users were provided with a username and a shared session name to access the system.

2. After locking the meeting mode, the session chair was asked to create four categories, and the participants were asked to add ideas related to those categories. They then had to categorise the ideas.

3. Voting: Participants were asked to submit their votes regarding the four categories that were previously created in a different agenda item. They needed to reach a decision and were encouraged to use the chat to contact their teammates. The session chair was responsible for leading the conversation and trying to reach the decision in a timely manner.

4. Richtext: The meeting chair unlocks the meeting mode and then steps away from the meeting. Participants were asked to create a new richtext item and add some text to it. With no previous notice, the meeting chair returns and locks the meeting mode again.

Procedure (Vasitpol Voravitvet)

The test was divided into two parts: individual testing and group testing. The test with the experienced users was conducted in the lab, where a set of predefined individual and group tasks was given to them; the experienced users performed the test on PC computers connected to the wired network. Similarly, the inexperienced users were provided with the same set of predefined individual and group tasks, but their test was conducted at the weekend using laptops connected to the wireless network.

Individual Testing

1. The participant was provided with a set of individual tasks to complete.

2. After finishing each module of the test, the participant was asked to fill in a questionnaire rating the difficulty of completing each task.

Group Testing

1. The group of participants was provided with a scenario, which explained the situation and the aim of the meeting. They were also provided with a set of tasks to complete.

2. One of the participants was selected to be the session chair; he was responsible for controlling the meeting. The session chair was provided with some additional tasks to perform before and during the meeting.

3. After the participants finished the whole test, they were asked to fill in a questionnaire containing questions about the ease of use of, and satisfaction with, the tasks. Once all the tasks were completed, participants were asked to complete the system questionnaire, which asked about satisfaction with the system. Moreover, some of the users (the session chair and 4 other participants) were asked to take part in an open-ended interview.

Testing Environment (Vasitpol Voravitvet)

For all the tests with the experienced users, a Windows XP-based PC was used. The computer was connected to the Internet via the LAN, and Internet Explorer 7 was used to perform the test.

For all the tests with the inexperienced users, a laptop running the Windows Vista or Windows 7 operating system was used. The laptop was connected to the Internet via the University of Manchester wireless network, and Internet Explorer 7, Mozilla Firefox or Google Chrome was used to perform the test.

Data collection (Denisse Zavala)

In order to collect data during the evaluation, each participant was observed by a designated monitor. Monitors used a stopwatch to measure how much time participants took to complete each task.

Preference data were collected through questionnaires that participants answered at the end of each task, plus one general questionnaire answered at the end of the evaluation.

We used a closed-question questionnaire to gather general information about participants, such as gender, age group, academic background and familiarity with groupware software. At the end of the session we used a similar questionnaire to find out how likely participants would be to use PowerMeeting again. Finally, after the end of the usability test, we asked the users in an open-ended interview about their opinion of PowerMeeting.

Usability metrics (Denisse Zavala)

The usability metrics used in the evaluation were performance metrics and self-reported metrics.

Performance metrics

Performance metrics were used to measure how well participants interacted with the system. We used the following performance metrics for individual and group tasks:

- Task success: To measure how effectively users were able to complete the tasks, we used binary success. The tasks designed for the evaluation had a clearly stated goal, so participants were able to determine whether or not they had completed each one. Success was established by the participant's expression of having completed the task and confirmed by the monitor assigned to that participant.

- Time-on-task: We measured efficiency through this metric. For individual tasks, monitors started measuring a task after the participant had read the instructions: the timer was started when the participant clicked the first item (or hit the first key on the keyboard) involved in the task, and stopped when the monitor confirmed that the task was completed. For collaborative tasks only one timer was used, measuring the total time elapsed until the conclusions included in the task were reached. A sketch of how these two metrics can be computed from the monitors' records follows this list.
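The following sketch shows one way to derive both metrics from the monitors' raw observations. It is illustrative only: the record layout and the sample values are hypothetical, not the data collected in the study.

# Illustrative sketch: binary task success and time-on-task summaries
# computed from per-participant monitor records (hypothetical values).
from statistics import mean, stdev

# One record per (participant, task): whether the monitor confirmed
# completion, and the stopwatch time in seconds (None if not completed).
observations = [
    {"task": "Login", "completed": True, "seconds": 21.0},
    {"task": "Login", "completed": True, "seconds": 28.5},
    {"task": "Brainstorming", "completed": True, "seconds": 310.0},
    {"task": "Brainstorming", "completed": False, "seconds": None},
]

def task_summary(records, task):
    rows = [r for r in records if r["task"] == task]
    times = [r["seconds"] for r in rows if r["completed"]]
    return {
        "success_rate": sum(r["completed"] for r in rows) / len(rows),
        "mean_time": mean(times) if times else None,
        "sd_time": stdev(times) if len(times) > 1 else None,
    }

print(task_summary(observations, "Brainstorming"))
# -> {'success_rate': 0.5, 'mean_time': 310.0, 'sd_time': None}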

Self-reported metrics

In order to obtain information about the participants' perception of PowerMeeting and their interaction with it, we used the following instruments:

- Likert scales: We used 5-point scales of ease of use and of satisfaction, where the highest rating (5) represented "very easy to use" and "very satisfying", respectively.

- Semantic differential scales: 5-point scales were used in the evaluation, such as the following: useless to useful, inefficient to efficient, and confusing to easy to understand.

Heuristic evaluation (Denisse Zavala)

The usability evaluation of PowerMeeting also included an expert (heuristic) evaluation. We asked 5 participants to rate PowerMeeting against each of Nielsen's heuristics [8].

Data analysis (Denisse Zavala)

Descriptive statistics, such as the mean and standard deviation, were used to analyse the data collected. A t-test was used in the analysis to find out whether there is a significant difference in task completion time between experienced and inexperienced users, as sketched below.
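A minimal sketch of this comparison is shown below, assuming SciPy is available. The time values are placeholders rather than the study's measurements; Welch's variant is used so that the two groups' variances need not be equal.

# Sketch: two-sample t-test on task completion times (placeholder data).
from scipy import stats

experienced_times = [250.0, 310.5, 295.0, 402.0, 288.0, 350.5, 355.0]
inexperienced_times = [480.0, 590.5, 515.0, 640.0, 505.5, 560.0,
                       498.0, 530.0, 571.0]

# equal_var=False selects Welch's t-test (unequal variances allowed).
t_stat, p_value = stats.ttest_ind(experienced_times,
                                  inexperienced_times, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate a significant difference between
# the two groups' completion times.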

RESULTS (KYRIAKI TSIAPARA)

Performance: Effectiveness

The first aspect of usability tested in our evaluation is effectiveness, measuring whether the experienced and inexperienced users were able to complete the tasks.

  Individual tasks

Initially, the users were asked to complete the tasks individually: all experienced users completed all the tasks, while the inexperienced users faced several problems. The problematic behaviour of PowerMeeting during our testing was, to some extent, responsible for the lower percentages.

Figure 1. Task success: completion of individual tasks. All experienced users completed every task (100%), while the inexperienced users' per-task completion rates ranged from 33.3% to 100% across the Login, Brainstorming, RichTextEditor, Blog and Agenda tasks.

Collaborative/group task

The experienced users managed to complete the collaborative task, but the inexperienced users were not able to use PowerMeeting collaboratively, due to problems with the system.

Performance: Efficiency

The second aspect of our performance testing of usability was the efficiency of PowerMeeting. Here, the time required to complete each task was measured. As far as the inexperienced users were concerned, only those who managed to complete the tasks are included in this measurement.

Figure 2. Time needed for completing each task, in seconds (experienced vs. inexperienced users): Login again 83.86 vs. 241.52; Brainstorming 321.60 vs. 543.50; RichTextEditor 256.00 vs. 450.56; Blog 127.00 vs. 208.28; Agenda 24.86 vs. 36.54.

The inexperienced users needed overall 82% more time than the experienced users (813.32 seconds for the experienced users, as opposed to 1480.41 seconds for the other cluster), partially because of technical pitfalls of the system, which forced the inexperienced users to repeat parts of some tasks (for example, inexperienced user 4 had to log in again and re-type the text in RichTextEditor because the system had stopped responding). For each task, the additional time needed by the inexperienced users (for those users whom the system allowed to complete the task) was:

- 188% to log in again (some users needed to try three different browsers to be able to log in again and restart the system)
- 69% for the brainstorming (the system did not respond in many cases)
- 76% for the RichTextEditor (mostly spent on typing the text)
- 64% for posting and editing the blog
- 47% for the agenda

For the group task, the experienced users needed 1202 seconds in total.
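The per-task percentages above follow directly from the mean times reported in Figure 2; the short check below recomputes them (the figures are from the report, the code is merely illustrative).

# Recomputing the extra time needed by inexperienced users from the
# per-task mean completion times (in seconds) shown in Figure 2.
experienced = {"Login again": 83.86, "Brainstorming": 321.60,
               "RichTextEditor": 256.00, "Blog": 127.00, "Agenda": 24.86}
inexperienced = {"Login again": 241.52, "Brainstorming": 543.50,
                 "RichTextEditor": 450.56, "Blog": 208.28, "Agenda": 36.54}

for task, exp_time in experienced.items():
    extra = (inexperienced[task] / exp_time - 1) * 100
    print(f"{task}: +{extra:.0f}%")
# Login again: +188%, Brainstorming: +69%, RichTextEditor: +76%,
# Blog: +64%, Agenda: +47%

total_exp = sum(experienced.values())        # 813.32 seconds
total_inexp = sum(inexperienced.values())    # 1480.40 seconds
print(f"Overall: +{(total_inexp / total_exp - 1) * 100:.0f}%")  # +82%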

Confidence intervals

The sample comprised 7 persons. As a result, in order to construct a 95% confidence interval for the population mean time needed to complete each of the above tasks, we used the Student t-distribution, since the sample size is too small to use the z values of the normal distribution. For a 95% confidence interval, the critical value of the two-tailed t with n-1 = 7-1 = 6 degrees of freedom is 2.447. The constructed confidence interval is therefore:

Mean ± t_critical × (Standard Deviation / √n)

Based on the above formula we performed the calculations, and the results are interpreted as follows: "We are 95% confident that the true mean time for the completion of the Brainstorming task lies between 215.04 and 428.10 seconds."

Task             Mean     St. Deviation   Lower 95% CI   Upper 95% CI
Brainstorming    321.57   115.18          215.04         428.10
RichTextEditor   256.00    96.99          166.30         345.70
Blog             131.71    67.75           69.05         194.37
Agenda            24.86    23.18            3.42          46.30
Login again       83.86    40.16           46.72         121.00
Total time       702.86   148.23          565.77         839.95
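As a worked check of the formula, the snippet below reproduces the brainstorming interval from the mean, standard deviation and critical t value quoted above.

# Reproducing the 95% confidence interval for the mean brainstorming
# completion time of the experienced users (n = 7, df = 6).
from math import sqrt

n = 7
mean_time = 321.57   # seconds
sd = 115.18          # standard deviation
t_critical = 2.447   # two-tailed t, alpha = 0.05, df = 6

margin = t_critical * sd / sqrt(n)
print(f"95% CI: [{mean_time - margin:.2f}, {mean_time + margin:.2f}]")
# -> 95% CI: [215.04, 428.10]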

To measure the satisfaction and beliefs of the users towards PowerMeeting, we conducted a questionnaire of three units. Both experienced and inexperienced users answered the same questions. Since one of the experienced users and some inexperienced users did not reply to all the questions, we base our results only on those who responded, to obtain data that are more valuable for analysis and comparison.

Individual tasks

Login

The groups of users reported how easy they consider login and re-login to be. The experienced users believe it is an easy procedure, while the inexperienced ones find it more difficult or have a moderate opinion ("neither difficult nor easy").

Figure 3. Ease of login.

Brainstorming

The users rated how easy or difficult they believe each brainstorming activity was. The value 1 was considered very difficult and the value 5 very easy. In the figure below, the average for each item is provided for both clusters. Averages below 3 are considered difficult, around 3 moderate, and above 3 easy.

Figure 4. Brainstorming easiness/difficulty.

Overall, the experienced users perceive the brainstorming tasks as easy. The most difficult item is the modification of ideas, which is the only item that gets a grade lower than 3. The second lowest average is for deleting ideas (average 3/5), and all the other items are ranked as easy, with the item "creating a brainstorming idea" on top (average 4.67/5). On the other hand, the inexperienced users found this task more difficult in comparison with the other cluster. Deleting (2.13/5) and modifying (2.63/5) ideas are the two most difficult tasks in this group too, while creating and adding ideas are the only tasks that obtain an average over 3.

As shown in Figure 5, the inexperienced users are generally dissatisfied with the system, while one out of two experienced users is satisfied.

Figure 5. Overall brainstorming experience rating.

More specifically, satisfaction with the voting and reporting items was measured (Figure 6). Overall, the opinions of the experienced users are spread (with a slightly better overall result for reporting), while three out of four inexperienced users are dissatisfied with both options. Additionally, these functions are perceived as difficult by half of the inexperienced users, while the experienced ones tend to find them moderate or easy (Figure 7).

Figure 6. Voting and reporting satisfaction rating.

Figure 7. Voting and reporting ease of use rating.

RichTextEditor

First of all, the easiness of each function of RichTextEditor was tested (Figure 8). Although we expected that the experienced users would find the system easier, saving changes was considered less difficult by the inexperienced users. Overall, it seems that users do not face many difficulties in editing the text; as a consequence, they are satisfied with this function (Figure 9).

Figure 8

Figure 9

Blog

The easiness of each function in blogging was also measured (Figure 10). Here, the inexperienced users faced more problems with deleting (2.67/5) and editing (2.83/5) the posts. Another unexpected finding was that the experienced users found the order of posts more confusing than the inexperienced users did (Figure 11).

Figure 10

Figure 11

Overall, according to Figure 12, the users are satisfied or have a moderate attitude towards blogging, with the inexperienced users scoring slightly better.

Figure 12

Agenda

Only two experienced and three inexperienced users answered how they rate the use of the agenda. Both experienced users are not satisfied with this item (they rated it 1 and 2 out of 5). Two out of three inexperienced users are not satisfied either (rating it 1 and 2), while the third user is satisfied (giving 4 out of 5).

Collaborative/group task

Only the experienced users responded to this part of the questionnaire.

Chat

As shown in Figure 13, the users have a moderate or positive attitude toward chatting in PowerMeeting. They also believe that it is easy to use and fast, but that it looks poor (Table 1).

Figure 13

The chat is...                    (distribution of answers on a 5-point scale)
Difficult to use - Easy to use    16.66% / 66.66% / 16.66%
Rich - Poor                       16.66% / 33.33% / 50%
Slow - Fast                       16.66% / 16.66% / 33.33% / 33.33%

Table 1

The researchers asked the users to grade two potential improvements (Figure 14). While the ability to change the size of the chat window received better grades, overall the users did not strongly support these improvements. Finally, the chat loses in comparison with a face-to-face meeting (Figure 15).

Figure 14

Figure 15

Brainstorming

Overall, the users have a moderate opinion of brainstorming (Figure 16), and they believe that it is generally easy to use, useful and efficient (Table 2). Additionally, the users' strongest wish regarding voting is to be able to rank the options from the most suitable to the least suitable (66.66%), followed by the current approach of percentages.

Figure 16

The brainstorming is...           (distribution of answers on a 5-point scale)
Difficult to use - Easy to use    33.33% / 66.66%
Useless - Useful                  16.66% / 50% / 33.33%
Inefficient - Efficient           16.66% / 50% / 33.33%

Table 2

RichTextEditor

A moderate opinion of RichTextEditor is expressed as well (Figure 17). The participants were also asked: "While you were working on your text, the session chair selected 'In meeting'. This resulted in losing all your work. How would you comment on that?" They had to choose only one option. The responses were:

1. "It is a serious problem/pitfall of the system, making me feel really unsatisfied" (50%)

2. "I did not like it, but it is not a very important problem and thus there is no need to change it" (33.33%)

3. "It is accepted because this way we can work all together, viewing the same screen" (16.66%)

4. None chose the "other" option.

Figure 17

General Questions about groupware software

At the end, the users answered three questions relating to their experience with groupware software and PowerMeeting. Their responses were:

- All the experienced users and almost none of the inexperienced users had used groupware software before (Figures 18, 19).

- The experienced users use groupware software often, while the inexperienced users never do (Figures 20, 21).

- More than half of the experienced users will not use PowerMeeting in the future, while the inexperienced users are not sure whether they will (Figures 22, 23).

Figures 18-23

Correlations

Because of the small samples, the correlations found cannot be generalised to the population. The results indicated that there are no strong correlations between the aspects of performance.


Interviews

After the end of the test, we asked the inexperienced users for their opinions of the system. The main points were:

1. In general, the system cannot arrange anything in a nice way.

2. Renaming ideas in Brainstorming is very difficult.

3. They did not like the arrangement of the brainstorming items: ideas appear in a random order, and they mentioned that sometimes they cannot see all of them.

4. "Foreground" and "background" in RichTextEditor are confusing when changing the text colour.

5. Two users can have the same username in the system.

6. Once in meeting mode, all unsaved documents disappeared.

7. If the session name does not exist, they need to refresh the login page.

8. Closing the browser without logging out means the system does not register that the user has disconnected (e.g. their username was still appearing in the system).

9. The system cannot display everything in full screen and has low resolution.

10. There is no signal for incoming messages in the chat.

11. The brainstorming area needs a scroll bar, and the ideas start overlapping due to lack of space.

12. It is very difficult to use the edit function in brainstorming: the mouse pointer needs to point at the tiny bar and double-click on it.

13. They were unable to move topics.

14. Deleting a blog post did not work.

15. They were unable to delete agenda items.

16. It was difficult to find out how to edit text in RichTextEditor.

17. The system did not respond when writing text in RichTextEditor (it stopped typing).

18. When the users added bullets, they lost the previous colour formatting.

19. They disliked having to type a URL to add an image: "The system should allow us to put our own pictures."

20. The blog looks like Twitter, but there is no point in that, because blogs are supposed to be seen by everyone (not only the limited number of participants in the session). "The way it works now is like chatting."

21. "Things happen too fast (when the system works); I do not know what it is doing and why."

22. What they liked most was that they did not have to install anything, since the system is available online.

23. Their suggestions for the future: a brainstorming tree and private chat sessions.

  SUS - the System Usability Scale

The System Usability Scale was administered right after testing PowerMeeting and before the discussion took place [3]. Four experienced and two inexperienced users answered this questionnaire. The results were:

User                       SUS Score
Experienced users
User 2                     47.5
User 4 (session chair)     37.5
User 6                     35
User 7                     42.5
Inexperienced users
User 4                     55
User 9                     30

Table 3
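For reference, a SUS score like those in Table 3 is computed from the ten questionnaire items, each answered on a 1-5 scale. The sketch below implements the standard scoring rule; the response list is a made-up example, not one of the participants' actual answer sheets.

# Standard SUS scoring: odd items contribute (answer - 1), even items
# contribute (5 - answer); the sum is scaled by 2.5 to a 0-100 score.
def sus_score(responses):
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, answer in enumerate(responses, start=1):
        total += (answer - 1) if i % 2 == 1 else (5 - answer)
    return total * 2.5

# Made-up example answer sheet (1 = strongly disagree, 5 = strongly agree).
print(sus_score([3, 2, 4, 3, 3, 4, 2, 3, 3, 4]))  # -> 47.5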

  Nielsen’s Heuristics

This questionnaire was also answered by four experienced and two inexperienced users. The results were:

Heuristic                                   Experienced       Inexperienced
                                            Mean (St. dev.)   Mean (St. dev.)
Visibility of System Status                 2.33 (1.506)      5
Match between System and the Real World     2.83 (1.169)      4.5 (0.7071)
User Control and Freedom                    -                 1
Consistency and Standards                   2.67 (1.506)      3
Error Prevention                            2.67 (1.366)      1 (0*)
Recognition rather than Recall              3.33 (1.211)      4
Flexibility and Efficiency of Use           3.17 (1.722)      1 (0*)
Aesthetic and Minimalist Design             2.83 (1.602)      -
Help Users Recognize, Diagnose and
Recover from Errors                         2.67 (1.633)      1
Help and Documentation                      3.33 (1.633)      "NA"

Table 4

The zero standard deviations for the inexperienced users (*) are due to identical inputs (e.g. both answered "1"); in the other cases only one numeric answer was given (the other answer was "NA"), so no standard deviation could be computed.

DISCUSSION (QI ZHANG AND ALEX TSE)

(Qi Zhang)

When the users were performing each task, we monitored their time and actions and recorded the difficulties they came across. In order to gain an in-depth understanding of the results and to increase the credibility of the data interpretation, open-ended interviews were conducted to collect user reviews. Whereas the previous sections report "what" this study found, the aim of these interviews is to disclose "why". The advantage of collecting reviews from both experienced and inexperienced users with different knowledge backgrounds is that improvements can be developed not only from a technical perspective but also from the end-user's point of view. We summarise the core issues below.

Individual Tasks

Login

Figure 3 shows that 83.33% of users found login easy or very easy. However, users had to refresh the page to re-login, which is perceived as a serious pitfall. More specifically, if the user entered a wrong session name on the log-in page, an error message was displayed and the log-in button was disabled, preventing the user from re-logging into the session. Currently, in order to log in to the same session again, users have to click the refresh button on the web browser to reload the PowerMeeting log-in page. Also, the automatic session name example ("room 2") is viewed as unnecessary by some users, while one user found it annoying to have to change it every time, since the previous session name is not remembered.

Brainstorming

Overall, the experienced users perceived the brainstorming tasks as easy, whilst the inexperienced users confronted more difficulties. Among these tasks, deleting and modifying ideas were perceived as the most difficult by both groups. These two tasks aimed to test whether users know that they can edit ideas by clicking the dark bar at the top of each idea. Most experienced users took a little time to find the dark bar, but two out of seven did not know how to edit at all. In particular, one experienced user claimed that it was very difficult to double-click on the ideas to gain access to the edit and delete functions. On the other hand, three out of the nine inexperienced users, who had watched the tutorial video, needed additional time to recall the instructions and complete the task, while the remaining six did not know how to access this function. One possible reason for this difficulty is that the instruction "click the top bar for editing" is not obvious and the dark bar is too thin.

Meanwhile, an interesting issue that occurred during the usability test was that many inexperienced users tried to edit by right-clicking with the mouse. This illustrates that it would be more convenient and easier for users if the right-click function were supported. These two typical examples show the importance of improving accessibility in the user interface. In the meantime, two inexperienced users mentioned that they do not like the arrangement of the brainstorming items, since ideas appear randomly and sometimes overlap. As a result, users are not able to see all brainstorming ideas; a scroll bar could therefore be useful.

Rich Text Editor

The overall satisfaction with Rich Text Editor is above 60%, and experienced users tend to be more satisfied. Surprisingly enough, one out of seven experienced users did not know how to insert a picture, and most users found it really annoying that they had to type a URL; the system does not allow users to upload pictures from their own computers, which is seen as a drawback that needs to be addressed. Furthermore, one experienced user and four inexperienced ones mentioned that it was difficult for them to edit text in Rich Text Editor. Notably, all users needed time to manage changing the font colour, as well as the text highlight colour. This happened because there are no clear icons for changing colour, like the ones in the Microsoft Office suite. Instead, these functions are named "foreground" and "background", which is quite confusing according to the users. Additionally, all participants assumed that "background" means the background colour of the page rather than of the selected text. During this test, one inexperienced user also found that, when he added bullets, he lost the previous colour formatting.

Blog

In general, over 60% of users had a satisfied or moderate experience using the blog, and they all managed to complete this task successfully. Those who were not satisfied gave as reasons the lack of an option for deleting blogposts, as well as the confusing order of the posts. One inexperienced user mentioned in the interview that the blog looked like Twitter (but for limited viewers) and that the way it works now resembles chatting.

Agenda

The common problem users confronted was the difficulty of moving items in the list, as they (5 experienced users and all the inexperienced ones) did not find this function, which is hidden in the pull-down menu of the "Edit" bar. Most users first tried to move an item by dragging it to the bottom (where the recycling bin was); only then did two experienced users realise that they should use the "Edit" pull-down menu to move items. This could explain why users are not satisfied.

Collaborative/group task

Chat

The chat function in PowerMeeting is perceived as easy to use and fast, but it looks poor and simple, without further functions such as changing the colour or font size, or animations. Most inexperienced users would prefer to have a private chatting room. Also, they are not happy with the fact that they cannot change the size of the group chat window. Additionally, 50% of users suggest incorporating a sound alert or some other kind of notification when a new message is received. In comparison with a face-to-face meeting, PowerMeeting is losing ground, according to the users. However, this has various causes, some of which are not related to the functionality of PowerMeeting; for instance, some users believe that a physical meeting is more effective than any type of virtual meeting.

  Rich Text Editor

From the results shown above, 50% of users stated that losing all their work on the text when the session chair selected "In meeting" is a serious pitfall of the system. Thus, an automatic saving function should be built into the system to avoid this problem. Meanwhile, many inexperienced users suggested that the system should provide a small window which allows them to do their individual work even in meeting mode, outside the control of the session chair.

  System Usability Scale (SUS)

As far as the System Usability Scale is concerned, we can see that the scores vary considerably, from 35 to 47.5 for the experienced users and from 30 to 55 for the inexperienced users. However, this score depends on the effort and resources users needed to achieve their goals and on their degree of satisfaction.

  Heuristic Evaluation

The experienced users' opinions on each issue specified in the Nielsen's heuristics questionnaire show a small variance in the means, from 2.17 to 3.33, indicating a negative or moderate opinion of each aspect of PowerMeeting. Nevertheless, we noted significant standard deviations, considering the limited size of our sample and the limited variety of responses. The trend is different for the inexperienced users, whose scores range from the absolutely negative to the perfectly good opinion.