Tag : Intern Project Date : Sep 2016 - Dec 2016
Category : UX Design, Collaborative Tool Team : Yeshuang Zhu, Shichao Yue
Do you help polish papers for co-authors and friends?
The majority of English users are non-native speakers, and writing is their most demanded language task. Non-native speakers often ask co-authors or friends to help polish their papers using collaboration tools.
However, current tools are inconvenient for both version control and revision synthesis, so multiple non-native co-authors cannot efficiently share their language knowledge to compensate for individual deficiencies.
How can the wisdom of the masses exceed that of a master?
Competitor analysis and business value
Synchronous collaboration tools such as Microsoft Word and Google Docs are very popular.
However, they have been shown to suffer from several shortcomings. The most critical is that in synchronous documents, users refrain from contributing to other co-authors' text out of social considerations, so not all co-authors can adequately contribute to the editing process.
The underlines and strike-throughs…
They are difficult to understand, and users cannot propose a different version of text that has already been edited by others.
When co-authors edit multiple copies of the same document, we find this paradigm promising for improving language quality and acceptable to users: they generate diverse revisions, recognize better candidates, and mutually inspire each other. However, the interface adds too much cognitive load.
Literature review, user interview, insights
We analyzed previous work on tools for collaborative writing and editing, as well as work on summarizing and visualizing edits from multiple collaborators. We also reviewed recent work on crowdsourcing approaches to improving writing quality.
Tools for Collaborative Writing and Editing
These tools enhance collaborative writing with communication and information sharing functions including annotation, messaging, computer conferencing, and notification. However, they are inefficient for common tasks in collaborative language editing.
Edit Tracking and Summarization
In collaborative editing, appropriate ways to communicate about changes to documents are important for effective understanding and information sharing.
Crowdsourcing has been found capable of accomplishing complex tasks by decomposing them into micro, context-independent sub-tasks. However, these approaches focus on breaking writing into micro-tasks and recruiting public crowd workers to complete them, rather than on knowledge sharing among co-authors.
Contribution in Editing
To gain insight into users' problems and requirements for collaborative language editing, we conducted a pilot study with four Chinese graduate students. We mocked up an asynchronous, multi-versioned environment by presenting four copies of the same text in four separate Google Docs windows. After editing, we interviewed participants about their strategy and experience in this collaboration task and environment.
In this pilot study, participants interacted with collaborators' edits actively and intensively. For example, when a participant identified and marked an error, others would notice it and also try to fix it; and when a participant proposed a reasonable edit, others would follow by copying the edit into their own text. All participants emphasized the inadequacy of the edit mode for conveying the editor's intent.
Overall, participants felt that collaborative editing could improve the language quality. Specifically, they mentioned that edits by others enabled them to notice more errors and acquire more alternatives for correction.
Who? What? Why?
Non-native speakers need collaborative tools to polish their writing when producing professional articles and papers, because sharing language knowledge among non-native co-authors improves the language quality of the writing.
How should the interface be designed?
From the research above, we summarized three design insights for a collaborative editing interface for non-native speaker (NNS) authors:
An asynchronous, multi-versioned collaboration paradigm is promising for improving language quality and is acceptable to non-native writers. Users generate diverse revisions, recognize better candidates, and mutually inspire each other.
The efficiency of locating and grasping edits in different versions is key to users’ reception of collaborative editing. Hence, the visualization of edits should go beyond presenting the character-level histories as-is, by providing more relevant, semantically meaningful, and cumulative results.
For effective knowledge sharing, the interface should enable users to directly interact with co-authors' edits, including incorporating, commenting on, and voting for them.
Brainstorming, workflow, storyboard, sketch
Based on our research results and users' needs, we brainstormed a system design grounded in two observations. First, individual NNS authors have different levels and corpora of English knowledge, shaped by their unique learning trajectories, and thus generate language expressions of varying quality. Second, although NNS authors sometimes cannot recall appropriate expressions, they are able to compare different expressions and recognize good ones.
After discussion, we settled on the workflow and began developing a web-based system that allows an NNS author to post a draft to the server and have multiple co-authors edit it in separate, parallel versions. Individual co-authors generate their own expressions, which can then be directly incorporated by others or inspire new ones.
Using collaborative editing tools in a new way
Alice was working on her paper and felt tired of manually synthesizing revisions from co-authors
One day, she found a website that let her check co-authors' edits concisely and automatically generated a summary
She first uploaded a copy of her paper
Then she invited her four co-authors to proofread the paper on the website
Co-authors can see revisions made by previous editors and glance at the summary to quickly grasp others' work
Finally, Alice can make decisions on each revision with the help of co-authors' votes and comments
Finally, we identified several key features the system should have and quickly sketched a user interface to illustrate the idea.
#1. Asynchronous and Multi-Versioned Workflow
Cross-version sentence mapping for edit tracking; aligned sentences view
#2. Edit Summary View
Summarization of edits from multiple co-authors
#3. Aggregation and Visualization
A collaborative editing interface that enables co-authors to examine, comment on, and borrow edits of others; refined edit presentation
Design features, hi-fi prototype, implementation
Based on user feedback from the pilot study and our ideation, we propose CEPT, a collaborative editing tool aimed at improving the quality and efficiency of second-language (L2) writing by facilitating language knowledge sharing during the editing process.
On the left is the user's working copy. Each collaborator has the same view and can edit the text as in a local document; edits are tracked.
On the right is the aligned sentence view. CEPT aligns corresponding sentences across different co-authors' revisions.
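As an illustration, cross-version sentence alignment of this kind could be sketched as follows. This is a minimal sketch using character-level similarity, a naive sentence splitter, and an assumed 0.5 matching threshold; it is not CEPT's actual implementation:

```python
from difflib import SequenceMatcher

def split_sentences(text):
    # Naive splitting on ". " -- a real system would use an NLP tokenizer.
    return [s.strip() for s in text.split('. ') if s.strip()]

def align_sentences(original, revision, threshold=0.5):
    """Map each sentence in `original` to its closest counterpart in
    `revision`, using character-level similarity as the matching score.
    Sentences scoring below `threshold` are treated as deleted or rewritten."""
    rev = split_sentences(revision)
    used = set()
    pairs = []
    for sent in split_sentences(original):
        best, best_score = None, 0.0
        for j, cand in enumerate(rev):
            if j in used:
                continue
            score = SequenceMatcher(None, sent, cand).ratio()
            if score > best_score:
                best, best_score = j, score
        if best is not None and best_score >= threshold:
            pairs.append((sent, rev[best]))
            used.add(best)  # each revised sentence matches at most once
        else:
            pairs.append((sent, None))  # no good counterpart found
    return pairs
```

With sentences mapped this way, each co-author's revision of the same source sentence can be gathered into one aligned row for side-by-side comparison.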
The aligned sentences are listed in the following order:
First, a mirror of the current author's sentence.
Second, the automatically summarized sentence covering all co-authors' versions.
Third, the other sentences edited by co-authors whose edits are already merged into the summary. With these parallel versions aligned together, users can conveniently browse and compare different revisions at a glance.
Last, any sentences with cross-sentence edits that cannot be summarized into one sentence.
By default, all deleted words and the surrounding context sentences are hidden; buttons toggle their visibility.
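The automatic summary merges several co-authors' edits of one sentence into a single candidate. A minimal sketch of such merging, assuming word-level diffs and a greedy skip-on-conflict policy (the real system would additionally need the voting and commenting described above to resolve conflicts):

```python
from difflib import SequenceMatcher

def word_edits(base, revised):
    """Extract word-level edit operations (op, base_start, base_end,
    replacement words) turning `base` into `revised`."""
    b, r = base.split(), revised.split()
    sm = SequenceMatcher(None, b, r)
    return [(tag, i1, i2, r[j1:j2])
            for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != 'equal']

def merge_edits(base, revisions):
    """Greedily merge non-overlapping word-level edits from several
    co-authors' revisions into one summary sentence. Overlapping edits
    (conflicts) are skipped."""
    words = base.split()
    taken = [False] * (len(words) + 1)  # +1 allows insertion at the end
    edits = []
    for rev in revisions:
        for tag, i1, i2, repl in word_edits(base, rev):
            span = range(i1, max(i2, i1 + 1))
            if any(taken[i] for i in span):
                continue  # conflicts with an edit already accepted
            for i in span:
                taken[i] = True
            edits.append((i1, i2, repl))
    # Apply edits right-to-left so earlier indices stay valid.
    for i1, i2, repl in sorted(edits, reverse=True):
        words[i1:i2] = repl
    return ' '.join(words)
```

For example, merging one co-author's verb correction with another co-author's inserted article yields a single summary sentence containing both edits.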
We combined A/B testing with a usability test to evaluate our prototype. The A/B test let us compare CEPT's effectiveness for improving editing quality in collaborative proofreading against a traditional interface without any aggregation, visualization, or interaction support for language editing. The usability test helped us identify specific problems in our design and prototype.
In test A, participants used a baseline interface, similar to CEPT but without the novel features of sentence mapping, edit aggregation, and interaction.
In test B, participants used CEPT to proofread the same paragraphs. We measured participants' time spent, improvement in editing quality, interaction patterns, and subjective feedback.
Participants' free-form feedback showed that the baseline interface felt more like an individual editing process. Eleven of the 12 participants said that with the baseline interface, they read through the text first and tried to edit it by themselves. With CEPT, users behaved differently: almost all participants (11 of 12) quickly browsed collaborators' revisions first.
In particular, half of them mentioned that they browsed the summary view first. Overall, CEPT users interacted with and pondered collaborators' edits more, and they were more willing to accept them thanks to the one-click borrowing function.