

how changing a form sequence improved performance by 55%
Through an iterative design and testing process, we found that adjusting the sequence of steps in a Wikipedia payments form had an enormous impact.

solution
Seemingly simple, but complex at scale and across countries: we tried a monthly ask at the very end of our payments form. In our initial test, we saw a 55% increase in total revenue from recurring donations and a 74% increase in the total number of recurring donations.
challenge
Our team noticed that a sizable number of donors to Wikipedia asked for ways to “contribute regularly” and not just once per year, even though the form already offered that option.
We wondered: Were donors not seeing the option to contribute monthly? Was it not in a format they preferred?

overview
Tools
- HTML, CSS, JavaScript
- MediaWiki
- Thumbtack A/B testing tool
- Figma, Adobe

Team
- 1 designer
- 2-3 developers
- My role: Sr. Manager (design, research, analytics)

Timeline
- Overall: 8+ weeks
- Discovery, research: 2+ weeks
- Design & testing: 6+ weeks

existing monthly ask
Below is a visual of the existing form in Sweden, where we decided to run initial tests.
Why Sweden? Because donors there behaved similarly to those in the U.S., and the payment methods were nearly identical. Also: Wikipedia traffic in Sweden is lower than in the U.S., so the impact would be smaller, making it a safer space for testing.


past sequential testing
We had previously tried and tested different form sequences, and there was, in fact, an optimal way of ordering the form based on performance metrics paired with design best practices.
Sometimes we'd try unusual experiments, but a healthy rule of thumb for our team was: if it isn't harming the user experience and it seems to make sense, we'd try it whenever there was an opportunity to improve metrics.
Here are some past examples of sequential testing flows ...

[Sequence diagrams: control, test 1, test 2]

... and what they might look like, in the forms:



[Form screenshots: control, test 1, test 2]
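To give a sense of how a sequence test like these might be set up on the client side, here is a minimal sketch of deterministic variant bucketing. The variant names and hashing approach are illustrative assumptions, not our production A/B testing code:

```javascript
// A minimal sketch of bucketing visitors into form-sequence variants.
// Variant names and the hash are illustrative, not the production setup.
const VARIANTS = ['control', 'test1', 'test2'];

function assignVariant(visitorId) {
  // Deterministic hashing, so a returning visitor keeps the same sequence.
  let hash = 0;
  for (const ch of String(visitorId)) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return VARIANTS[hash % VARIANTS.length];
}

console.log(assignVariant('visitor-123')); // e.g. "test1"
```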

sketches & lo-fi design
The idea of a monthly ask at the very end of this form came to us from design partners who'd seen some success with this and suggested we try it at scale.
It seemed counterintuitive at first; wouldn't a user want the frequency option at the start of the form, when they're deciding the amount? Since we were lucky to work with large data sets and the space to test unusual ideas, we figured we’d give it a try.
We moved on to brainstorming:
- Where would this live in the flow?
- What would this ask look like, and what would it say?
- Since it meant adding a decision for users, were there other decisions we could remove?






current and proposed test sequence
[Flow diagrams: control, test sequence, subsequent test]
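One way to picture the proposed change is as two step orders, with the monthly ask appended at the very end of the test sequence. The step names below are hypothetical placeholders, not the actual form fields:

```javascript
// Hypothetical step names, for illustration only.
const sequences = {
  // Control: frequency (one-time vs. monthly) is chosen up front, alongside the amount.
  control: ['amountAndFrequency', 'paymentMethod', 'donorDetails', 'payment'],
  // Test sequence: no frequency choice up front; the monthly ask comes at the very end.
  test: ['amount', 'paymentMethod', 'donorDetails', 'payment', 'monthlyAsk'],
};

console.log(sequences.test[sequences.test.length - 1]); // "monthlyAsk"
```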



moving toward hi-fi design


user flow
It was important to turn our complex, country-specific payments flow into an easier-to-read visual, so I documented the existing user flow, which reflected a complicated legacy architecture and the payment methods we offered in Sweden, where we planned to test first.
The slightest change to it could have meant unexpected problems or bugs, and more complexity meant a higher risk that things could break at scale.
Also: one of our goals was not to disrupt the user experience, so we needed to visualize the number of decisions we were asking users to make.


I also mapped out the proposed changes.




prototype
A full prototype is available in Figma.


results
The results from the initial test were a runaway success on the quantitative side. But we gained some interesting feedback on the qualitative side, and new questions surfaced.



iterative tests
The feedback from the initial test informed some of our iterative testing. After running dozens of tests, we'd discuss which iterations to adopt and why. Here are some examples (a small sketch of the logic follows the list):
- Test #1: No frequency option in Step 1. Results: Wash
- Test #2: The ask doesn't appear when the initial amount is below a certain threshold. Results: Wash
- Test #3: Add dynamic asks based on the initial amount selection. Results: Wash
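To make those iterations concrete, here is a minimal sketch of the kind of gating and dynamic-ask logic behind Tests #2 and #3. The threshold and multiplier are invented for illustration, not the values we actually tested:

```javascript
// Illustrative values only; the real threshold and suggested amounts differed.
const MONTHLY_ASK_THRESHOLD = 50; // below this one-time amount, skip the ask (Test #2)

function buildMonthlyAsk(initialAmount) {
  if (initialAmount < MONTHLY_ASK_THRESHOLD) {
    return null; // no end-of-form ask for smaller one-time gifts
  }
  // Dynamic ask derived from the one-time amount (Test #3).
  const suggestedAmount = Math.max(5, Math.round(initialAmount * 0.25));
  return { suggestedAmount };
}

console.log(buildMonthlyAsk(30));  // null
console.log(buildMonthlyAsk(120)); // { suggestedAmount: 30 }
```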



other improvements
The feedback also inspired us to adjust other features in the payment path to provide a better experience for these users, particularly so that they didn't feel bombarded by other messages:






localization
Since we operated across dozens of countries and languages, we wanted to understand where and when this adjustment to the form should be tested, and possibly rolled out.
Working closely with our localization specialists and following their guidance, we repeated the process outlined earlier for over a dozen countries. It was important to first document the user flow for each country, since payment methods and processors might differ, and because of the cultural and linguistic implications of rolling out a new feature.
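One way to think about that work is as a per-country configuration layer over the same form logic. The entries below are hypothetical examples, not the actual payment methods, copy keys, or rollout decisions:

```javascript
// Hypothetical per-country configuration; real payment methods, currencies,
// and copy came from our localization specialists and payment processors.
const countryConfig = {
  SE: {
    currency: 'SEK',
    paymentMethods: ['card', 'paypal'],  // placeholder list
    monthlyAskEnabled: true,
    monthlyAskCopyKey: 'monthly-ask-sv',
  },
  JP: {
    currency: 'JPY',
    paymentMethods: ['card'],            // placeholder list
    monthlyAskEnabled: false,            // pending per-country testing
    monthlyAskCopyKey: 'monthly-ask-ja',
  },
};

function shouldShowMonthlyAsk(countryCode) {
  const config = countryConfig[countryCode];
  return Boolean(config && config.monthlyAskEnabled);
}

console.log(shouldShowMonthlyAsk('SE')); // true
console.log(shouldShowMonthlyAsk('JP')); // false
```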


next steps
We went on to learn more! The work is never done.
As a team, we strove to continuously reconcile performance with a good user experience, and also to localize as appropriate, based on donor feedback across countries and languages.
Thank you for reading!
