Automatically Calculate Animation Coverage

I propose a computer vision tool that records the user's screen to automatically detect which apps and websites have implemented animations and microinteractions, and which haven't.

The output of this tool would be an "animation coverage" index, likely shown as a percentage. For example: "60% of state changes in App_Name are animated, whereas the other 40% are abrupt and immediate."
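As a minimal sketch of how that index could be computed, suppose each detected state change has already been classified as animated or abrupt (the detection side is sketched later in this piece). The StateChange type and function name here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class StateChange:
    app: str        # app or site where the change was observed
    animated: bool  # True if the change tweened in, False if it was abrupt

def animation_coverage(changes: list[StateChange], app: str) -> float:
    """Percentage of an app's observed state changes that were animated."""
    observed = [c for c in changes if c.app == app]
    if not observed:
        return 0.0
    return 100.0 * sum(c.animated for c in observed) / len(observed)
```

A log in which 6 of 10 App_Name state changes were classified as animated would yield 60.0.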

The driving force behind this is my belief that animations can communicate what's actually happening to the information model the user is interacting with.

A sense of continuity and history is important if a user interface's metaphors are to reduce confusion. If everything just pops into existence out of nowhere, the user can be left confused about the underlying data model.

For example, when you click the color picker button in macOS, the picker appears immediately, without an animation. Worse yet, it sometimes appears far away from the clicked button, and sometimes even on a different display entirely.

One can easily imagine (many of us have been there ourselves) frustratedly clicking the color picker button again and again, not realizing that something is actually happening.

This could be solved by animating the color picker out from the button, which also makes sense from a storytelling perspective: clicking a button ought to cause something to emerge from that button.

Another example of animation coverage is macOS's window controls in the top-left corner of each window. The red "close" button doesn't animate; the minimize and full-screen buttons do. The close button could have a subtle "disappear" animation to help the user know the window wasn't hidden (cmd+h) but closed.

The purpose of this computer vision tool would be to monitor the screen for changes and measure whether each change was instantaneous or animated in from somewhere else.
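One plausible detection heuristic (a sketch under stated assumptions, not a finished design): capture the screen at a fixed frame rate, diff consecutive frames, and look at how the change is distributed over time. An abrupt change concentrates in one large spike on a single frame, while an animation spreads moderate diffs across a run of consecutive frames. Assuming captured frames arrive as same-sized grayscale numpy arrays, and with thresholds that would need real-world tuning:

```python
import numpy as np

def classify_change(frames: list[np.ndarray],
                    pixel_thresh: int = 25,
                    active_frac: float = 0.01) -> str:
    """Classify a burst of screen activity as 'animated' or 'abrupt'.

    frames: consecutive grayscale captures spanning the burst.
    A frame counts as 'active' if more than active_frac of its pixels
    changed by more than pixel_thresh since the previous frame.
    """
    active = 0
    for prev, curr in zip(frames, frames[1:]):
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        if (diff > pixel_thresh).mean() > active_frac:
            active += 1
    # A single active frame means the UI jumped straight to its new
    # state; several in a row means the change was tweened over time.
    return "animated" if active >= 3 else "abrupt"
```

At 60 fps, three active frames is only 50 ms of motion, so even very quick transitions would register as animated. A real tool would also need to track *where* a change originated, to catch cases like the color picker appearing far from its button.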

Perhaps user input could also be captured at each animation to rate its helpfulness: some animations are purely cosmetic, whereas others genuinely help build the cognitive model of the metaphor being engaged with.
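If that rating step were added, each detected transition could carry an optional user score alongside the automatic classification. A hypothetical record type:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransitionRecord:
    app: str
    animated: bool
    helpfulness: Optional[int] = None  # 1-5 user rating; None if unrated

    def rate(self, score: int) -> None:
        """Attach a rating (1 = purely cosmetic, 5 = genuinely clarifying)."""
        if not 1 <= score <= 5:
            raise ValueError("rating must be between 1 and 5")
        self.helpfulness = score
```

Averaging helpfulness per app would then distinguish apps that merely animate a lot from apps whose animations actually explain the data model.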
