Best Strategy for Designing and Implementing UI for Different Platforms?

How do you design and implement UI across devices of different sizes and input types in your games? In the past I’ve just designed for desktop and patched in changes to mobile/console based on input type as an afterthought, but it normally results in buggy/ugly behavior for non-desktop users.

For my next game I considered designing separate UIs for desktop, mobile, and console, and mounting whichever one was relevant based on the last input type. But that wouldn’t necessarily cover all cases, since I also need to tailor the UI to screen size, and input type isn’t necessarily tied to screen size.

What’s considered “best practice” for this kind of thing?

In bad situations this can turn into an m*n problem where you have m screen sizes and n input modes. So some kind of unification along one of those axes seems like a good idea.

Screen size seems to be more of a pain to deal with for the typical game, in my opinion, so I tend to tailor my interfaces to that. (For example, the relative size of elements such as text changes with different screen sizes/DPIs, which in turn affects the layout of a design.)

On the other hand, a single well-designed interface can handle multiple different input modes without much difficulty, in my experience. There may be times when you do need a different design per input mode, but this seems to be pretty rare, so I treat it as the exception rather than the rule.

Tailoring to screen sizes means you do not need to create a new UI for each input type. Typically the only things that visually change based on input type are the input guides (e.g. button icons) and highlights/selections (mostly for gamepad). Of course this means you do need to put a bit more effort into designing the single UI so that it works well with all of these.
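As a rough sketch of swapping those input guides, you can listen for the last input type and toggle the icons. This is a minimal Roblox example; the `IconGamepad`/`IconKeyboard` children and `someMenuFrame` are hypothetical names for illustration:

```lua
-- Minimal sketch: swap input-guide icons when the last input type changes.
-- IconGamepad/IconKeyboard are hypothetical children of the menu frame.
local UserInputService = game:GetService("UserInputService")

local function applyInputGuides(frame, lastInputType)
	local isGamepad = (lastInputType == Enum.UserInputType.Gamepad1)
	frame.IconGamepad.Visible = isGamepad
	frame.IconKeyboard.Visible = not isGamepad
end

-- someMenuFrame is a placeholder for whatever menu frame is on screen.
UserInputService.LastInputTypeChanged:Connect(function(lastInputType)
	applyInputGuides(someMenuFrame, lastInputType)
end)
```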

I think this is good for players as well, because it means they don’t need to deal with a different interface for each input mode of the same game (which they will encounter frequently, especially if they’re using different input modes on the same platform). You could say the same thing about screen sizes, but I disagree: designing the interface around the input method would produce more unnecessary variation in the design than designing it around the screen size.

I’m not sure there is an overall best practice; there will almost always be exceptions (we are talking about user interfaces, after all!). Separating by screen size is what seems to work best for me.

One method of supporting different input modes

The goal of showing this is to give an example of automating/systematizing something to help support different input modes, since having some method of managing the complexity is important. It can be something as simple as providing lifecycle management so you have a single place to bind/unbind things, or a bit more involved, like what I have.

I have been experimenting with a system recently that automatically manages GUI menus including gamepad selection management and ContextActionService bindings. The idea is to have only one menu considered active at a time, then update various aspects of that menu based on whether it is active or not.

  • Only the action bindings relevant to the active menu are actually enabled.
  • Any changes to the gamepad selection pass through a callback function that can choose whether a selected instance is valid in the active menu or not (and can do any other arbitrary work as well that would happen on a selection change). This is useful for keeping the janky default selection system under control.
  • Any change to the menu’s active state can trigger code that updates the menu’s visibility; similarly, any change in input mode triggers code that applies the correct appearance for that mode (showing/hiding icons on the active menu, etc.).
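To illustrate the first bullet: a menu’s action bindings can be wrapped so they are only registered with ContextActionService while that menu is active. A minimal sketch, where the `menu.Bindings` table shape is an assumption made up for this example:

```lua
local ContextActionService = game:GetService("ContextActionService")

-- Sketch: register a menu's bindings only while it is the active menu.
-- The menu.Bindings table shape is an assumption for illustration.
local function setBindingsEnabled(menu, enabled)
	for name, binding in pairs(menu.Bindings) do
		if enabled then
			ContextActionService:BindAction(name, binding.Callback, false, table.unpack(binding.KeyCodes))
		else
			ContextActionService:UnbindAction(name)
		end
	end
end
```

Calling `setBindingsEnabled(menu, true)` when a menu becomes active and `setBindingsEnabled(menu, false)` when it deactivates keeps only the relevant actions bound at any time.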

These features made reliable gamepad support almost trivial, which was very helpful. The system also handles keyboard bindings, and provides lifecycle management that is useful across the board.

This is a limited way to do things, but there are benefits, and it’s been working pretty well for me so far.

There are ways it can be extended to handle extra stuff like layering menus on top of each other. For me this has mostly been special cases - I have not implemented a generic way to do layering, but it can be done for specific menus if desired by choosing whether a menu is cosmetically visible or not based on what the active menu is and keys/values it may have (since in my implementation a menu is represented as a table that can be freely interacted with).

An example of the usage code:

local GuiGroups = require(Path.To.GuiGroups)
local InputMode = require(Path.To.InputMode)

local Frame = Path.To.Menu.Frame
local GamepadB = Frame.IconGamepadB

local Group = GuiGroups.Create("UniqueMenuName")
Group.SomeArbitraryValue = true

local function CloseFunction(UserInputState)
   if UserInputState == Enum.UserInputState.Begin then
      -- Close the menu and return to the default state where no menus are open.
   end
end
Group:AddBinding(CloseFunction, Enum.KeyCode.ButtonB, Enum.KeyCode.X)

Group:SetSelectionHandler(function(NewSelection, OldSelection)
   if NewSelection:IsDescendantOf(Frame) then
      return NewSelection
   elseif OldSelection.Parent then
      return OldSelection
   end
end)

local function UpdateVisibility()
   Frame.Visible = (GuiGroups.ActiveGroup == Group)
   GamepadB.Visible = InputMode.IsGamepad()
end

-- Example of special cased menu layering.
-- Of course only one of the GroupEntered or Entered+Exited signal callbacks should be used, not both; this is just for the example.
GuiGroups.GroupEntered:Connect(function(NewActiveGroup)
   Frame.Visible =
      (NewActiveGroup.GroupIAmArbitrarilyASubMenuOf == Group) or
      (NewActiveGroup == Group)
end)


I’ve gone the “create a different app for each type” route. To reduce the mountain of work that entails, I’ve made sure my UI is built from generic components that each work on all screens and inputs, then built apps that piece those together in ways that make sense for fluid UX on the given device.
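A minimal sketch of that component idea, assuming Roblox UI; `createActionButton` and its options table are illustrative names, not from an actual codebase. The component is generic, and each per-device app decides size and placement:

```lua
-- Sketch: one generic button component reused by per-device "apps".
-- createActionButton and the options table are illustrative names.
local function createActionButton(text, options)
	local button = Instance.new("TextButton")
	button.Text = text
	button.Size = options.Size
	button.AnchorPoint = options.AnchorPoint or Vector2.new(0, 0)
	button.Position = options.Position
	return button
end

-- Desktop app: compact button near the content.
local desktopReply = createActionButton("Reply", {
	Size = UDim2.fromOffset(80, 24),
	Position = UDim2.new(0, 8, 1, -32),
})

-- Mobile app: larger, right-justified button for easier thumb reach.
local mobileReply = createActionButton("Reply", {
	Size = UDim2.fromOffset(120, 44),
	AnchorPoint = Vector2.new(1, 1),
	Position = UDim2.new(1, -8, 1, -8),
})
```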

In these examples, you can see similar content being displayed with the same building blocks but pieced together differently, such as the action buttons on replies being right justified for easier thumb reach.