There are, broadly, two categories of computer users: those who prefer the mouse, and those who prefer the keyboard. Most people use a mixture of both: typing text with the keyboard, obviously, while interacting with dialogs and working menus with the mouse. Frequently used keyboard shortcuts are memorized so the hands do not have to move constantly between keyboard and mouse.
And then there are those who have to rely on the keyboard instead of the mouse, because using the mouse is close to impossible for them: they cannot see what they are doing. They are blind, and the keyboard is the only reliable means of interacting with their computer.
You may wonder: How the heck do blind people use a computer if they cannot see the monitor?
The answer lies in a technology called screen reading. A special kind of software takes the screen contents such as documents, menus, dialogs, web pages, e-mail messages etc., and converts this data into synthesized speech and/or tactile Braille output.
Most of these products also offer a means of simulating mouse movement, clicking, and even dragging and dropping, but these modes rely on there being data to interact with. Working with software in this manner is generally very inefficient.
If a software package is not accessible, this poses a real problem. Requiring extensive use of toolbars, for example, without an alternative way of reaching the same option via the keyboard or a menu item, will dramatically hinder a blind user's efficiency, provided the toolbar icons are visible to the screen reader at all.
If a program is accessible, meaning it outputs its data in a way a screen reader can understand, its menus and dialogs can be reached with the keyboard and its controls can be operated by keyboard means, and this workaround is not necessary.
Much of the standard software found on computers running the Microsoft Windows operating system, such as internet browsers, e-mail clients, and word processors, is quite accessible today. Microsoft, Corel, the Mozilla Foundation, and other companies have come to recognize that accessibility is important not just for blind users, but beneficial to any computer user.
- How do BDS technologies fit into this picture?
Borland designed the Delphi Visual Component Library in a way that is highly compatible with the Microsoft Foundation Class controls used by Windows and the applications that ship with it. Most screen readers depend on an application's controls being organized in a hierarchical order: a dialog, for example, can have one or more child controls such as edits, buttons, and checkboxes. The screen reader then sends a standardized set of messages to any given window to ask about that window's state and other properties. For example, an edit control could be multi-line or read-only; a checkbox could be checked, unchecked, or even tri-state.
The VCL classes behave in much the same manner: They react to the same messages in the same way. So once a screen reader knows that it can send the same messages to a TEdit that it can to an MFC Edit, the result is virtually the same. There are, of course, exceptions, and these will also be mentioned below.
- Am I at a disadvantage if I use C++ instead of Delphi?
No. Using C++Builder, you have access to the same VCL components and can apply the same techniques that are covered for Delphi below.
- The VCL for Win32

The Visual Component Library for Win32 has been around the longest, so we will start with it. It was introduced with Delphi 1 as the VCL for Win16, but its general layout has remained consistent all the way into the Borland Developer Studio 2006 world.
As mentioned above, most screen readers use the hierarchy of windows within a given application to obtain information on the layout and relations between different windows. Remember: Every control is a window, even when it does not have a frame or title bar. It has—and this is the important part—a window handle that is registered with the Windows operating system, and which is the reference for the screen reader to interact with it.
When designing an application’s user interface, make sure that the following general rules of thumb are followed.
- Make sure all your controls are in the tab order

You can best test this by running your form, pressing TAB repeatedly, and watching where the keyboard focus lands. If a control is not in the tab order, it cannot easily be reached with the keyboard.
- Do not set TabStop to TRUE on purely visual controls
While it is important that every control the user can interact with be reachable via the tab order, it is equally important that focus never lands on controls the user cannot interact with. For example, some applications allow the keyboard focus to go to a TPanel control. These controls are for visual design only and do not expose any window name whatsoever. The focus goes there, but because there is no name to speak, a blind user of your software is left in total silence. And since the control cannot be interacted with, even a sighted person gets no visual indication of where the focus went and where the action is. So when testing your application, make sure you do not lead your customers into this kind of trap, blind or not.
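Both points can be sketched in a form's OnCreate handler; all control names here are hypothetical:

```delphi
procedure TSettingsForm.FormCreate(Sender: TObject);
begin
  // Interactive controls: give them a sensible, explicit tab order.
  NameEdit.TabOrder := 0;      // first stop when the form opens
  OKButton.TabOrder := 1;
  CancelButton.TabOrder := 2;

  // Purely visual containers: keep them out of the tab order so the
  // focus can never land on a control that has nothing to speak.
  BackgroundPanel.TabStop := False;
end;
```

In practice you would set these properties at design time in the Object Inspector; the code form simply makes the intent explicit.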
- Properly labeling your controls
Include TLabel and TStaticText controls in the tab order, immediately preceding the control they label. Most screen readers use the tab order to associate labels with edits or list boxes, so this placement greatly helps them make the connection. The difference between TLabel and TStaticText is that TStaticText registers itself with its own window handle, whereas TLabel does not; TStaticText is therefore the better choice if you really want to make sure your label is found. Recent tests with the TLabeledEdit control, however, have also shown quite accurate results, with all possible placements of the label.
Ideally, a label for an edit field should be put to the left of that edit field. This makes it easiest for screen readers to find it. Also, Braille display users benefit from this strategy because both the label and the entered text can be viewed at the same time.
- Provide mnemonics for your controls
Mnemonics are the underlined characters that you see in many Windows dialogs. Pressing Alt plus that underlined letter immediately sets focus to that control. The mnemonic is declared in a control’s caption. The letter to become the mnemonic is preceded with an ampersand symbol “&”. If you want to provide a mnemonic to a control that has no caption of its own, such as a TEdit control, give the mnemonic to its label’s caption, and in addition, set the label’s FocusControl property to the control the focus should go to. This applies to both TLabel and TStaticText.
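As a sketch, with hypothetical control names, the label/mnemonic wiring looks like this:

```delphi
// '&Name:' makes Alt+N the mnemonic; FocusControl routes the focus
// to the edit field, which has no caption of its own.
NameLabel.Caption := '&Name:';
NameLabel.FocusControl := NameEdit;

// TStaticText works the same way:
AddressText.Caption := '&Address:';
AddressText.FocusControl := AddressEdit;
```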
Make sure that mnemonics do not repeat within a form. Two controls in two separate forms may share a mnemonic without problems, and mnemonics on menu items may repeat across separate drop-down or sub-menus; they only clash when the same mnemonic is assigned to two options within the same menu level.
- Never leave the keyboard focus in Nowhere land
For example, if you provide an Apply button that becomes enabled as soon as a dialog option is changed, but is disabled again after it has been clicked, make sure the last action in its OnClick handler sets the focus back to a control that is active, ideally the first control the user lands on when opening the dialog. Not doing so usually forces the user to alt-tab away from your application and back, in the hope that the form will automatically focus the first focusable control, or, even worse, to use mouse emulation to click an active control and regain keyboard focus.
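A minimal sketch of such a handler; SaveSettings and the control names are hypothetical:

```delphi
procedure TOptionsForm.ApplyButtonClick(Sender: TObject);
begin
  SaveSettings;                  // commit the changed options
  ApplyButton.Enabled := False;  // nothing left to apply

  // Do not strand the keyboard focus on the now-disabled button:
  FontCombo.SetFocus;            // the dialog's first active control
end;
```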
- Make sure that every option can be reached with the keyboard
When designing your application, make sure that each option can be reached through either a keyboard shortcut or menu item. Giving your customers a toolbar is fine, as long as you provide a keyboard means to get to the same option. For example, Microsoft Word gives the user the choice to click on the “New” button on the standard toolbar, but it also provides the common shortcut Control+N to achieve the same result. Always ask yourself if the option you just implemented can be reached via the keyboard. If you are unsure, run your application, keep your hands away from the mouse, and try to use your app with the keyboard alone.
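With the VCL, the easiest way to guarantee this is a TAction that drives both the toolbar button and the menu item and carries the shortcut; the names below are hypothetical:

```delphi
// One action, three ways to reach it: toolbar, menu, and Ctrl+N.
NewAction.Caption := '&New';
NewAction.ShortCut := ShortCut(Ord('N'), [ssCtrl]);
NewToolButton.Action := NewAction;
FileNewMenuItem.Action := NewAction;
```

Because the shortcut lives on the action, every control linked to it shows and honors the same key combination automatically.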
- Group together related controls
Group radio buttons that belong to the same option within one group of radio buttons, but do not intermix radio button groups if possible. Make sure the Group Box is a real parent control/container component of the radio buttons it groups together. Both TGroupBox and TRadioGroup are good choices.
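In a sketch (hypothetical names), either variant gives the screen reader a proper parent/child relationship:

```delphi
// Variant 1: plain radio buttons parented by a TGroupBox.
PortraitRadio.Parent := OrientationGroupBox;
LandscapeRadio.Parent := OrientationGroupBox;

// Variant 2: a TRadioGroup creates and parents the buttons itself.
OrientationGroup.Caption := 'Orientation';
OrientationGroup.Items.Add('Portrait');
OrientationGroup.Items.Add('Landscape');
OrientationGroup.ItemIndex := 0;
```

At design time, variant 1 simply means dropping the radio buttons onto the group box, not merely placing them on top of it.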
- Avoid automated focus jumps
It is common to set focus to an edit or other control once a radio button choice has been made. For example, in a Print dialog, clicking the “Print selected pages” radio button may automatically move the focus to the edit field defining the starting page. The same usually happens when navigating with the keyboard: TAB lands on the group of radio buttons, with one button selected, and the Windows way of selecting another radio button is pressing the UP or DOWN ARROW. If a focus jump is tied to the OnClick handler, pressing DOWN ARROW fires two focus changes almost instantaneously: the OnClick of the newly focused radio button, followed immediately by the jump to the edit field. The second focus change is simply fired too fast for the screen reader to catch the first one, so the user is never told which radio button they just landed on. Instead of providing an automated focus jump, make sure the radio button keeps the focus, and let TAB take the user to the control matching the selection: in our example, the starting-page edit field when “Print selected pages” is chosen, or the “number of copies” field when “Print all” is selected.
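As a sketch (hypothetical names), an OnClick handler that enables the matching edit field, rather than focusing it, keeps the screen reader's announcement intact:

```delphi
procedure TPrintForm.PageRangeRadioClick(Sender: TObject);
begin
  // Do NOT call StartPageEdit.SetFocus here - the immediate second
  // focus change would swallow the announcement of the radio button.
  // Enabling the field instead lets TAB take the user there next.
  StartPageEdit.Enabled := SelectedPagesRadio.Checked;
end;
```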
There are a number of controls that do not have any counterpart on the Microsoft side, for example the TStringGrid or the ActionBand components. A few of them will be covered here.
- TMemo

TMemo is a multiline edit control that can display a large chunk of text. It is made to act like a standard Edit control, and screen readers have no problem interacting with it.
- TStringGrid

TStringGrid is a rather difficult control. The most difficult part is that its table-like structure currently cannot easily be conveyed to screen readers: there is no straightforward programmatic way to associate a cell's contents with its column header, for example. The best you can do is make sure the arrow keys can reach each cell, and that the text within the current cell is highlighted so screen readers can at least track the contents.
- Are the ActionBand components accessible?

The answer depends on which version of Delphi, C++Builder, or Borland Developer Studio you are using. If you are using BDS 2006, you are in good shape. The ActionBand components that ship with BDS 2006 include a technology called Microsoft Active Accessibility (MSAA). MSAA allows programmers to convey information to a screen reader through non-visual means. This can be as simple as the label for an edit field, the currently selected menu item, or a toolbar icon the keyboard focus can be placed on. A control supporting MSAA usually has properties such as Name, Value, State, Description, and Hotkey that a screen reader can use to get information about it, independent of the control's visual layout. The ActionBands were introduced in Delphi 5 and developed further over the years, but they were completely inaccessible to screen reader users until BDS 2006 was released.
If you are still working with an older version of Delphi, and you require both the ActionBands and accessibility, it is time to talk to your boss about an upgrade to BDS 2006.
- TSpinEdit, TSpeedButton, TBitBtn
These all behave nicely with screen readers, even though, with the exception of TSpinEdit, they do not quite have counterparts on the MFC side.
- Developing your own visual components
Developing your own visual components does not automatically mean your application will become inaccessible. On the contrary: If you inherit from a VCL component, you usually inherit its accessibility features as well. Only if you specifically take things away will you make it inaccessible in this case.
There are, however, third-party components that look like, and for the most part behave like, standard controls, but do not incorporate all the standard messages and events that a Windows control of the same look and feel would offer. An example is TVirtualStringTree, an advanced tree view control with lots of great features. It does have a few problems, though. For one, when you open or close a node, it does not raise the same events that a SysTreeView32 control would. Moreover, it does not convey its opened or closed state to the screen reader when asked. It also supports checkable tree items, but suffers from the same problem: when asked by a screen reader, it does not report their checked state.
If you develop components yourself, you may want to consider implementing MSAA, especially when it becomes apparent that you want to convey things to a user through icons or colors that are not easily conveyed through other visual means. Non-visual communication with an accessibility technology product may be the best choice. Even more so if your customer needs accessibility in your product.
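The usual entry point for MSAA in a custom Win32 control is the WM_GETOBJECT message: the screen reader sends it, and the control answers with its IAccessible implementation. A sketch, assuming an import unit that declares OBJID_CLIENT and LresultFromObject from oleacc.dll; the control and field names are hypothetical, and writing the IAccessible implementation itself is beyond this sketch:

```delphi
type
  TOwnerDrawnGauge = class(TCustomControl)
  private
    FAccessible: IAccessible; // your IAccessible implementation
    procedure WMGetObject(var Msg: TMessage); message WM_GETOBJECT;
  end;

procedure TOwnerDrawnGauge.WMGetObject(var Msg: TMessage);
begin
  if (Msg.LParam = Integer(OBJID_CLIENT)) and Assigned(FAccessible) then
    // Hand our IAccessible interface to the requesting screen reader.
    Msg.Result := LresultFromObject(IAccessible, Msg.WParam, FAccessible)
  else
    inherited;
end;
```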
- VCL for .NET

The rules that apply to the VCL for Win32 generally also apply to their VCL for .NET counterparts. Note that, because VCL for .NET has not been available for as long as VCL for Win32, screen reader vendors may not yet have all the class assignments for the VCL .NET components. If you are developing a VCL .NET application, either download a screen reader demo and test it yourself, or offer the customer a preview of your work so they can evaluate it and provide feedback. You may even be able to create a configuration that contains the re-classed windows and ship it with your application, so screen reader users get accessibility immediately upon installation.
- WinForms with either Delphi or C#
If you create a WinForms application with either Delphi or C#, you may already have come across properties such as AccessibleName, AccessibleRole, and AccessibleDescription in the Object Inspector. Microsoft developed the WinForms control set, and because MSAA also comes from Microsoft, they decided to build MSAA straight into their controls and give developers the means to manipulate what gets communicated to screen readers. Borland's WinForms support exposes these same properties, so WinForms applications written in Delphi or C# offer the same accessibility features as those developed in Visual C# or Visual Basic .NET.
Basically the same rules still apply; however, with the ability to give an edit an AccessibleName, the need for a label to be found on screen can be avoided. Screen readers usually use MSAA information when it is available, and only go searching for on-screen means of identifying a label for an edit or list box when it is not.
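A sketch in Delphi for .NET, with hypothetical control names and strings:

```delphi
// Give the edit its accessible name directly; no label lookup needed.
NameEdit.AccessibleName := 'Customer name';
NameEdit.AccessibleDescription :=
  'Full name as it appears on the invoice';
```

Again, both properties can just as well be set at design time in the Object Inspector.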
Usually, the AccessibleRole should not be changed; use it when developing your own WinForms components to tell the screen reader what type of control yours is.
The AccessibleDescription can be used to give more information. Microsoft uses it in the upcoming Office 2007 to provide a description for each and every Ribbon control. Screen readers can usually be made to take advantage of these, or to read the description by default along with the name and value if it exists.
- Things to be avoided at all costs
The only truly inaccessible thing you can do with certain versions of Delphi or C++Builder is to create CLX applications (those you can also compile with Kylix). To stay platform independent, CLX components must avoid Windows-specific mechanisms such as registering themselves with their own window handles. A CLX application has exactly one window that is visible to the Windows world; everything else happens inside that window without Windows ever being notified of it. Focus changes, menus opening and closing, selections, and so on all take place without Windows, and with it the screen reader, being able to take notice.
As you can see, if you develop applications using the VCL for Win32, the VCL for .NET, or WinForms, your applications are, for the most part, accessible out of the box. There are a few guidelines you should adhere to in order to make your application more accessible. The most important question to ask yourself: Can I do this without using the mouse?
All these techniques can be applied either at design-time of a new application, or when you want or need to make your application more accessible with the next release. Many things are automatic, but some things require a conscious decision on your part.
I hope that this article helps you in making some of these decisions in favor of customers who need screen readers to access the screen contents. Happy accessible programming!