Introduction

The aim of this project is to explore different options for the design of a user interface for a digital television. Since a large portion of today's TV sets can also send data back to the service provider, we are going to incorporate this feature into the GUI (Graphical User Interface) as well. The remaining sets, such as those using Freeview or a simple satellite receiver, restrict the user to receiving and viewing content such as programmes and Teletext. With the ability to send data, however, we can incorporate many other features, such as paying for programmes, managing subscriptions, and contacting the provider. As this project involves simulation, we will be using Adobe Flash for the coding and design of the interface. Hence, for example, the arrow keys of the keyboard will play the role of the cursor keys on the TV remote control. We will also incorporate the mouse pointer, which suggests the possibility of a wireless optical mouse or an integrated wheel pad on the remote control.

Objectives:

  • Simulate a simple GUI
  • Design the interface
  • Use coding to add interactivity
  • Add special effects to attract the user
  • Build a means of communicating with the service provider
  • Provide payment facilities for extra services

Literature Review

Firstly, we have to examine the necessity of a user interface, be it for a personal computer, television, mobile phone or any other computerised device. Computers operate mainly on digital principles: at the lowest level, the machine understands only 0s and 1s, the binary digits. We can assume that "0" represents 0 volts and "1" represents 5 volts inside the computer. Voltage levels alone, however, are not enough for humans to communicate with and make use of the computer; there has to be a way for them to interact with it efficiently so that it fulfils their need for processing information.

This gives rise to the need for a proper user interface. Historically, we can divide the development of user interfaces into three eras:

  1. The batch interface, which appeared during the 1940s
  2. The command-line interface, which became popular in the early 1970s
  3. The graphical user interface (also known as the GUI), developed mainly during the early 1970s, with the Xerox Alto being the first computer to use it

It should be obvious that we will be focusing on the last of these, i.e. the GUI. It gained popularity when the computers developed by Apple appeared on the market for personal use during the mid 80s, before Microsoft revolutionised the PC market with the release of its Windows operating system.

After getting familiar with user interfaces, we initiate the concept of digital television.

The introduction of digital television distribution, whether by satellite, cable or terrestrial networks, can be expected to have two major impacts on the service providers. The first is that the more efficient use of spectrum increases the total capacity available for services. The second is that the transmission cost per service should fall. This presents a combination of opportunities and challenges to the service providers, and opens up the market place for more services and new types of service.

From the user's point of view, this increase in services presents a corresponding increase in the difficulty of choosing and selecting services to watch. Traditional paper TV guides will be unlikely to list all of the services, and will be unwieldy if they do. Therefore, the key role for a user interface for digital television services will be to assist in programme and service selection.

However, assistance in selection is not the only use for an interface. In a purely point-to-multipoint broadcast environment it is possible to create services that appear interactive to the viewer, where that interaction is local to the decoder and relies on support data transmitted with the service. A good user interface to such services will increase their attraction, and help them to compete against on-demand services.

Furthermore, with the introduction of digital video broadcasting, future services can be expected to develop in three principal areas:

  • Multiplexing of services into a single channel
  • Addition of data services, including services which are independent of any transmitted video programme.
  • Provision of a return path from the consumer to a network operator or service provider.

We can see the evolution of these services in the figure below:

Multiplexing of services. This provides the obvious quantitative increase in the number of services that can be delivered, but in addition it makes possible new types of service: multi-camera sport, multi-threaded drama, Near Video on Demand (NVoD).

Addition of Data services. These services include multi-lingual subtitles, electronic programme guides, ‘interactive teletext', and games downloading - all must be selectable and so have their impact on the user interface.

Provision of a Return Path. The return path makes possible services such as impulse Pay-Per-View (PPV), but can also be used to provide low-bandwidth on-demand services.

While there is little doubt that the majority of digital television services will follow the format of existing analogue services (linear schedules of television programmes), there is also little doubt that some services will take advantage of the transmission technology. In order to prototype a user interface, we need to make some assumptions as to what these services might be.

The figure below shows the four key operators involved in the delivery of programmes to the viewer: the programme provider, service provider, bouquet provider, and network operator:

In traditional television services, a viewer makes their choice at the service level (by changing channel). In digital television services, an EPG can allow selection to take place at any of the other levels, depending on the design of the guide.

Usability Issues

There are some key issues that must be considered in the design of the interface.

The users. Most of the users of tomorrow's technology will be very similar to today's users, and there are plenty of examples to demonstrate that users' difficulties do not disappear with the arrival of a new user interface technology on the market.

Choice of input device. The Remote Control Unit plays a key role in the usability of the interface. In existing systems, many keys are not used (e.g. colour and contrast), and key labels are not understood. Some basic design rules are: it should have as few keys as possible, include a pointing device (joystick or arrow keys), and have a key layout that is easy to remember.

Understanding of the system. Viewers need to be able to hold an accurate mental representation of the system in their minds. If this is difficult, then they are likely to have problems using the system (for example the problem of understanding that there is a tuner in both the VCR and the television).

Theory

Now we shall investigate the theory behind our proposed user interface. In our daily activities we are usually busy doing things in practice, and we may not pay enough attention to the theory behind them; hence the need to cover it here. To help us understand some of the important theory behind designing user interfaces, I am referencing the article below, which I read on the developer.com website and found very useful:

Interacting with a System

Let's begin with the basics. How machines and humans interact.

One of the simplest approaches to modelling interactive systems is to describe the stages of action users go through when faced with the task of using a system. We can identify roughly seven steps for a typical user interaction with a generic interactive system: forming the goal and the intention, specifying and executing the action, perceiving and interpreting the system state, and finally semantically evaluating the outcome of the interaction (see Figure 1). First, the user forms a conceptual intention from her/his goal (for example, a user wants to access a particular project in the repository accessed from the Web site in Figure 2). Second, s/he tries to adapt this intention to the commands provided by the system (in our trivial example, an initial exploration of the Web page is needed to figure out how to realise the intention) and from these (user-perceived) commands carries out the action (for example, the user types some information into the search text field, then hits the "search" button beside it). Then, the user attempts to understand the outcome of her/his actions (in our example, by examining the page obtained after the "search" button is pressed). This is particularly important for computer systems, where the inner workings are hidden and users have to figure out the internal state from only a few hints. The last three stages help the user to develop her/his idea of the system. The whole process is performed in cycles of action and evaluation: the user refines the mental model of the system by interpreting the outcomes of her/his actions.

Of course, our discussion was rather simplistic and immediate. Clearly, the issues behind human-computer interaction are hardly ever so plain and clear-cut. Nevertheless, our aim here is to provide an introduction to such important issues from an alternative viewpoint. Interested readers can get deeper into these topics by turning to the specialized literature. Some entry points are reported at the end of this article.

Users Are Not Designers, Nor Are Designers Users

As human beings, we can rely only on our current and past experience when interacting with the world around us; we also need semantic models, that is, we need to give meaning to whatever happens to us. That's why we often hear people talking about their experience with computers: they are familiar with the concepts of files, databases, mouse gestures, and so forth. But that doesn't mean that an end user will be able to read the designer's mind. What might seem an easy application to the design team might be awkward and difficult for the end user to employ. It has often happened that even developers could not cope with buggy applications whose internal model was cryptic. So, because the end user will have to figure out how the software works from only a few artificial hints, those hints must be as coherent as possible; the basic ideas, the visible items and their interactions, their names, and everything else should be thought out at design time.

When planning a UI, a designer should focus on the needs of end users. It often happens, instead, that designers are too busy with citations from other cool, award-winning products, which may result in a nightmarish implementation for the developer and a complete mystery for the end user. But when the UI is designed by a developer (as frequently happens in small firms, for lack of money), the scenario might be even bleaker: the developer-newly-turned-designer will cast his old programmer's mindset onto a less usable interface, simply because the developer is far too aware of how demanding a cool UI might be to implement. However, big companies and other organisations are spreading design guidelines written by their teams of professional designers, and these will eventually make their way into common software as well.

A Few More Concepts

In this section, we will briefly introduce some other interesting concepts drawn from the HCI (Human Computer Interaction) field.

Mismatch between user and system models. This is often described using the so-called "gulf" metaphors.

  • The gulf of execution is the mismatch between the user's intentions and the allowed actions (for example, in the well-known Web site shown in Figure 3, a novice user wants to access his/her previous books wish list but the feature is not accessible directly from that page).
  • The gulf of evaluation refers to the difference between the user's expectations and the system's representation (for example, the user clicked the "gold box" icon confusing it for the wish list icon).

Response time is an important parameter: a slow response is a cause of error and user frustration in using the application. This is particularly true for Web-based applications, where performance can be a serious bottleneck. Furthermore, response time affects users in different ways; expectation and past experience play an important part. If somebody is used to having a task completed in a given amount of time, both an excessive completion time and too short a time can confuse the user. In addition, personal attitudes should be taken into account. Short response times also help users explore the UI more easily wherever such behaviour is encouraged (by means of undoable actions, low error costs, and so on).

Short-term memory (STM) is a limited memory that acts as a buffer for volatile data and is used to process perceptual input. Empirical studies have found that human beings usually have an STM capacity of between five and nine items. Such items can be single objects or coherent chunks of information. The size of the non-atomic pieces of information that can be stored in STM depends on familiarity with the subject, but usually the information lasts no longer than 15-30 seconds. Try it yourself: it is easy to remember seven different random colours, but it is not easy to remember seven randomly-picked Spanish words (unless you have some familiarity with that language). STM is very volatile: distractions, external noise, or other tasks quickly disrupt its content. Imagine you found an interesting new book from a never-heard-of author using the Web site in Figure 3 or a similar service, and were then suddenly forced to leave the site and close the session. Even if you came back within five minutes, you would probably have problems remembering the exact book title. STM is commonly used to keep state in vocal interfaces: when you answer a vocal interface, selecting menus and options by voice or by pressing keys, you need to remember the operational context ("where" you are in the chain of menus and options).

Another kind of memory is the so-called long-term memory (LTM), which is more stable and has a much greater capacity, but slower access than STM. A major problem with LTM is the difficulty of the retrieval stage. We all use mnemonic aids to access LTM, such as mental associations to remember a personal code or password.

STM also affects the efficiency of operations. Operations that can be processed using only STM are easier and faster to perform than those that require LTM or some external cognitive help. Complex operations are made worse by the need to maintain the data context throughout the whole process, using working memory and STM.

STM is a valuable aid to a well-designed interface, but using it requires concentration, and generally people need a suitable environment to maximise their performance. They should feel at ease with the application, having a reassuringly predictable idea of how it works, without the fear of performing catastrophic operations, without feeling rushed by the system, and so on. Of course we cannot control the final physical environment in which the application will be used, but we can consider it in our design.

A designer should always try to design the user interface so that users work as much as possible with STM; in this way their memory load is lighter and the interaction is quicker and less error-prone. A UNIX command-line interface, by contrast, needs continuous access to LTM or some external "cognitive aid": it is not uncommon for UNIX novices to keep post-its or paper notes to remember commands and their syntax, or even whole sequences of commands for carrying out a certain task. With the advent of graphical user interfaces this situation has changed, and designers now have a powerful set of tools for building expressive, easier-to-use interfaces. Another means of avoiding a futile memory burden on users is to adopt a standard design; in this way, users can apply the knowledge acquired from other standard UIs to ours as well.

Control and automation is another important issue in user interface design. It is useful to automate some features, but doing so takes control away from users, and people become frustrated and nervous when they feel they do not have full control over the work they are doing. It is therefore important to give end users a sense of control.

In contrast, by definition a UI should provide a high-level, easy-to-use view of the services and data, hiding non-meaningful details such as the CPU's internal registers or the low-level physical state of the hard disk surface. A critical factor for a successful UI design is balancing automation and user control: showing meaningful details and hiding all the rest, and doing so adaptively, depending on the particular user. Even the same user, as s/he becomes confident with the application, may want to skip some automatic feature by taking full control of it. It is useful to assess the levels of control that could be exerted in a UI; this helps to make explicit in the design the layers of automation that could be provided (such as defining macros, providing wizards for the most common operations, and so on). Generally speaking, though, a computer program is an inherently limited artefact, in that it cannot take into account all possible situations, but only a limited set of combinations thought out in advance.

Consequently, balancing human control against automation is a typical trade-off of UI design. On the one hand, providing fully automated UIs could be too risky, especially when the task is a critical one (like managing a chemical plant), because many independent variables may cause unforeseen behaviour. On the other, allowing users too tight a control could be dangerous too: they could modify some sensitive data or use it in an unexpected way.

Some General Principles

There are a number of principles that should be kept in mind when designing a user interface. Just to mention a few:

  • Know your user. This is perhaps the single most cited guideline in user interface design. Yet it is sometimes hard to make assumptions about your user population. The interface in Figure 2 is aimed mainly at computer engineers and programmers, while the one in Figure 3 is devoted to a much wider audience (see the colours and terminology).
  • Minimise the load on users. This implies reducing the memory and cognitive load (as discussed previously), and providing informative feedback, memory aids, and other cognitive supports. It is also important to ensure that a work session can be easily interrupted for a few minutes without losing the work in progress (people are able to focus attention for a limited amount of time only). This should be taken into account when designing Web sites where pages have expiry dates and some information is not coded in the URL, making it impossible to step back to them later.
  • Preserve consistency. There are many consistencies to be preserved in a user interface: labeling, terminology, graphic conventions, components, layout, and so on. Many guidelines, principles, and even systematic software design approaches are oriented towards consistency. For example, look carefully at the interface in Figure 1: you will discover some language inconsistencies (an incomplete site localisation; in this case Italian and English words are mixed together).
  • Ensure overall flexibility, error recovery, and customisation. Flexibility is essential when dealing with people. Alas, human beings do err; providing mechanisms to reverse performed actions allows users to explore the UI, relieving them of the anxiety of being trapped in an unrecoverable mistake. Furthermore, the interface should be customisable by the user; for certain people (those with disabilities, for instance), this may be the only suitable way to use the application. Flexibility also consists in providing different usage mechanisms for different classes of users: novices can use wizards or other simplified but lengthy means for easy interaction, while expert users take advantage of some form of shortcut, all in the same UI. Generally, this is accomplished by providing two distinct interaction paths: one for experienced users and a simplified set of functions for inexperienced users.
  • Follow standards. There are many standards and guidelines for interactions, abbreviations, terminology, and so on. Standards are essential for cross-application consistency and effective implementation; they ensure professional quality while reducing the design effort.
  • Make the system's internal state explicit. We already discussed this important principle above. For example, provide warning messages when sensitive data is being directly manipulated, even by experienced users. This is the case with the Amazon Web site (refer to Figure 3), where the fact that no user is currently logged in is signalled by the text "Hello. Sign in to get personalized recommendations."

Conclusions

In this article, we briefly discussed some of the issues and basic concepts in the theory behind user interface design. We saw that UI design can be organised around some basic criteria, such as eliminating possible distractions in the UI, providing feedback to the user, and avoiding errors or making them easy to handle and recover from (to promote an exploratory mode of interaction with the user interface).

We saw that in user interface design an important role is played by the underlying conceptual model. Generally speaking, people act by means of conceptual, meaningful representations of reality. Such representations are given by their current and past experiences. Hence, there are different mental models of the same application, as seen by those that design it (UI designers), those that implement it (developers) and those that will use it (the end-users). It is important for designers to be aware of the different mental representations involved in the creation and consequent use of a user interface.

These and the other principles we mentioned are the basis of the user satisfaction, low error rates, and effective task performance that have inspired UI design guidelines and standards.

User interface design & analysis

A software development lifecycle generally includes a requirements stage, an analysis and design stage, a development stage, and testing.

The analysis stage means understanding the requirements and documenting them in a reasonable form so that we can create a design. The design is adapted from the analysis in a way that makes it suitable for implementation in the chosen technology. For example, we can have one analysis of a problem and different designs for a C implementation and a BASIC implementation. Hence, the design is usually influenced by the technology in use, which makes it difficult to re-implement across different technologies. We should therefore remember that analysis and design are separate from each other.

Now we shall focus more closely on user interface design, which pays close attention to the user's experience and interaction. The main purpose of this design is to make interactions as simple as possible, so that the user can accomplish the task he/she wants in the most efficient way. A user interface works well when it gets the job done without making the user pay too much attention to it; computer graphics can add to its usability here. The design should balance visual elements and technical functionality so that an operational system is created which is capable of adapting to different user needs.

There are different steps and procedures in designing the user interface:

  • Functionality requirements gathering: putting together a list of the functionalities the system needs in order to fulfil the goals of the project and its users' requirements.
  • User analysis: analysing the system's potential users by discussing their requirements with them. We can ask questions like:

What do the users want?

How can the system be compatible with the user's natural workflow?

How skilled is the user with computers, and what systems is he/she currently using or already used to?

What kind of design would the user prefer graphically?

  • Information architecture: Developing the process or information flow for the system.
  • Prototyping: This would be developing prototypes, as an example for the system to be based on in the future.
  • Usability testing: testing the prototypes on real users. We could use the think-aloud protocol, during which we ask users to tell us about their thoughts whilst using the system.
  • Graphic interface design: the look & feel design of the GUI (Graphical User Interface).

Experimental procedure

Here we arrive at the part where we experiment with different procedures. Firstly, as mentioned before, because this project was my first time using the Adobe Flash software, I had to experiment with its features. I started reading and browsing through different books to familiarise myself with the software, and then began to implement my planned vision of how I could simulate the user interface.

These days, Flash animation can be seen on many websites. You might also have noticed that it sometimes adds extra options to the menu you get by right-clicking the Flash animation, where you can change settings such as quality, zooming, and printing. For the purposes of my project, to keep it simple and minimal, I shall be disabling this menu. So, after I created my first Flash document, I went to the publish settings and disabled the right-click menu.
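The same effect can also be achieved in code rather than through the publish settings. As a sketch, ActionScript 3 can strip the built-in items from the right-click menu (the variable name customMenu is arbitrary; note that the Flash Player never allows the "Settings" and "About" entries to be removed):

```actionscript
import flash.ui.ContextMenu;

// Replace the default right-click menu with one that hides the
// built-in items (zoom, quality, print, and so on).
var customMenu:ContextMenu = new ContextMenu();
customMenu.hideBuiltInItems();
this.contextMenu = customMenu;
```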

The interface that I design should be compatible with a standard television screen. Although current technology is moving towards HDTV, it has not yet been fully adopted; not every digital channel, for example, broadcasts at full HD resolution. Hence, I shall implement my user interface menu at the 720 x 576 resolution used by standard PAL TVs. This means that on a normal PAL channel the interface will be displayed full screen, while on higher-resolution HD channels it will appear smaller, which is still sufficient to show the variety of options in the interface.

Now we arrive at the part where we create layers to properly organise the different parts of the interface. The first layer that I created was the file content layer, which stores the various multimedia assets such as button sounds, on/off sounds, and images.

The next layer that I created was the intro layer. You might have noticed that each time a television powers up, it usually takes a little time to load. This can take even longer if a DVR (Digital Video Recorder) is included in the TV, as the hard drive takes time to start up, which can be frustrating for consumers. This, I believe, gives rise to the need for a fast on-demand processor that seamlessly loads a screen, preferably with animation and, if the customer prefers, music, and that includes a message such as "Loading" to let the customer know that the television is starting up. So I created this intro layer, which includes that message and a loading animation to accompany the television while it loads.

For the loading bar, I used several loading dots to indicate progress. I divided them into two layers: bright loading dots and dark loading dots. We will also have a mask which grows to reveal the bright dots over the dark dots, indicating that the television is loading.

As with any other interface, a scripting language is used to give functionality to the interface loading, buttons, and so on. The scripting language that I used is called ActionScript 3, to which I also dedicated a special layer called AS3.

For the loading screen, I used a black background. On this background, I started creating the dark dots for the loader: I created one, then copied and pasted it to speed up the process. After creating enough dots, I aligned them to the centre of the stage and converted them to movie clips by right-clicking them and selecting "Convert to Symbol", as they will be involved in the moving load bar.

Next, I implemented the load bar mechanism, which reveals the bright dots over the dark dots by masking. I drew a mask movie clip on the mask layer and made it transparent so that it is invisible to the user. Its height and width are approximately the same as those of the dark dots, in order to reveal the bright dots appropriately. The rectangular mask was also converted to a symbol named "Mask load bar", and I chose "Bright dots mask" as its instance name. I chose these names solely so that they would make good sense to me while designing the interface.

For the loading text, I created a dynamic text field named "Loadpercentagetext", so that the ActionScript code can communicate with the percentage text and change it dynamically while the interface loads. Its colour was the same as the load bar's, to keep the user interface simple. Adobe Flash has an option called character embedding, which specifies which characters we want to embed with our text. For the percentage text, I embedded the uppercase and lowercase letters, numerals, and punctuation. This is important because the computer on which I simulate the user interface might not have my chosen font installed; embedding ensures that if the computer lacks the font, it will be included in the Flash file. This text was also centre-aligned, like the loading bar.

Our little mask movie clip will grow to 300 pixels wide, which is also the width of the loading bar, calculated according to the percentage of the file that has loaded for the television to start up properly. I then masked the mask layer over the bright dots layer. For the television brand, be it Sony, Samsung, etc., I used static text for now, but I might later use the real image of the brand.

Now we arrive at one of the most important parts of the user interface design: the coding. For the loading, I shall be using two functions, one called "load progress" and the other "load complete". There will also be two event listeners which monitor events to track the current state of the loading process: one for progress and one for completion.

In the first function, a variable named percent is declared, which stores the percentage of the loading process. Each time 1 percent loads, we multiply the percent value by 3 so that 3 of the load bar's 300 pixels are revealed, corresponding to 1 percent.

After the loading process is complete, we use the gotoAndStop() command so that the animation stops. Also, just to make sure it stops at the end of the loading process, I add another stop() command in an extra keyframe on the ActionScript layer, to be safe.
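Putting the pieces above together, the preloader frame script might look roughly like the following sketch. The instance names brightDotsMask and Loadpercentagetext are taken from the earlier description, while the frame label "intro" is an assumption for illustration:

```actionscript
import flash.events.Event;
import flash.events.ProgressEvent;

stop(); // hold on the loading frame until the file has fully arrived

function loadProgress(event:ProgressEvent):void {
    // percentage of the SWF loaded so far (0 - 100)
    var percent:Number = Math.round(event.bytesLoaded / event.bytesTotal * 100);
    // the full mask is 300 px wide, so each percent reveals 3 px
    brightDotsMask.width = percent * 3;
    Loadpercentagetext.text = percent + "%";
}

function loadComplete(event:Event):void {
    loaderInfo.removeEventListener(ProgressEvent.PROGRESS, loadProgress);
    gotoAndStop("intro"); // assumed label of the frame after the loader
}

// the two listeners that monitor the loading process
loaderInfo.addEventListener(ProgressEvent.PROGRESS, loadProgress);
loaderInfo.addEventListener(Event.COMPLETE, loadComplete);
```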

Now we arrive at the intro layer, where I drew a big rectangle and named it myIntro. Then I brought its timeline to the front by double-clicking the rectangle. This timeline represents each of the frames I worked on, which make up the animation.

For the sake of memory, bandwidth and loading speed, we shall keep the intro layer simple so that the loading times are minimised, while still making the intro attractive. I started by drawing a line from left to right, after which I added another line below it from right to left. I put the lines on separate layers named line 1 and line 2. Each line was converted to a symbol of the movie clip type so that it could be animated.

The two lines need different registration points, because the first line grows from the left side of the screen and the second line grows from the right side. So we set the first line's registration to the left and the second line's registration to the right.

I wanted both lines to animate until frame 8, so I added a keyframe on frame 8 of both line layers. Here, I used a handy feature of Adobe Flash: it can automatically animate a line growing from a single pixel to its full length, without drawing each frame by hand. I simply went to frame 1, shrank each line to one pixel, and left it at full size in frame 8.

Then, I made the animation happen by creating a “motion tween” between frame 1 and frame 8, which generates the in-between frames automatically.

The importance of saving my project cannot be emphasised enough. So I took the time to upload my current Flash files to Google Docs, to make sure I have at least two copies of my work, and I kept doing this every now and then so that my work stayed properly backed up.

Then, I added sample music as the theme for the television starting up; the track I used was the Windows 95 start-up theme. After that, I went back to the Flash file and added a new layer called sound. To add the sound file that I downloaded from the internet, I went to the File menu and selected “Import to Library”, which let me locate the file and import it into my project's library.

After this, a waveform graph appears in the sound layer's frames, which I extended to play until the 120th frame, the approximate duration of the sound file. I also decided to extend the lines animation to 25 frames so that it is nearer to the duration of the sound.

The two lines I have drawn so far are the horizontal sides of what will become a rectangle, so I drew two more, vertical lines, named line 3 and line 4. As lines 1 and 2 finish animating on frame 25, that frame is the starting frame for lines 3 and 4. They animate outwards from the ends of lines 1 and 2 to complete the rectangle: line 3 comes off line 1 and line 4 comes off line 2, animating until frame 45. So, just as I changed the width of the first two lines to animate them, I now change the height of the new lines, and then create a motion tween, as before, between frames 25 and 45 for them.

Here is a screenshot of how they would look:

However, I later cut off the parts of the lines that overshot the corners of the rectangle, for it to look like this:

Then I made another layer called “Welcometext”, above which goes the layer “Mylogotext”. In the welcometext layer, I drew a static text welcoming the user to the TV interface, and did the same for mylogotext. Both were converted to movie clip symbols so they could be animated later.

For the animation, I used the alpha effect included in Adobe Flash, which makes text fade in and out. At first my texts are completely invisible; then they gradually fade into view. As I said earlier, this makes the user interface more attractive, which is an important factor in user interface design. Nor should it be judged a waste of processor time, because during this animation the processor can be loading, for example, the TV start-up files, the EPG (electronic programme guide) and so on in the background.
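An alpha fade like the one described above can also be done in code rather than with timeline keyframes. A minimal sketch, assuming the instance name welcomeText_mc for the welcome text movie clip:

```actionscript
import fl.transitions.Tween;
import fl.transitions.easing.Strong;

// Fade the welcome text in from fully transparent (alpha 0) to fully
// visible (alpha 1) over two seconds. welcomeText_mc is an assumed
// instance name; the original uses timeline animation instead.
var fadeIn:Tween = new Tween(welcomeText_mc, "alpha", Strong.easeIn, 0, 1, 2, true);
```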

The intro part I created is accompanied by a new ActionScript layer. In this layer we put the code that instructs the Flash file to play the frames.

Now we arrive at the part where we make the main television menu. I made a separate layer for this menu, and another layer called “pagesclip” which gives the menu its animation. Then I created several buttons on the main menu layer, naming them Audio, Video, TV tuning, Input/Output, etc., for the variety of functions to be included. After highlighting all of them and converting them to a movie clip, I gave the clip the instance name “main_menu_mc”.

However, inside the movie clip, each button has a separate instance name of its own, so that the ActionScript layer can tell each button what to do when it is clicked. Having all the buttons inside one movie clip also helped me animate them nicely as I built the menu.

Before giving each button its instance name, I made sure to lock the ActionScript layer first. Then I named the first button “menu1_btn”. This naming strategy, together with the syntax help and code completion features of the Adobe Flash ActionScript editor, helped me get through the coding more quickly.

The rest of the buttons are named in the same manner as button 1. Then, the code I used was as below:

menu1_btn.addEventListener(MouseEvent.CLICK, button1Click);

menu2_btn.addEventListener(MouseEvent.CLICK, button2Click);

menu3_btn.addEventListener(MouseEvent.CLICK, button3Click);

menu4_btn.addEventListener(MouseEvent.CLICK, button4Click);

function button1Click(e:MouseEvent):void {

// This is where we put what the button should do when clicked

}

function button2Click(e:MouseEvent):void {

// This is where we put what the button should do when clicked

}

function button3Click(e:MouseEvent):void {

// This is where we put what the button should do when clicked

}

function button4Click(e:MouseEvent):void {

// This is where we put what the button should do when clicked

}

Now, to apply the tween effect to the menu, I used the code below:

import fl.transitions.*;

import fl.transitions.easing.*;

var moveTween:Tween = new Tween(main_menu_mc, "y", Elastic.easeOut, main_menu_mc.y, 60, 1, true);

Then, for each button there is a separate page. On each page I created a rectangle primitive to show the next set of options or the appropriate message. I remembered to convert all the elements of each page into a movie clip, naming them “page_1”, “page_2” and so on.

After this, I set each page symbol to Export for ActionScript, so that it can be created from code. I left the background colour of each page to be decided later, and gave the background the name “Page_container”, with the instance name “Page_container_mc”.

For the inner movie clip, I made the width 570 and the height 270 so that it fits the page properly. After this, I added the code below to the code above:

var p1:page_1 = new page_1();

var p2:page_2 = new page_2();

var p3:page_3 = new page_3();

var p4:page_4 = new page_4();

This code creates instances of the movie clips from the library so that they can be placed on the stage when needed, using the addChild() method. I would note that my buttons are usually referenced from the parent timeline. The effect I gave each page is that, when its button is clicked, the page gradually fades onto the screen; it can fade in from the outer part of the page or from the inner part.

I had to note, however, that before applying the effect to a new page, the current page must first be removed from the page container, and this should only be done once the page transition has successfully finished.
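The page-swapping idea above can be sketched as a small helper. This is a hypothetical simplification: showPage and currentPage are names I am introducing, and for brevity the old page is removed immediately rather than after a fade-out completes, as the original intends.

```actionscript
import fl.transitions.Tween;
import fl.transitions.easing.None;

// Track which page is currently shown inside the container
var currentPage:MovieClip = null;

function showPage(newPage:MovieClip):void {
    if (currentPage != null) {
        // Remove the old page from the container first
        Page_container_mc.removeChild(currentPage);
    }
    newPage.alpha = 0;               // start fully transparent
    Page_container_mc.addChild(newPage);
    currentPage = newPage;
    // Fade the new page in over half a second
    var fade:Tween = new Tween(newPage, "alpha", None.easeNone, 0, 1, 0.5, true);
}

// e.g. inside button1Click we could call: showPage(p1);
```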

To experiment with another way of creating my user interface, I tried a different approach:

After creating a new Flash document, I divided my project into four layers called Actions, Background, Menu and Pages, by clicking the “create new layer” button on the timeline. I then double-clicked each layer to give it its name.

Now, I needed to add a background image. I selected the background layer, went to File → Import → Import to Stage, and selected my desired image. To prevent accidental changes, I made sure to lock layers when I wasn't using them, by clicking the dot icon, which changes to a lock icon when clicked.

After that, it was time to create buttons for my menu. I selected the menu layer, then chose Window → Components. In the panel that appeared, I opened User Interface and double-clicked Button, repeating the process to add as many buttons as I wanted. Afterwards, I went to the properties of each button and changed its instance name to btn1, btn2, etc. Then I changed the label displayed on each button using the Component Inspector.

For the title of each menu page, I used the text tool. Then, to add more pages, I used the pages layer together with the rectangle tool, drawing rectangles with the appropriate height and width, and I used the Colour tab to adjust their colour and transparency.

For creating new pages, I selected the first frame in the timeline and copied it once for each page I needed. I then created the content for each menu page by selecting the frames one at a time and giving them unique labels and content.

Then, I went to the ActionScript layer and inserted the following code for the pages:

stop();

//This makes sure the timeline does not run on to the next page

function btn1_clicked(e:MouseEvent):void {

gotoAndStop("page_1");

}

function btn2_clicked(e:MouseEvent):void {

gotoAndStop("page_2");

}

function btn3_clicked(e:MouseEvent):void {

gotoAndStop("page_3");

}

function btn4_clicked(e:MouseEvent):void {

gotoAndStop("page_4");

}

//Below is the script which connects the buttons to their corresponding functions

btn1.addEventListener(MouseEvent.CLICK, btn1_clicked);

btn2.addEventListener(MouseEvent.CLICK, btn2_clicked);

btn3.addEventListener(MouseEvent.CLICK, btn3_clicked);

btn4.addEventListener(MouseEvent.CLICK, btn4_clicked);

Afterwards, I went to the Control menu and tested my code so far to make sure it was working properly.
