MODELING THE INTERACTION OF GROUPS OF INTERFACES DESIGNED FOR UNSIGHTED PEOPLE

The present study addresses the modeling of complex interfaces for visually impaired people. It considers parallel devices handling requests, where the requests correspond to particular tasks. The model presented allows the synchronization problems to be solved, and it could reduce the overhead of software realizations.


Introduction
The development of computer technology makes it possible to deliberately combine multiple devices into an integrated, unified system designed to provide or facilitate the use of certain services. In a narrower context, the present work treats some problems of the creation of complex interfaces and performing devices for people with reduced sight or who are totally unsighted. The development of mobile devices, and especially the new advances in embedded systems, delivers impressive performance, which is a prerequisite for the realization not only of interface devices but also of systems carrying out partial or full functional data processing. The discussion below is not confined to the realization of combined intelligent interactive interfaces, or complexes of them, designed for the unsighted. The aim is to set the conditions for modeling general-purpose systems and to study the stability of such systems, by themselves or within a generalized communication between them, irrespective of communication standards. At the same time, an adequate method for the description and study of computer systems is shown (an intelligent interface designed as an embedded system is, in practice, a computer system). This is based on the fact that the behaviour of computer systems is quite close to that of Markov processes with discrete states (so-called Markov chains); depending on the time scale, both discrete-time and continuous-time variants are used. At present there is a great variety of embedded devices, and they may be integrated to provide a set of functions. In the modeling, an approach investigating the stability of a system providing a single function (a simplified system) will be demonstrated, as well as one investigating a system providing multiple functions (an aggregated system). The material presented in the paper is based on studies of complex inter-communicating devices for visually impaired people. The case modeling shown in the following sections refers to sample configurations conforming to the conditions for the application of intelligent interface complexes based on embedded devices. In this respect, the work is distinguished by the semantics of the material presented.

Description of the Elements and Dynamics of a Markov System
A Markov system with queuing is characterized by incoming tasks distributed according to a Poisson distribution and exponential processing times [1]. The system is said to be in a stable state after the decay of the transient processes, when its efficiency no longer depends on time. The classical system of type M/M/1 refers to queued systems where the incoming tasks obey the Poisson distribution and are serviced by a server according to an exponential distribution of the servicing times. The incoming rate (frequency) is denoted as λ and the servicing rate (frequency) as µ. They do not depend on the number of tasks being serviced at the moment, i.e. they do not depend on the current state. Here, the other two parameters of the Kendall notation should be mentioned: infinite capacity of the system, and the queue discipline First In, First Out [1], [2], [3].
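As an illustrative aside, the stationary behaviour of such an M/M/1 system can be checked numerically. The rates λ = 2 and µ = 5 below are arbitrary sample values; the closed forms π_n = (1 − ρ)ρ^n and mean population ρ/(1 − ρ) are the standard M/M/1 results.

```python
# Stationary distribution of an M/M/1 queue: pi_n = (1 - rho) * rho**n
# with rho = lam / mu < 1; lam and mu are arbitrary sample rates.
def mm1_stationary(lam, mu, n_states=50):
    rho = lam / mu
    assert rho < 1.0, "system is unstable for rho >= 1"
    return [(1.0 - rho) * rho**n for n in range(n_states)]

pi = mm1_stationary(lam=2.0, mu=5.0)
mean_in_system = sum(n * p for n, p in enumerate(pi))
# closed form for comparison: rho / (1 - rho)
```

Truncating the state space at 50 states introduces only a negligible error here, since ρ^50 is vanishingly small for ρ = 0.4.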
First, we focus on the number of serviced tasks N(t) at moment t. The arrival of a task to the system can be regarded as a birth, while the departure of a serviced task is a death, within a birth-death process [4], [5], [6]. According to the Poisson distribution, there cannot be more than one incoming task in a period ∆t, and the exponential distribution of the servicing time allows only one task to leave the system in the period ∆t. Therefore, N(t) can be considered a birth-death process which can make transitions only to the neighboring states N(t) − 1 or N(t) + 1 in a time interval ∆t.
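A minimal discrete-time sketch of this birth-death behaviour, with arbitrary sample rates, could look as follows (at most one arrival or one departure per interval ∆t):

```python
import random

# One Delta-t step of the birth-death process N(t): according to the
# Poisson/exponential assumptions, at most one arrival (prob. lam*dt)
# or one departure (prob. mu*dt, only if N > 0) can occur per interval.
def step(n, lam, mu, dt):
    u = random.random()
    if u < lam * dt:
        return n + 1              # "birth": a task enters the system
    if n > 0 and u < (lam + mu) * dt:
        return n - 1              # "death": a serviced task leaves
    return n                      # no event in this interval

random.seed(0)
n = 0
for _ in range(100_000):          # sample path with arbitrary rates
    n = step(n, lam=2.0, mu=5.0, dt=0.01)
```

The interval ∆t must be small enough that (λ + µ)∆t ≤ 1, so that the branch probabilities remain valid.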
Mathematically formulated, the stochastic sequence {X_k, k ∈ T} is called a discrete-time Markov chain if the conditional probability is defined for each i and j as follows:

P[X_{k+1} = j | X_k = i, X_{k-1} = i_{k-1}, ..., X_0 = i_0] = P[X_{k+1} = j | X_k = i]   (2.1)

The interpretation of eq. (2.1) is that the probability distribution at step k+1 depends only on the current state at step k, but not on the way that state was reached. The whole history of the transitions in the system is summarized in the specification of the current state; in other words, the system has no memory. The conditional probability on the right-hand side, P[X_{k+1} = j | X_k = i], is the probability for the chain to make a transition from state i in time interval k to state j in time interval k + 1. This probability will be called the probability for a single transition. Generally, this probability depends on time; if it is invariant with time, the Markov chain is called time-homogeneous. For the modeling in the present work, mainly time-homogeneous Markov chains will be used. Shortly, the conditional probability can be rewritten as:

p_ij = P[X_{k+1} = j | X_k = i]   (2.2)

The set of states of the Markov chain in the paper will be denoted as {0, 1, 2, 3, ...}. The notation X_k = i means that the chain is in state i within time interval k. The probability for the chain to be in this state will be described by the expression:

π_i(k) = P[X_k = i]   (2.3)

The change of the Markov chain from one state to another will be called a transition. The dynamics of the chain is depicted graphically as a diagram of states and transitions [4], [5].
The system can either make a transition to another state or remain in the current one:

p_ij = P[X_{k+1} = j | X_k = i],  j ≠ i,  and  p_ii = P[X_{k+1} = i | X_k = i]   (2.4)

The conditional probability in equation (2.4) describes only the chain dynamics. For the full characterization of a Markov chain it is also necessary to define the initial state, that is, the initial probability distribution P[X_0 = i] of the chain. Starting from the initial state, it is possible to calculate the probability for the system to be in another state at some future moment [6], [7], [8]. This can be done using the total probability theorem:

π_j(k+1) = Σ_i π_i(k) p_ij   (2.5)

The considerations so far imply that matrix calculation is the more elegant way to formulate the dynamics of systems with Markov chains. To express the transition probabilities in a system with n possible states, a matrix P of size (n×n) is introduced. It is called the transition probability matrix, or transition matrix [9], [10]. It can be written as follows:

        | p_11  p_12  ...  p_1n |
    P = | p_21  p_22  ...  p_2n |   (2.6)
        | ...   ...   ...  ...  |
        | p_n1  p_n2  ...  p_nn |

The element p_ij is the transition probability appearing in equation (2.6). For the purposes of the micro-modeling, the current system is assumed to have a finite number of states n. The sum of the probabilities for a transition to another state and for remaining in the current state is unity; thus, the sum of the values in a row is:

Σ_{j=1}^{n} p_ij = 1

A matrix in which the sum of the values in every row is unity and all the values are non-negative is called a stochastic matrix. A Markov chain is fully defined (for a single transition) by such a matrix, together with the vector of the probabilities for the initial (current) state, which is called the initialization vector. The probability for the system to take one of the possible n states in time interval k can be expressed as a vector:

π(k) = (π_1(k), π_2(k), ..., π_n(k))   (2.7)

Using the matrix representation, the calculation of the probability for a certain state to be taken can be written as:

π(k+1) = π(k) P
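The stochastic-matrix condition and the one-step propagation π(k+1) = π(k)P can be sketched in a few lines; the 3-state matrix below is purely hypothetical:

```python
# Row-stochastic check and one-step propagation pi(k+1) = pi(k) P
# (the matrix form of the total probability relation in eq. (2.5)).
def is_stochastic(P, tol=1e-9):
    return all(min(row) >= 0 and abs(sum(row) - 1.0) < tol for row in P)

def step_distribution(pi, P):
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# Hypothetical 3-state chain, used only for illustration.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.8, 0.1],
     [0.2, 0.2, 0.6]]
pi0 = [1.0, 0.0, 0.0]             # initialization vector: start in state 0
pi1 = step_distribution(pi0, P)   # equals the first row of P
```

Starting from a pure state (a unit initialization vector) picks out the corresponding row of P after one step, which is a convenient sanity check.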
Substituting π(i) in the equations above, we obtain:

π(k+2) = π(k+1) P = π(k) P^2

where P^2 = P · P. Therefore, it can be written:

π(k+n) = π(k) P^n   (2.8)

or

p_ij^(m+n) = Σ_r p_ir^(m) p_rj^(n)   (2.9)

The last expression (2.9) is the well-known Chapman-Kolmogorov equation [10], [11].
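The Chapman-Kolmogorov relation can be verified numerically for any stochastic matrix; the 2-state matrix here is a hypothetical example:

```python
# Numerical check of the Chapman-Kolmogorov relation: the (m+n)-step
# transition matrix equals the product of the m-step and n-step matrices.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(P, m):
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(m):
        R = matmul(R, P)
    return R

# Hypothetical 2-state transition matrix.
P = [[0.9, 0.1],
     [0.4, 0.6]]
lhs = matpow(P, 5)                        # P^(2+3)
rhs = matmul(matpow(P, 2), matpow(P, 3))  # P^(2) P^(3)
```

The two results agree up to floating-point rounding, exactly as eq. (2.9) predicts.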

Markov Model of Real System Elements with Discrete States and Discrete Time
Let n servicing devices D_1, D_2, D_3, ..., D_n be given, designed for m types of tasks T_1, T_2, T_3, ..., T_m. Each task T_i is serviced by one or several servicing devices D_j, and the load on a device is expressed as an integer number of requests to it, w_ij. The device loads can be described by a table of all possible loads, where each row of the matrix corresponds to the distribution of the requests of one task among the servicing devices. It is interpreted as follows: the task T_i sends two requests to the first device, one each to the second and third, and none to the fourth device. The general representation of the tasks is shown in Table 3.1. The state of the servicing system at a certain moment will be described by a vector (a_1, a_2, a_3, ..., a_n), consisting of the numbers of requests to the servicing devices. Each of the different incoming task streams T_i is supposed to be described by a Poisson process with parameter λ_i over the time interval δt.
The servicing period of device D_j is supposed to be an exponentially distributed random process with parameter µ_j. It is assumed that the full formal description of the system is represented by the matrix in Fig. 3.2.

Fig. 3.2. Formal description of the system
It should be noted that:
• A is the state of the servicing system;
• T_i^{k_i} are the requests;
• p is the queue length; it can be zero;
• k_i is the type of the i-th request.
The first task in the queue, T_1^{k_1}, will be admitted by the servicing system D if a_i = 0 for all values of i for which w_{k_1,i} > 0. This condition is equivalent to the statement: if one or several servicing devices required for the fulfillment of the first task in the queue are busy, then the task must wait until they are available. The working sequence of the system is:
• a task is sent to the system, or not;
• the busy devices complete part of their requests;
• if possible, the servicing system D admits the first task from the queue.
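This admission rule can be sketched directly; the load table w below is hypothetical, mirroring the example given earlier (two requests to the first device, one each to the second and third, none to the fourth):

```python
# Sketch of the admission rule: the first queued task of type k may enter
# only if every device it requests (w[k][i] > 0) is idle (a[i] == 0).
def can_admit(k, a, w):
    return all(a[i] == 0 for i in range(len(a)) if w[k][i] > 0)

# Hypothetical load table w[task_type][device].
w = [[2, 1, 1, 0]]
ok = can_admit(0, a=[0, 0, 0, 5], w=w)       # device 3 busy but not needed
blocked = can_admit(0, a=[1, 0, 0, 0], w=w)  # device 0 busy, so must wait
```

Note that a busy device which the task does not request (w[k][i] = 0) does not block admission.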
The process described obeys the conditions for a Markov process, since each subsequent state depends only on the preceding one and not on the entire history of the process.
From the point of view of the modeled object, intelligent interface devices created on the basis of embedded systems, it will be useful to consider concrete cases related to their purpose.
The task from the queue (if there is any) is written on the left-hand side, and on the right-hand side are the requests to the two servicing devices which are currently being processed. The state 01|11 means that each of the two servicing devices is processing a request and a task T_1^1 waits in the queue to be processed.
In this state, a new task cannot be admitted due to the limitation posed on the number of tasks in the queue. Generally, a new task may be admitted only if the system is in a state 00|a, where a is the parametric representation of the busy state of the devices. From state 01|11 the system can either make a transition to state 01|01 or 01|10, or remain in state 01|11.
The first transition is made when the first device completes its request, and the second when the second device completes. The transition from 01|11 to 01|00 is impossible due to the Poisson nature of the process: the two devices cannot both complete within the same interval ∆t. The probability for the servicing system to permit the execution of a new task is ρ∆t. The probabilities for the two transitions described are µ_10∆t and µ_01∆t, respectively, and the remaining probability for the system to stay in the current state is s = 1 − µ_10∆t − µ_01∆t. The jobs in computer systems are formally described by this method. It makes it possible to model a batch processing system containing more than one device; that is, this formal description allows modeling a complex task serviced by more than one device. Similar to the example above, a transition matrix can be constructed; the matrix for two parallel devices is presented in Table 4.2. The arrival of a task, the completion of part of the requests by the busy devices, and the admission of a new task, if there is any and if possible, are all captured by the transition matrix.
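The outgoing probabilities of state 01|11, as just described, can be written down directly; the values µ_10 = 3, µ_01 = 2 and ∆t = 0.01 are arbitrary samples:

```python
# Outgoing transition probabilities of state 01|11 (both devices busy,
# one task queued): only a single device completion can occur in Delta-t,
# so the double completion 01|11 -> 01|00 has probability zero.
def row_from_01_11(mu10, mu01, dt):
    p_dev1_done = mu10 * dt               # -> state 01|01
    p_dev2_done = mu01 * dt               # -> state 01|10
    p_stay = 1.0 - p_dev1_done - p_dev2_done
    return {"01|01": p_dev1_done, "01|10": p_dev2_done,
            "01|11": p_stay, "01|00": 0.0}

row = row_from_01_11(mu10=3.0, mu01=2.0, dt=0.01)
```

The row sums to unity by construction, consistent with the stochastic-matrix property discussed above.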
Let S be a set of matrices, all of them obeying the condition that the sum of the elements of every row equals some value a.
Since the transition matrix shown in Table 4.2 is stochastic, all its powers will also be stochastic and, in particular, non-negative.
Proving these statements is beyond the scope of the present study. They were included only to keep the logical thread in the paper.
One feature of the transition matrix is that the sum of the members of each row is unity. This does not have to be proved: the transition matrix is stochastic and satisfies all the conditions described in the theoretical background above.
From an engineering point of view, these facts can be interpreted quite simply: the probability for a system to go to a certain defined state or to remain in the current one is unity. This interpretation is nothing other than the expression of the definition of probability over a full space of elementary events. The numerical studies of the system of discrete states described above were made with self-developed software. The algorithms used do not differ substantially from the theoretical background presented. The main reason to develop our own software is that it provides the possibility to control the error in the calculation of each state, which is not always available in commercial software. The discussion of this software is also beyond the scope of the present work, but it will be the subject of a future publication.
The plots shown in Figs. 4.1 and 4.2 were selected at random. Simulations were performed with a great variety of values of λ∆t, µ∆t and ρ∆t, and all possible test vectors were investigated.
At first, the plots show large changes in the transition probabilities, but after the seventh iteration the values stabilize. This indicates the stabilization of the system itself [11]. The simulation of the system thus demonstrates its stability.
For all the variants investigated, the stabilized values of the transition probabilities reached the same values with different test vectors. The simulations illustrated in the tables and figures were obtained with the same values of λ∆t, µ∆t and ρ∆t but with different test vectors used to study the system. It should be noted that similar patterns were observed with other values of these parameters.

Fig. 4.1. Plot of the probabilities for transition at each discrete moment with test vector (1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
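This convergence to the same values from different test vectors can be illustrated with a minimal power iteration; the 2×2 matrix below is a hypothetical stand-in, not the actual matrix of Table 4.2:

```python
# Power iteration pi <- pi P from two different test vectors; both runs
# stabilize to the same distribution, mirroring the behaviour reported
# for the simulations.
def iterate(pi, P, steps=200):
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

P = [[0.9, 0.1],
     [0.4, 0.6]]
a = iterate([1.0, 0.0], P)   # test vector 1
b = iterate([0.0, 1.0], P)   # test vector 2
```

For this sample matrix both runs converge to the stationary distribution (0.8, 0.2), independently of the starting vector.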

Table 4.2. Diagram of the states with two parallel devices