## Design of a bit-serial multiplier

This paper describes the architecture of a bit-serial multiplier and its formal derivation. A 16-bit multiplier of this type has been implemented in the processor of the ISATEC Systola-1024 array processor board.

The following scheme describes the school method for multiplication of two `n`-bit numbers `x` and `y` and addition of a number `s` for `n` = 3. The generalization to arbitrary `n` is obvious.

```
                 s_2  s_1  s_0
                 y_2  y_1  y_0   · x_0
            y_2  y_1  y_0        · x_1
       y_2  y_1  y_0             · x_2
  ────────────────────────────
  r_5  r_4  r_3  r_2  r_1  r_0
```
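As a sanity check, the school method can be sketched in a few lines of Python (the function name and operand values are illustrative, not part of the original design):

```python
def school_multiply_add(x, y, s, n):
    """Compute r = x*y + s by the bit-level school method for n-bit operands."""
    r = s                          # start from the addend s
    for i in range(n):
        x_i = (x >> i) & 1         # i-th bit of x, LSB first
        if x_i:
            r += y << i            # add the partial product y · x_i, shifted by i
    return r

# n = 3: all operands are 3-bit, the result fits in 2n = 6 bits
assert school_multiply_add(5, 6, 3, n=3) == 5 * 6 + 3
```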

By introducing additional variables for the intermediate result bits, and additional subscripts to make all variables distinct, this scheme is extended to the following scheme:

```
s_05  s_04  s_03  s_02  s_01  s_00
x_05  x_04  x_03  x_02  x_01  x_00
y_05  y_04  y_03  y_02  y_01  y_00
c_05  c_04  c_03  c_02  c_01  c_00
──────────────────────────────────
s_15  s_14  s_13  s_12  s_11  s_10
x_15  x_14  x_13  x_12  x_11  x_10
y_15  y_14  y_13  y_12  y_11  y_10
c_15  c_14  c_13  c_12  c_11  c_10
──────────────────────────────────
s_25  s_24  s_23  s_22  s_21  s_20
x_25  x_24  x_23  x_22  x_21  x_20
y_25  y_24  y_23  y_22  y_21  y_20
c_25  c_24  c_23  c_22  c_21  c_20
──────────────────────────────────
s_35  s_34  s_33  s_32  s_31  s_30
```

The values of the new variables of the second scheme are defined in terms of the variables of the first scheme as:

```
s_{0j} = s_j   for j ∈ {0, …, n−1}
x_{i0} = x_i   for i ∈ {0, …, n−1}
y_{0j} = y_j   for j ∈ {0, …, n−1}
```

The intermediate result bits are defined by the following recurrence equations:

```
s_{i+1,j}   = (s_{ij} + x_{ij}·y_{ij} + c_{ij}) mod 2   for i ∈ {0, …, n−1}, j ∈ {0, …, 2n−1}
c_{i,j+1}   = (s_{ij} + x_{ij}·y_{ij} + c_{ij}) div 2   for i ∈ {0, …, n−1}, j ∈ {0, …, 2n−2}
y_{i+1,j+1} = y_{ij}                                    for i ∈ {0, …, n−2}, j ∈ {0, …, 2n−2}
x_{i,j+1}   = x_{ij}                                    for i ∈ {0, …, n−1}, j ∈ {0, …, 2n−2}
```

The result of the multiplication/addition is then:

```
s_{nj} = r_j   for j ∈ {0, …, 2n−1}
```
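The recurrence equations can be executed directly over the index space {(`i`,`j`)} to check that the `s`_{nj} indeed deliver the product bits. The following Python sketch mirrors the equations one-to-one (names and test values are illustrative):

```python
def multiply_add(x, y, s, n):
    """Evaluate the recurrence equations over the index space {(i, j)}."""
    N = 2 * n
    S = [[0] * N for _ in range(n + 1)]    # s_{ij}; unset entries stay 0
    X = [[0] * N for _ in range(n)]        # x_{ij}
    Y = [[0] * N for _ in range(n)]        # y_{ij}
    for j in range(n):                     # initial values from the first scheme
        S[0][j] = (s >> j) & 1
        Y[0][j] = (y >> j) & 1
    for i in range(n):
        X[i][0] = (x >> i) & 1
    for i in range(n):
        c = 0                              # c_{i0} = 0
        for j in range(N):
            t = S[i][j] + X[i][j] * Y[i][j] + c
            S[i + 1][j] = t % 2            # s_{i+1,j} = (...) mod 2
            c = t // 2                     # c_{i,j+1} = (...) div 2
            if j + 1 < N:
                X[i][j + 1] = X[i][j]          # x_{i,j+1} = x_{ij}
                if i + 1 < n:
                    Y[i + 1][j + 1] = Y[i][j]  # y_{i+1,j+1} = y_{ij}
    return sum(S[n][j] << j for j in range(N))  # r_j = s_{nj}

assert multiply_add(5, 6, 3, n=3) == 5 * 6 + 3
```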

The next step is the transformation of the above index space {(`i`,`j`)} into the index space {(`t`,`z`)} of time and a linear array of processing elements. This is done by the method of Moldovan and Fortes [MF 86].

The data dependency matrix `D` of the recurrences in the index space {(`i`,`j`)} is

```
         s  c  y  x
D =  i   1  0  1  0
     j   0  1  1  1
```

The data dependencies express the fact that, e.g., variable `s` computed at index (`i`,`j`) is next used at index (`i`+1,`j`), or variable `y` computed at index (`i`,`j`) is next used at index (`i`+1,`j`+1).

Now a linear transformation is chosen that transforms the index space {(`i`,`j`)} into the index space {(`t`,`z`)} of time and processors. The transformed data dependencies express the data flow in time and processor space.

With the linear transformation

```
         i  j
T =  t   1  1
     z   1  0
```

applied to the data dependency matrix we have

```
          s  c  y  x
TD =  t   1  1  2  1
      z   1  0  1  0
```

`TD` expresses for each variable computed at time/position (`t`, `z`) at which time and in which processor relative to (`t`, `z`) it will be used next:

Variable `s` is passed to the next processing element with delay 1; `c` is stored in each processing element for one time unit; `y` is passed to the next processing element with delay 2; `x` is stored in each processing element for one time unit.

The transformation of the index space {(`i`,`j`)} into the index space {(`t`,`z`)} of time and processors determines at which absolute time the input variables have to be at which processing element and at which absolute time the output arrives at which processing element.

For instance, `x`_{20} = `x`_{2} has to be at processing element `z` = 2 at time `t` = 2, since `T`·(2 0)^{T} = (2 2)^{T} (index pairs are assumed as column vectors). The result bit `s`_{34} = `r`_{4} has to be at processing element `z` = 3 (i.e. it leaves processing element 2) at time `t` = 7, since `T`·(3 4)^{T} = (7 3)^{T} etc. Observe that it is necessary to initialize the multiplier with 0's corresponding to the variables that have to be set to 0 in the multiplication scheme (e.g. `c`_{00} or `y`_{10}).
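The mapping of individual index points can be reproduced with the rows of `T` (a small illustrative helper, not part of the original derivation):

```python
def to_time_processor(i, j):
    """Apply T = [[1, 1], [1, 0]] to an index pair (i, j)."""
    return (i + j, i)   # (t, z) = (i + j, i)

assert to_time_processor(2, 0) == (2, 2)   # x_20 = x_2: processor 2 at time 2
assert to_time_processor(3, 4) == (7, 3)   # s_34 = r_4: leaves processor 2 at time 7
```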

Different transformations result in different time schedules and usage of interconnections of the processor array. The following conditions have to be satisfied for a valid transformation `T`:

(`ZD`)_{k} > 0 for all columns `k` and

`TD` = `VB`

The first condition incorporates the restriction of the systolic approach that data communication requires at least one time unit. Here, `Z` is the first row of `T` (the transformation in time).

The second condition reflects the restriction that data have to travel along the connections of the processor array. Here, `V` is the connectivity matrix of a linear array and `B` is a utilization matrix of `V`. `B` says which connections of `V` are used by which variables of `D`.

The connectivity matrix `V` of a linear array contains (possibly among others) the connections

- `p` : pass to the next processing element (z = 1) in one time unit (t = 1),
- `q` : pass to the next processing element (z = 1) in two time units (t = 2), and
- `h` : hold in the same processing element (z = 0) for one time unit (t = 1).

This results in the following connectivity matrix:

```
         p  q  h
V =  t   1  2  1
     z   1  1  0
```

With the following utilization matrix

```
          s  c  y  x
B =  p    1  0  0  0
     q    0  0  1  0
     h    0  1  0  1
```

we have

`TD` = `VB`

Matrix `B` says that `s` travels along connection `p`, `c` uses connection `h`, and so on.
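That `TD` = `VB` indeed holds for the chosen `B` can again be checked numerically (a sketch; matrices as given above):

```python
V = [[1, 2, 1],         # t-components of the connections p, q, h
     [1, 1, 0]]         # z-components
B = [[1, 0, 0, 0],      # s uses connection p
     [0, 0, 1, 0],      # y uses connection q
     [0, 1, 0, 1]]      # c and x use connection h

# 2x3 · 3x4 matrix product
VB = [[sum(V[r][k] * B[k][c] for k in range(3)) for c in range(4)]
      for r in range(2)]

assert VB == [[1, 1, 2, 1],   # equals TD, row t
              [1, 0, 1, 0]]   # equals TD, row z
```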

The resulting multiplication unit is a linear array of processing elements, shown in Figure 1 for an operand length of `n` = 3. Each processing element performs the computation determined by the recurrence equations.

The input variable `x` is held in the same processor in each time step, so it appears that this input has to be provided in a bit-parallel way. However, a timing analysis shows that `x`_{0} is first required at time `Z`(0 0)^{T} = 0, `x`_{1} is first required at time `Z`(1 0)^{T} = 1, and so on. Thus, input `x` can be shifted into the multiplier serially and latched at the appropriate times.

*Figure 1: Bit-serial multiplier for three-bit numbers*

For `n`-bit operands, the multiplier has an execution time of 3`n` cycles. It takes `n` cycles before the first result bit is produced at the output of the multiplier, and then another 2`n` cycles for output of the 2`n` result bits. However, successive multiplications can be pipelined in a way such that the input is provided while the last `n` result bits are being output. Thus, the execution time drops to 2`n`.

The bit-serial multiplier presented here is very simple. Its design is obtained in a straightforward way from the standard multiplication algorithm. For successive multiplications, its execution time is optimal since 2`n` cycles are necessary to produce 2`n` output bits bit-serially.

However, there are bit-serial multipliers that require only 2`n` cycles even for a single multiplication [Sips 82][SR 82].

[MF 86] D.I. Moldovan, J.A.B. Fortes: Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays. IEEE Transactions on Computers C-35, 1, 1-12 (1986)

[Sips 82] H.J. Sips: Comments on "An O(n) Parallel Multiplier with Bit-Sequential Input and Output". IEEE Transactions on Computers C-31, 4, 325-327 (1982)

[SR 82] N.R. Strader, V.T. Rhyne: A Canonical Bit-Sequential Multiplier. IEEE Transactions on Computers C-31, 8, 791-795 (1982)