YM2612 Timers emulation

For anything related to sound (YM2612, PSG, Z80, PCM...)

Moderator: BigEvilCorporation

Eke
Very interested
Posts: 884
Joined: Wed Feb 28, 2007 2:57 pm
Contact:

YM2612 Timers emulation

Post by Eke » Tue Aug 21, 2007 4:43 pm

I have a couple of questions about the FM timers and the most accurate way to emulate them.

First, the official documentation gives the FM timer periods as:
TIMERA_period = 18 x (1024 - TIMER) microseconds
TIMERB_period = 18 x 16 x (256 - TIMER) microseconds
where TIMER is the 10-bit (for Timer A) or 8-bit (for Timer B) timer base register value.

I believe (tell me if I'm wrong) that the "18" is an approximation in microseconds, and that the exact value is given by:
  • 1000000/(VCLK/144) = 18.77 microseconds for NTSC
I also imagine there is some kind of counter register which is periodically (with the above period) incremented by one, and that the timer overflows when it reaches the timer base register value. Or maybe it is the content of the timer base register itself which is directly incremented, with the overflow flag set when it rolls over to 0?
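If that guess is about right, the counting scheme could be sketched like this (a minimal, hypothetical C sketch, not actual YM2612 internals; the names are made up). Loading the counter with the base value and counting up to the 10-bit rollover gives exactly (1024 - TIMER) ticks per period, which matches the documented formula:

```c
typedef struct {
    int base;     /* 10-bit Timer A base register value (0..1023) */
    int counter;  /* internal up-counter */
    int overflow; /* status flag, set on rollover */
} timer_a_t;

/* one tick every ~18.77 us on NTSC (144 cycles of the FM input clock) */
static void timer_a_tick(timer_a_t *t)
{
    if (++t->counter >= 1024) {  /* 10-bit rollover */
        t->overflow = 1;
        t->counter = t->base;    /* reload with the base value */
    }
}
```

With base = 1022, for example, the overflow flag would be set after exactly 1024 - 1022 = 2 ticks.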



Also, I am wondering about the best way to emulate those timers with regard to synchronization. From what I know, there seem to be two common approaches:

1/ The first one, used in Genesis Plus, synchronizes the FM timers with CPU execution time: it uses timer values in microseconds, and the counters are incremented by 64 microseconds at each scanline.
The timer overflows when the counter exceeds the programmed timer period.

2/ The second one, used in Gens and other emulators I think, synchronizes the FM timers with the audio sample rate: the timer counter is also incremented at each scanline, but the increment value is interpolated according to the number of samples expected for the scanline (approx. SOUNDRATE/60/262 for NTSC timings). The increment value is then given by Nsamples x (VCLK/144/SOUNDRATE) x 4096, and Timer A, for example, overflows when the counter exceeds (1024 - TIMERA) x 4096 (I imagine this scale factor is applied to reduce rounding errors).
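For what it's worth, the two increments can be compared numerically (a quick C sketch with assumed NTSC values; the helper names are mine, not taken from any emulator source):

```c
/* method 1 adds a rounded scanline duration in microseconds (Genesis Plus
   uses 64; the exact NTSC value is 1000000/60/262 = ~63.6 us) */

/* method 2: samples expected per scanline, rounded down */
static int samples_per_line(double soundrate, double fps, double lines)
{
    return (int)(soundrate / fps / lines);
}

/* method 2: Gens-style per-scanline timer increment, scaled by 4096 */
static double timer_increment(int nsamples, double vclk, double soundrate)
{
    return nsamples * (vclk / 144.0 / soundrate) * 4096.0;
}
```

At 48 kHz with NTSC timings, samples_per_line(48000, 60, 262) gives 3, so roughly 3 x 1.11 x 4096 = ~13636 is added per scanline, against an overflow threshold of (1024 - TIMERA) x 4096.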

I am not sure I understand the advantages/drawbacks of each method; could someone with better knowledge of sound interpolation/sampling explain the difference?


Another thing: I recently added cycle-accurate sample generation to the Genesis Plus NGC port (which means that at each FM write, we look at the number of CPU cycles executed so far to know the exact number of samples that need to be rendered before processing the write).
I would also like to add cycle-accurate FM timer emulation by updating the counters (and detecting timer overflows) before each FM status read, according to the CPU cycles executed so far.

My question, which I believe is related to the one above, is the following:
would it be correct to increment the counter by one every 144 CPU cycles (18.77 microseconds)? Or each time a new sample is rendered (approx. every 160 CPU cycles at 48 kHz with NTSC timings)? In both cases, the timer would overflow when the counter exceeds the timer base register value (not the microsecond value).
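As a concrete illustration of the first option (one tick per 144 CPU cycles), the status-read update might look like this — purely a sketch with made-up names, carrying the leftover cycles between calls so no ticks are lost:

```c
typedef struct {
    unsigned int residual;  /* leftover CPU cycles (< 144) from last update */
    int counter;            /* Timer A up-counter */
    int base;               /* programmed 10-bit base value */
    int overflow;           /* overflow/status flag */
} fm_timer_state_t;

/* called before each FM status read, with the CPU cycles elapsed since
   the previous update */
static void fm_timer_update(fm_timer_state_t *t, unsigned int cycles)
{
    unsigned int total = t->residual + cycles;
    unsigned int ticks = total / 144;   /* one tick every 144 CPU cycles */
    t->residual = total % 144;
    while (ticks--) {
        if (++t->counter >= 1024) {     /* 10-bit rollover */
            t->overflow = 1;
            t->counter = t->base;       /* reload */
        }
    }
}
```

Keeping the remainder in `residual` means the update can be called at arbitrary points (status reads, end of frame) without accumulating rounding drift.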


Thanks for any good advice; I am really not very confident about getting this right :roll:

TmEE co.(TM)
Very interested
Posts: 2440
Joined: Tue Dec 05, 2006 1:37 pm
Location: Estonia, Rapla City
Contact:

Post by TmEE co.(TM) » Tue Aug 21, 2007 8:49 pm

I hope this helps: timers in Gens in PAL mode are the most accurately emulated; Fusion has its timers too slow... I've done MD-on-left-speaker, emulator-on-right-speaker tests with many games... The difference between timer periods in PAL and NTSC configs should be very tiny... but I could be wrong: I only have a PAL system, and putting it into NTSC mode does not change the master clock on which the timer periods are based...
What are you reading? You won't understand it anyway ;)
http://www.tmeeco.eu
Files of all broken links and images of mine are found here : http://www.tmeeco.eu/FileDen

Eke
Very interested
Posts: 884
Joined: Wed Feb 28, 2007 2:57 pm
Contact:

Post by Eke » Wed Aug 22, 2007 9:48 am

From 32x scanned documentation:
The master clocks for NTSC and PAL used by the Megadrive (and 32X) are different.

Mega Drive Master Clock Cycle:
  • Mck = 1/fosc [sec]
    NTSC fosc = 53.693175 [MHz]
    PAL fosc = 53.203424 [MHz]
68000 Clock Cycle
  • Vclk = 7 Mck
Z80 Clock Cycle
  • Zclk = 15 Mck
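From those figures, the exact timer tick period can be derived directly (a small sanity-check helper; the naming is my own):

```c
/* timer tick period in microseconds, assuming one tick = 144 68000 cycles */
static double fm_tick_us(double fosc)
{
    double vclk = fosc / 7.0;        /* 68000 clock from the master clock */
    return 1000000.0 * 144.0 / vclk;
}
/* fm_tick_us(53693175.0) -> ~18.77 us (NTSC)
   fm_tick_us(53203424.0) -> ~18.95 us (PAL)  */
```

So the NTSC/PAL tick periods differ by less than 1%, which fits what TmEE observed.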
Anyway, I think I have understood the difference between the two methods I mentioned:

. In the first one, the timer increment is an approximation of the scanline duration in microseconds (the exact value is 1000000/HZ/LINES_PER_FRAME).

. The second method tries to be more accurate by dividing the frame into sample-count intervals and synchronizing the timer increment with the number of rendered samples (samples are generated every 1000000/SAMPLERATE microseconds). This method is better than the previous one, but it is still an approximation, since the number of samples per scanline is rounded (the exact value is SAMPLERATE/HZ/LINES_PER_FRAME).


With cycle-accurate sample generation (samples are updated every VCLK/SAMPLERATE CPU cycles), there is no such approximation, and I believe it would be correct to increment the timer counters each time a new sample has been generated (which should be done before each FM write/read, depending on the current CPU cycle count, and at the end of the frame for the remaining samples to render).

The increment value is exactly 1000000/SAMPLERATE [microseconds] for each generated sample, with the timer base periods in microseconds always being a multiple of 144 x 1000000/VCLK.

Regarding imprecision due to rounding when using integers, perhaps it would be better to use floating-point values for both the increment and the timer base factors, or, as done in Gens, to multiply both by some appropriate scale factor.
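The Gens-style scale-factor idea can be sketched in fixed point (an assumed 20.12 format; the shift value and names are just illustrative choices, not Gens source code):

```c
#define TIMER_SHIFT 12   /* 4096 scale factor, as in Gens */

/* per-sample increment: 1000000/SAMPLERATE microseconds, in fixed point */
static long sample_increment(double soundrate)
{
    return (long)((1000000.0 / soundrate) * (1 << TIMER_SHIFT));
}

/* Timer A period: (1024 - TIMERA) ticks of 144/VCLK seconds, in fixed point */
static long timer_a_period(double vclk, int timer_a)
{
    double tick_us = 1000000.0 * 144.0 / vclk;
    return (long)((1024 - timer_a) * tick_us * (1 << TIMER_SHIFT));
}
```

The accumulated per-sample increments are then compared against the fixed-point period, so the fractional microseconds per sample are not lost to integer truncation.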
