Thought this might be of some use regarding the performance differences between the decoder options (encoder set to 'software' in all cases). The process tested was a re-encode to BD9.
Hardware in use: Phenom X4 9950 (2.6GHz), ASRock K10N78 with integrated GeForce 8200 (CUDA capability 1.1 - 8 cores). RAID 0 array.
BD 'Prince Of Persia' ripped to HDD before backup.
Software and CUDA decoding were almost identical in speed, with the whole process taking approx. 3.5 hours.
CoreAVC (which is alleged to be the fastest software decoder) actually took longer - around 4 hours.
The oddity: the encoding rate (kbps) varies quite dramatically between the three options, and I don't understand why - surely this rate was decided by the analysis pass right at the very beginning. The following are fairly close approximations to what I saw:
Software: ~4800 kbps
CUDA: ~3800 kbps
CoreAVC: ~5800 kbps
Can anyone explain why this would be?
Thanks,
Ian.