Profiles in Benchmarking: Ganesh T S

The SPEC Graphics and Workstation Performance Group (SPEC/GWPG) takes pride in the tools it provides to help organizations and individuals evaluate performance in ways that align closely with the work they do every day. Critical links in the chain of communication are the editors and reviewers who use benchmarks to enlighten readers on the performance of new graphics cards and workstations.

The Profiles in Benchmarking series aims to introduce you to these people and their unique perspectives. In this installment, we talk with Ganesh Subramanian, better known to AnandTech readers as Ganesh T S. Ganesh is a senior editor at AnandTech, which serves the needs of readers looking for reviews on PC components, smartphones, tablets, pre-built desktops, notebooks, Macs, and enterprise/cloud computing technologies.

Did you always have an engineering or technology mindset, even as a child?

Engineering and technology mindsets require the right environment to develop. I used to spend most of my summer breaks from school at my uncle's place -- he happened to be an electronics enthusiast, and his house was filled with gadgets and PCBs of all kinds. I remember playing around with his ZX Spectrum and using audio cassette tapes to load programs into it. I suppose I owe my interest in the field of electronics and making a career out of it to those days.

What is your educational background?

I completed my undergraduate studies at BITS Pilani, India, with a double major in electrical and electronics engineering, and computer science. Immediately after that, I started graduate school at Iowa State University, where I received a master's degree in computer engineering.

What career path brought you where you are today?

I moved to the SF Bay Area and joined Ambarella Corp. in 2006 as an ASIC verification engineer, just prior to my graduation from Iowa State. Back then, it was a startup company focusing on silicon for video compression and image processing. Within a few years, we had our chips in multiple camcorders and pocket cameras, and I had a large library of videos captured using those chips.

My desktop PC and notebook struggled to play back those videos, and I started looking into hardware-accelerated video decoding with GPU add-in cards. I was surprised to find that video decoding support was not very robust across multiple vendors, and I ended up blogging about it. Around the beginning of 2010, Anand Shimpi, founder of AnandTech, put out a call for writers. I sent over my writing samples, and was roped in as a freelancer.

Freelancing at AnandTech turned out to be a good decision, as Anand gave me the freedom to work on things I liked. I started out by covering video processing and multimedia systems, and quickly expanded my focus to cover storage systems and computer networking.

Over the last 10 years, I have come to realize that the basic principles and techniques utilized in my full-time work as an ASIC verification engineer also translate fairly well to the development and automation of evaluation methodologies for various technology products. The tools may differ, but the concepts behind performance evaluation are the same.

What about computer performance stokes your interest and passion?

In semiconductors, designers often have to create a chip that optimizes for PPA -- minimizing power consumption, maximizing performance, and minimizing die area. These three metrics are often at loggerheads with each other, and I regularly get to see the tradeoffs in action as part of my full-time work.

Die area directly translates to the cost of the chip. In evaluating consumer end products -- whether computers or storage/networking equipment -- I have come to understand that the PPA concept has universal application.

As the relentless march toward higher performance continues and workloads keep evolving, it is exciting to get a hands-on view of how silicon and system manufacturers are stepping up to address the PPA trifecta.

Do you think standardized benchmarks are important and if so, why?

Standardized benchmarks are absolutely essential. They provide an objective way of measuring performance while maintaining reproducibility. Consumers can be confident that there is no advisor bias when making decisions based on the results of these types of benchmarks. At the same time, these benchmarks also allow the reviewer to provide additional context to the results.

What engineering or technology developments do you think will most affect graphics and workstation performance over the next five years?

Transistor scaling looks set to continue for the next five years. This will enable vendors to put out more powerful graphics and workstation products, but cost might be a concern. Silicon vendors may need to innovate more than usual on the architecture side in order to be able to push the performance boundaries further.

One of the interesting aspects I’ve noticed is the sensitivity of graphics workstation benchmark scores to the PCIe link width -- an aspect that was practically non-existent in gaming benchmarks. PCI-SIG is all set to roll out rapid updates to the PCIe standard over the next few years. As vendors start supporting these new standards, it will be interesting to watch how graphics workstation performance scales.

How do you think reviewers like yourself can help professionals navigate these changes?

As a reviewer with a handy database of previous results, I can run the SPECviewperf or SPECworkstation benchmarks on new systems as they become available and quickly get an idea of the magnitude of improvement. Professionals looking to purchase new workstations can refer to our results and decide whether an infrastructure overhaul is immediately warranted, or whether it would be prudent to wait for the next generation.

Are there areas of graphics and workstation performance that are currently not being benchmarked adequately?

SPEC/GWPG has done stellar work in creating the SPECworkstation and SPECviewperf benchmarks. On the workstation side, I would like to have some representation from electronic design automation tools -- maybe some open source programs dealing with HDL simulation, synthesis, placement and routing, etc. On the graphics side, I believe SPECviewperf has all bases covered for now. I am sure the benchmarks will get updates in the future to include contemporary workloads related to AI and ML.

On the methodology side, I’d like to see SPEC/GWPG add the ability to automate collection of a performance-per-watt metric. As workstations get miniaturized, having an idea of the power efficiency of a particular platform for a given workload becomes important.
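
To illustrate what such a metric involves, here is a minimal sketch of deriving performance-per-watt from a benchmark score and power readings sampled during the run. The score, sample values, and sampling approach are purely hypothetical -- SPEC/GWPG tools do not currently expose this, which is the point of the suggestion.

```python
def performance_per_watt(score: float, power_samples_w: list[float]) -> float:
    """Divide a benchmark score by the mean power draw (watts) over the run."""
    if not power_samples_w:
        raise ValueError("need at least one power sample")
    avg_power_w = sum(power_samples_w) / len(power_samples_w)
    return score / avg_power_w

# Hypothetical example: a composite score of 120 with power samples
# averaging 60 W yields 2.0 score-points per watt.
samples = [55.0, 62.0, 63.0]  # watts, e.g. polled once per second during the run
print(performance_per_watt(120.0, samples))
```

In practice the power samples would come from an external power meter or platform telemetry polled for the duration of the benchmark run, which is exactly the kind of collection that would benefit from being automated by the benchmark harness itself.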

What interests do you have outside of work?

I love to solve cryptic crossword puzzles, and also indulge in some compilation work in my spare time. I have been regularly contributing cryptics for the last 13 years under the pseudonym “Neyartha” to one of India's leading English newspapers, The Hindu.

Other than that, I like traveling to new places and doing needlework. I believe needlework helps develop patience and focus -- qualities that serve me well in my professional endeavors too.

