Physical design generates data from more and more tools. As a result, one of the challenges physical designers face is seeing all of the results in one place. It’s even harder when you want to share that information with someone on the other side of the world. Over the last few months we’ve been working hard to let physical designers see the results of all of these tools in one place and explore the design further.
In the latest version of Pinpoint, you can view custom overlays (power/congestion) while debugging timing, and color-code different hierarchies or cell usages. This makes it even easier to find the problem quickly and identify the solution:
Several years ago, a few of us were working on a project for another company. The goal was to introduce a new way of doing chip design that was so simple that a new college grad could tape out chips. The project was obviously challenging, but management would ask questions like: “This product is supposed to be very simple to use, why is it taking so much effort to put it all together?” This confusion is common: simple to use must mean simple to build. The reality is that making something easy to use is even more difficult to build.
Most technical software in our industry takes weeks to install and get running with your data. The vendor chalks this up to the slogan, “It was hard to build, it should be hard to use.” Yet there is a very real sense in which we assume complex tools are powerful. Complex tools may be powerful, but not because of their complexity. When we set out to build Pinpoint, we were forced to think through this tradeoff.
The first alpha version of Pinpoint was neither friendly nor simple. Just ask the first few alpha testers. Thus began the journey: how can we architect this tool to be simple, yet powerful? How can we make it so that teams can measure whatever they want, no matter how their flow is configured? We broke the tool into component pieces: a server and a client. The server displays the pages, and the client collects data. The client runs as a command-line tool so it can easily be added into the flow. We developed a high-level API for both in Tcl, Python, and Perl. We worked with a user-experience designer to make the web interface flexible while still providing good defaults.
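Because the client is just a command-line tool with a high-level API, integrating it amounts to adding one call per flow step. Here is a minimal sketch in Python of what that pattern could look like; the function name `capture_snapshot` and the metric fields are illustrative assumptions, not Pinpoint’s actual API.

```python
import json

def capture_snapshot(block, stage, metrics):
    """Hypothetical sketch: bundle one flow stage's metrics into a
    snapshot record that a command-line client could send to the server.
    The names and fields here are assumptions for illustration only."""
    return {
        "block": block,            # the block being implemented
        "stage": stage,            # e.g. "place", "cts", "route"
        "metrics": dict(metrics),  # arbitrary key/value measurements
    }

# In a flow script, each tool step would add a call like this:
snapshot = capture_snapshot(
    "cpu_core", "route",
    {"wns_ps": -12.0, "total_power_mw": 450.3},
)
payload = json.dumps(snapshot)  # what the client would upload
```

The key design point is that the flow owns the measurements: any key/value pair the team cares about can go into the snapshot without the server needing to know the flow’s structure in advance.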
After many iterations, the result is that customers can usually get up and running with their data in the tool within a couple of hours. We’ve had people bring up the tool with very little interaction in India, Japan, France, Germany, and the US. Within a day they can have it integrated into their flow and start capturing data. If you’d like to see how easy it is, check out the How-To demos on our demo page.
We’ve made a lot of progress, but we’re not done. Our goal is to make things easier. Easier to communicate, easier to visualize, easier to track. This is a never-ending quest for us to continue to improve the overall experience.
As we’ve been working with more design teams, one concern that occasionally comes up is whether design engineers really want to share all of the data they are generating. Getting a design closed requires constant experimentation. Sometimes the experiments work; other times things go way off the tracks. If the data is simply collected, but cannot be viewed or filtered by engineers first, managers may make decisions based on an experimental result, which can create some uncomfortable situations.
One of our goals from the beginning of designing Pinpoint was to make the data useful to the engineers themselves. We present the information as a high-level summary while letting the engineer dig into the details of what is happening in any given experiment. We’ve always provided engineers with the ability to label experimental runs, add comments to provide context, and hide snapshots that represent bogus data. In addition, we added the ability for engineers to mark certain snapshots as reviewed and roll this up to management as a status report. All of this helps prevent people from drawing inaccurate conclusions from the data.
In Pinpoint 2.5, we added the ability for engineers to automatically mark snapshots as private. Even though we’ve seen that open communication transforms a team, we wanted to make sure the tool remains useful even when an engineer is running wacky experiments they want to keep to themselves. This lets them still check the data into Pinpoint and see summaries of the results or compare experiments, without worrying about failed experiments being seen by others. Once the engineer decides they would like to share the information, they can click the publish button and share the snapshot with others.
Our goal continues to be improving engineers’ lives and helping make teams as productive as possible. Providing flexibility in how data is managed is one of the ways we do this.
When we see data presented, we start looking for reasons the data is that way. We want to know why. Why is the stock market up? We want to know, and if no one helps us along that path, we’ll make up a reason. When teams start collecting data on their projects, everyone looks at the raw data, but only the people closest to the data know what it means. We realized that when teams are working and management is looking at the project’s current metrics, there is a need not only to mark certain measurements as reviewed but also to add comments.
This is one of the reasons we built “best so far” flags into Pinpoint. An engineer can flag a particular snapshot of the metrics as the best run so far and explain why things are the way they are. Management then sees only runs that have been reviewed, along with the narration that goes with the data. Moreover, the running narration over the course of a design tells the story of an entire block through the design flow. By the end of a project, it’s difficult for everyone to remember what was going on two months ago, but being able to look back and remember can provide insights into how to do better on the next project.
How are you currently managing the narration that surrounds the metrics for your design?
As we have more conversations with people about their experience going through tapeout, we keep collecting analogies that we want to share, so that when you go through tapeout you can describe to your family why you’re working ridiculous hours, your eyes are bloodshot, and you can’t get ready for bed just yet, even when you’re “home from work.”
Here they are:
Tapeout is like finals week at college except it lasts months instead of a week.
Tapeout is like having more to do than is possible to do, and knowing you will be judged, by and large, on how you deal with that.
Tapeout is like putting together a 7,000,000 piece jigsaw puzzle without a picture on the front of the box, while people come by invoking the power of ECO to rip out parts of the puzzle you just finished assembling, or load you up with piles of new pieces.
Tapeout is like reading a three hundred page book [of timing reports] that has no plot, every day, for a month.
Tapeout is like watching your “family” spend all your money on necessities and emergencies, knowing it’s your job to pay the mortgage at the end of the month, knowing the money won’t be there, knowing you’ll be on the street when it isn’t, and knowing everyone knows it’s your problem, not theirs.
This is important to us because we are trying to make it easier for teams to get to tapeout without all of the headaches that currently keep them away from their families. Let us know how you describe tapeout and we’ll add it to the list.