
The Challenges of Developing Games & Other High-Resolution Graphics

Developing technical and process improvement strategies

Technical Solutions
Standard industry best practices help game developers prevent and detect many of the errors that C/C++ developers commonly encounter. Practices such as coding standard enforcement, unit testing, runtime error detection, and regression testing help developers - ranging from inexperienced to expert - produce better code. With these practices in place, many errors are prevented outright, and the errors that do slip in are rooted out as early as possible - when they are fastest, easiest, and cheapest to fix.

Considering the tight timelines that are characteristic of the gaming industry, it is critical that attempts to improve quality do not impact the gaming developers' already busy schedules or disrupt the creativity that keeps them ahead of competitors. That's why these practices must be automated. For minimal disruption, the practices are configured to run behind the scenes - for instance, overnight, as part of an automated build process - and alert developers only if a problem is found. Much testing can be done without user intervention, by scanning the code and executing it with automatically generated test cases. Of course, the more effort developers put into the testing, the greater the benefit they get out of it. For instance, to improve code coverage, developers can analyze the coverage achieved by automatically generated unit tests, then extend these tests to cover a larger percentage of the code and/or to verify specific functionality requirements.

Automating the implementation of these key industry practices helps the gaming developers overcome many of the challenges associated with producing top-quality software in a high-pressure environment and working on teams that have many inexperienced developers. However, it does not help them with one of their toughest tasks: correctly interacting with graphics libraries.

As I mentioned earlier, there are two main graphics libraries used by game developers and other developers creating high-resolution graphics: OpenGL and DirectX. OpenGL, introduced by SGI in 1992, is the most widely used and supported interface for realistic 2-D and 3-D high-resolution graphics. It is an open, vendor-neutral graphics standard that works on a wide variety of platforms, including Linux. DirectX is a group of technologies for running and displaying multimedia applications on Microsoft Windows and Xbox. We found that the best way to prevent errors related to library misuse was to customize the coding standards analysis practice and technologies to automatically check OpenGL and DirectX rules as well as the standard C/C++ rules. The basic set of rules, based on the guidelines set forth in the libraries' specifications, includes over 30 rules for avoiding pitfalls with OpenGL and DirectX API development. DirectX rules checked include:

  • Use the macros SUCCEEDED() and FAILED() to check whether a DirectX function has succeeded or failed
  • Include stdafx.h at the beginning of the file
  • Use D3DVALUE instead of float
  • Effects should be closed (End()) in the reverse order in which they were opened (Begin())
  • Use Pass() between Begin() and End()
  • Don't call other functions in the same line as a Begin call
  • Don't call other functions in the same line as an End call
  • Don't change Effect::Technique between Begin() and End(); call End() first
  • Don't use #include more than 10 times
  • Use an equal number of lock calls and unlock calls
  • Don't use exclusive mode
  • Don't call BeginScene() twice without first calling EndScene()
OpenGL rules checked include:
  • Use the appropriate number and order of "Begin" and "End" function calls
  • Use the appropriate number and order of "NewList" and "EndList" function calls
  • Use GL commands between a Begin/End pair
  • Use GL commands between a NewList/EndList pair
  • Don't use designated functions in Begin/End blocks
  • Don't use designated functions in NewList/EndList blocks
  • Don't use designated functions outside Begin/End blocks
  • Don't use an End block without a Begin in each NewList/EndList block
  • Don't use forbidden bracket commands between a Begin/End pair
  • Don't use forbidden bracket commands between a NewList/EndList pair
  • Use only GL functions between every Begin/End block
  • After every Begin(GL_LINES), ensure the number of "vertex" function calls is a multiple of 2
  • After every Begin(GL_TRIANGLES), ensure the number of "vertex" function calls is a multiple of 3
  • After every Begin(GL_QUADS), ensure the number of "vertex" function calls is a multiple of 4
  • After every Begin(GL_POLYGON), use more than 4 vertices
  • Don't use the function 'LoadMatrix' to initialize a matrix
  • Don't use the function 'MultMatrix' to change a matrix
  • Don't use negative vertex and texture coordinates
  • Don't use more than one GL command in a single line
  • Don't use more than five levels of function calls
This set of rules can be extended with rules that check additional guidelines, such as the guidelines for avoiding OpenGL pitfalls described in Mark Kilgard's "Avoiding 16 Common OpenGL Pitfalls" (www.opengl.org/resources/features/KilgardTechniques/oglpitfall/), as well as guidelines in the growing set of resources for these libraries. Most gaming developers are aware of these guidelines. However, due to the extreme working conditions and time constraints mentioned earlier, they sometimes make mistakes, and they simply don't have the time to verify that the guidelines are followed.

In addition, we provided the game development organization with a technology that allowed them to design and check custom rules that verify compliance with additional guidelines. These custom rules could be used to check additional OpenGL or DirectX guidelines that the team decides are helpful, check guidelines for custom graphics libraries, and check additional requirements unique to their organization, technologies, projects, etc.

Process Improvement Solutions
Truly implementing a practice in a team requires not just the appropriate tools, but also the team culture, workflow, and supporting infrastructure required to embed the practice into the team's development process. Teams that attempt to implement practices with tools alone typically do not achieve the expected quality improvement benefits. For instance, assume that a team tries to implement the coding standards enforcement practice by only purchasing a coding standards enforcement tool and asking each developer to use that tool. Over time, it's likely that most of the coding standard violations will remain in the code. Why? Without additional team-wide support for the coding standards enforcement practice, developers typically become overwhelmed by the number of problems reported and do not know how to approach them. The tool helps the team members recognize the faults in their code, but if the developers do not have the necessary support, the faults remain and the code quality does not significantly improve.

The Parasoft AEP Methodology details one strategy for embedding best-practices into a team's development process. In a nutshell, the AEP Methodology is a new methodology for improving software quality and increasing the efficiency of a team's software development lifecycle. It is based on the AEP Concept, which is essentially to learn from your own mistakes and the mistakes of others, and then automatically apply that knowledge in the software lifecycle to make software work. The basic principles of the AEP methodology are:

  1. Apply industry best-practices to prevent common errors and establish a foundation for full-lifecycle error prevention.
  2. Modify practices as needed to prevent unique errors.
  3. Ensure that each group implements AEP correctly and consistently.
    a. Introduce AEP on a group-by-group basis.
    b. Ensure that each group has an appropriate supporting infrastructure.
    c. Implement a group workflow that ensures error prevention practices are performed appropriately.
  4. Phase in each practice incrementally.
  5. Use statistics to stabilize each process, and then make it capable.
For a detailed discussion of how this methodology works, see http://www.parasoft.com.

By applying the previously mentioned best-practices within the AEP methodology, gaming development organizations gain the following benefits:

  • Higher quality: Fewer errors are introduced into the code, and introduced errors are identified and fixed early in the cycle (when fixing them is generally less difficult, time-consuming, and costly).
  • Fewer schedule slips, faster time to release: Because of the improved error prevention and error detection, less time is required for end-of-cycle debugging, and games are more likely to pass independent validation on the first attempt.
  • Easier, faster software updates: Team code reflects a standard style, not individual preferences, and meets a predetermined quality standard. When an organization wants to enhance or extend a popular game, misunderstandings and buried errors don't impede the process.
  • More proactive management: Increased visibility into code quality, test scope, project readiness, and team productivity helps management identify problems as they arise and start resolving them as soon as possible.
Learning From the Game Development Industry
Even if you're not one of the relatively few developers working on games for Linux, you can still benefit from the lessons we learned by working with gaming organizations.

There is a growing tendency to use Linux for high-resolution graphic development, which shares many of the same challenges as game development. As many industries working on high-resolution graphics are looking to move from SGI systems, they are finding that porting legacy code to Linux is much faster and cost-effective than porting it to Windows. Linux is already emerging as the platform of choice for developing and running graphical work for animation projects (including major projects at DreamWorks, Pixar, and Sony Pictures), special effects, and film production. In addition, Linux is becoming a popular operating system for other high-resolution graphics applications, such as computer aided design, medical devices, and geographical imaging systems.

Developers in these industries face many of the same challenges as developers in the gaming industry. For instance, developers working on high-resolution graphics in any of these industries are all too familiar with crazy deadlines, long hours, low tolerance for graphical or other errors, and the need to master standard graphics libraries (for Linux, OpenGL) and/or custom graphics libraries.

Consequently, the same technical solutions and process improvement solutions that help developers in the gaming industry can help developers working on other high-resolution graphic projects - or even in other industries with similar pressures and development environments. The automation of practices such as coding standards analysis, unit testing, runtime error detection, and regression testing helps ensure that common coding problems don't delay intense production schedules. To verify compliance with guidelines for standard graphical libraries, guidelines for other technologies that the application interacts with, and unique organizational or project requirements, the coding standards analysis practice can be extended to check custom rules. And, to ensure that these practices become an enduring and seamless part of the development process, they can be implemented within the AEP framework.

More Stories By Wayne Ariola

Wayne Ariola is Vice President of Strategy and Corporate Development at Parasoft, a leading provider of integrated software development management, quality lifecycle management, and dev/test environment management solutions. He leverages customer input and fosters partnerships with industry leaders to ensure that Parasoft solutions continuously evolve to support the ever-changing complexities of real-world business processes and systems. Ariola has more than 15 years of strategic consulting experience within the technology and software development industries. He holds a BA from the University of California at Santa Barbara and an MBA from Indiana University.
