    Reinventing Enterprise Technology

    By Peter Coffee - September 26, 2005

      Enterprise and academic systems are equally challenged by thickening jungles of system complexity and massive flows of data. Research communities in both worlds are responding with innovative efforts aimed at every level of the IT stack—from the lowest level of logic and memory hardware to the most abstract models of application function and user interaction.

      Research facilities are IT customers as well as innovators, and their technical sophistication often leads to early adoption, yielding lessons that may benefit less aggressive sites. In this report, eWEEK Labs examines some of the opportunities and challenges that invite a growing exchange of insights—not just a trickle-down from theory to practice—between researchers and users in industry and academia.

      This report is based on a series of conversations that took place this summer with researchers and managers at four key centers of commercial IT innovation—IBM, Intel Corp., Microsoft Corp. and Sun Microsystems Inc.—and at two academic hot spots—the San Diego Supercomputer Center, or SDSC, and the Massachusetts Institute of Technology.

      Our discussions shed light not just on what's to come but also on ways that enterprise IT builders can start now to ease the process of adopting these innovations as they mature into commercial offerings.

      Room at the bottom

      The mantra of IT advancement is the 40-year-old empirical observation dubbed Moore's Law, a remarkably accurate 1965 prediction by Intel co-founder Gordon Moore that the feasible density of packing electronic devices would follow an exponential trend. Often asked, though, is the question of whether that rate of progress can continue as hardware nears frontiers defined by the nature of matter and energy.
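
      The arithmetic behind that observation is worth spelling out. The sketch below compounds a doubling cadence over the 40 years between Moore's paper and this report; the two-year doubling period is an assumed, commonly cited figure rather than one stated here.

          # Illustrative only: what an exponential density trend compounds to.
          # The two-year doubling period is an assumption, not a figure from this article.
          doubling_period_years = 2.0
          years = 2005 - 1965
          doublings = years / doubling_period_years      # 20 doublings
          growth = 2 ** doublings                        # roughly a millionfold
          print(f"{doublings:.0f} doublings -> about {growth:,.0f}x device density")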

      “What's got the semiconductor industry nervous is that there's been a slowing of the improvement rate in performance,” said Thomas Theis, director of physical sciences at IBM Research, in Yorktown, N.Y. “The power dissipation, the heat from quantum mechanical tunneling events—the insulators are just over a nanometer thick in 90-nanometer technology. One nanometer doesn't make much of a barrier for a free electron.”

      For memory devices, Theis said, there are many roads to explore. “Memory devices just need an easily distinguishable on and off state,” he explained, “but every successful device that's been used for logic has amplification. It allows you to restore signals against a reference, the ground or the voltage supply, so small variations in one device after another don't drive the system out of spec.”

      Mere continuation of the Moore's Law trend to greater physical density of devices is not an attractive option, even if it were physically possible, Theis said. Like the density of devices, “factory costs have always been exponential,” he added. “At some point, they only become affordable by nation-states or some kind of consortium of nation-states. There's a cost to maintaining tolerances.”

      It's therefore important, Theis continued, to ask more fundamental questions about the direction that future hardware design and fabrication should follow. “If you look at biological systems,” he observed, “the amount of what is done with extremely high precision is small. Biology works just well enough; the system as a whole functions. The focus of our research is to deliver the information in such a way that the error rate is higher, but it's good enough to make the process work.”

      The Internet itself demonstrates this general approach: Its packet-based communication relies on connection and transfer protocols, such as TCP/IP and Ethernet, that are designed to function “well enough” rather than requiring perfection to function at all.
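
      As a toy illustration of that “well enough” principle (not real TCP), the sketch below accepts a lossy channel, detects damage with a checksum and simply retransmits, rather than demanding a perfect medium.

          # Toy illustration (not real TCP): tolerate a lossy channel by detecting
          # damage with a checksum and retransmitting, rather than requiring perfection.
          import random, zlib

          def send(payload: bytes):
              """Simulate a channel that occasionally flips a byte in transit."""
              data = bytearray(payload)
              if random.random() < 0.3:                      # 30% chance of corruption
                  data[random.randrange(len(data))] ^= 0xFF
              return bytes(data), zlib.crc32(payload)        # checksum of the original

          def deliver(payload: bytes, retries: int = 5) -> bytes:
              for _ in range(retries):
                  received, checksum = send(payload)
                  if zlib.crc32(received) == checksum:       # intact: accept it
                      return received
              raise IOError("gave up after repeated corruption")

          print(deliver(b"packet-based delivery that is 'good enough'"))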

      IBM is only one of several research centers at which we're finding a growing trend toward resilient and fault-tolerant protocols, even at the level of chip-to-chip connection and on long-distance links. Enterprise buyers should be increasingly prepared to discuss their tolerance for error rates and variabilities in system performance, rather than expecting single-valued measures of performance in system specifications.

      Not merely in metaphor but also in direct application, biology and biochemistry may contribute to the creation of IT hardware. “What we're about to publish is a nanowire transistor technique that uses optical lithography to make a coarse template, then relies on polymer self-assembly to define the channels of the transistor,” said Theis. The process Theis described carves out the major pathways of a microchip by currently conventional techniques, then lets the basic mechanisms of molecules finding and binding to each other complete the job with nanoscale precision.

      “We think these are relatively inexpensive processes,” said Theis. His team's experimental prototype is “a very ugly device right now; it's just a toy,” he said. “But it's a step in the right direction,” he assured us—a step, that is, toward continued progress in performance without an unacceptable explosion of manufacturing costs.

      Another trend that surfaced during our conversations at several research centers is the move toward more cost-effective processor architectures, following the mantra of “performance per watt” that Intel has recently adopted but that vendors including Advanced Micro Devices Inc. and Sun identified long ago as the future figure of merit for CPUs. The power that goes into a computer doesn't lift weights or pump water; it all turns, eventually, into heat, and the challenge of keeping densely packed server installations cool enough to run reliably is becoming a critical concern.

      Rather than seeking performance growth in ever-more-complex devices, therefore, “more system builders are moving to multicore architectures,” said Sun Solaris Group Manager Chris Ratcliffe in Santa Clara, Calif. “The number of cores will expand rapidly. We have 32-way CPU systems in the works—it looks like a 1U [1.75-inch] rack system, but it's immensely complex and can run hundreds of thousands of applications on that single system.”

      In a number of conversations with eWEEK Labs, Sun Executive Vice President and Chief Technology Officer Greg Papadopoulos has painted just this picture of multiple cores rather than increased core complexity as the future of optimal processing performance. Sun has followed through, currently shipping to early-access customers eight-core, 32-thread CPUs that use far less power per unit of capacity than competing architectures do. The work described by Ratcliffe continues that trend.

      Value for money is also a major driver of IBM's innovative Blue Gene architecture, with a Blue Gene rack holding 2,400 processors now within one week of becoming available to users at the SDSC and its nationwide networked user community. That single rack, affording 5.7 teraflops of computing power with 512GB of memory, represents “unheard of” density, said SDSC Production Systems Division Director Richard Moore.

      “The paradigm shift is that IBM slowed down the processors to 700MHz,” Moore explained, pointing out that the resulting reduced heat output is complemented by the innovative mechanical design of the machine, with its odd slanting sides that maximize cooling air flow across each horizontal subsystem circuit board within the cabinet.

      Inside the Blue Gene box, Moore added, are five independent high-speed networks that maximize overall system throughput. “It's an important architecture … very cost-effective,” Moore said.
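
      Those figures hang together on a back-of-envelope check: dividing the rack's quoted peak by its processor count and clock rate implies each 700MHz processor retiring a few floating-point operations per cycle.

          # Back-of-envelope check using only the figures quoted in the article.
          peak_flops  = 5.7e12     # 5.7 teraflops per rack
          processors  = 2400
          clock_hz    = 700e6      # 700MHz parts
          per_cpu     = peak_flops / processors          # ~2.4 gigaflops each
          flops_cycle = per_cpu / clock_hz               # ~3.4 flops per cycle
          print(f"{per_cpu/1e9:.1f} GFLOPS per processor, ~{flops_cycle:.1f} flops/cycle")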

      Researchers with networked access to the SDSC installation will likely apply the Blue Gene's power to complex tasks such as simulations in chemistry and physics. Industrial and commercial users are also exploring similar approaches to failure-mode prediction in engineering projects and in pharmaceutical development.

      Less exotic—but equally focused on improved high-performance-computing value—are currently available tools for scavenging available CPU cycles across a network, such as the Condor system from the University of Wisconsin-Madison. Condor is now easing access to idle computing power at facilities such as MIT's Laboratory for Nuclear Science, in Cambridge, Mass. It's also found at a growing number of commercial sites, with one deployment up and running for the last year on a 200-server grid at The Hartford Financial Services Group Inc., in Hartford, Conn.

      The normal turnover of PCs on desktops, explained MIT nuclear lab Associate Director Pat Dreher, can sometimes mean that the idle computers available during off-hours have more power than dedicated research clusters whose replacement or upgrade depends on scarce project funds. That perverse situation is one that tools such as Condor help to turn into a benefit.

      At the MIT installation, “when people aren't working, [Condor] queries machines,” said Dreher. “If they aren't active—no one's logged in, there's no keyboard or mouse activity—a job is put on that machine, and the results [are] written back to the central area.” When a user returns to work, “the job is checkpointed out, [and] a snapshot is taken and stored on disk in a frozen state until it can get cycles to finish,” he added.
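
      The scheduling pattern Dreher describes can be sketched in a few lines. The toy below is not Condor itself, just an illustration of the scavenge-and-checkpoint loop, with stand-in machines and jobs.

          # Toy sketch of the cycle-scavenging pattern Dreher describes -- not Condor
          # itself. Machines, jobs and the checkpoint store are simplified stand-ins.
          from dataclasses import dataclass
          from typing import Optional

          @dataclass
          class Machine:
              name: str
              idle: bool                      # no login, no keyboard or mouse activity
              running: Optional[str] = None

          def schedule(machines, queue, checkpoints):
              """Place queued jobs on idle machines; freeze jobs when a user returns."""
              for m in machines:
                  if not m.idle and m.running:
                      checkpoints[m.running] = f"snapshot of {m.running}"  # stored on disk
                      queue.append(m.running)                              # waits for free cycles
                      m.running = None
                  elif m.idle and m.running is None and queue:
                      m.running = queue.pop(0)                             # scavenge the idle box

          machines = [Machine("lab-01", idle=True), Machine("lab-02", idle=False, running="sim-42")]
          queue, checkpoints = ["analysis-07"], {}
          schedule(machines, queue, checkpoints)
          print(machines, queue, checkpoints)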

      What's in store

      Serving both its own community at the University of California, San Diego, and members of a far-flung networked community of other research sites, the SDSC wears at least two hats as both a research site and a 24-by-7 production facility. As the center takes on new roles, its massive storage systems are becoming correspondingly heterogeneous, and eWEEK Labs thinks enterprise sites can learn from the SDSC's explorations and experience.

      “We have about 500TB of disk,” the SDSC's Moore said. Moore had to raise his voice to speak above the howl of the air conditioning in the vast machine room that houses such behemoths as the DataStar, a massive IBM pSeries installation that this month completed a doubling of its processor count to 2,048 IBM Power4+ CPUs.

      Data storage density at the center is growing along with processing power: “We just purchased another 500TB of Serial ATA disk, which is much cheaper per byte than Fibre Channel,” Moore said.

      The traditional challenge of supercomputing, to capture and analyze vast data sets resulting from massive simulations, still demands Fibre Channel's performance, said Moore. A data library, meanwhile, can go from low-activity archive to high-intensity processing activity in a short time. A natural disaster such as Hurricane Katrina, for example, can create a huge spike in demand for data on meteorological patterns or earthwork failure behaviors, said Anke Kamrath, the SDSC's division director of user services and development.

      Supercomputer-speed data access will therefore continue to be a critical and growing need. An earthquake simulation, for example, may represent a 40TB or 50TB output set, said Kamrath, adding that snapshots 200TB in size are not out of the question in future studies.

      And that's what leads to dilemmas of collection versus use. “It takes 200 hours to move that kind of thing across the network—for disaster response, that's a long time,” said Kamrath. To move massive data collections, she added, “you have to tune the FTP parameters to the file sizes. We can push data harder than anyone's ever pushed it, but it takes heroic efforts to get anything like the published capability of a connection.”
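
      Kamrath's numbers give a feel for the gap between a link's published capability and the sustained rate a transfer actually achieves. The back-of-envelope below assumes a data set on the 50TB scale of the earthquake example above.

          # Back-of-envelope: what sustained rate does "200 hours" imply? The 50TB
          # figure is taken from the earthquake example above; treating it as the
          # data set being moved is an assumption for illustration.
          data_bytes = 50e12                    # ~50TB output set
          transfer_s = 200 * 3600               # 200 hours
          rate_MBps  = data_bytes / transfer_s / 1e6
          rate_Gbps  = data_bytes * 8 / transfer_s / 1e9
          print(f"~{rate_MBps:.0f} MB/s sustained (~{rate_Gbps:.1f} Gbit/s)")
          # ~69 MB/s -- a fraction of the multi-gigabit "published capability" of a link.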

      Kamrath said she wondered, therefore, if some enterprise sites may be kidding themselves about the actual value of the vast data farms they've built. “You hear the stories about Wal-Mart storing all their user data; there's hard problems to solve before you can really use all that.”

      Closing that gap is the mission of the SDSC's Natasha Balac, group lead for data applications. Balac is working with researchers who want to host their data collections at the SDSC's National Science Foundation-funded Data Central facility, which opened its digital doorway last month.

      “A lot of the data is flat files,” Balac said, and that creates huge problems of usability. Balac is working with the owners of large data collections to devise better architectures for future efficient use; enterprise sites should likewise be thinking now about the implications of vastly expanding data flows.

      Getting a grip

      At some point, the abstractions of data and computation turn into the practicalities of software and hardware that require a lot of attention to work well.

      “Anybody who's been in a modern data center knows that there are hundreds of components, perhaps with hundreds of tuning parameters,” said Steve White, senior manager for autonomic computing science, technology and standards at IBM Research, in Hawthorne, N.Y. “Getting them to work can take days; to work together can take weeks; to work really well can take a career.”

      White's group is trying to tame this complexity at both the low level of core technology and the high level of organizational process. “We want to raise the level of interaction from the knob-and-dial level,” he said. His goal is to let a system operator specify the ends rather than the means—“to tell the database, 'Here's how fast I want you to be. If that means you need a larger buffer pool, do that,'” White said.
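
      The ends-rather-than-means idea can be pictured as a feedback loop: the operator states a response-time goal, and the system turns its own knob until the goal is met. The sketch below is purely illustrative, not how IBM's autonomic tooling works; the workload model is a made-up stand-in.

          # Toy sketch of goal-driven tuning in the spirit White describes: the operator
          # states an end ("this fast"), and a loop adjusts a means (buffer pool size).
          # Not IBM's DB2 mechanism -- purely illustrative, with a made-up workload model.
          def tune_buffer_pool(target_ms, measure_latency, pool_pages,
                               step=1000, max_pages=1_000_000):
              """Grow the buffer pool until measured latency meets the stated goal."""
              latency = measure_latency(pool_pages)
              while latency > target_ms and pool_pages < max_pages:
                  pool_pages += step                      # the system chooses the knob
                  latency = measure_latency(pool_pages)
              return pool_pages

          # Stand-in workload: latency improves as more of the working set is cached.
          simulated = lambda pages: 200.0 * (50_000 / (pages + 50_000))
          print(tune_buffer_pool(target_ms=25.0, measure_latency=simulated, pool_pages=10_000))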

      Current work with IBM's DB2 database, White said, demonstrates some of the possibilities, and eWEEK Labs has also found substantial progress toward this goal in Oracle Corp.'s Database 10g.

      At the same time, White said, it's critical to respect the expectations of the people who are charged with managing a system. “The really important thing will be transparency,” he said. “The system administrator has to be able to say, 'I know that won't work,' or perhaps, 'Hmm, I hadn't thought of that.'”

      Jim Sangster, an enterprise management software engineer at Sun, in Menlo Park, Calif., agreed that there is a need to keep automation within culturally acceptable bounds: “Fully automated data centers?” Sangster asked. “That's not the angle we're moving forward with. That's not what customers particularly want, nor am I sure that it's achievable.”

      One example, Sangster said, can be found in customers' reactions to automation in management tools designed for critical incident response. “We're about to ship Sun Cluster Geographic Edition for disaster recovery across a continent or around the world,” he said. “We have yet to have a customer who wants that fully automated. They want a red-button confirmation. Is it really a full site failure, or is there a connectivity failure? … You want to know the nature of the failure and the business loss and data loss that might result. All sorts of issues need a person to make decisions.”

      IBM's White is nonetheless optimistic that complex tasks will become increasingly automated. “At some point, people will get tired of saying OK”—and they'll let the automatic systems do more, he said. White further observed that this is part of a long-standing trend: “People used to spend a lot of time thinking about what sector would get used on a hard disk.”

      Similar thinking was evident in eWEEK Labs' conversations with Sun's Ratcliffe, who described the “predictive self-healing” direction that's already apparent in the company's Solaris 10 operating system. “We can automate the handling of CPU and memory problems; customers get meaningful error messages and notification of how the system has mitigated the problem and what they should do in the longer run,” Ratcliffe said. “We can migrate tasks off a CPU or memory or I/O subsystem before it becomes a problem.”

      That's just the beginning, continued Sun Distinguished Engineer Ken Gross at the company's physical sciences research facility in San Diego. Sun's CSTH (Continuous System Telemetry Harness), launched in response to a European bank's call for help about two years ago, is growing in capability and broadening its availability across the full range of Sun machines.

      The software-based CSTH relies on numerous physical sensors in a hardware package. “A blade has over a dozen [sensors] that were originally put in the computers for gross protection—to shut it down before there's a fire or permanent damage to components,” said Gross. “Sun was the first company to poll those sensors continuously and monitor them with a pattern recognition algorithm.”

      Gross said he expects that it will soon be commonplace, at least on well-instrumented hardware, to recognize anomalies and detect potential failures weeks in advance.

      Vital to the acceptance of these approaches, however, is minimization of false alarms that might shut down hardware without sufficient reason. The mathematical weapon to that end, said Gross, is MSET (Multivariate State Estimation Technique), a system developed for nuclear power plants and other such safety-critical systems and now widely considered a proven approach with a rare combination of sensitivity, reliability and efficiency.
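
      The flavor of estimator-based surveillance of that kind can be conveyed with a minimal residual check: learn how one sensor normally tracks its correlated companions, then flag readings that drift well outside that relationship. This is an illustration of the general idea, not Sun's MSET, and the sensor model is invented for the example.

          # Minimal illustration of estimator-based anomaly detection (not Sun's MSET):
          # learn how one sensor normally tracks its correlated companions, then flag
          # readings that drift from that estimate. The sensor model is invented.
          import numpy as np

          rng = np.random.default_rng(0)
          # Training data: temperature roughly follows fan speed and supply voltage.
          fan, volt = rng.normal(3000, 50, 500), rng.normal(12.0, 0.05, 500)
          temp = 0.01 * fan + 2.0 * volt + rng.normal(0, 0.2, 500)

          X = np.column_stack([fan, volt, np.ones_like(fan)])
          coef, *_ = np.linalg.lstsq(X, temp, rcond=None)     # expected-value model
          residual_sigma = np.std(temp - X @ coef)

          def anomalous(fan_now, volt_now, temp_now, k=4.0):
              """True if the sensor strays k sigma from its estimated value."""
              expected = coef @ np.array([fan_now, volt_now, 1.0])
              return abs(temp_now - expected) > k * residual_sigma

          print(anomalous(3000, 12.0, 54.0))   # reading near its estimate -> False
          print(anomalous(3000, 12.0, 60.0))   # creeping hot spot -> True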

      For enterprise site builders and managers who aren't prepared to offer their head office an MSET seminar, the question is how the transition to self-management can be made both credible and cost-effective.

      For one thing, said IBM's White, these systems will be much more auditable, a criterion that's top of mind in many executive offices. “Walk onto the floor of a data center and ask, 'What do you have, and how does it interact?' People don't know,” White said. “The trend toward self-management will build a foundation of self-awareness. People will be able to ask the system, 'What are you, and what do you depend on?' The system will have made an electronic contract, and it will know what it's using.”

      Enterprise sites may find that the statutory mandates of auditability, at least as much as the ROI (return-on-investment) prospects of improved manageability, are the levers that will pry loose the resources they need to acquire and deploy these innovations as they become mainstream products.

      Code that's not secret

      “What we're trying to do is ask, 'Can we rebuild software to get systems much more reliable than what we have today?'” That was the proposition offered to eWEEK Labs by Microsoft Senior Researcher James Larus, founder of the company's Software Productivity Tools group.

      At least since the emergence of Java, one of the recognized paths toward that greater reliability has been greater use of powerful run-time environments that detect and prevent many types of software error. Microsoft's .Net Framework, after much initial skepticism from many developers, has lately gained ground and will be the foundation of applications on the Vista family of Windows platforms.

      But that's not the end of the journey. “We've built an operating system written entirely in C#, all safe code except the kernel and a HAL [hardware abstraction layer] on top,” Larus said. “The device drivers and everything else are written in a dialect of the safe form of C#.”

      “You get really interesting consequences,” Larus continued. “You can't point into another process, and you can't pass pointers into another process space.” These “software-isolated processes are much less expensive to create and switch between than hardware processes,” he said.

      Larus contrasted this with the brittle mechanisms used today. “The way you do extensions in C++ is to load modules that come in as extensions of your address space,” he said. “If it fails, you don't know the state of your data structure, and the best you can do is kill the process and hope that your data isn't corrupted.”

      Under the managed regimen established by the Singularity technology that Larus' team is developing, “when you kill a process, other processes have to recover, but they can know their own state at the point of the failure,” Larus said.
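
      For readers who want a feel for the isolation model, the Python analogy below uses operating-system processes and a message queue: workers share no pointers with the parent, results cross only as messages, and a crashed worker leaves the parent's own state intact. It is an analogy only, not Singularity's C# channel mechanism.

          # Analogy only -- not Singularity's channels. Python's multiprocessing gives a
          # similar flavor: the worker shares no pointers with the parent, values cross
          # by message, and a crash leaves the parent's own state intact.
          from multiprocessing import Process, Queue

          def worker(inbox: Queue, outbox: Queue):
              item = inbox.get()
              if item == "poison":
                  raise RuntimeError("extension failed")     # dies in isolation
              outbox.put(item.upper())                       # results cross as messages

          if __name__ == "__main__":
              inbox, outbox = Queue(), Queue()
              state = {"requests": 0}                        # parent state a crash cannot corrupt
              for msg in ["hello", "poison"]:
                  p = Process(target=worker, args=(inbox, outbox))
                  p.start()
                  inbox.put(msg)
                  p.join()
                  state["requests"] += 1
                  if p.exitcode == 0:
                      print(outbox.get())
                  else:
                      print(f"worker died (exit {p.exitcode}); parent state still known: {state}")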

      Also key to building better software is the enablement of better team interaction, an area of research effort at both IBM and Intel—and an ongoing process of social change at MIT. “It's a matter of looking at converged communication technologies,” said John Vicente, director of IT research at Santa Clara-based Intel.

      Intel Principal Engineer Cindy Pickering described the cultural and social dimensions of the problem. “We find that sometimes people who don't speak English as a first language are more comfortable expressing themselves in text, and that some cultures are less outspoken and that others therefore dominate discussions,” Pickering said. She therefore expects to find considerable benefits for improved team interaction from integrating multiple communication modes into a single collaborative environment.

      Features of the experimental environment, Pickering continued, include both real-time and historical tools for understanding the progress of a project. “You can put an electronic representation of a team member in different parts of a team space, or leave little bread crumb trails in a document or a source code file where someone might have made annotations or changes. People who are attending asynchronously can get a better idea of who's been making what changes,” she said.

      IBM is exploring similar avenues for improved team function. “We've started to put native collaborative capabilities into Eclipse,” said Daniel Yellin, director of programming models and tools strategy at IBM Research. “For a project, I can view team members; I can see who's online and do instant messaging. I can tie that in to the whole software design process. When I say, 'Joe, I don't understand this piece of code,' it brings the snippet into his browser and logs our chat in to the design notes. I can store the results of that communication in a persistent form.”

      In eWEEK Labs' opinion, the open-source vigor of the Eclipse community is likely to propel these ideas into practice sooner rather than later.

      It's in IBM's heritage, though, to ensure that such things can work at the enterprise scale. “There are some deep things in terms of scalability,” said Yellin. “For instance, how do you integrate this better into a software configuration management system? What's the right way to do this with teams of tens or hundreds of people? If a requirement changes, how do I know who needs to be informed of that change? … That needs to be triggered, automated as much as possible.”

      Enterprise site technology adopters should bear such questions in mind whenever they're hearing a sales pitch for team-oriented collaboration tools, as these core ideas enter the marketplace in products such as Sun's Java Studio Enterprise and this month's update of Borland Software Corp.'s JBuilder.

      In the meantime, sites such as the MIT campus are demonstrating the human side of collaboration technology, as public computing rooms that used to hold rows of workstations are now being refurnished to accommodate users carrying personal wireless-enabled laptop computers. “We made a commitment to become a wireless campus by the end of 2005, and we're ahead of schedule,” said Jerrold Grochow, vice president for information services and technology at MIT, in Cambridge, Mass. “Starting last spring, you could see the difference. More of the students sitting outside with laptops, people opening their machine and assuming there's going to be a connection.”

      The leapfrog to wireless has sidestepped difficulties in wiring older buildings with bulky and inflexible high-speed cable, Grochow said, and is making it possible to develop custom applications—including some for advanced handheld devices such as PalmOne Inc.'s Treo.

      Though hardly leading edge by the standards of what people expect to see at MIT, this is still ahead of the curve of new-device adoption in most enterprise settings. Enterprise users should look at available hardware and productivity-enhancing connectivity frameworks such as Microsoft's forthcoming Windows Communication Foundation, formerly code-named Indigo, to form a vision of what they'd like to be deploying a year or two from now.

      Making complexity understandable is key to avoiding dangerous loopholes

      Making it possible for researchers to innovate without wasting time on reinventing security requires a strong cyber-security infrastructure, said MIT's Dreher. Grochow agreed. What's needed, he said, is a single-sign-on setup that provides robust security, rather than separate and often rudimentary systems in individual applications.

      An SOA (service-oriented architecture), Grochow added, facilitates this by making security services readily available throughout an application portfolio—a necessity as research collaborations increasingly involve international teams and as the value of the research work in progress continues to grow. Enterprise managers should heed Grochow's example by developing new systems with an SOA-based future in mind rather than building monolithic systems that are redundant in function today and will also hinder their own improvement in the future.

      Lack of forward thinking is at the heart of the security problem, many researchers agree. “Systems today are in a mess because people didn't think about security from the beginning,” said IBM's Charles Palmer.

      “Design for security, design for management” has to be the double mantra, according to Palmer.

      Teaching people to think clearly about security and its interactions is probably more difficult, though, than teaching machines to explain those interactions in plain language or with clear visual metaphors. IBM is trying to work the problem from the latter direction with a research project called SPARCLE, an admittedly tortured abbreviation for Server Privacy Architecture and Capability Enablement.

      “We've talked with people to find out what's missing—even in English, security rules are hard to follow,” said Palmer, a noted researcher in genetic algorithms who manages the security, networking and privacy departments at IBM's Thomas J. Watson Research Center. “SPARCLE eats the English, produces what it understands to be the nouns and the actions and the objects of the actions, who's doing what for how long and with what purpose,” said Palmer. “[SPARCLE] gives you back little tables—like a bunch of multiple-choice dialog boxes—that it's gotten from the English, and it pulls out the actors and conditions and purposes.”
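
      A toy version of that extraction step makes the idea concrete: pull the actor, action, data object and purpose out of one narrowly phrased English policy sentence and hand back a little table. This is not SPARCLE; the single regular expression and the sample sentence are invented for illustration.

          # Toy sketch of the extraction Palmer describes -- not SPARCLE. A single
          # hypothetical regex pulls actor, action, object and purpose out of one
          # narrowly phrased policy sentence and returns them as a small table.
          import re

          RULE = re.compile(
              r"(?P<actor>[\w\s]+?) (?:may|can) (?P<action>\w+) (?P<object>[\w\s]+?)"
              r" for (?P<purpose>[\w\s]+?) purposes",
              re.IGNORECASE,
          )

          def to_table(sentence: str) -> dict:
              match = RULE.search(sentence)
              return match.groupdict() if match else {}

          policy = "Billing clerks may access payment records for invoicing purposes."
          print(to_table(policy))
          # {'actor': 'Billing clerks', 'action': 'access',
          #  'object': 'payment records', 'purpose': 'invoicing'}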

      Palmer said he hopes enterprises will see the benefit of SPARCLE sooner rather than later. “We have a research prototype, we have real impact in mind,” he said. “I would not be surprised if you see this in the next year.”

      Technology Editor Peter Coffee can be reached at peter_coffee@ziffdavis.com.
