Good morning and thank you for attending this presentation. My name is Carlos Justiniano, and I'm the founder of the ChessBrain distributed computing project and the new open source msgCourier project. I'm pleased to have this opportunity to speak to you here today about how open source tools are making it possible for individuals to harness distributed computing resources. I have a lot of material I'd like to share with you, so please save your questions until after the presentation. I'll be available throughout the day if you'd like to approach me, or you can send me an email at the address shown behind me.

During the past six years I've been involved with public distributed computing projects. Colin Frayn and I began work on a project called ChessBrain, a distributed computing project which plays the game of chess using the processing power of Internet-connected machines. In January 2004, right here in Copenhagen, ChessBrain set a new Guinness World Record involving distributed computation.

After the event, I returned home and began working on an article for the O'Reilly OpenP2P site entitled "Tapping the Matrix". I wanted to share what I had learned during the past few years, and an introductory article seemed like a good way to do that. The article explored how distributed computing projects work and how open source tools and open standards were playing their part. My talk today is based on that earlier article and on two new papers which were prepared for this conference. I'll provide links to where you can download the papers at the end of this presentation.

In this session, we'll explore how the field of distributed computing has evolved over the past few years. Before we dive in, I'd like to briefly examine how publicly volunteered computing projects differ from isolated research projects and from those involving Grid platforms.

With volunteer computing, it is the general public which provides the computing resources required to achieve a common goal, and project organizers have little to no control over the availability and reliability of remote systems. In contrast, Grid systems are specifically designed to offer control and predictability. However, the two methodologies are not mutually exclusive! Volunteer computing systems can complement Grid systems and extend their reach into the homes of public contributors. In this presentation I'll focus on volunteer computing; however, it will be easy to envision how Grid systems and public computing can come together using open source tools.

Consider the power of modern desktop systems, and take a moment to consider the machines you have at home, at school, or at the office. Modern machines are capable of executing billions of instructions in the time it takes us to blink. Now consider what those machines might be doing right now. The truth is that many machines are idle for as much as 90% of the day. Even when active, most applications use fewer than 10 percent of a machine's CPU. Certainly there are applications which can sustain 60-80 percent CPU utilization; however, with the exception of research applications and enterprise production systems, most desktop systems found throughout the world are underutilized.

The company I work for has a workforce of over 30,000 employees. Most employees have one to two machines at their desk, and most of those machines are always on, because our IT department monitors them and applies software updates overnight. Many of our employees also have high-end desktop systems sitting idle at home while they're at work. This situation isn't new or uncommon. For researchers, the lure of harnessing spare computing cycles has simply been too good to pass up.
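The utilization figures above can be turned into a rough back-of-envelope estimate of how much computing capacity goes untapped. The sketch below uses the idle and utilization percentages from the talk; the machine count and the per-machine throughput figure are illustrative assumptions, not measurements.

```python
# Rough estimate of untapped desktop capacity.
# MACHINES and GFLOPS_PER_MACHINE are assumed values for illustration.

MACHINES = 30_000          # desktops in a 30,000-employee company (assumption)
GFLOPS_PER_MACHINE = 10    # assumed sustained throughput of one desktop
IDLE_FRACTION = 0.90       # machines idle ~90% of the day
ACTIVE_CPU_USE = 0.10      # active applications use <10% of the CPU

# Fraction of each machine's daily capacity left unused:
# all of the idle time, plus the unused share of the active time.
unused = IDLE_FRACTION + (1 - IDLE_FRACTION) * (1 - ACTIVE_CPU_USE)

total_unused_gflops = MACHINES * GFLOPS_PER_MACHINE * unused
print(f"Unused fraction per machine: {unused:.0%}")
print(f"Aggregate untapped capacity: {total_unused_gflops / 1000:.0f} TFLOPS")
```

Even with these conservative assumptions, roughly 99% of each machine's daily capacity is unused, which is exactly the resource volunteer computing projects try to harvest.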