Grid updates 2019
Published: 2019-11-03

2019-11-03: 75 years; last update of the year

Today we crossed 75 years of compute time for WCG.

Two other events happened earlier this week: the Africa Rainfall Project kicked off, and we finished our 3900X upgrade. We’re excited about both, especially since finishing the upgrades let us build two more nodes out of spare parts. We now have 128 threads in-house.

Here’s to next year!

2019-08-16: Top 1000; New hardware; Visiting friends

Thanks to a visit from user Sheridon of Xstreme Systems Team, we broke into the top 1000 teams a few days earlier than expected. We are now #931 by points, and #997 by WUs returned!

Our in-house fleet also grew a little today, as we’ve gotten a third 3900X. That brings us up to 44 cores / 88 threads in the Greenhouse rack.

Eighteen months (and three days) ago, this team was brand new, and the sum total of our hardware was a single 8-core R7 1700 :)

Also, we are back to crunching all WCG subprojects.

2019-08-09: Second 3900X

Demand for this thing has been crazy. It took just short of a month for us to get our hands on another 3900X!

With this upgrade, we have no more 1600s in service, and we are up to 40 cores / 80 threads in house. Now we just need to find two more of these monsters…

2019-07-31: Sort of no AC

We now have a portable AC unit, and we’ve worked out how to keep crunching 24/7 with judicious application of fans.

Also, we’re closing in on being a top 1000 team, in both points and WUs returned. Soon!

2019-07-29: No AC / SCC1 & ZIKA

We’ve been without decent AC for over a week. As a result, we are slowed way down by only being able to run our compute nodes at night.

In unrelated news, we’re now preferentially running the Smash Childhood Cancer and OpenZika subprojects. SCC1 is back from a hiatus during which the project scientists did some planning, and Zika is now in its terminal phase.

2019-07-12: Our first half century

Today we clocked our first 50 years of compute time for WCG.

It took us one year and five days of calendar time to hit 25 years; we’ve now hit 50 just a few days shy of six months later. The hardware upgrades we have in the pipeline should see us hit 75 years even faster than that!

2019-07-12: Ryzen 3900X

We’ve just upgraded one of our machines with this new CPU, and have been putting it through its paces. Check out all the information here!

2019-05-30: WCG ARP1 Beta WUs!

This is the first time we have been involved in the opening stages of a WCG subproject!

Earlier today, the first 2000 beta WUs for one of WCG’s upcoming climate projects were pushed out. One of our nodes got 2 of them. Here’s what they look like while running:

boinc     4263 99.6  4.4 835036 733508 ?       RNl  00:52 903:44 ../../projects/www.worldcommunitygrid.org/wcgrid_beta27_wrf_7.19_x86_64-pc-linux-gnu
boinc     4313 99.6  4.4 835040 733568 ?       RNl  02:07 829:01 ../../projects/www.worldcommunitygrid.org/wcgrid_beta27_wrf_7.19_x86_64-pc-linux-gnu

WCG staff say to expect 20h+ runtimes on these WUs. The runtimes shown in the ps output above (904min, 829min) are for WUs at 91% and 84% complete, running on a Ryzen 1600. Here are the final timings from the job_log:

1559236628 ue 2732.665687 ct 58854.040000 fe 13697606073123 nm BETA_ARP1_0000263_000_1 et 59068.865957 es 0
1559240866 ue 2732.665687 ct 58589.830000 fe 13697606073123 nm BETA_ARP1_0000364_000_0 et 58801.453863 es 0

The actual runtimes (in field ct, for “cpu time”) were 58854s (16h 21min) and 58589s (16h 17min).

It is interesting that while the number in the fe (estimated total FLOPs) field is very large (13.7 teraFLOPs), it’s actually small compared to other subprojects (46.2 teraFLOPs for an MCM WU; 24 teraFLOPs for a Zika WU; 23.8 teraFLOPs for a MIP WU). That makes the very long runtimes surprising. Possibly the estimate is always way off for brand-new projects?
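For the curious, here’s a rough sketch (in Python; the parsing helper is our own, not part of BOINC, and the field meanings are as described above) of pulling those numbers out of a job_log line and checking what per-core FLOP rate the fe estimate would imply:

```python
# Parse a BOINC client job_log line of the form:
#   <timestamp> ue <f> ct <f> fe <int> nm <name> et <f> es <int>
# Field meanings as observed above: ct = CPU seconds,
# fe = estimated total FLOPs, nm = workunit name, et = elapsed seconds.

def parse_job_log_line(line):
    parts = line.split()
    rec = {"ts": int(parts[0])}
    # Remaining tokens come in key/value pairs.
    for key, val in zip(parts[1::2], parts[2::2]):
        rec[key] = val
    rec["ct"] = float(rec["ct"])
    rec["fe"] = int(rec["fe"])
    return rec

line = ("1559236628 ue 2732.665687 ct 58854.040000 fe 13697606073123 "
        "nm BETA_ARP1_0000263_000_1 et 59068.865957 es 0")
rec = parse_job_log_line(line)

hours = int(rec["ct"] // 3600)
minutes = round(rec["ct"] % 3600 / 60)
print(f"{rec['nm']}: {hours}h {minutes}min of CPU time")  # 16h 21min

# If the fe estimate were accurate, the implied rate on one core would
# be only ~233 MFLOPS -- implausibly low for a modern x86 core, which
# is consistent with fe being badly underestimated for these beta WUs.
print(f"implied rate: {rec['fe'] / rec['ct'] / 1e6:.0f} MFLOPS")
```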

Staffers also said to expect greater-than-normal memory usage from these WUs. You can see that each of these jobs is using about 734MB of resident memory. This is, indeed, more than the WUs of other subprojects. Here’s a look at all WUs running on that machine, which has 16GB of RAM, sorted by memory usage:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 4313 boinc     39  19  835040 733568  29400 R  99.3   4.5 851:40.72 wcgrid_beta27_w
 4263 boinc     39  19  835036 733508  29400 R  99.7   4.5 926:23.18 wcgrid_beta27_w
 4898 boinc     39  19  414036 354304  53388 R  99.3   2.2  47:05.21 wcgrid_mip1_ros
 4957 boinc     39  19  186492 124328  47896 R  99.3   0.8   4:38.02 wcgrid_mip1_ros
 4946 boinc     39  19  132404  58408   2744 S  99.7   0.4  14:20.48 wcgrid_zika_vin
 4909 boinc     39  19  132376  58376   2752 S  99.7   0.4  31:46.39 wcgrid_zika_vin
 4893 boinc     39  19  132396  58352   2752 S  99.3   0.4  49:54.03 wcgrid_zika_vin
 4852 boinc     39  19   77128  37000   2392 R  99.7   0.2 153:24.09 wcgrid_mcm1_map
 4855 boinc     39  19   77024  36692   2332 R  99.7   0.2 146:04.97 wcgrid_mcm1_map
 4862 boinc     39  19   76800  36504   2392 R  99.7   0.2 134:49.66 wcgrid_mcm1_map
 4850 boinc     39  19   76800  36488   2392 R  99.7   0.2 159:56.44 wcgrid_mcm1_map
 4828 boinc     39  19   74788  34756   2060 R  99.7   0.2 187:57.47 wcgrid_mcm1_map
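One caveat when reading those numbers: RES counts pages shared with other processes (the SHR column) too, so a rough figure for memory owned by a single job alone is RES minus SHR. A quick back-of-the-envelope check, plugging in the KiB values from the top snapshot above:

```python
# Rough per-process memory math from the `top` snapshot above.
# top reports RES and SHR in KiB; RES includes shared pages, so
# RES - SHR approximates the memory unique to one process.

def mib(kib):
    return kib / 1024

wrf_res, wrf_shr = 733568, 29400  # one beta27 WRF job
mip_res, mip_shr = 354304, 53388  # the largest MIP job, for comparison

wrf_unique = mib(wrf_res - wrf_shr)
mip_unique = mib(mip_res - mip_shr)
print(f"WRF: ~{wrf_unique:.0f} MiB unique of ~{mib(wrf_res):.0f} MiB RES")
print(f"MIP: ~{mip_unique:.0f} MiB unique")
```

Even so, two WRF jobs plus ten smaller WUs fit comfortably in that machine’s 16GB.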

WCG staff say (with tongue firmly in cheek) that they can “neither confirm nor deny” that these WUs are from an upcoming climatology project, but they do point out that the software is a modified version of the Weather Research and Forecasting model software from NCAR/UCAR, thus the “WRF” in the binary name. Based on the WU name, the subproject will be known as ARP1 – no idea what ARP stands for yet. They also say:

The work for this project will be broken into small geographical regions, and in the end each region will be simulated for one calendar year. Each individual work unit represents 48 hours calendar time for this simulation. Once a result has been validated for the 48 hours, the output will be used to build the input for the next 48 hours of runtime.
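The chaining described above implies a long sequential dependency per region. As a purely hypothetical sketch (the constants and function names here are our own illustration, not anything from WCG), one calendar year of simulated time at 48 hours per WU works out like this:

```python
# Hypothetical sketch of the WU chaining WCG staff describe: a region's
# one-year simulation is split into 48-hour segments, and each validated
# segment's output seeds the input of the next segment.

HOURS_PER_WU = 48
HOURS_PER_YEAR = 365 * 24

def segments_per_region():
    # Number of sequential WUs needed to cover one calendar year
    # (ceiling division, since a partial segment still needs a WU).
    return -(-HOURS_PER_YEAR // HOURS_PER_WU)

def simulate_region(initial_state, run_segment):
    # run_segment stands in for "volunteers crunch it, server validates it".
    state = initial_state
    for _ in range(segments_per_region()):
        state = run_segment(state)  # one WU's output feeds the next WU
    return state

print(segments_per_region())  # 183 dependent segments per region
```

So each region needs 183 WUs crunched and validated strictly one after another, which is presumably why the project is carved into many small regions that can run in parallel.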

Exciting times!

2019-05-26: Team ranking milestone

As of today’s statistics run, Firepear is a top 1250 team. To be more precise, we are now ranked #1249 by WUs returned (our preferred metric).

It took 3.5 months to climb from rank 2000 to rank 1500 (500 places), and another 2.5 months to move the next 250 places. We’re still climbing, but ever more slowly.

2019-05-21: A Return to GPGPU

This weekend we added a GTX 1650, and returned our venerable GTX 750 Ti to service. We benchmarked them against each other on Primegrid WUs before adding a new project: Einstein@Home!

As soon as GPUGrid gets their Linux client functional again, we’ll be running all three on our GPUs. We also plan to outfit each machine in the farm with a GTX 1650.

2019-05-02: CPU Time Milestone

Team Firepear has reached 40 years of CPU time in WCG.

2019-05-02: Badges

mdxi has reached Diamond (5 year) in MIP.

2019-04-24: Badges

In the past week, birdmoot has hit the following WCG milestones:

  • HSTB Silver (45 day)
  • FAH2 Emerald (1 year)
  • MCM Diamond (5 year)
  • OpenZika Emerald (1 year)

2019-03-22: Badge and milestone

mdxi has hit Silver (45 day) in Help Stop Tuberculosis, and as a team we’ve hit 15 years of CPU time for Mapping Cancer Markers.

2019-03-13: Team ranking milestone

As of today’s statistics run, Firepear is a top 1500 team. In fact, we are ranked exactly 1500th by WUs, and exactly 1400th by points (we go by WUs completed, because points are a bit wibbly).

That said, we’re now gaining very, very slowly on the teams still ahead of us. We’re not going to go much further until we get more computing horsepower later this year.

2019-03-08: Milestones

Today we reached 3 years of CPU time on the Fight AIDS @ Home 2 subproject, and 5 years on Microbiome Immunity Project.

Sometime recently while we weren’t looking, team member mdxi became a top 5,000 user.

2019-03-01: New team member badges

birdmoot got the Ruby (180 day) badge for FAH2 today.

2019-02-27: Compute time milestone

Team Firepear reached 30 years of CPU time for World Community Grid today.

With all four nodes crunching, our compute-time and WU counts are racking up quickly.

2019-02-25: New team member badges

Today birdmoot hit Emerald (1 year) of compute time for the Microbiome Immunity Project.

2019-02-20: New team member badges

mdxi has hit 10 years of compute time on the Mapping Cancer Markers subproject, earning his first second-tier Diamond badge.

2019-02-16: New team member badges

birdmoot recently hit Ruby (180 days) in OpenZika and Bronze (14 days) in Help Stop TB.

2019-02-06: node04 online

We are up to full strength, with the addition of our fourth, and final planned, compute node. Our hardware configuration should be static until the summer when the Ryzen 3x00 processors are released.

2019-01-19: A quarter-century of compute

With today’s stats refresh, Team Firepear has over 25 years of cumulative compute time for World Community Grid. That’s nowhere near the big leagues, but we’re proud of what we’ve been able to contribute.

2019-01-14: One Year Anniversary!

A year ago today, Team Firepear was founded and began crunching for science. The overwhelming bulk of our work has been for World Community Grid, so here’s what our stats look like as of this evening’s update:

  • Total runtime — 24y 162d 14:23:14 (rank 3,150)
  • Results returned — 114,877 (rank 1,785)
  • Top subproject — Mapping Cancer Markers, with 10y 275d+ runtime and 29,196 WUs

That’s a pretty good start, but this year we’ll be doing more. Last year we started off with 2 cores/4 threads. We’re starting this year with 22 cores/44 threads. Soon that’ll be 32/64, and who knows what will happen after the Ryzen 3X00s drop!

2019-01-01

In a lovely little New Year’s surprise, we have crossed 10 years of compute time on the Mapping Cancer Markers subproject of World Community Grid!

Older updates

For older news, check out our 2018 updates.