2019-05-30: WCG ARP1 Beta WUs!
This is the first time we have been involved in the opening stages of
a WCG subproject!
Earlier today, the first 2000 beta WUs for one of WCG’s upcoming
climate projects were pushed out. One of our nodes got 2 of
them. Here’s what they look like while running:
boinc 4263 99.6 4.4 835036 733508 ? RNl 00:52 903:44 ../../projects/www.worldcommunitygrid.org/wcgrid_beta27_wrf_7.19_x86_64-pc-linux-gnu
boinc 4313 99.6 4.4 835040 733568 ? RNl 02:07 829:01 ../../projects/www.worldcommunitygrid.org/wcgrid_beta27_wrf_7.19_x86_64-pc-linux-gnu
WCG staff say to expect 20h+ runtimes on these WUs. The runtimes shown
in the ps output above (904min, 829min) are for WUs at 91% and 84%
complete, running on a Ryzen 1600. Here are the final timings from the
BOINC job log:
1559236628 ue 2732.665687 ct 58854.040000 fe 13697606073123 nm BETA_ARP1_0000263_000_1 et 59068.865957 es 0
1559240866 ue 2732.665687 ct 58589.830000 fe 13697606073123 nm BETA_ARP1_0000364_000_0 et 58801.453863 es 0
The actual runtimes (in field
ct, for “cpu time”) were 58854s (16h
21min) and 58589s (16h 17min).
It is interesting that while the number in the
fe (estimated floating-point operations)
field is very large (13.7 TFLOP), it’s actually small compared to
other subprojects (46.2 TFLOP for an MCM WU; 24 TFLOP for a Zika WU;
23.8 TFLOP for a MIP WU). This makes the very long runtime
surprising. Possibly this estimate is always way off for new projects?
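As a sanity check on those numbers, here is a minimal Python sketch (our own, not any WCG tooling) that parses a job-log line, assuming the format is whitespace-separated key/value pairs as in the lines shown above, and converts the ct and fe fields into friendlier units:

```python
# Sketch: parse one BOINC job-log line of the form
#   <timestamp> ue <v> ct <v> fe <v> nm <v> et <v> es <v>
# This assumes strictly alternating key/value tokens after the timestamp.
def parse_job_log_line(line):
    tokens = line.split()
    ts = int(tokens[0])  # first field is a Unix timestamp
    fields = dict(zip(tokens[1::2], tokens[2::2]))
    return {
        "timestamp": ts,
        "name": fields["nm"],
        "cpu_seconds": float(fields["ct"]),   # ct = CPU time, seconds
        "est_flop": int(fields["fe"]),        # fe = estimated FP operations
    }

line = ("1559236628 ue 2732.665687 ct 58854.040000 fe 13697606073123 "
        "nm BETA_ARP1_0000263_000_1 et 59068.865957 es 0")
wu = parse_job_log_line(line)
hours, rem = divmod(wu["cpu_seconds"], 3600)
print(f'{wu["name"]}: {int(hours)}h {round(rem / 60)}m, '
      f'{wu["est_flop"] / 1e12:.1f} TFLOP estimated')
```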
Staffers also said to expect greater-than-normal memory usage with
these WUs. You can see that each of these jobs is using about 734MB of
resident memory (that is: the portion of the process’s memory
currently held in physical RAM). This is, indeed, more than the WUs of
other subprojects. Here’s a look at all WUs running on that machine,
which has 16GB RAM, sorted by memory usage:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4313 boinc 39 19 835040 733568 29400 R 99.3 4.5 851:40.72 wcgrid_beta27_w
4263 boinc 39 19 835036 733508 29400 R 99.7 4.5 926:23.18 wcgrid_beta27_w
4898 boinc 39 19 414036 354304 53388 R 99.3 2.2 47:05.21 wcgrid_mip1_ros
4957 boinc 39 19 186492 124328 47896 R 99.3 0.8 4:38.02 wcgrid_mip1_ros
4946 boinc 39 19 132404 58408 2744 S 99.7 0.4 14:20.48 wcgrid_zika_vin
4909 boinc 39 19 132376 58376 2752 S 99.7 0.4 31:46.39 wcgrid_zika_vin
4893 boinc 39 19 132396 58352 2752 S 99.3 0.4 49:54.03 wcgrid_zika_vin
4852 boinc 39 19 77128 37000 2392 R 99.7 0.2 153:24.09 wcgrid_mcm1_map
4855 boinc 39 19 77024 36692 2332 R 99.7 0.2 146:04.97 wcgrid_mcm1_map
4862 boinc 39 19 76800 36504 2392 R 99.7 0.2 134:49.66 wcgrid_mcm1_map
4850 boinc 39 19 76800 36488 2392 R 99.7 0.2 159:56.44 wcgrid_mcm1_map
4828 boinc 39 19 74788 34756 2060 R 99.7 0.2 187:57.47 wcgrid_mcm1_map
WCG staff say (with tongue firmly in cheek) that they can “neither
confirm nor deny” that these WUs are from an upcoming climatology
project, but they do
point out that the software is a modified version of the Weather
Research and Forecasting
software from NCAR/UCAR, thus the “WRF” in the binary name. Based on
the WU name, the subproject will be known as ARP1 – no idea what ARP
stands for yet. They also say:
The work for this project will be broken into small geographical
regions, and in the end each region will be simulated for one
calendar year. Each individual work unit represents 48 hours
calendar time for this simulation. Once a result has been validated
for the 48 hours, the output will be used to build the input for the
next 48 hours of runtime.
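Taken at face value, that description implies a long sequential chain of WUs per region. A quick back-of-envelope calculation (our own arithmetic, not an official WCG figure):

```python
# Rough estimate: one calendar year of simulated time, chopped into
# 48-hour windows, each window being one work unit whose validated
# output seeds the next WU in the chain.
import math

HOURS_PER_WU = 48
hours_per_year = 365 * 24          # 8760 simulated hours per region
wus_per_region = math.ceil(hours_per_year / HOURS_PER_WU)
print(wus_per_region)              # roughly 183 sequential WUs per region
```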
2019-05-26: Team ranking milestone
As of today’s statistics run, Firepear is a top 1250 team. To be more
precise, we are now ranked #1249 by WUs returned (our preferred
metric).
It took 3.5 months to go from 2000 to 1500 (500 places), and now it
has taken 2.5 months to move the next 250 places. We’re still
climbing, but ever more slowly.
2019-05-21: A Return to GPGPU
This weekend we added a GTX 1650, and returned our venerable GTX 750
Ti to service. We benchmarked them against each other on Primegrid WUs
before adding a new project: Einstein@Home!
As soon as GPUGrid gets their Linux client functional again, we’ll be
running all three on our GPUs. We also plan to outfit each machine in
the farm with a GTX 1650.
2019-05-02: CPU Time Milestone
Team Firepear has reached 40 years of CPU time in WCG.
mdxi has reached Diamond (5 year) in MIP.
In the past week, birdmoot has hit the following WCG milestones:
- HSTB Silver (45 day)
- FAH2 Emerald (1 year)
- MCM Diamond (5 year)
- OpenZika Emerald (1 year)
2019-03-22: Badge and milestone
mdxi has hit Silver (45 day) in Help Stop Tuberculosis, and as a team
we’ve hit 15 years of CPU time for Mapping Cancer Markers.
2019-03-13: Team ranking milestone
As of today’s statistics run, Firepear is a top 1500 team. In fact, we
are ranked exactly 1500th by WUs, and exactly 1400th by points (we go
by WUs completed, because points are a bit wibbly).
That said, we’re now gaining very, very slowly on the teams still
ahead of us. We’re not going to go much further until we get more
computing horsepower later this year.
Today we reached 3 years of CPU time on the Fight AIDS @ Home 2
subproject, and 5 years on Microbiome Immunity Project.
Sometime recently while we weren’t looking, team member mdxi became a
top 5,000 user.
2019-03-01: New team member badges
birdmoot got the Ruby (180 day) badge for FAH2 today.
2019-02-27: Compute time milestone
Team Firepear reached 30 years of CPU time for World Community Grid.
With all four nodes crunching, our compute time and WUs crunched
numbers are really racking up quickly.
2019-02-25: New team member badges
Today birdmoot hit Emerald (1 year) of compute time for the Microbiome
Immunity Project subproject.
2019-02-20: New team member badges
mdxi has hit 10 years of compute time on the Mapping Cancer Markers
subproject, getting his first second tier Diamond badge.
2019-02-16: New team member badges
birdmoot recently hit Ruby (180 days) in OpenZika and Bronze (14 days)
in Help Stop TB.
2019-02-06: node04 online
We are up to full strength, with the addition of our fourth, and final
planned, compute node. Our hardware configuration should be static
until the summer when the Ryzen 3x00 processors are released.
2019-01-19: A quarter-century of compute
With today’s stats refresh, Team Firepear has over 25 years of
cumulative compute time for World Community Grid. That’s nowhere near
the big leagues, but we’re proud of what we’ve been able to
2019-01-14: One Year Anniversary!
A year ago today, Team Firepear was founded and began crunching for
science. The overwhelming bulk of our work has been for World
Community Grid, so here’s what our stats look like as of this
writing:
- Total runtime — 24y 162d 14:23:14 (rank 3,150)
- Results returned — 114,877 (rank 1,785)
- Top subproject — Mapping Cancer Markers, with 10y 275d+ runtime and 29,196 WUs
That’s a pretty good start, but this year we’ll be doing more. Last
year we started off with 2 cores/4 threads. We’re starting this year
with 22 cores/44 threads. Soon that’ll be 32/64, and who knows what
will happen after the Ryzen 3X00s drop!
In a lovely little New Year’s surprise, we have crossed 10 years of
compute time on the Mapping Cancer Markers subproject of World
Community Grid.
For older news, check out our 2018 updates.