G-larmS - Geodetic Alarm System
A collaborative effort of the Berkeley Seismological Lab and New Mexico Tech
1. Synthetic Hayward Fault Scenario
2. 2010 M7.2 El Mayor-Cucapah Earthquake
3. 2014 M6.0 South Napa Earthquake
The test cases on this page showcase some of the capabilities of G-larmS. We provide data and configuration files, as well as shell scripts, to reproduce these results (available by email on request). Internally, we validate code changes against these cases to ensure we didn't break anything.
This is a very simple scenario that helps us test our setup in the San Francisco Bay Area. We put a uniform slip of about 1.2 m on the Hayward Fault along its full length, from the surface to 12 km depth. Slip of this amount over this fault area corresponds roughly to a magnitude 7 earthquake. The forward model assumes a homogeneous elastic half-space (we use Okada's analytical expressions).
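The slip-to-magnitude relation can be checked with the standard seismic moment formulas. A minimal sketch follows; note that the ~70 km fault length and the 30 GPa rigidity are typical assumed values, not numbers stated above:

```python
import math

# Seismic moment M0 = mu * A * D (rigidity * fault area * average slip).
mu = 30e9        # rigidity in Pa (assumed typical crustal value)
length = 70e3    # assumed Hayward Fault length in m (not given above)
width = 12e3     # fault width: surface to 12 km depth, vertical fault
slip = 1.2       # uniform slip in m

m0 = mu * length * width * slip                 # seismic moment in N*m
mw = (2.0 / 3.0) * (math.log10(m0) - 9.05)      # moment magnitude (Hanks & Kanamori)
print(f"M0 = {m0:.2e} N*m, Mw = {mw:.2f}")      # comes out close to magnitude 7
```

With these assumed parameters the scenario indeed lands at roughly Mw 7, consistent with the description above.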
To use this scenario, G-larmS generates a synthetic ShakeAlert alarm at a given time. Knowing that it is running in simulation mode, it grabs the synthetic offsets and adds them to the positioning solutions of the network of real-time GPS stations running in the Bay Area. This can happen in real time or in a more controlled replay environment, in which we run archived positioning solutions through G-larmS. Once the simulated offsets are added, G-larmS works as it would for a real event: it estimates the offsets and inverts for slip on a fault to infer a magnitude.
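In essence, the simulation mode amounts to adding a co-seismic step to each incoming position sample once the synthetic alarm time has passed. A simplified sketch of that idea (the function name, the per-station offsets, and the instantaneous step are our illustrative simplifications, not the actual G-larmS interface):

```python
def inject_offset(t, position, t_alarm, synthetic_offset):
    """Add a synthetic co-seismic step (east, north, up) to a
    position sample once the synthetic alarm time has passed."""
    if t >= t_alarm:
        return tuple(p + o for p, o in zip(position, synthetic_offset))
    return position

# Example: 1-Hz east/north/up samples around a synthetic alarm at t = 10 s.
offset = (0.15, -0.05, 0.02)  # hypothetical offsets in metres
for t in range(8, 13):
    print(t, inject_offset(t, (0.0, 0.0, 0.0), 10, offset))
```

In the real system the offsets come from the forward model, per station, and the downstream offset estimation and inversion see the modified stream exactly as they would see real data.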
While the Bay Area real-time GPS processing is currently set up to work with position changes along a network of baselines, i.e. rover-base station pairs, G-larmS can work with baselines, absolute positions (e.g. PPP), or a mix of the two. The animations below show the solutions to the scenario for each of the three data cases.
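The two observation types are closely related: a baseline displacement is simply the difference of the two stations' absolute displacements, which is why both (and mixes of both) can feed the same inversion. A tiny sketch with made-up numbers (station names and values are invented for illustration):

```python
# Hypothetical absolute (PPP-style) east-component displacements, in metres.
abs_disp = {"STN1": 0.12, "STN2": 0.03, "STN3": -0.01}

def baseline_disp(rover, base):
    """Relative displacement along a rover-base baseline is the
    difference of the two stations' absolute displacements."""
    return abs_disp[rover] - abs_disp[base]

print(baseline_disp("STN1", "STN2"))  # approximately 0.09 m
```

A mixed network then just stacks both observation types (absolute displacements and station differences) into one set of constraints on the fault slip.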
Note that G-larmS tests several best-fitting fault geometries in parallel. For all of the Hayward tests, a Mt. Diablo-type thrust fault fits best in the (noisy) beginning, but is quickly superseded by the actual, San Andreas Fault-parallel geometry.
While the current real-time processing in the SF Bay Area relies on relative position changes between pairs of stations, G-larmS can handle absolute positioning...
... as well as a mix of absolute and relative positions.
Here we use actual data recorded by high-rate GPS stations north of the Mexican border. We triangulated the network, similar to that in the Bay Area, and re-processed the data along those baselines using trackRTr (trackRT - rewind) with ultra-rapid orbits. This simulates a real-time environment as well as possible (we should probably throw in a few data gaps, though).
Once we have the positioning time series, we replay them through G-larmS and issue a synthetic ShakeAlert alarm at about the origin time, which initiates G-larmS' offset estimation and inversion. The ShakeAlert message contains a lower magnitude than the final M7.2; we do this to demonstrate the growth of the modeled fault as the displacements build up.
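Conceptually, the replay is a loop over archived epochs that fires the synthetic alarm once the clock passes the origin time; the magnitude estimate is then free to grow as displacements accumulate. A schematic sketch (the class, timings, and dummy archive are invented for illustration, not the G-larmS replay code):

```python
from dataclasses import dataclass

@dataclass
class ShakeAlert:
    origin_time: float
    magnitude: float   # initial EEW magnitude, deliberately low

def replay(epochs, alert):
    """Walk archived epochs in time order, firing the synthetic
    alarm exactly once when the clock passes the origin time."""
    alarm_sent = False
    for t, positions in epochs:
        if not alarm_sent and t >= alert.origin_time:
            print(f"alarm at t={t}: initial M{alert.magnitude}")
            alarm_sent = True
        # ... offset estimation / slip inversion would run here ...
    return alarm_sent

epochs = [(t, {}) for t in range(0, 30)]  # 1-Hz dummy archive of 30 epochs
replay(epochs, ShakeAlert(origin_time=10.0, magnitude=6.0))
```

The key point is that the alarm only seeds the process; the inversion keeps updating with every later epoch, so the inferred magnitude can climb from the initial EEW value toward the final one.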
The August 24, 2014, M6.0 South Napa earthquake may appear rather small for real-time displacement modeling; nonetheless, it tested our Bay Area setup, and we obtained results in real time. The earthquake certainly caused surface displacements that are clearly resolved in post-processed GPS data. The question was whether this motion was large enough to stand out coherently above the rather large (1-2 cm) background noise of real-time processing. We believe it did, and that events of this size and geometry are at the lower detection threshold of the current network geometry and processing strategy in the Bay Area.
This event delivered not only the main shock: a significant number of aftershocks also triggered ShakeAlert, and with it G-larmS. As these events are generally small, we would not expect any surface displacement from them. Hence, this earthquake series provided a great natural experiment to test our noise sensitivity and magnitude resolution limits.
Note the phase shift between the displacement time series and the estimated offsets: it is due to the latency of data delivery and processing (~4-6 s). The offset and magnitude estimation did not start until 24 s after the event because of a bug that delayed the processing onset (12 seconds of extra wait time). Optimizations could reduce this to 14 s: 8 seconds of S-wave travel time plus 6 seconds of data latency.
This is a version after we fixed that bug, tested in simulated real time (note that the latencies in the offset estimation are missing).
Last modified: March 12, 2019, 15:21.