I came across this conundrum today when talking with an architect friend. Neither of us is involved in the project, but word gets around-- fairly small consultant community here.
A rural regional school is trying to earn two LEED credits, the value of which is on the order of $750,000 in reimbursements from the state school building authority.
The Phase I (I am told) recommended testing for pesticides based on historic agricultural use. The building project was apparently carried through and is now in the wrap-up phase without the testing having been done. The LEED reviewers have reportedly said do the testing or say goodbye to the LEED credits. The school board doesn't want to do the testing, and the project architect has apparently asked the environmental consultant to backpedal the recommendation to get the school out of the requirement, but that report is probably a few years old now and I doubt the consultant would want to do that.
I'm not up to speed on LEED requirements, but it's my understanding that the LEED requirements for schools essentially want a yes/no answer on contamination of any sort and don't distinguish between RECs, de minimis conditions, exempt pesticide residues, etc.--you present either a clean Phase I, a Phase I with RECs and a clean Phase II, or a Phase II that finds a release, with the release cleaned up to residential standards.
The sad thing is, I know this area and the main historic use was (and still is) family dairy farms.
What do you all think?
I was talking to a very old friend of mine the other night. She lives in Providence, RI and wants to move to the country. She found an ideal house in Coventry and was well on the way to buying it. She mentioned the address and something about it tickled my memory, so I called it up on Google Maps.
That was when I realized I had to ruin her day, week, and possibly even month, by telling her the epic story of the Exploding Toxic Mafia Pig Farm Superfund Site located quite literally next door to her dream home.
There followed a few seconds' dead silence on the phone and then some very unladylike language directed at the realtor and the current owner.... the owner had to know, because the well water is tested annually by the state agency.
Just reviewed a previous "assembly-line" Phase I for a site I'm doing. The previous guy (3 years ago) missed a rather large area (~15'x20') of oily dirt in a rather obvious and easily visible location.
Explanation? "Snow cover."
Per the report, the inspection was done in early June.
This is an interesting conundrum here... not an unusual situation in and of itself, since in my experience it's entirely possible for two investigations by different consultants to come up with different results. This time, though, the investigators are federal agencies.
EPA and USGS each did an investigation of an oil/gas field in Pavillion, WY, where potable wells were reportedly contaminated with fracking fluids. The two monitoring wells in question are 785 and 980 feet deep, installed with mud rotary methods, and have 20-foot-long, 4-inch ID prepack steel screens.
Both found some contaminants, but at different concentrations, and API is criticizing the EPA study as invalid, unscientific, defective, etc.
Of course, I read the API's objections (see link above) as a bit weak-- ok, the USGS found less of the stuff than EPA did, but they DID find stuff attributable to fracking, and USGS was only resampling existing wells, not conducting a whole new investigation.
It really does, however, highlight some of the difficulties inherent in environmental and geophysical investigations, particularly groundwater sampling. Criticizing the GW sampling methods is the most frequently used club with which to beat the other consultant in a 'dueling engineers' scenario.... the party taking offense always sides with the concentrations that support its case.
EPA report: http://www.epa.gov/region8/superfund/wy/pavillion/docs.html
USGS report: http://pubs.usgs.gov/of/2012/1197/OF12-1197.pdf
I've seen (and worked on) several instances of a property being identified as an abandoned municipal landfill. Larry Schnapf has posted a number of instances of methane intrusion from these over the last year or so, too.
Assuming there is no hazardous/RCRA C/CERCLA issue present, but only the usual landfill issues of groundwater quality, landfill gas, subsidence, and the need for closure and post-closure work, what avenues are worth exploring for recovering expenses?
I'm curious... I know one of the historic uses of PCBs was as pesticide extenders. Were they typically incorporated into the product at the manufacturing stage, or added later by the people mixing the stuff for application?
Let me apologize in advance for such a long post, but I did not have time to write a shorter one. If you read this, I ask that you click ‘like’ at the bottom of the post.
Technology is essentially an umbrella word for a constantly evolving set of means developed in order to accomplish an ever-changing array of ends. Information sharing is the end, the internet is the means, and so on. This is why in 2012 we have smartphones rather than George Jetson’s flying car—on-the-go telecommunication is useful in pretty much every walk of life, while some people have enough trouble navigating a vehicle through two dimensions, let alone three.
Technology consists of two parts; it is as much the user as it is the thing being used. Add all the microchips you want; the human element will still be there. Technology evolves according to people’s needs and aptitudes—it fills needs, but it must also work around what people can and cannot (or will/won’t) do. Email access in the palm of your hand is great, but getting stuck five thousand feet up in a parking orbit while the Piggly Wiggly’s minimum-wage air traffic control moves a delivery truck off the runway is probably not anyone’s happy fantasy.
It’s very hard—often impossible—to accurately predict where technology is going. For a given technology, though, there are essentially three criteria by which it will be evaluated, and which can help in guessing at its future potential. Does it work? Can people use it? Does it actually add value or make a process easier?
Does it work?
This is the easiest question to answer, because the answer will make itself evident. Sometimes technology is useless because the engineering is flawed. History is littered with false starts (LaserDisc, the dinner-plate-sized precursor to DVDs), dead ends (virtual reality!), also-rans (Betamax?), risk/benefit imbalances (arguably nuclear power), abject failures (the Pruitt-Igoe housing project in St. Louis), and sound enough things that just didn’t sell (the Edsel).
The personal computer and its descendants, which include laptops, PDAs, tablets, smartphones, and all the other swarming grand-progeny of the Universal Turing Machine, probably had more impact than any other technological endeavor in the last half-century. When I was in grad school the first time around, I worked in the IT department as tech support. I was the luckless soul charged with maintaining the first-generation iMac G3s that the university had acquired to accommodate the people who clung to Apples. The first iMacs were terrible—we used to say that the quickest way to crash an iMac was to move the mouse or hit enter. Uptime for individual machines was about 15%. The other 85% of the time, they were doorstops.
It’s obvious how much things have changed in the last decade—when the iMac debuted in 1998, it was a laughingstock, and Apple was at its nadir—the company’s little beige boxes had been left so far in the dust by the Windows/IBM PC combination that my boss didn’t realize Apple was still in business until we were ordered to buy some iMacs. A dozen years later, the iMac has evolved into a decent computer, and Apple is the world’s hottest consumer technology company. Half of the people reading this post (and I appreciate all three of you!) are probably reading it on an iPhone or iPad.
The design specifications make good advertising candy but are not the sum of a technology. For example, our GPS mapping unit’s electronics are rated to work in Antarctic conditions. Can we actually use it in such conditions? No, for two reasons--first, because field people shouldn’t be out in –70F conditions unless they’re actually in Antarctica. Even North Slope oil rigs shut down in weather like that. The second reason is that the interface is an LCD touchscreen that stops behaving properly at about 20F, so even though the electronics work, the human being can’t use the unit.
To further complicate things, sometimes a technology is ultimately used for something other than its original intended purpose, which means the original specifications can become irrelevant. Hollow-stem auger drill rigs were designed for geotechnical and water well drilling, not environmental sampling. Duct tape becomes a permanent solution rather than a temporary patch. Paintball guns were originally designed for timber cruisers to mark trees with. GPS was originally designed to guide cruise missiles, not for surveying.
Can people use it?
The second question, whether a technology is usable, is a bit easier to answer, or at least to test—give the thing to some people and watch them monkey around with it. The human test is a lot less forgiving and a lot more arbitrary than test-to-destruction proving in a lab. Like the Edsel, a perfectly sound engineering design can be a complete flop if it’s not accessible, comfortable, and confidence-inspiring. This is why companies also spend big money on usability testing, to hopefully ensure that people will like what the company wants to sell. Jeff Hawkins, one of the founders of Palm Computing, whittled a wooden block down until it felt comfortable and handy and carried it around for a week, in order to evaluate what the first generation Palm Pilot PDA should feel like.
Human beings don’t change as rapidly as manufactured products can be developed, and for a product to ‘penetrate’ society and become generally accepted, it must be something that a lot of people in the chosen market can easily adapt to, and want to use. That’s why we have smartphones but don’t have flying cars. On the flip side, that’s also why HDTV-quality video equipment is a marketing success despite the fact that human beings still come from the factory with Eyeball 1.0, which in many cases just can’t take advantage of the improved image quality.
When there is some sort of a technological failure, sometimes the user is the problem, if he’s not adept at using the product. We used to call this an “ID10T error.” I confess to having been an apprentice BOFH (Google the term), becoming frustrated with what I saw as the ineptitude of most computer users, and recommending RTFM updates.
I offer a second anecdote from the same IT job. One day I got a service call from an irate professor, who was fulminating that his computer was dead. He complained that his cup holder had malfunctioned. His CD drive, into which he had set his Styrofoam cup of coffee, had crunched the cup in half and spilled the drink into the machine’s innards. (Unlike the programming language, this Java was bad for computers).
Disclaimer-- I know this is one of the stories that have been bandied about the Internet since computers first started to have CD drives, but in this case it actually happened. For the record, he was as mad about the spilled coffee as he was about the university’s computer.
Stupidity was not the problem. Lack of facility with the technology was. Most of the university’s professors were very smart people (in this case, an elderly theologian fluent in over a dozen languages) who had definitely not grown up with computers and who had often never seen a computer until one was dumped on their desks sometime in the mid-1990s, when the university started providing desktops to faculty without asking if they were wanted. In fact, he told me that when he wrote his Ph.D. dissertation, sometime during the Second World War, he had to write it by hand with a fountain pen, in Latin except for the Greek and Hebrew quotes, and take it to the school’s typist pool to have it typed up, whereupon it went to the university print shop to be printed (not copied!) and bound. This professor only used the computer because students in the last year of the Clinton administration could compose, type, and print papers in their dorm rooms, with translations courtesy of Babelfish, and wouldn’t stop emailing him their reports.
With not much help from anyone else, guys like this professor did what most people do with a new piece of technology that they don’t understand; they learn just enough to make it do what they need it to do—a glorified typewriter, and maybe email—and then ignore the rest. Most technical professions aren’t much different. Consider a product like AutoCAD, and look at the various professions that use it—mechanical engineers, land surveyors, and architects all use the same software in different and usually discrete ways, but I’ll wager very few of them know and use everything AutoCAD can do.
The cup holder incident was well over a decade ago, and since then computers and the internet have become one of the major parts of the world’s nervous system, penetrating further and faster than electricity, the telephone, or television. The internet ‘started’ for most people in the US sometime between 1993 and 1995, when AOL, CompuServe, and Demon Internet—remember any of these names?—started making dialup internet access available to millions of households. According to the US Census, about 22% of households had computers at the time, and so few people had the Internet at home that the Census didn’t bother to ask about it. By 1997, 36% of households had computers, and half of those had Internet service. By 2007, 68% of households had Internet access, and the Census didn’t bother to ask whether you owned a computer or not. Now it’s 2012, and I laughed at a computer with a 56k modem that I found in the office basement last month. At one point, that machine was the cutting edge.
The point is that as the scale of a technology’s use grew, the skills that went with it became more common. In 1993, many computer skills that are now common knowledge were sufficiently rare that they were considered specialty skills in many situations. These days, practically nobody needs to train new hires in using Microsoft Office anymore, since most people entering the workforce in an office or professional capacity will already have experience with it.
I didn’t bother writing this essay using the method I was taught in my pre-PC high school days: outlining everything to the nth degree, writing bits of information down on notecards, and all that stuff. I didn’t need to—that would make as much sense as sending Morse code via SMS text messages, or rendering TR-55 stormwater calculations down into a bunch of 1950s-era IBM punch cards and shipping them in a packing crate instead of emailing the data file. Word processing has eliminated the problem of having to rewrite or retype an entire document to accommodate changes, which is what made it so essential to have everything so rigidly planned out when handwriting or typewriters were the way of the world. I started with a general sequence of topics, typed stuff into a Word document as it came to mind, and cut and pasted chunks of it back and forth until the narrative was in a hopefully sensible order.
The technology evolved along with its users—when I started the IT job, most of the infrastructure wasn’t too far past the old UNIX mainframe-and-workstation days, half of the campus offices were still on dialup only, if you wanted to use the Internet you had to go to a computer lab, and very few people had cell phones. When I left, three years later, every building was wired for Ethernet, most of the students had Dell laptops or desktops, nobody used the computer labs, and everyone had a cell phone. Since then we’ve seen free WiFi in coffee shops, YouTube, smartphones and tablets, lolcats, and revolutions on the other side of the world getting liveblogged on Facebook.
Does it actually HELP?
Sometimes the technology works, and the user is up to speed, but the technology doesn’t actually add any value to the product or make the job easier to do. This is the hardest of the three questions to answer.
Most of the environmental applications of technology evolved to meet one of two metrics: productivity or precision.
Productivity means money—if you can do a job in less time, you can do something else with the time saved. Electronic logging of samples, as Joseph Burley has written about, can be a very useful time-saver, and I must confess to fits of jealousy. Myself, I’m essentially limited to pre-printing the labels for sampling jars so I don’t have to hand-write them on-site, but that still saves me at least half an hour when I have a lot of wells to sample. We also locate our wetland flags using GPS and bump the coordinates directly into AutoCAD, which saves the time it takes to get a survey crew to locate all the flags with rod and transit. On the other hand, if it takes longer to use the new thing, or if it is less reliable, then the technology is a flop. If a new XRF gun can shoot more samples per hour than the old model, but requires an hour to calibrate every morning, then it might not be worthwhile. The human eye and nose might still be better indicators of waste oil contamination than a PetroFlag test.
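As a concrete (and purely illustrative) example of the GPS-to-CAD step, here is a minimal sketch of how flag coordinates could be pushed toward AutoCAD without a survey crew: it reads a hypothetical CSV export of flag IDs and coordinates and writes an AutoCAD script (.scr) that drops a point and a label for each flag. The file names, column headings, and command sequence are my assumptions, not a description of our actual workflow or of any particular GPS unit’s export format.

```python
# Illustrative sketch only: turn a hypothetical GPS export (flag_id,northing,easting)
# into an AutoCAD script (.scr) that places a point and a text label for each
# wetland flag. Column names, units, and the exact -TEXT prompt sequence are
# assumptions and would need to match the real data and AutoCAD version.
import csv

def flags_to_script(csv_path: str, scr_path: str, text_height: float = 2.0) -> None:
    with open(csv_path, newline="") as src, open(scr_path, "w") as dst:
        for row in csv.DictReader(src):
            x, y = float(row["easting"]), float(row["northing"])
            dst.write(f"POINT {x},{y}\n")  # drop a point node at the flag location
            # -TEXT is assumed to prompt for insertion point, height, rotation, then the string
            dst.write(f"-TEXT {x},{y} {text_height} 0 {row['flag_id']}\n")

if __name__ == "__main__":
    flags_to_script("wetland_flags.csv", "wetland_flags.scr")
```

Running the resulting .scr file through AutoCAD’s SCRIPT command would plot every flag in one pass; the same general idea works with a DXF library if you’d rather hand the surveyor a drawing file instead of a script.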
Precision means data quality and a degree of certainty over results. Is a mapping-grade GPS as accurate as a transit survey? No. Does it need to be? For a wetland survey, no. If you’re doing a construction stakeout, though, GPS is not the tool you want to use. The customary level of accuracy and precision for a land survey is usually a sixteenth of an inch—just over a millimeter. You could draw it more precisely with a CAD program, yes—to the hundred-thousandth of an inch, if you wanted to—but your plotted drawing wouldn’t be discernibly more precise or accurate to the human eye than a hand-drawn plan prepared by a skilled draftsman.
Sometimes productivity and precision are mutually exclusive, and new technologies come at a cost that older methods did not carry. Then it becomes a question of whether the end justifies the means.
The science fiction author Robert A. Heinlein, who learned a thing or two about engineering as a naval officer and who was astutely aware that bells and whistles can become a ball and chain, summed the usefulness issue up with a passage from his most famous book, 1959’s Starship Troopers:
“If you load a mud foot down with a lot of gadgets that he has to watch, somebody a lot more simply equipped—say with a stone ax—will sneak up and bash his head in while he is trying to read a vernier.”
For example, low-flow methods have been standard for most environmental groundwater sampling work for almost ten years. The argument in favor is that this sampling method--the means--provides samples more certain to be representative of the media from which they are collected--the end--by reducing turbidity, avoiding stagnant water, and so on. On the downside, in practice, the use of low-flow is as subject to abstruse technical debate as anything else in the realm of sampling protocols and their potential biases. It’s also far more time-consuming than sampling with a bailer, since waiting for a troublesome well to stabilize on four or five parameters at a purge rate of 40 mL/min can take a while.
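To put the “waiting to stabilize on four or five parameters” part in concrete terms, here is a small sketch of the kind of check a field tech effectively runs in his head between readings. The parameters, the window of three consecutive readings, and the tolerance values are only loosely typical of low-flow guidance; they are my assumptions for illustration, not a citation of any agency’s protocol.

```python
# Illustrative low-flow stabilization check: the last three sets of readings must
# agree within a tolerance for every indicator parameter before sampling begins.
# Tolerances are loosely typical of low-flow guidance and are assumptions here.

TOLERANCES = {
    "pH": 0.1,                 # absolute pH units
    "conductivity": 0.03,      # relative (3%)
    "dissolved_oxygen": 0.10,  # relative (10%)
    "turbidity": 0.10,         # relative (10%)
}
ABSOLUTE = {"pH"}  # judged on absolute spread rather than percent

def stabilized(readings, window=3):
    """True when the last `window` readings agree within tolerance for every parameter."""
    if len(readings) < window:
        return False
    recent = readings[-window:]
    for param, tol in TOLERANCES.items():
        values = [r[param] for r in recent]
        spread = max(values) - min(values)
        limit = tol if param in ABSOLUTE else tol * max(values)
        if spread > limit:
            return False
    return True

# Keep purging at ~40 mL/min, log a new set of readings every few minutes,
# and only pull the sample once stabilized() comes back True.
log = [
    {"pH": 6.72, "conductivity": 412, "dissolved_oxygen": 1.9, "turbidity": 8.2},
    {"pH": 6.70, "conductivity": 415, "dissolved_oxygen": 1.8, "turbidity": 7.9},
    {"pH": 6.71, "conductivity": 414, "dissolved_oxygen": 1.8, "turbidity": 7.8},
]
print(stabilized(log))  # True once the troublesome well finally settles down
```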
As a side note, some states, including Vermont and New Jersey, were as of last October considering eliminating the requirement for low-flow sampling for petroleum cleanups funded by state reimbursement funds. The logic is that low flow methods are time-consuming, leading to higher manpower costs, and don’t provide levels of consistency, accuracy, or precision sufficiently better than old-fashioned bailer sampling to justify the extra cost.
I suspect that much the same thing applies to many of the more advanced technologies in the environmental industry—for example, membrane interface probes and newer borehole logging methods. Whatever the new methods’ benefits, you will see hollow-stem augers and the Standard Penetration Test on a lot of sites for decades to come, because that’s what a lot of people know how to use. Engineers (broadly construed) are generally a conservative bunch—as Sir Joseph Bazalgette, architect of London’s sewer system, put it, “The great engineer is a pragmatist made conservative by the conspicuous failures of structures and machines hastily contrived.” Many technologies are only slowly adopted, because the professionals to whom they are marketed are reluctant to employ a method or device until its reliability, consistency, and essential scientific soundness have been proven over a period of time.
Come to the dark side, we have cookies
The downside of technology is that any technological product or method can be abused, become a crutch, a shortcut, or a potential failure point. A computer model for stormwater calculations can be run without proper checks from the engineer, leading to errors. A banker can leave his client on the hook owning a contaminated property because the banker didn’t read the database search report he ordered, spot the dry cleaner listing, and ask for a Phase I. A vital piece of hardware can go haywire, leaving a project on hold until it can be fixed, or worse, produce bad data that nobody catches until the driller punches a hole in the septic tank that was supposed to be twelve feet away. In each case, technology fails without the proper human element.
Dude, where’s my jetpack?
The end point is that technology can only progress at the pace people are willing to accept. That’s what makes it so hard to guess where technology is going. We’re so far ahead of where we thought we’d be thirty years ago that reading old science fiction books can produce cognitive dissonance. William Gibson’s Neuromancer was cutting edge when it was published in 1984; it popularized concepts like artificial intelligence, virtual reality, and cyberspace. I reread the book a couple of years ago, and when I got to a part where one of the characters is trying to sell on the black market three megabytes of RAM that he stole out of a Toshiba laptop, I just about fell out of the chair laughing. Three MB must have seemed like a lot then, when 640 KB was the cutting edge, but the first PC I bought, in the late 1990s, had 64 megs, and the last time I saw even a 64-meg chip, it was junk that someone had fashioned into jewelry.
Anyone who in 1962 tried to guess what 2012 would be like would probably be astonished (or possibly depressed) at how things actually turned out.
Do we have lunar colonies? No. Do we have flying cars? No, and like I said, this is probably a good thing. Do we have smartphones that let us talk to someone on the other side of the planet while standing in line at the bank, or call AAA from any roadside, or let us know instantly when Grandma’s in the hospital? Yes. Has medical science made enormous progress? Yes; just think about how many people in the US didn’t die of polio, scarlet fever, whooping cough, or smallpox last year. Are we still burning fossil fuels? Yes. Have our air and rivers been restored to a quality they probably haven’t seen since the 19th Century? Yes. Do we all live in one of Le Corbusier’s gleaming and salubrious metal arcologies? No. We still live in wooden boxes, and Buckminster Fuller’s portable Dymaxion house was a total flop.
On the other hand, have science and technology let us down? Home appliances were one of the great commercial successes of the post-1945 era and a major theme in futurist thinking. Everyone wanted them, everyone could use them, and they made life easier. It’s true that the 21st-century equivalent of a housewife doesn’t have to spend an hour every day doing dishes by hand or hand-wringing laundry, and can microwave dinner rather than juggling pots on a one-burner stove. That’s good, because all that housework would cut into the time she spends at a part-time job to help make the mortgage payment. Our light rail and public transport systems, robust prior to 1945, have atrophied in favor of the automobile, which dominates our society to the extent that urban parking lots are a major land-use concern. We got to the moon with slide rules and vacuum-tube computers, but we haven’t been to the moon in forty years, even though we now have computers that can beat humans at chess.... yet an internet startup named Instagram was worth a billion dollars when Facebook bought it just months before Facebook’s own IPO hemorrhaged money. It’s because most people like sharing sepia-toned photos of their drinking buddies, and relatively few people (and regrettably few politicians with budget authority) care about outer space.
I highly recommend Joseph Corn and Brian Horrigan’s Yesterday’s Tomorrows and Eric and Jonathan Dregni’s Follies of Science, two very good retrospectives on what people in previous decades thought the future would look like.
Barring unforeseeable necessities, we probably won’t see any great revolutions in the immediate future, but rather a lot of small changes accumulating. We won’t see robots doing Phase I site inspections, but the inspector might have a webcam strapped to his shoulder with a project manager looking on. Some corporations might adopt remote monitoring for their manufacturing processes—it’s simple enough to wire a stack emissions sensor to a modem and let it phone home, and a webcam in a waste accumulation area could let the EHS staff in home office keep tabs on practices in Bangalore or Muskogee. We might wind up with Big Brother, but we won’t have SkyNet.... and there's always room for Jell-O Perfection Salad.... which at some point someone thought was a good idea. We don't know who, but were I in his shoes, I wouldn't own up either.
We were recently asked to bid on a job for a Phase I. After talking to us, the prospective buyer realized that the purchase and sale agreement specified (per the bank's direction) that a particular consultant would be performing the environmental study, at the borrower's expense, and that he was essentially stuck with them.
I understand that lenders can have preferred consultants, but I've never seen a consultant explicitly specified at a bank's request in a P&S. I'm pretty annoyed by this for a whole slew of reasons. Guess I won't bother waiting for that bank to call me, either.
Is this a new thing, or just the good old boy network (local bank, local consultant who happens to use that bank)?
Is there anything about this that would be unethical or illegal?
On several of our Phase II and remedial projects lately, where we have been working on behalf of a party who is not the site owner, we have been required to do split samples with consultants representing the various owners.
Is this becoming more common than it previously was? It used to be a very rare thing for us but it's become rather frequent in the last year or so.
Also, my experiences have generally been inoffensive if not actually pleasant.... how antagonistic can these situations become?