You see, ethanol contains at least 30 kilojoules per gram more available energy than the CO2 and water it's made from. One quart of vodka made in the Brooklyn lab of this startup contains about a half-pound of carbon, which, if they used their local electric power grid, represents at least two kilowatt-hours of energy. I looked it up, and New York State burns fossil fuel for something over one third of its total energy -- I suspect that does not include heating houses in winter, which all over the USA is mostly fossil fuel (because electric heat costs three times more than gas or oil) -- so if their process is 100% efficient (no wasted energy at all, which is impossible), and if you believe the skimpy numbers on the EPA (government) website (1.6 lbs CO2/kWh), it comes to two pounds of carbon added to the atmosphere for every pound they took out (which will eventually return to the atmosphere anyway when the vodka is consumed).
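That arithmetic is easy to check. Here is a back-of-the-envelope sketch in Python, using the text's own figures plus common handbook values (quart volume, ethanol density, carbon mass fractions); the startup's actual process numbers are unknown, so this is illustrative only:

```python
# Back-of-the-envelope check of the vodka carbon math.
# All figures are approximate handbook values or the text's own numbers.

QUART_ML = 946               # one US quart in milliliters
ABV = 0.40                   # vodka is 40% ethanol by volume
ETHANOL_DENSITY = 0.789      # g/mL
KJ_PER_G = 30                # minimum energy to make ethanol from CO2 + water
CARBON_FRACTION = 24 / 46    # carbon mass fraction of ethanol (C2H5OH)
LB_CO2_PER_KWH = 1.6         # EPA grid figure quoted in the text
CO2_CARBON_FRACTION = 12 / 44  # carbon mass fraction of CO2
G_PER_LB = 453.6

ethanol_g = QUART_ML * ABV * ETHANOL_DENSITY       # ~298 g ethanol per quart
carbon_captured_lb = ethanol_g * CARBON_FRACTION / G_PER_LB
kwh_minimum = ethanol_g * KJ_PER_G / 3600          # 1 kWh = 3600 kJ
carbon_emitted_lb = kwh_minimum * LB_CO2_PER_KWH * CO2_CARBON_FRACTION

print(f"ethanol per quart: {ethanol_g:.0f} g")
print(f"carbon captured:   {carbon_captured_lb:.2f} lb")
print(f"grid energy (min): {kwh_minimum:.1f} kWh")
print(f"carbon emitted:    {carbon_emitted_lb:.2f} lb")
print(f"emitted/captured:  {carbon_emitted_lb / carbon_captured_lb:.1f}x")
```

Depending on whether you round the captured carbon up to the half-pound or use the exact ethanol chemistry, the ratio comes out between two and three pounds of carbon emitted per pound captured -- either way, net carbon added.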
Ah, but you say, they are using fossil-free electricity! Don't you believe it. Maybe they have solar cells covering the whole roof of their lab in Brooklyn, which might turn out three or four bottles a day if they don't heat the place, but to win the Xprize they must show a profit, which even at $70/bottle means producing thousands, not dozens, of bottles each month. If they are doing this with electricity, they are buying it off the grid.
When I lived in Texas, you could buy your electricity from wind farms, but it's a lie. All the electrons in the whole continental American power grid are scrambled together, and all the renewable sources are running at full capacity (because everybody wants to say they are using renewable energy), so every extra kilowatt you want to buy off the grid must come from the only source they can turn up or down at will (and not only when the sun is shining and/or the wind blowing) which is fossil fuel. Some people pay extra for wind or solar power, and the operators of those plants take their money and pour their electrons into the common pot, and then when the wind dies down, the coal or gas plants crank up their burners and they get carbon fueled electricity anyway, just like everybody else. Any energy-intensive startup is pure added carbon, no matter how much they pay for it. Maybe in 20 or 50 years we will have enough energy storage to last through the nights and the cloudy and windless days, and enough generating capacity to keep those batteries full, but no country is there yet, and the front runners are at least 20 years away.
Now let's look at whether making vodka directly from CO2 is even the most carbon-efficient way to do it. The obvious alternative is a thousand years old: you grow grain or potatoes, which sucks vast amounts of CO2 out of the air, and some of it turns into starch that can be fermented (using yeast) and then distilled using heat -- that part the same as the guys in Brooklyn. Each pound of carbon that makes it into the bottle sucked 20-50 pounds of carbon out of the air on the farm, but also used up a few ounces of fossil fuel driving the tractors up and down the rows. The rest of that carbon is mostly cellulose (stems and leaves), which can be burned to run the still (no extra energy used), and what's left can be plowed into the ground for complete removal from the cycle. Actual net carbon negative for traditional booze.
But all is not lost. Those guys in Brooklyn can still be carbon-negative, right there in Brooklyn. Remember those solar panels on the roof? Shut the chemical plant down and sell the solar energy back to the power company, which removes the need for that much carbon-sourced energy from the grid, then go get drunk on farm-grown vodka.
But they won't win an Xprize for that actual savings.
What these guys did is a cute trick, but (a) All told it doesn't take more carbon out of the atmosphere than they put back in, (b) The best way to get carbon out of the atmosphere is still the way God did it, and (c) Taking carbon out of the atmosphere won't have any effect on the climate anyway (God is still in control of that).
The next page in Spectrum is a 2-page spread poking fun at the
Germans, who for all their government efforts in the last three decades
to convert to renewable energy, have not done as much as the Americans
did with no government help at all.
Many translators approach these classics reverently, like it was a holy book of some kind, and try to preserve as much of the original language flavor as they can without destroying intelligibility (see my reading of Chaucer, four months ago), but this guy erred on the side of unintelligibility: he preserves many Latin technical terms untranslated and many more as slightly Anglicized respellings. I have a college degree in mathematics, and I was surprised to see how much of what I learned in school came straight out of Newton. However, his proofs come off rather informal (compared to the way they taught math when I was in school), so much so that I could not follow the majority of them. A lot of his theorems appear to deal with rabbit trails, areas and speeds of bodies going around the center of a circle or ellipse, but I suppose those were the kinds of things they could measure or calculate about planetary motion, so it must have been interesting to them at the time. Some of his proofs he credits to Euclid or Galileo.
As I recall, Copernicus got credit for proposing a heliocentric planetary system, but (I read somewhere) Copernicus thought the orbits should be circular to reflect the perfection of God Who created it. When telescopes got better and they could measure the planetary motions more accurately, (I think it was) Kepler observed that the orbits are actually ellipses, but nobody knew why until (a century and a half after Copernicus) Newton invented calculus to explain how gravity works. I have not yet gotten to the calculus part, but already in the first 40 or so pages he is discussing how an attractor with an inverse-square centripetal force will cause a conic-section (ellipse, parabola, or hyperbola) path around it.
I still have more than 500 pages to read. Dunno if I'll survive. sigh
I didn't, I'm skimming already. I just can't get excited enough about the areas swept by random bodies going around each other.
Come to think of it, I had it up to here in heavy math when I was an
undergrad in Berkeley -- I nearly flunked out -- and after I did graduate,
I never touched math at all (other than simple grade-school arithmetic)
for something like ten years. Computers do logic, not math. It was funny:
one of the courses in the math major was Symbolic Logic, but it was also
required for philosophy majors, and taught in the philosophy department.
Half the class were math majors, the other half philosophy majors. The
math students all got A's and the philosophy students all got C's. Probably
explains why so much of what passes for philosophy is utter tripe.
Needless to say, I refuse to do it in my own Bible study. A few years ago I had a friend who was out there on the Spiritual front lines engaging the atheists -- and then coming to me for help. I thought it was fun, because the atheists are such idiots! They don't really believe their own arguments, they just throw up anything they can think of -- even if inconsistent -- so long as it attacks Christian faith.
A common question raised by the skeptics is, "If God is both Good and All-powerful, why is there evil in the world?" The answer isn't hard: if God had wanted robots incapable of doing their own moral calculus, He would not have given us free will. We cannot "love God with all of our heart, soul, mind, and being" out of compulsion. It must be freely offered, or it is not offered at all. When everything is said and done, the people who want to love God and their neighbor get to go to His Paradise, where, if everybody didn't do it all the time without exception, it wouldn't be Heaven for the rest of us. You don't want to do that? Not to worry, God has prepared a special place where God and His despicable rules are not, and you can go there instead. Of course, since nobody there follows God's Law, it's a horrible place to be, but take your pick.
Anyway, I was discussing this theological insight with a friend a month or two ago. My late sister used to say, "If you don't like being good today, what makes you think you will like it any better in the afterlife?" I thought it was a marvelous insight, and argued that way to my friend. He seemed to think that our less-than Heavenly personal preferences would all get switched off on Judgment Day and suddenly get replaced with virtuous thoughts and motives. Several days later I realized this was the same thing I argued against to the atheists. If God must do a brain transplant on you before He can let you into His Heaven, then it wouldn't be you going into the Pearly Gates, but some robot who only looks like you. I want my thoughts and desires to be so in line with God's purpose and will that a brain transplant won't be needed when I get there. I suspect God prefers it that way too. Certainly a lot of what Jesus taught is consistent with that understanding.
What brought all this on today is an article in a recent Christianity Today praising Phoebe as the first and best interpreter of Paul's epistle to the church at Rome. Only women write about women in the Bible (just as only blacks write about racism); any white guy trying to write the same thing would be immediately condemned as racist and/or sexist -- and it would be true, same as when those other people write about it.
So this "Phoebe" piece is substantially better than the average female-written skubalon, but for one blunder I don't think any competent guy would commit:
Whether your English translation says [of Phoebe] "servant" or "deacon," the only translation not fitting to the New Testament is "deaconess." [Nov CT, p.56]

She sort of hints, but does not say clearly, that the Greek word translated "servant" (or left untranslated as "deacon"), unlike most Greek nouns and adjectives, has no separate feminine form: the same inflection can be either male or female, like 'anthropos' ("person," as distinguished from 'aner', which is specifically a male "man"). The feminazis made a lot of trouble over this kind of word by translating not only 'anthropos' as "people" (correctly) but also the family term for "brothers" as "brothers and sisters" when the distinct Greek word for "sister" is not in the text there (though it is used in other places, as for Phoebe herself). They mostly won that debate, but apparently not on logic. I do not know if McNutt and Peeler (the authors of this article) themselves supported the mash-up, but most females writing anti-complementarian articles like this Phoebe piece belong to the same tribe.
It's just disappointing to see an otherwise fine article like this spoiled by an inconsistency so damaging to their overall credibility.
The point she/they were making was that the notion of a female "Deaconess"
as an ordained officer in the Church but subordinate to male leadership
existed neither in Paul's mind nor for a century or more after him,
but she did not capitalize the word to mark it as an ecclesiastical
office rather than merely the female form of a noun that could have been
so inflected in Greek if such a form had existed -- and which Paul certainly
would have used, as he did (in the same context) with both the female form of
"brother" = "sister" and 'prostatis', which the author somehow neglected
to mention is only feminine in Greek. God often and soundly condemns
having "different weights" depending on whether you are buying or selling;
this is clearly a case of it.
Over the years, I have been stopped between seven and ten times, on the road or in public places, for no crime other than being black. -- Esau McCaulley in Sept. Christianity Today

Over the years, I have been stopped -- maybe not ten times, unless you count being thrown out of church (which, as McCaulley describes it, probably amounts to the same thing), but at least four times -- for no crime other than being -- well, I'm not black, so I can't blame that particular whipping boy, but I usually blame -- being honest and obeying the laws. I think "terror" is still descriptive of when it happened to me as it did to McCaulley.
The point is, there are Bad People out there, and if you let the Bad Guys define who you are, then that is who you are and it will consume so much of your cognitive capacity that it cannot be used productively, and you will not be the kind of success that America is famous for, and then you will have something to blame the Bad Guys for. Everybody is born with the same number of brain cells, and the USA today is the most egalitarian country in the whole wide world and in all time. But you cannot become excellent at anything but looking over your shoulder if you waste those brain cells on looking over your shoulder.
The bottom line is still the bottom line: God is bigger than the racists and the sexists and the "other" political party -- you know the one, they forced safe (think: pneumonia when it shuts down in the middle of a winter night), energy-efficient (think: otherwise you're heating the whole house, with -- yikes! -- carbon fuel, because you can't keep one tiny corner of one bedroom warm with renewable hydro power) electric blankets out of the country... And when I'm not worrying about stuff that God has under control, I can do useful things that make the world a better place. With God's help, of course.
That's the point. God gave you and me these difficulties so that we
might know that "Man does not live by bread alone, but by every word that
proceeds from the mouth of God."
Then -- I guess it was yesterday -- I finished reading the next book in my personal library, this time The Text of the New Testament, in which the author Bruce Metzger laments on the next-to-last page how not even the Apostle Paul could spell names consistently. The central focus of Metzger's book is on recovering the original text of each Biblical book by comparing different versions of the ancient Greek texts and analyzing the kinds of copying errors that might have occurred. Bad grammar and misspelled words in the original text are easier errors to identify and repair than some of the others.
Of course, grammatical errors in the original text, like accurately
reporting a person's lie, do not disqualify Scripture from being inerrant
in all that it teaches, because Scripture does not intend to teach Hebrew
spelling or Greek grammar, nor that what the serpent told Eve in the Garden
was true. "I ain't never said nuttin" is a truthful but ungrammatical way
to say "I did not speak" on some particular occasion. Including these grammatical
issues as part of the Holy Writ may be God's way of reminding us that a
high-brow education is not a necessary requirement for entry into His Kingdom.
God did that a lot, as exemplified also in the fact that Mary and Joseph
and the shepherds were nobodies in their culture.
The funny thing about his effort is that it so resembles the Cargo Cults that arose among pre-civilized Pacific islanders shortly after WWII. It seems that the US military needed these isolated islands out in the Pacific as refueling stations for long flights from the mainland out to the battle fronts and back, so we brought in all the military support necessary to make that happen. The natives saw these big birds come down out of the sky and discharge all this "cargo" and then leave again. After the war was over, the military pulled out, leaving only a fading memory of cargo planes bringing gifts. So they built life-sized models of the long-gone cargo planes and left them out on the abandoned airstrips in hopes that the real gods would see them and return with more gifts. The anthropologists had a ball with it.
The point is, these people had no idea how planes work, nor why they
came and went, but they did know what they looked like, and they supposed
that was what is important. The same thing happens among academics trying
to re-create artificially the natural intelligence in every human brain.
They are getting better at seeing what these neurons look like, and like
the cargo cultists, they suppose that is all it takes to replicate the
functionality. Nobody knows -- not the Christians (see "The
Matrix" earlier this week) and certainly not the Darwinists -- how
intelligent thought and consciousness happens inside the human skull. For
a while they supposed that it was based on inferential logic (and they
made a lot of progress in that direction, perhaps as much as 1% of human
reasoning ability), but then they got lazy, or maybe the funding dried
up when the government saw how far off the goal was and found other ways
to spend tax revenues more closely linked to getting themselves reelected.
Making model airplanes is sooo much easier and cheaper.
If there are errors in the code, then we might detect the inconsistencies and know that this is not real. More likely, all we would know is that physics doesn't behave the way we thought it should in that particular situation. It happens all the time, as in quantum mechanics. People who imagine that Darwin was basically right get blindsided all the time when they discover that physics doesn't work the way they thought. The latest wrinkle is the prevalence of unfossilized soft tissue buried in rock layers once -- and perhaps still, despite the contradictions -- thought to be a half-billion years old. But that only goes to show that people want to believe what they want to believe, never mind the evidence.
I guess this is a hot topic right now -- it appears that another film in the franchise is due out next year -- because the Institute for Creation Research devoted a whole page attempting to debunk the idea. They failed. The problem is, like the movie people, they made the mistake of conflating the idea that we and our universe might be a cosmic simulation with the notion that we inside our universe can build a computer smart enough to fully emulate a human. They are different questions. God could have created the whole universe we live in as a simulation on His computer, and there's no way we could know it apart from God saying so. He didn't, at least not in the Bible. The Incarnation (God becoming human and living with us) is a little tricky, but not much more than a giant VR where God puts on the goggles in His universe to experience ours.
However, imagining such a supra-universe is no more productive nor scientific than the atheists imagining a zillion alternate realities where the laws of physics are randomly different, thereby to get around the problem of the "Anthropic Principle," the fact that our universe is so fine-tuned to support human life, that there's no way it could have happened by chance. Maybe the simulation idea is another shot in the dark to try to imagine how the universe could come to be without God. That's the real problem, they don't want a Creator God Who has the right to tell us how to live. The problem is, the programmer of such a simulation (if it exists) is God, and He does have that right, whether we acknowledge it or not.
As for humans building a computer within our universe that is complex enough that we could download our consciousness into the computer and thus outlive our mortal bodies, I suspect that is about as likely as Darwin being right about the origin of the species -- and for the same reason: entropy won't allow it. It's a different question than the universe being a simulation, because God (and His computer, if any) is outside our universe and not subject to our particular physics (probably including entropy). God can design and build a computer big enough to be the whole universe and all of us in it, because He is still bigger than His computer, just as I am bigger and smarter than any computer or program I could design. I could be wrong, but our entire experience, and science itself, is consistent with the Entropy law, and there ain't no such thing as perpetual motion, nor self-programming computers.
No appeal to any vitalistic soul or spirit outside the physical universe
(as the authors of the ICR piece did) is necessary.
Maybe there is such a thing, maybe not, we don't really know. If our consciousness
is nothing more than an incredibly complex computer, God can still upload
it into whatever eternal computer or other physical brain as He may choose
to (consistent with what He has already explained to us) because God is
not constrained by the physics of the universe He built. So these guys
are essentially arguing from ignorance. They cannot say the soul and/or
"spirit" are or are not part of the physical universe until somebody knows
enough about them to say so from the facts. Until then it is pure guesswork,
like Darwin supposing that the cells in a living body are amorphous blobs
of protoplasm capable of evolving.
Although not himself a Christian, the founding editor had his finger firmly on the pulse of BAR's readership, which (from the ads you can tell) is -- or at least was -- conservative Christians. His replacement is an academic with a terminal case of Clue Deficit Disorder. Having been in academia myself, I can attest from personal experience that CDD is an occupational disease more prevalent inside the ivory towers than outside. Like Nicodemus, they should know better, but they don't. Me, I got out. The academic tenure system guarantees that there is no consequence for incompetence, which is what CDD amounts to. In the Real World, if you fail to understand the customers and/or meet their needs, you start looking for another line of work, because you just lost your job.
Anyway (despite this editor's insistence to the contrary) hostility to Christian faith within his pages has stepped up substantially -- like the increased usage of the anti-Christian terms "B.C.E." and "C.E." for what used to be "BC" and "AD". The cover story features a town I never heard of in Samaria, where they found documents with almost no Biblical (Jewish) names: several secular documents, and only one Jewish name in the lot of them. They led off with that fact: it's what made the story interesting -- to the anti-God crowd. Finally, halfway through, they cite a couple of Bible verses (one in Ezra, one in Nehemiah) that mention Hadid. They could have said that up front, and changed the whole focus. If they wanted to.
That was subtle. The next story after it, "How Old Are the Oldest Christian Manuscripts?" is more blatant in its attack. An important part of the atheist assault on Christianity is the attempt to convert the divine inspiration and Apostolic authority of the New Testament into a Darwinistic accumulation of late writings by non-eyewitnesses. We have plenty of those, and they are not in the Bible for that reason. Unfortunately, the original documents that the Apostles did write are worn out and gone. All we have are copies -- thousands of them, some of them very early. Probably just as well we don't have the originals, it keeps us from turning them into idols like the serpent of Moses.
This article spends most of its ink on arguing for starting and ending dates based on secular events as the most reliable way to know the date of a manuscript. For example, excavators found a manuscript buried in "a secure archaeological context," the rubble of a Syrian city supposed to have been destroyed in AD 256 (yup, the only author who got the date right in several months), so they can know it was written before that. Another document found in the upper Nile mentions the death of some guy "in the 23rd year of Diocletian (AD 306/307)" so it had to be written after that. Then he uses this to dismiss the early date on the Rylands fragment of the Gospel of John and another codex thought to be an earlier copy of the Greek New Testament in the Vatican, the one on which most of the modern translations are based, because they were dated by the style of writing. Maybe he's right, maybe not, but he did not offer any dates that got moved earlier based on his technique. In any case they are all copies.
The really funny thing is that I used essentially this same argument against the reliability of (pottery) styles to criticize the dates BAR offered to dismiss Biblical dates of events and persons, see "Portland & Pottery" last year and "Archeology and Dates" long before that. If any of this methodology were really robust, you wouldn't see this kind of contradiction.
The next article, "What's In a Name?" reports on a statistical study comparing the names the archeologists found buried in Israel and dated (by them) to the approximate time of Jeremiah, to the names in the Hebrew text of Jeremiah itself. Mostly the differences are statistically insignificant. But that's not a satisfying result to an anti-Christian. So they worked harder and found a minor difference in spelling of the "-iah" ending of the names that ended that way, so that 42% of the Jeremiah names omitted the terminal waw but only 2% of the names from secular sources did. Then they happily attributed the difference to an intentional "process of redaction" in Jeremiah. Which is nonsense.
The most important thing to understand here is that this is a distinction without a difference.
In American English -- perhaps also in the British mother tongue around the world, and possibly also in French -- we insist that there is a Right Way to spell each word in the dictionary, and every other spelling is Wrong. We hold national contests where school children compete on their success at spelling obscure words correctly. But you don't need to look through very many pages of any English language dictionary before you discover how many words there are with alternate spellings. "Catsup" = "catchup" = "ketchup" are all different (correct) spellings of the same tomato paste product for dunking French fries and adding savor to otherwise dull hamburgers. Some of these differences are regional, mostly based on a different way to pronounce the word. But far more words are carefully spelled the approved way, and only pronounced differently. Every language has variations in the pronunciation of words across different geographical regions. Before there was a fast and inexpensive way to travel long distances -- and an even faster and less expensive way to hear people from different parts of the country and the world talking -- these differences increased until they became separate languages. Except for English and French (and obviously Chinese, where they make no effort at phonetic writing), every language in the world is written exactly the way it is pronounced, so you get different spellings all the time to match the regional pronunciations.
I frequently see this in Biblical Hebrew, minor variations in spelling for the exact same word, sometimes the vowels are in there as consonants, sometimes as dots and small dashes below the line or above, but they are the same word. Just spelled differently. Like catchup. When you hear a guy in any language of the world -- except English, French, and Chinese (and probably also modern Hebrew, because most immigrants to Israel never learn how to pronounce the difficult ayin) -- speak his name, you probably know exactly how to spell it (if you know the language). Then somebody else repeats that name, and their accent is different, and you get a different spelling. Names are like that more than anything else. And this article was about names.
So I wondered, who would put forth such a blunder? I flipped back to the
first page, and the author name is foreign, vaguely Hebrew. I hunted around
and found the Authors page, and sure enough she is a "research fellow"
at the Hebrew University in Jerusalem. She should know better. However,
if her goal is to deprecate Scripture, you can't really expect her to spend
much time reading it, can you? The feminazi focus of the new editor
tends to drag in more female-authored crud like this than his predecessor
did. Oh well. The stats were interesting (if you ignore the author's innumeracy).
Dunno what this guy was smoking at the interview, but some of his claims for 5G are just plain goofy. He said he invented the term "tactile internet" in 2012 to mean latency of 1 millisecond or less, which is a misnomer, because the word "tactile" normally refers to touch, not lag time. But what's in a name? The "1 millisecond" part he got from psychologists at Berkeley, but I suspect they just made it up.
I once actually timed (my own) human reaction time, and learned that I could not push a button and get back off it faster than about 50ms. That was back when I was also young enough to resolve 50-micron objects with my naked eye, and I had access to an oscilloscope to accurately measure the pulse width. 50ms is roughly the average time between keystrokes of a 200wpm typist, but she has multiple fingers going at once; you cannot exceed half that rate with a single finger (50ms down, then 50ms up, the same rate at which I was actually able to dial a rotary phone with a broken or missing dial (I forget which), by pounding the hang-up button as fast as I could while counting the hits, thus achieving the required nominal 10 pulses per second).
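A quick sketch of that timing arithmetic (using the standard five-characters-per-word convention for wpm, which actually puts the 200wpm inter-key time nearer 60ms than 50 -- same ballpark):

```python
# Timing sanity checks for the figures above.

# Rotary dial: the nominal 10 pulses per second is one full down-up
# cycle per 100 ms, i.e. about 50 ms down plus 50 ms up per pulse.
pulses_per_sec = 10
cycle_ms = 1000 / pulses_per_sec     # 100 ms per pulse
half_cycle_ms = cycle_ms / 2         # 50 ms each direction

# Typist: 200 wpm at the standard 5 characters (including space) per word.
wpm = 200
keystrokes_per_min = wpm * 5             # 1000 keystrokes per minute
ms_between_keys = 60_000 / keystrokes_per_min  # 60 ms, spread over many fingers

print(f"dial half-cycle: {half_cycle_ms:.0f} ms")
print(f"inter-key time:  {ms_between_keys:.0f} ms")
```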
The human eye cannot perceive the gap between optical events 10ms apart (that's the time between blinks of a neon light in Europe; the US standard is 8ms). I can detect and measure blink time down to 5ms or less, but only by swishing my eyes across the scene. Some modern LED car headlights and taillights are quite annoying (to me) at a blink rate closer to 1ms, but nobody can actually see that just looking straight at it. Straight at it is more like the just-barely noticeable delay of a new LED room light to come on after I flip the switch, less than a half-second, but more than a tenth (100ms); no delay is detectable on an incandescent light (probably closer to 50ms). I can see the difference on car brakelights, where the LED is operated at battery voltage, so there's no delay, thus enabling me to know which I'm seeing.
Maybe 5G is that fast, maybe not, but humans won't see the difference from their smart phone through the local 5G node (which cannot be farther than 30 feet, because the signal won't go through walls) and back, even if they are going 4G (25ms latency, 50ms round trip, plus substantial compute time, because it's sluggardly unix at both ends in either case).
Then he says that 1ms latency will enable remote control of robots. Perhaps on a factory floor, but you won't see any significant difference from 4G until you get rid of unix, both in the robot and in the controller (in other words, not a smart phone). Then he goes on to say:
Pedestrians using 5G-enabled smartphones could be able to walk safely into the street without checking for cars, because 5G-enabled cars would be routed automatically around the person or come to a full stop. In 20 years, most fatalities on the road should be a thing of the past.
One-minute past, that is. In other words, I don't believe this.
It's hard to find figures on this, but I found a New Zealand site that
I could interpolate their minimalistic table to show a stopping distance
(excluding driver reaction) of 5m (=15 feet, the average length of a compact
to mid-sized car) for a car moving at 20mph (10 meters/sec, the downtown
speed limit in many USA cities). So a pedestrian that steps out into downtown
traffic less than one car-length away will get run over, no matter how
fast their internet. At highway speed the distance is ten times longer,
far enough so the pedestrian wouldn't even think to see that far, which
is worse. Oh wait, there's no 5G out on the highways. And if the autopilot
of the car doing this alleged evasion chooses to change lanes, and there's
a car there, the resulting collision will have one or both cars skidding
sideways (so the ABS has no effect) and the stopping distance is even farther.
The pedestrian is just as dead 20 years from now as if they did that stupid
thing today.
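The stopping-distance numbers can also be reproduced from first principles instead of interpolating a table. A sketch, assuming a dry-pavement friction coefficient of 0.8 (my assumption, not a figure from the New Zealand site) and ignoring reaction time:

```python
# Braking distance from speed, ignoring driver (or network) reaction time:
#   d = v^2 / (2 * mu * g)
# mu = 0.8 is an assumed dry-pavement tire friction coefficient.

G = 9.81    # gravitational acceleration, m/s^2
MU = 0.8    # assumed tire-road friction, dry pavement

def braking_distance_m(speed_mph):
    """Minimum braking distance in meters for a given speed in mph."""
    v = speed_mph * 0.44704          # mph -> m/s
    return v * v / (2 * MU * G)

for mph in (20, 60):
    print(f"{mph} mph: {braking_distance_m(mph):.1f} m")
```

Since braking distance scales with the square of speed, tripling 20mph to 60mph multiplies the distance by exactly nine -- close to the "ten times" in the text, and the conclusion is the same: no internet latency changes the physics.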
But will 5G internet speeds even have an effect on it? For "1 millisecond latency" to have any meaning at all, every pedestrian at a busy traffic intersection must be broadcasting their GPS location a thousand (or more) times a second. Or they could do it 20 or 50 times per second (4G speeds, as stated in this interview) with no perceptible difference in whether the car hits them or not. So who is going to tell the car to stop? An on-board computer processing 20,000 GPS reports every second from nearby pedestrians, in addition to all the other stuff the robot car needs to think about? Or maybe a super-computer on every street corner, processing 50,000 transactions per second (including all the cars) plus up to a thousand car-to-people distance and direction calculations -- every millisecond?? On every street corner? Maybe Moore's Law will make that kind of computing power affordable in 17 years (20 from when he said all that), but I wouldn't invest in the company offering to build it.
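To put rough numbers on that paragraph: suppose (illustratively -- these device counts are my assumptions, not data) twenty pedestrians and thirty cars near one intersection, each reporting its position once per millisecond:

```python
# Illustrative message-rate estimate for one busy intersection.
# The pedestrian and car counts are assumptions for the sketch.

pedestrians = 20
cars = 30
reports_per_sec = 1000     # one position report per millisecond ("1 ms latency")

transactions = (pedestrians + cars) * reports_per_sec    # position reports/sec
pair_checks = pedestrians * cars * reports_per_sec       # car-to-person checks/sec

# The same scene at a 4G-class 50 reports per second:
transactions_4g = (pedestrians + cars) * 50

print(f"at 1 ms reporting:  {transactions:,} reports/sec, "
      f"{pair_checks:,} pair checks/sec")
print(f"at 20 ms reporting: {transactions_4g:,} reports/sec")
```

The millisecond reporting rate multiplies the load twenty-fold, while whether the car stops in time is governed by braking physics measured in whole seconds -- so the extra traffic buys nothing.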
"For sports," he says, "fans wearing smartglasses will be able to actually see the action from the player's point of view, [using] hundreds of ultrahi-definition cameras ... around the field." Maybe 5G will support the bandwidth required for every fan to get a separate feed, but everything else could be done now in 4G (if they wanted to pay for it).
Bottom line, penultimate paragraph, this is only for dense urban areas,
not out in the countryside -- nor in low-income areas, I suspect. The
poor people, "Let them eat cake." And, like
Wi-Fi even today, radio bandwidth is always finite and Parkinson's
Law applies: The harder they sell 5G, the quicker it will saturate (and
latency will go back up).
In fine print on the bottom of each page, IEEE Spectrum announces that this is the "North American" edition. Obviously they don't want to antagonize their Chinese colleagues with the machinations of those nasty entrepreneurial Americans. Maybe the reason the 2020 Spectrum is so much better in editorial quality than 2017 is that they no longer separate out a "North American" edition. About the same time, another piece celebrated the separation of their "Institute" for that specific reason (offending the advertisers).
Article #2 in the same issue is titled "The Patchable Internet of Things" and if you are not yourself a castrated (eunuchs) programmer, you might easily infer from the title the sole reason for the lament in the first pull-quote, "Why are IoT devices so vulnerable to hacking?" There are actually two reasons, and both must be true for a device to be hackable.
First and foremost, they are Patchable. An electronic device cannot be hacked unless it is remotely alterable.
Secondly, they are eunuchs -- I mean unix (same pronunciation, same difference). An electronic device cannot be hacked to mount a DDoS attack on other internet targets unless they are Turing-capable, meaning you can upload an arbitrary program to your "smart lightbulb" and expect it to turn the TV on in another location and set the channel. Light bulbs have no business sending arbitrary messages to other devices on the internet. The authors of this absurd article want every "Internet of Things" device to have a complete unix operating system (they didn't say so by name, but they want the facilities of it) so that it can be hacked, and they have the temerity to suppose they can make it less hackable by making it more so. They are going the wrong direction. Really.
It's easy to make a device that can be remotely programmed to do its job without making it hackable. It's very hard to make a unix (or unix wannabe, like Windoze) device that is safe from remote hacking. That's why the original Mac had a full order of magnitude fewer known vulnerabilities than the next-best OS, back when there was a MacOS (under the covers OSX is unix, not a Mac). Another good reason not to get an iPhone.
Now that I fully understand what idiots they are who make these things, you will never get an IoT device into my house. You tell me it's on the internet, I don't want it. You tell me it's smart, I really don't want it: it's probably smarter than its own programmers, which isn't saying much.
Anyway, June 2017 has a cover feature on making computers that supposedly work like the human brain. This issue came out about the same time I posted my essay denigrating "21st Century AI" but I had too much work on my plate to take time reading hogwash like this. Some of the Spectrum articles admit we are nowhere close to matching human brainpower, others hope to see it by 2020.
The first of the series mentioned Alan Turing's eponymous "Turing Test" where an ordinary person is unable to distinguish the output of a computer from a real person over the same channel. They quoted Turing as expecting it within 50 years (by 2000). The people who really believe in this stuff tend to be more optimistic than realistic. Partly it's ignorance: the human mind is far more complex than anyone ever imagined -- and still imagines, but we won't know that for sure for another decade or two, after we've learned more and once again pronounce imminent success at cloning it. This author predicts better understanding of ourselves after we succeed in cloning human intelligence in a machine, with "first benefits ... in mental health, organizational behavior, or even international relations." I can think of the kind of international relations that resulted from the logical implications of a mechanistic view of human intelligence (and specifically evolution, on which all of these writers base their optimism), and the international disaster that view provoked on the world starting in Germany some 80 or 90 years ago. IF we succeed (and for entropic reasons I don't think it possible), then we might unleash a super-intelligent evil on the world that will make Hitler and Stalin and Pol Pot look like children.
A couple of the articles dismiss current neural nets as an over-simplification of what goes on in the brain -- which is true -- and they have proposed new hardware that supposedly works more realistically. It will be a long time before these new, more complicated neurons turn up in anything big enough to do useful work -- recall that the current version is just "21 lines of code" and yet it takes many hours to train such a net to recognize simple scenes.
One article, "Navigate Like a Rat", seems to have abandoned NNs entirely and focused on attempting to reproduce the larger behavior that we see in rats exploring their territory. What he calls "SLAM" (Simultaneous Localization And Mapping) resembles rather remarkably what I was attempting to persuade our autonomous car project kids to do in order to win the international race. Of course I was not reading about the research in SLAM, I just re-invented it. The kids were not taking advice, so we did not succeed.
The final capstone article "Can We Quantify Machine Consciousness?" takes a sort of vitalistic approach to human consciousness, claiming it's about how the hardware is wired up, and that modern computers lack the feedback paths present in the brain, and this cannot be supplied in software. Which is pure nonsense. *I* claim that if they can successfully describe (and therefore build) a hardware device with the necessary feedback paths, then *I* can take that description and build a software "machine" on a conventional computer that is indistinguishable from their hardware version (except possibly for execution speed), because his description is the program, all I need to write is a compiler for it. That doesn't mean I believe he will succeed, but if he does, I can do the same thing in software. Software is like that.
Stronger than that, beginning with the IBM 360, there is no firm boundary between hardware and software. Even granting his (in my opinion bogus, but certainly unproved) claim that feedback between the components is necessary, I can come back and point out that the components where he sees feedback are themselves made up of components (transistors, wires) which lack that feedback in the same way that (so he claims) modern computers lack it.
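A toy illustration of the software-can-simulate-it claim, with a circuit of my own choosing (a cross-coupled NOR latch, the classic feedback circuit): the wiring description becomes the program, and iterating it until the feedback settles reproduces the hardware's behavior.

```python
def nor(a, b):
    """One NOR gate: output is 1 only when both inputs are 0."""
    return 0 if (a or b) else 1

def sr_latch(s, r, q=0, qbar=1, steps=4):
    """Two cross-coupled NOR gates (the feedback path): each step feeds
    the previous outputs back in; a few steps and the state settles."""
    for _ in range(steps):
        q, qbar = nor(r, qbar), nor(s, q)
    return q, qbar

q, qbar = sr_latch(s=1, r=0)                  # pulse Set
print(q, qbar)                                 # -> 1 0
q, qbar = sr_latch(s=0, r=0, q=q, qbar=qbar)  # inputs released:
print(q, qbar)                                 # -> 1 0 (state held)
```

The latch "remembers" its state purely because of the feedback loop, and an ordinary sequential program reproduces that behavior exactly, which is the point: a description of the wiring is itself a program.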
Bottom line: they know nothing at all, they have imagined an attribute that they imagine is present in humans and dogs, but not in computers and software, but all this is speculation and a vivid imagination. If they ever actually build something like what they think might work, I'm pretty sure they will discover it doesn't.
So why should anybody believe I'm right and Koch & Tononi are wrong?
You shouldn't. Go look at the primary evidence yourself. Where has anybody
ever made a device smarter than themself? It has never happened. We have
computers that beat humans at chess (except that the computer programmers
cheated: they changed the program mid-tournament) and at Go, but the programs
are not smarter, they are just faster. Those same programs cannot engage
their opponents in idle banter over supper the way a human opponent could
and would. We don't even have programs that can hold their own for ten
minutes of banter, when that's all they do. We have never seen any evidence
that machines can become smarter all on their own. One game program supposedly
played itself a million games and "became smarter" but the intelligent
way to become smarter at that same game does not require a million nor
even a thousand games, it requires intuitive inference, which people can
do and machines can only pretend at. Everybody who believes they will achieve
true machine intelligence, they think it will happen by the incremental
accumulation of random events, by Darwinistic evolution, but there's no
evidence that it ever happened! It's religion,
"Believing what you know ain't so." It didn't happen in Lenski's
E.coli experiment, and they suppressed the evidence of the failure
-- just try and find or get access to the second 10,000 generation
data, where it curved back down. I saw the chart. The text is still there
in the Wiki archives (with a disclaimer) but the chart is long gone.
I think we should blame the Russians for Trump losing this election. Some four years ago the Dems lost by a bigger margin, so they started casting about for somebody to blame (other than themselves) and decided that the Russians threw the election, so they embarked on a program to eliminate free speech from the electronic media (see my blog post "The New Tinfoil Hats"). If the Russian meddling on FaceBook did anything to the 2016 election, the left-wing bigots at Google and YouTube and FaceBook and Twitter did it more this year. Trump will scream and yell and challenge the results in court, but if there is actual voter fraud, you can be sure the fraudsters will have covered their tracks very carefully, so they won't find anything. In particular, the left-wing bias on the electronic media (like the Russians in 2016) at most only influenced voters, but they cast actual and authentic votes (just like last time). Or so the recounts and courts will determine.
From my perspective, God is bigger than all that, and things need to
get a lot worse here before the End, because I read the Last Chapter of
the Book, and the USA is not in it. Some of us are, but not the atheists
and God-haters. Their problem, not mine. I mean I wish I could do something
about it, but God won't even let me help fix problems inside the church,
let alone outside it. That's OK, I need to do the work God gave me to do,
and I think I'm doing that. Worrying about who got elected isn't part of
it.
Take the cover, which is meant to illustrate the future deployment of robot cargo ships -- obviously they couldn't show an actual robot ship, so the artist Eddie Guy PhotoShopped a conventional ship hull on top of a stormy rough water sea in which the ship makes no dent, then added two colors (dark red, light blue) of otherwise identical shipping containers in eight visible layers 18x12 (including numerous improbable gaps below the top layer), plus a little snow drifted inside the bow railing (maybe it was already on the ship photo he used) and overlaid everything with a pouring rain (rain falls in straight parallel lines, but snow flutters about in the wind), the kind of weather that would wash away any drifted snow in minutes. The pictures accompanying the article inside -- this Guy obviously didn't see them -- mostly look like bloated submarines (no people, so you don't need decks for them to walk on, nor windows for them to look out, nor railings to keep them from falling overboard, plus the smooth exterior reduces wind drag and increases battery life). It was an interesting article, they expected some prototypes actually running in Norway territorial (coastal) waters "by 2020" (I wonder if they succeeded?) while waiting for maritime treaties to decide what people can do on the high seas.
Then some guy is trying to scale his VirtualReality (VR) gaming platform up to a billion users, I think again by 2020. I think OpenSource is fine and dandy for low-grade business profits and low-quality user experience, but I don't think he did the math. He requires high-end goggles ($700) and PC ($800) which is a pretty big chunk of change for people used to doing Pokemon for free on their phones. Worse, his system requires 10Mb/s internet connection, which is hard to find here in the hinterland, especially during prime evening (entertainment) time. Either the entrepreneur or the author (it was unquoted and unattributed) claimed
That's well within reach of many people in the United States, where broadband networks currently provide an average connection speed of 15.2Mb/s [emphasis added]

You calculate the average by adding up all the particulars and then dividing by the number of them. So a million broadband users could include a hundred corporate accounts with terabit throughput and all the rest at dialup speed (50Kb/s, like I get here around 9pm), and the average is 100 trillion bits and small change divided by one million = 100Mb/s. But nobody actually gets that; they still get only dialup speed. Or 2% of your million users have gigabit throughput and all the rest piddly dialup speeds, and the average is still 20Mb/s. Average is a lousy way to find out who has sufficient bandwidth, because a few very large numbers can skew it. I think a median is more useful: that's the bandwidth threshold where half of the users are above and half below. If the median were 15.2Mb/s, then he'd have a sizable market (more than half of the USA) for his product. But the number crunchers didn't say that. Maybe it's because the average looks better than the truth. Or maybe they are merely innumerate.
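The skew is easy to demonstrate with made-up numbers (mine, chosen to match the 2%-gigabit scenario above):

```python
# 2% of a million users at gigabit speed, 98% at dialup (all in Mb/s).
# Scaled down to 1000 users for brevity; the ratios are the same.
speeds = [1000.0] * 20 + [0.05] * 980

mean = sum(speeds) / len(speeds)
median = sorted(speeds)[len(speeds) // 2]
print(round(mean, 1), median)   # -> 20.0 0.05
```

The mean says 20 Mb/s; the median says nearly everyone is on dialup. Both are "true," but only one tells you who can actually use the product.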
The next article is a hard-sell promo for hydrogen fuel cells powering cars and trucks. The author quotes Elon Musk against it -- but of course Musk has a corporate reason for preferring battery-powered electrics like his company's Tesla cars -- then tries to answer the criticisms. Not very successfully, I would say; (like Musk) he tries too hard. Hydrogen is very energy-dense -- well duh, that's why they want to use it -- which means if the tank is compromised, you have very-high-pressure, very flammable fuel spraying at high velocity, so the resulting explosion and fire is not limited to 20 or 30 feet from the car like gasoline, but possibly as much as 100 feet, depending on the nature of the breach. The government knows that fuel oil + ammonium nitrate makes a car bomb, so they watch large purchases of that combination; I think hydrogen has more energy per pound, so a bigger bomb is possible in a smaller space -- and if it's a standard car fuel, it will be harder for the government to watch bomb makers. That's not necessarily safer than gasoline.
Storing and controlling the pressure is much more expensive than pumping (liquid) gasoline at air pressure. The author claims that "At full build-out, only 15 percent of current gasoline filling stations operating in the state need to transition to hydrogen." I don't think so. People living in the boonies need a gas station every 20-50 miles, because that's as far as you can reasonably hitch a ride with a jerrycan -- oh wait, you can't load up a jerrycan with hydrogen and hitch a ride back to the depleted car, there's a high-pressure issue to deal with. And this is California he's talking about, where people move out to the foothills and commute because the cost of living in the cities is prohibitive (I knew people like that 20 years ago, when I lived there); the idiots in Sacramento are driving the poor out of their state like they drove most of the mid-range businesses out when I was there. I wonder if they realize that the poor are the ones who voted for them? Whatever, not my problem.
Musk's final criticism is that the hydrogen used in FCEVs (fuel cell electrics) is not produced from renewable sources. The author answers: "He is again not correct. Today 33 percent of the hydrogen dispensed in California must, by law, be produced from renewable sources." That's a tough mandate to enforce. Electrons can travel the whole length of the state and back in less time than it takes for your 60Hz light bulb to flicker once. If you are on the grid -- and you can be sure all the hydrogen producers in California are -- 37% of your electricity on average comes from fossil fuel (Google gives that as the California average this year), no matter who you paid for it, nor what they used to push their electrons onto the grid, because all the electrons are co-mingled in one big soup. Only when California gets entirely off of fossil fuels -- as they hope, but probably cannot achieve in your or my lifetime -- can you be sure the hydrogen (confined to in-state producers) is renewable energy only. I still don't believe it: How much coal- and oil-fired energy went into the Chinese and Indian products that increasingly must be used to support our local no-carbon fantasies? Nobody is saying. They probably don't dare.
The last article is pushing the developers' idea of a "microgrid": a solar panel plus a controller and battery to supply 150 watts of power in remote villages in India. It's a fine idea, but again, this guy oversells his technology. It's not the fact that they are powering the house with DC that saves energy, but the fact that vendors of low-power products assume a battery source, so they work harder at reducing the power needs than vendors of appliances that run on wall power, which can (inaccurately, at least in India) be assumed to source more power.
Taken together, the DC line from the main grid and the solar microgrid are enough to power five fans, eight LED lights, two small flat-screen TVs, several cellphone and tablet chargers, and a laptop. [emphasis added]

That sounds like he is saying all those things can run at once on the 150-watt supply, but do the math: he said DC fans consume 30 watts, so five of them use up the whole available 150 watts; similarly, LEDs as bright as the one in his picture consume 16 watts, so eight of them is most of the 150 watts coming out of the power source, allowing for some voltage-conversion energy loss. I think maybe he should have said "or" in his third-last word. On the next page he said DC-to-DC conversion is "much more efficient" than AC-to-DC. It seems to me that the same technology works on both, so it shouldn't make a difference -- unless the high-voltage transistors used for stepping street power down to the (safer) voltages needed for DC distribution are inherently less efficient. I couldn't find anybody who said, but Google can no longer be trusted to give you the truth (see "New Tinfoil Hats" two weeks ago); I did see a couple of sources that gave 90% efficiency for DC-to-DC conversion and something closer to 75% for AC-to-DC. Perhaps again the market pressure isn't there for the last tweak of energy savings when wall power is assumed. In any case a 20% improvement is not at all like the difference between tungsten and LED lights, where the power savings is a factor of six.
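The budget arithmetic, using the article's quoted per-device figures (30-watt fans, 16-watt LEDs) against the 150-watt supply:

```python
SUPPLY_W = 150       # the microgrid's stated output
fans = 5 * 30        # the fans alone exhaust the whole supply
leds = 8 * 16        # most of the supply again, before conversion losses
print(fans, leds, fans + leds > SUPPLY_W)   # -> 150 128 True
```

Five fans plus eight lights is already 278 watts, nearly double the supply, before the TVs, chargers, and laptop even enter the picture; hence "or", not "and".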
All in all, I was disappointed in the reduced accuracy in the flagship
organ for the Electrical engineers. Oh well, it's not like anything
I
do depends on it.
I moved here because the last couple years before she died, my sister was hospitalized a lot, and I realized I needed to be near family when that started happening to me. My nephew moved here (to Oregon) because he took seriously his Biblical obligation to look after his aging father; I'm only an uncle, but maybe that's better than a total stranger. One of the things I can do to "pay forward" the anticipation of his attention when I've lost my own marbles is to be available to sit with his Dad when he and his wife need to be elsewhere. They are going to a night class sponsored by his job, so I'm up there every Tuesday evening this fall. For a while I watched "Person of Interest" on Netflix while there, but Netflix killed it a month ago. It was a thriller, the heroes were very good at what they did (some, but very little inner turmoil), and one of them was a computer whiz. The technology was a little over-the-top, but forgivable -- it's fiction, and a fun watch. The third season started getting goofy just before Netflix took it down, but not like Sherlock, where -- like the ghost-writer who took over Griffin's Presidential Agent series (see "New Town Woes" four years ago) -- the writers obviously got tired of Conan Doyle's winning formula and made a joke of the form; it was awful in both cases. MacGyver also did that in his final season (see "MacGyver Blowout"), probably what killed the show. Mostly Netflix is like TV (see "Netflix Failed" four months ago), more trash than good stuff, and they make it really hard to find anything worth watching, so I'm back to reading.
Another WIRED magazine had a few marks showing I'd read those articles, but a lot more I did not recognize. They had a full page listing streaming services for when you get tired of Netflix -- some of them "free" -- but one of them (whoever bought it from Walmart needs to monetize the purchase) mostly promotes "buy" and "rent" when you get there. Another is supposed to work like your library, but (obviously having no revenue source to fund a quality product) is doing a cheap imitation of Netflix with no search capability. I managed to punch past their sign-in page to get some information, and apparently they have no way to opt out: you can terminate a "membership" in any of the several libraries where you hold a card, but not the last one, and not out of their service entirely. I didn't join.
The property tax bill came last week: I'm still paying the tax to support that library whose doors have been locked for seven months. Add that to the inflation (a hidden tax on the poor) and the way "not one dime" Obama's socialist business policies discouraged businesses from spending the money to prepare for the pandemic that everybody a decade ago knew was coming, and could have done something back then to mitigate, but didn't. If Biden doesn't win in a couple weeks, one of his party associates will in 2024, and just knowing that is enough to scare off investment in infrastructure to prevent the next pandemic from being worse than this one.
Oh wait, isn't "plague" (and bad weather) among the signs of the End?
Maybe there won't be another election.
Nobody wants to consider the possibility that more people hated Hillary than hated Trump. It's much easier to assume, as one of their colleagues said several decades ago, that the other half of the country "are poor, uneducated, and easily led." And given that unshakable but patently false premise (thoughtfully now unstated in such blatant racist terms), the clear intent of every one of these featured people is to lead some of these poor and uneducated morons into voting for the other camp. If you look carefully at what some of them propose to do -- and in Google's case, actually are doing -- it is to replace the so-called "misinformation" (translation: "we don't like their politics") with exactly the same kind of stuff, but which pushes the other political agenda.
Did you get that? You can no longer trust Google searches! I already knew that. Yes, they will find actual content on the web, but the priorities will be biased. The top ten won't be what you want to see. YouTube, which is owned by Google and claims that 70% of their viewing time is triggered by recommendations, has intentionally changed their recommendation policies so that it is no longer what people in your demographic also watched, but rather what their programmers decided you should watch. The team leader in this project ran for Congress as a Democrat in the current election cycle -- he failed in the primary -- so he's obviously not trying to boost Republican Party votes.
The subtitle on another article claimed that their featured person helped Trump win in 2016, but is now actively working against Trump. Buried deep in the middle of the story is the truth: he built some of the software on FaceBook that other people used to push Trump (not known if they succeeded, or if people just ignored that stuff and voted for whom they wanted), but this guy actually voted against Trump in 2016 also. None of these people were convinced by facts, they just don't want to believe that honest, thoughtful people chose to vote against the left-wing bigots.
The techies don't realize it -- their Religion (believing what you know ain't so) won't let them see it -- but people are smarter than 21st-century "AI" (artificial stupidity) robots. The political party that wants to win in 2024 -- assuming that tech-driven propaganda actually works -- will have smart people producing videos to beat the AI take-down bots. It's the eternal game of cat-and-mouse, cops-and-robbers, but the people have the technological edge over the robots. I wonder how long it will take the atheist techies -- Democrats, all of them -- to realize that on a level playing field they've been had by their own priests?
Next month's election will be interesting. With the left-wingers now pushing their AI/technology, will they convince the "poor uneducated and easily led" morons to vote out Trump? Or will intelligent people everywhere choose to vote for whom they want, rather than for whoever has the biggest technology budget? And if Trump wins again, what will the whiney Dems blame it on this year? Maybe they should look at the name of their own party.
Full disclosure: I didn't vote for Trump last time, and I don't like
him any better this year, but at least he's kept his promises, which is
better than his "Not One Dime" predecessor. I don't know anything about
Biden, maybe we'll find out next year, maybe not.
So they have this booklet that explains in detail these policies and intentions. They are basically middle-of-the-road conservative Protestant. Most of it is pretty much identical with every church I looked at over my life (excluding a couple churches that were closer to one edge or another), some of the sexuality stuff spelled out in more detail to point the lawyers at because our "Established Religion" (what the government pays for and enforces) is so far from the Biblical standard these days.
One of their "Core Values" is "Unity in essentials, freedom in non-essentials" (I think that line dates back to Augustine) but they are a little vague about what counts as "essentials." Another page lists "Our Core Beliefs" in two sections, one labeled "What We Believe (Statements of Faith)", the other "Additional Beliefs & Positions". Is one of these "essential"? Or both? I didn't think to ask. No matter, I have no problem with any of them.
One of the other attendees there asked about Arminianism vs Calvinism (nothing in the booklet about that topic). I grew up in a denomination that taught 5-point Calvinism (without using those terms). As I got older and read the Bible for myself, I started to see the Scriptures supporting the other side of the debate, and began to call myself a "two-and-a-half-point Calvinist," which gradually degraded to something like "3/4-point" today. The pastor came down solidly on the fence, then added that the church leadership (I forget his exact words) does accept unconditionally OSAS (Once-Saved-Always-Saved). That's the "P" = "Perseverance of the Saints," the fifth point in Calvinism's "TULIP."
The problem is, the Calvinists must disregard (or re-interpret) Heb.6:6 and 1Jn.5:12. Jesus had strong words [Mark 7:13] for people who did that to Scripture. As a non-member, I can sweep such things under the carpet of "non-essential," but to become a member I must confess agreement. I am not yet ready to give up 2Tim.3:16 for the questionable benefits of church membership.
Another three or four booklet pages list the benefits and obligations of church membership, nothing really new there that I did not already discuss in my essay "Local Church Membership" five years ago. The pastor said they weren't legalistic about membership, which works for me.
Bottom line: I'm happier with this church today than I was a week ago,
but I had already decided a couple months ago to call it home without formal
membership (if possible, which has not changed). Everything they think
they need membership for, they already have without it (at least for me).
If they Do The Right Thing, it will work. If they violate their own policies
and/or Scripture -- God alone can protect me from that, as He already has
done in other situations, a half-dozen or more times.
This particular song leader exuberantly belts out her song with a hearty voice and vigorously strums her guitar, holding it so that its polished surface reflects into my eyes the spotlight trained on her (it would be less distracting if the house lights were up, but this is entertainment, not true worship). I have no doubt that her faith is genuine (most likely good Relationshipism, like every other musician and preacher in American churches), and this is the first time since I started coming to this church that I noticed them repeating a song sung in a previous service -- it must be one of her favorites, because they rotate between three or four music teams (or at least leaders, the others are not so prominent that I noticed), and the previous time we had the same song she was also the lead.
It's a classic "7/11" song (7 words repeated 11 times), in this case the 7-word line is repeated only six times, but the 3-word title is repeated 15 times and accentuated ("Yes. I. Will.") by the music so there is no mistake, this song is about me, what I am promising to God, an act of my will. And that's not bad -- except God says don't do that. Don't make promises you can't keep. God seems to consider it important, He said it several times, both in the Old Testament and the New, not only in the Ten Commandments (indirectly) but also in the musings of Solomon and in the Epistles and even on the lips of Jesus, and by example in several other places. One Old Testament story [Judges 11:31] makes no sense to us moderns at all, except as an example teaching us not to make promises to God that we might be unwilling to keep.
A Relationshipist considers affirmation (like making promises, which is a form of telling somebody how much you love them, an affirmation) to be more important than truth (like keeping those promises, or actually giving up your convenience and rights to do something of value to the person so "loved"). The best Relationshipist prayer starts out "Father we/I love you," even though the supplicant openly admits to God and everybody else that he does no such thing. Jesus said "If you love me, you will keep my commandments," but the Relationshipists all insist that it cannot be done -- which is not true: we may make mistakes, but if we practice it, we get better at it and slip up less often -- but God seems to think (and teach) we can actually do what He commands. The Relationshipist husband tells his Relationshipist wife how much he loves her, and she is content with the lie (most of the time) and eager to forgive him the rest of the time, but less so now in these days of Feminazi empowerment and the destruction of marriage. God calls us to Repent, which as the preacher reminded us yesterday, means "Turn around and go the other direction."
Back to the song, it has 300 words (I counted them), but only 72 if you eliminate all the repetitions. 42 of those 300 are pronouns, 34 of them first-person singular (I/me/my) and the other eight second person (you/your, referring to God). There are only four explicit references to God Himself. There are almost three times as many references to the singer in this song as to God. This is a very Self-ish song indeed. Pretty much all of her songs were, I think one had approximately equal first- and second-person pronouns (but I lost count, and cannot remember the name to look it up on the internet). The person driving the PowerPoint computer was less skilled than usual, but even under the best conditions the words are up there only a few seconds, not nearly enough for me to internalize them, nor even remember their gist most of the time.
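The tally itself is mechanical; here is the method on a stand-in lyric (my own placeholder words, since I won't reproduce the real song), with the counts in the paragraph above done the same way by hand:

```python
# Count total words, unique words, and pronoun usage in a lyric.
lyric = "yes i will lift you high i will bless your name yes i will"
words = lyric.split()

total = len(words)
unique = len(set(words))
first_person = sum(w in {"i", "me", "my"} for w in words)
second_person = sum(w in {"you", "your"} for w in words)
print(total, unique, first_person, second_person)   # -> 14 9 3 2
```

Run over the actual lyrics, this is the count that gave 300 words, 72 unique, 34 first-person pronouns, and 8 second-person ones.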
The previous week (different song leader), the first song was almost entirely about God. It can be done. His second song was more selfish and the final (post-sermon, nobody calls it an "invitation" song any more) song was almost as opposite to the sermon as one can get. Jesus said "If anybody wants to be my disciple, they must deny themself..." We don't want to do that. We like being selfish. Me, I think the church leadership (in any church, I actually know of one church that made the effort) should instruct the music team(s) to select songs that glorify God rather than self. Perhaps they don't want to antagonize the temperamental musicians. I was in another church where the founding pastor was a heavy-handed controller, and he couldn't keep his choir directors, they kept quitting. Surely there's a middle ground here? Probably not my place to say anything -- certainly not in this church, this year: I'm new here and they have more important things to worry about (like Covid).
Jesus told about a farmer with two sons. He asked the first one to go
work the crop and the kid said "Sure," but he didn't go. So he made the
same request of the second one and the kid said "No way," but later changed
his mind and went. Which one made the father happier? I'm told that it's
better business policy to under-promise and over-produce than the other
way around. I suspect God agrees.
The Books and the Parchments (B&P) is almost as old as Regress, but F. F. Bruce concentrates more on the history of the Biblical text. The Dead Sea Scrolls were fresh news when he wrote it, but still mostly unpublished a half-century later when Herschel Shanks, founder and then editor of Biblical Archaeology Review, lit a fire under them (we won't be seeing that kind of stellar journalism from the current editor, who promised to disregard "unprovenanced" findings like the DSS, see "BAR Commits Suicide"). Anyway, B&P is a good read, even if a bit dated. The final chapter is not the longest -- that would be the previous chapter on English Bibles to date (that is, up until the 1960s) -- but it does dwell rather too long on the New English Bible, of which only the New Testament had been published at the time. Bruce obviously thinks it's the cat's whiskers. I generally recommend the New Living Translation, published I think some 30 years later, as "most accurate," meaning: average readers are most likely to understand it correctly.
Looking for something a little more lively, I saw a Grisham novel on my niece's bookshelf and she offered to lend it to me. There's rather more irrelevant detail than you usually see in a novel, and then I saw a bunch of photographs a few pages ahead and realized you don't find photos in a novel. This was Grisham's first nonfiction, and knowing that rescued me from some of the tedium. My recent experience with the local police and other "good guys" doing harm made his story all the more credible. So no, I don't want to read another one of those. After the central character -- you can't really call him a "hero" -- was exonerated of the crime he didn't commit, he had lost so many of his marbles in prison that he couldn't stay in one place more than three months. One of the places Grisham mentioned is the nursing home where my mother spent the last few months of her life, but the guy came and went years before my mother arrived. Not even as close as my fleeting brush with movie star Q'orianka Kilcher (see "Roots" ten years ago), but enough to add some temporary sparkle to my otherwise drab life.
Back to my personal library, where amongst the other outdated books I found The Digital Villain (DV), which appears to be a philosophical treatment of computers in our life today. Unfortunately it was published the year the first (4-bit) microprocessor became available to the public, so most of his history is moldy. I got started while waiting on a long compile (which just now finished), but it will be interesting to see if he was prescient or merely fictitious in an ignorant sort of way. He's a prof at Berkeley (after my time there) and his approach is somewhat academic.
PostScript. About the middle of DV he quotes Marvin Minsky, a pioneer in artificial intelligence at MIT, claiming
In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable. -- Life Magazine, Nov. 1970
Of course none of that has happened, not in 8 years, not in 50 years, and we are farther from it today than we were when he said that, because what is taken for intelligent machine behavior today is no smarter than an earthworm, and certainly not smarter than what AI used to be in the 1970s (see "21st Century AI").
The whole back half of the book (beginning with this quote) is devoted to AI in fiction. One of the longer chapters is little more than a political screed against the Cold War. This was obviously written before Reagan's "StarWars" program ended it. The remarkable thing is that the (leftist, all of them) university professors were the most vehement against Reagan's "StarWars," yet that was the very thing that brought about the end of the Cold War that everybody feared most.
Now we have a new global catastrophe for everybody
to fear -- before and after COVID, and for some idiots,
even during it -- and the same leftists who brought "Global Warming" to
national attention are the ones whose policies are most likely to obscure
its departure. Who said university professors are smart? I should know,
I was one, once or twice. Not any more, I didn't like their politics.
When I started in grad school I thought "God probably created using evolution," and my major professor suggested I look at the evidence. I was astounded to see that everybody arguing in favor of the Darwinist hypothesis always pointed to somebody else's research, not their own (see my essay on "Biological Evolution"). In the course of asking what I came to call The Question -- for more than four decades now -- of anybody and everybody doing primary research in any field at all, and still finding only complete silence and/or pointers to somebody else, I have become rather more evangelistic for the side that talks science, not hearsay. For a while I also read one of the publications by the Darwinists, but they wanted only True Believers reading their stuff -- aka "preaching to the choir," which is what you do when your religion is not credible on its own merits -- and I didn't want them claiming I agreed with them.
The Institute for Creation Research is rather more confident in (and therefore less restrictive of) their material, so I've been reading their stuff for several decades. They tend to have the "NIH" ("Not Invented Here") attitude common among uncontested experts at the top of their field (Apple and IBM both had that attitude back when they were at the top; neither is any longer, and no wonder), so ICR scientists sometimes come off a little goofy, but mostly it's good stuff when they stick to their expertise.
The October issue of their flagship Acts & Facts (no link, they unnecessarily started encrypting their website a year or two ago, so I read the dead-tree version) offers a piece by Jeffry Tomkins making the case that "The New Testament Upholds Created Kind Stasis" based on the perfect tense of a single Greek word in Col. 1:16. He gave the word in Greek, and it did not look to me like perfect tense (the perfect usually begins with the initial consonant "reduplicated" and has a kappa in the middle, and this word had neither), but sometimes there are irregular inflections, except those are usually so identified in the dictionary at the back of my GNT, and this one was not.
I spent 15+ years working on Bible translation software (see BibleTrans) and my program gives the complete parse of every word in the GNT, and the data files are still on my computer, so I looked it up and discovered I no longer remember what the parse codes mean. No problem, the program itself spells them out, but I could not find the text files I built that from. Worse, the last compile from that development effort, which ended about ten years ago, crashes when I try to open that verse.
Google Knows All, and I found five different websites that parse out the individual GNT words: one that agreed with Tomkins; one that agreed with me (against Tomkins); one that agreed with Tomkins that it's perfect, but called it "middle/passive," which would detract from the point Tomkins was making about it being passive; and a fourth site that did not parse out that one word in the verse, but instead referred to a note at the bottom of the page:
[Note] Here at Abarim Publications we're thoroughly excited about our interlinear New Testament, and work is continuously in progress. To date we have about 90.3% of the New Testament covered
In other words, this is a hard word that fell out of their automated parse engine, and they have not got around to solving it.
The fifth site used ambiguous parse codes similar to what I was using in my program, but offered no explanation. They had a feedback / question mechanism, so I asked and got a reasonably prompt reply referring me to three other websites, one of which was the site with the note. The site he said was definitive was not useful at all, but the third site was one I had not seen. It also gave the parse as "perfect middle/passive".
Finally, I remembered I had a version of my program running on WinXP, the laptop I bought for that purpose but now mostly use for watching downloaded movies. It opened the verse right up, and gave the parse as perfect middle/passive. It still doesn't look like a perfect, but like I said, there are irregular verbs, and I'm no expert. "Middle/passive" means that the same form can be either passive (Somebody else did the creating) or middle (it created itself, as the atheists claim). The same verb earlier in the verse is aorist passive (not middle, not perfect).
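For readers unfamiliar with these parse codes: the compact notation most of these sites use looks something like Robinson's scheme, where the verb in question comes out as V-RPI-3S (verb, perfect passive indicative, 3rd person singular). Here is a sketch of a decoder for the verb subset; the tables are abbreviated and the exact letters vary by site, so treat this as illustrative, not authoritative.

```python
# Sketch of a decoder for Robinson-style Greek parse codes, e.g. "V-RPI-3S".
# Tables are abbreviated; real schemes have more tenses, voices, and moods.
TENSE = {"P": "present", "I": "imperfect", "F": "future",
         "A": "aorist", "R": "perfect", "L": "pluperfect"}
VOICE = {"A": "active", "M": "middle", "P": "passive",
         "E": "middle or passive"}
MOOD = {"I": "indicative", "S": "subjunctive", "O": "optative",
        "M": "imperative", "N": "infinitive", "P": "participle"}

def decode_verb(code: str) -> str:
    """Spell out a verb parse code like 'V-RPI-3S'."""
    _pos, tvm, pn = code.split("-")
    # Take the last three letters of the middle field, skipping any
    # leading digit (e.g. the "2" in "V-2AAI-3S" marks a 2nd form).
    tense, voice, mood = tvm[-3], tvm[-2], tvm[-1]
    number = "singular" if pn[1] == "S" else "plural"
    return f"{TENSE[tense]} {VOICE[voice]} {MOOD[mood]}, person {pn[0]}, {number}"

print(decode_verb("V-RPI-3S"))
```

The "perfect middle/passive" reading the other sites reported would come through the E voice code ("middle or passive"), which is exactly the ambiguity that matters for Tomkins's argument.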
It's not that Tomkins is wrong, only that (according to his bio) he
has no formal theological education. Being self-taught is not bad (I've
done it myself in several topics, including Creation Science) but without
the formalities it's harder to be sure you know where the potholes are.
If Tomkins actually knew Greek, he might have realized that the point he
was making is probably not so sturdy as he made it out to be. Probably
no more wrong than claiming the Paluxy river footprints are both dino and
human combined (which ICR backed off from, because they know what they
are doing in that arena), not wrong in the Big Picture, but you can't make
the case from this particular piece of data.
Google Knows All. Whenever I can't remember the words to a song, I type "lyrics:" and the song name or even a memorable phrase and there they are! One of the hits said that Tom Lehrer had done a spoof on it, and sure enough, there was a YouTube video -- well, actually an audio taken from a record, with the words karaoke-style instead of video, which was helpful because Tom Lehrer can sing very fast, and in this case also in Italian. It was delightful.
When I was in high school, my Uncle Tom was singing folk songs at the local station where he was in college, and when we saw him (Thanksgiving and sometimes Christmas) he would sing for us. We urged him to sing the funny songs, like the Old Woman Who Swallowed a Fly and Rickety-Tickety-Tin, and he also introduced us to Tom Lehrer's songs, some of which I can repeat to this day, but usually only parts. I think Tom Lehrer is gone now, but most of his songs are online for listening.
Tom Lehrer was a mathematician at Harvard when I first heard of him, and he often spoofed the academics in his songs. While I was looking for or listening to his Clementine (I Googled "Tom Lehrer sings Clementine"), there was also a link to his Lobachevsky song about mathematics. That was my major too, so I clicked into it. Lehrer was a good comedian, but whoever did the lyrics video really got the spirit: the Russian names were all written in Cyrillic (I know very little Russian, but enough to know they were spelled correctly), and there were a couple places where Lehrer repeated (in very fast Russian) the alleged Pravda review for the book he was singing about -- and the words were spelled out on the screen in Russian! I cannot read Russian fast enough to know if it was accurate, but I think it was. It was great fun.
Speaking of seeing Russian on the screen, one of the flicks I watched
last Sunday was a (silent) movie about the mutiny on the battleship Potemkin
done in Russian, with English translation added to the dialog screens.
I still remember a few Russian words -- I took it in college and did poorly
because I had trouble memorizing the alphabet, I think the teacher gave
me a C on the condition that I not go on to the second semester -- so I
had some fun reading Russian names and very few other words I could make
out ("borshch" is any soup in Russian, not just the beet soup we all call
borscht). Anyway it was interesting to see this part of the Russian revolution,
as told by the Soviets.
This same pilgrim's day job was a "Pardoner," one who goes about selling indulgences, basically "Get out of Jail Free" cards: you pay him a fee, money or goods, he's not particular, and kiss his relic, and you go straight to Heaven if you die before it expires. The same thing that got Martin Luther going. The Pardoner admits to being a greedy shyster, and quotes a lot of Bible against his own practices. The bulk of his story is one long sermon against vice, which he ends with a hard sell. It certainly is timely, and I was reminded this very evening of my own recent experience with person(s) in the church business to whom this criticism applies:
Alas! Mankind, how may this thing betide
That to thy dear Creator, Who thee wrought,
And with His precious blood salvation bought,
Thou art so false and so unkind, Alas! -- lines 12,844-12,847
His problem, not mine.
Anyway, we are doing our summer camp thing (see "Summer Videos" three years ago, and other more recent blog posts), this year virtually using Zoom, but these virtualizers were designed for infinite bandwidth. There's no such thing (except what God is and made), and the virus dumped everybody on the internet in vast quantities very suddenly, so they did not have the opportunity of gradually experiencing the limits.
I noticed this a couple months ago, when I was staying at my nephew's while my sewer was out, and they wanted to watch movies on Netflix, which was also designed for infinite bandwidth (see "Netflix Failed" a little over a month ago). So today the same thing happened with Zoom. I mentioned to the other participants that I got more sound when live video of their faces wasn't hogging bandwidth, so mostly they turned their faces off. The screen sharing is mostly static, so it uses less bandwidth. Today the director was giving his wrap-up speech, but he left his video on and I only heard every tenth word or so. The image was also pretty much frozen like digital TV mostly is.
Zoom noticed the problem and said my internet connection was "unstable" but it was fine, just reduced bandwidth. Zoom is what was unstable. Sometimes it just up and crashed. Or dropped out of the meeting (and reconnected, but lost all my chat buffer). Other times -- this is a design bug, not an implementation failure -- its spell checker would insist on changing a word to something else because it didn't know what I meant. Back in the days of punched cards, I had a clerical person transcribe my handwritten code into punched cards, and the better typists would respell my variable names (which introduced bugs in the code). I finally convinced her not to do that. Anyway, Zoom does that and I'm looking at my fingers, so I know I didn't spell the abbreviation for "widget" ("wgt") as "wit", but there it is silently doing the WRONG THING without warning. I hate programs that think they are smarter than I am -- especially when they are wrong. But that's not COVID's fault. I blame unix programmers, the same ones who assume everybody has infinite bandwidth just because their employer has infinite bandwidth (internally). Not really infinite, just they didn't hit the limits when they were testing (before the virus drove the usage through the roof), and unix programmers have limited experience, so their code breaks more often.
A few years ago, before Apple killed the "aging" (17-year old) Mac and
replaced it with a "modern" (34-year old) unix (OSX
is unix under the hood), some wag on MacWeek pointed out that "everybody
knows unix programs crash all the time." Mac programs that crashed all
the time wouldn't sell. Maybe that's why the programmers dissed the Mac
(and everybody else loved it). The world is poorer for the loss.
The fellow who did this "translation" did nothing more than give modern spelling to most of the words and change out a few others to preserve the rhyme and meter, with the result that it reads more or less like the King James Bible, which most people are unaware is widely distributed in a revised form not much older than this "translation" of Chaucer. Like the KJV Bible, it still has a substantial number of words that are not part of American English today, like "franklin," which I looked up in the Oxford English Dictionary. I always thought of it as the meaningless name of the fellow who invented lightning rods and bifocal spectacles, and now a metonym for the one piece of currency that might have a total value in circulation (most of it outside the USA) as much as all the rest of the circulating currency of all the countries in the world. But nobody knows for sure. Anyway, "frank" means "free," so a franklin is a freeman (not noble, but also not a serf). All of us Americans are born franklins. Some of the obscure (untranslated) words in this "translation" are footnoted, most are not.
Speaking of obscure language, the Canterbury Tales is the book I cited (with a totally unreadable image of its first page) in a talk I gave on BibleTrans in a church some 16 years ago (scroll down to slide #24), to point out the need for Bible translation, even in Bible times.
Leaving aside technicalities, I'm now halfway (two parts out of four)
through The Knight's Tale (the first of a couple dozen stories, not counting
"prologues"), and it's very readable, a jolly good story, probably at least
as good as the best of the old movies I download from the Archive.org
website, and certainly better than most of the more recent movies I used
to get from the (now shuttered by the virus) public library.
Duh. For some 200 years now, the influential people of the world have deemed themselves too smart for God -- I think that's part of the "Enlightenment" where the darkness of human reason is deemed brighter than the Light of the World. "And this is the condemnation," the Apostle said, "that light is come into the world, and people loved darkness rather than light because their deeds were evil." Jesus said that as the end draws near, it will be like the days of Noah, when (we are told) "every human thought and imagination was only evil continually." Yup, that pretty much describes the vast majority of people today.
So imagine my surprise to read in the biography of Mary Ann Evans (her real name) that she became quite religious. I almost wondered how that would lead to her writing acclaimed "classic" literature, but the next page has her abandoning her faith. Perhaps the title character is somewhat autobiographical, as he also went from an ardent churchgoer to a disillusioned agnostic in the first chapter summary of his back-story. The problem is that England was already past its religious prime -- that state having already been transferred to the USA -- when Miss Evans was growing up, so the religion she got was not the religion that made the British Empire great, nor the religion that made the USA the superpower it is today -- it should be noted that the political power follows Spiritual enlightenment by about a century; we are already on the down side of our peak -- witness this remark on page 3:
...the sense of the Invisible in the minds of men who have always been pressed close by primitive wants, and to whom a life of hard toil has never been illuminated by any enthusiastic religious faith.
She obviously never had any exposure to the "enthusiastic religious faith" that is to be found among people "pressed close by primitive wants, and ... a life of hard toil" like those where I grew up. But this is fiction, wholly made up in the mind of the author, with little or no basis in Reality. The skewed quality of her opinion of religion was better disclosed on the next page:
...farms which, from a spiritual point of view, paid highly desirable tithes.
The desirability of monetary income from the tithes of rich farms is about as UN-spiritual as you can get. "How hard it is," Jesus said, "for a rich man to enter the Kingdom of Heaven." And in another place the great Apostle tells us that "the love of money is the root of every kind of evil." In the second chapter, Silas Marner has come to dearly love his money. The author even goes so far as to declare it to be his deity.
The amazing thing -- well, it's not so amazing, our author was just plain ignorant of what she wrote about -- is that I also went through the same kind of disappointment in church (at least three or four times, depending on how you count it), but unlike this character and his authoress, I had a faith based soundly in God's Word, with the result that I already knew to "trust God, not people." People are sinners, sometimes worse in the church than elsewhere, but God is faithful.
These six books are older than I am, so obviously I inherited them,
perhaps from one of my parents. I'm pretty sure I would not have brought
this one home from a bookstore or library, if for no other reason than
I never read a female author I enjoyed. I have not yet noticed much of
the characteristic feminine style of writing -- perhaps in adopting a male
pseudonym, she also consciously tried to write like a guy. The introductory
bio remarked that it took literary giant Charles Dickens to detect the
feminine flavor in her writing. So far it's only her mistreatment of the
Christian faith that bothers me. Like the sci-fi authors I have remarked
on from time to time, it's not that they have tried the Christian faith
and found it wanting, but that they never gave it an honest try at all.
Mostly because we in the church have fallen down on the job. I suspect
there will be a lot of surprised church members on Judgment Day, when Jesus
asks them, "Do I know you? Have we even met?"
I finished up The Last of the Mohicans a month ago, and last week (after getting tired of reading or mostly skipping over WIRED's froth) I started in on Hawthorne's House of the Seven Gables. In his preface the author calls it a "romance," but I suspect that the word has changed meaning in the last 200 years. The prose is somewhat more turgid than Cooper's, or maybe it's the indoors-y 'sit and look at the pictures on the wall' style of writing, compared to Cooper's outdoors action. At least it doesn't keep me up all night like a modern thriller.
About a quarter of the way in, one of the lesser (so far) characters
exclaims of the young lady (a major character, I just peeked at the last
chapter and there she is) "I never knew a human creature do her work so
much like one of God's angels as this child Phoebe does!" I can't find
any evidence that Hawthorne was much of a Christian, but he seems to have
captured in this line the classical Christian notion of vocation,
the Calling of God to do His work with joy in the real world. I know people
like that. I hope to be one.
Hmm, I seem to have made most of this point before, see "AI (NNs) as Religion" a couple years ago, and my essay "The Problem with 21st Century AI" a year before that. Well then, think of today's post as an update.
Periodical publications like Spectrum vary a lot in quality. It's the nature of the case. They need to print thus many pages each month -- in commercial pubs the number varies by how many ad pages they sold, which tends to be somewhat seasonal, but non-profits have a monthly budget that tells them how many pages to fill -- and their sources are much more serendipitous, whenever whoever thought up an idea is finished getting it ready to publish. Whatever month Spectrum closed its editorial decisions for May (probably much earlier in the year) was rather thin this year, and they printed two silly paeans of praise for Machine Learning, in two different domains, both of them nonsense. But people are so busy genuflecting at the altar of the gods of stone (mostly silicon with a few metal interconnects), they cannot see that their emperor -- I mean deity -- is naked.
"In the future, AIs -- not humans -- will design our wireless signals,"
proclaims the first of the two. Entropy will surely prevent that in the
future, but for the present it is sufficient to show that the claims they
made for their new "DeepSig" mechanism are bogus. Any time you see the
prefix "Deep" attached to some technology, hold onto your wallet, there
are lies to follow. In this case they didn't say, but you can be sure that
the NNs did not
design their radio circuits from scratch. For one
thing, they never made that claim -- at least not in the text body -- which
they certainly would have if it were true. Instead, as I read between the
lines, real engineers designed a collection of transmitter and receiver
circuits that might be configured using various differing parameters to
alter the transmission characteristics. Then they programmed a computer
to optimize the reception by tweaking the parameters. As explained, it's
a static optimization for each transmitter-receiver pair, which does not
dynamically adjust to changing conditions -- they should be smart enough
to know that buildings come and go, weather changes every day, even solar
activity has its ups and downs, yet all of them affect the signal quality
as they described it -- or maybe it recalibrates itself every few hours
or so, perhaps if the error rate exceeds some threshold; they didn't say.
In any case the computer doing the optimization is not designing
anything at all, it's only tuning the parameters within limits previously
designed by the human engineers who did the real design. Their optimization
probably would work faster and use less hardware if they used standard
linear regression algorithms well known and understood decades ago. But
computers are cheap, and NASA has a big budget for
novel ideas that aren't totally catastrophic.
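To make the contrast concrete, here is what that kind of parameter tuning looks like as ordinary least squares: one closed-form solve, no neural network. All the numbers are made up for illustration (the article gives none); the only assumption is that signal quality is roughly linear in the tunable parameters near the operating point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: measured signal quality is roughly linear in three
# tunable transmitter parameters near the operating point.
true_w = np.array([2.0, -1.0, 0.5])                # unknown sensitivities
X = rng.normal(size=(100, 3))                      # 100 trial parameter settings
y = X @ true_w + rng.normal(scale=0.01, size=100)  # noisy quality readings

# Ordinary least squares: one matrix solve recovers the sensitivities,
# which then tell you which way to tweak each parameter.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(w, 2))   # close to [2.0, -1.0, 0.5]
```

A static recalibration like the one the article describes is exactly this kind of fit, repeated whenever the error rate crosses some threshold.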
The next page announces "The AI Poet: 'Deep-speare' crafted Shakespearean
verse that few readers could distinguish from the real thing." There's
that "Deep" word again. The authors don't tell you, but it helps in cutting
through the baloney to know that modern poetry can be easily recognized
as such by the fact that it has no rhyme, no meter, and no intelligible
message. So unless they are inviting critics familiar with the forms of
Shakespearean poetry (which has all three of the properties absent in the
modern
stuff that fraudulently goes by the same name) to tell if their computer-generated
stuff passed the Turing Test, it's all a hoax. They admitted that they
got modern readers with a passing knowledge of English to do the critique.
The selected critics were smarter than the researchers: they knew they
had no clue, so they Googled the text, and found all the true Shakespeare
online -- which it is, that's where these "scientists" got their control
and training data, 2700 sonnets from that era, a third of a million words.
These guys worked a little harder and were somewhat more open than the radio guys in telling us exactly what they did. They did what NNs always do: they ran averages. How many times do these two words occur adjacent? And (they didn't say, but considering what they did say) how often do these two words end parallel lines that are required to rhyme in the sonnet form? There are fewer than 30,000 distinct words total in all of Shakespeare, only some 16,000 that occur more than once, so the Deep-whatever NN does not need to know how to pronounce -- nor even to spell -- these words; a 5-digit (15-bit) number is sufficient.
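What such a model actually tallies can be sketched in a few lines. The toy corpus below stands in for the 2,700-sonnet training set; the point is only that the machinery is counting co-occurrences, not understanding anything.

```python
from collections import Counter

# Toy corpus standing in for the 2,700 sonnets (illustration only).
text = "shall i compare thee to a summers day thou art more lovely"
words = text.split()

# A word-to-index table: ~30,000 distinct words fit in a 15-bit number,
# so the model never needs to spell or pronounce anything.
index = {w: i for i, w in enumerate(sorted(set(words)))}

# Adjacent-pair (bigram) counts -- the "averages" the NN is running.
bigrams = Counter(zip(words, words[1:]))
print(len(index), bigrams[("compare", "thee")])
```

Scaled up, those pair counts (plus the rhyme-position counts) are all the "Shakespearean" generator has to go on.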
There is a lot of research studying the form of Shakespearean sonnets -- at the top of Google's search is a site that offers "Reading Shakespeare's Language: Sonnets" (no link, like most sites these days it is encrypted) -- which, among other things, points out that Shakespeare's sonnets use words that carry several senses, many of which are significant in the same context. There's no way a computer can do that kind of semantic trickery without understanding not just which words are most likely to occur together, but what the words actually mean. In days of yore (before NNs were invented and computers were fast enough to run them), "Artificial Intelligence" meant computers doing things that, if people did them, would be considered intelligent. Today all that is out the window. Putting words together based on probability is not intelligent, and it's certainly not poetry; it's just plain silly.
The following month, ComputingEdge (Spectrum's goofy stepdaughter) ran a slightly more academic (they surveyed several products and included some history) "Automated Coding: The Quest to Develop Programs That Write Programs" to tell us about DeepCoder (recognize that prefix?) and DeepCode (same idea, different group), along with RobustFill and SketchAdapt, this last one using a "hybrid model of structural pattern matching [probabilities again] and symbolic reasoning" where I suppose (they didn't say) they are actually doing intelligent work rather than pure probabilistic selection. It's still doomed for entropic reasons, but people who have guzzled the Darwinist Kool-Aid are unlikely to notice it (see my blog posts "End Zone" and "The End of Code" three and four years ago).
Most of the people pushing "deep" foolishness are male -- in these cases,
all of them -- and in my experience, men are less susceptible to bamboozlement
than women, so I have to wonder, do these guys really believe this crock?
Or is it that they've found a cash cow and they are milking it?
The point is that the producer of these "interactive" flicks can only program in a limited number of outcomes, of which most are necessarily boring or silly -- the article pointed out that everybody who wants to talk about it at the water cooler the next day -- this was before COVID -- would have to watch them all anyway, so what's the point?
But it reminded me of my own observation of Netflix at work during the four weeks I was out of my home because a Blue State has (literally) gone to pot so there is nobody doing service work like plumbing, nobody growing food, nobody doing anything. But you can smell the pristine air polluted by burning hemp and boom boxes and too many wi-fi nodes for any stable net access in the state-mandated ghettos (see "Climate Hoax" three weeks ago).
Anyway, the library is closed down, and the laptop computer I had taken with me was flakey, and one of the family members played the TV too loud for me to think or sleep, so the activity of the evening was Netflix movies, which during prime time slowed down to a stuttering crawl, mostly their little red circle going round and round. My host was frustrated, they didn't know what the difficulty was, but *I* did. It's the same problem I have with wi-fi here in the ghetto. Not exactly, they live in a mansion separated from their neighbors on all sides by trees, but their TV is on cable, and the cable capacity is maxed out as people stuck at home, trying to do schoolwork over the internet, or watching movies because they can't work, suck up the bandwidth far above the projections the cable company had assumed from their mathematical models made before the networks pushed all this high-res (high bandwidth) media on everybody. Remember what Vaclav Smil said about models a couple days ago.
Even when it was working, Netflix was trying really hard to make their service unusable. There doesn't seem to be any way to tell it "No, I don't want my house and mind polluted by more air pollution [F-bombs] and stupid stories" -- low-budget made-for-TV movies are generally too stupid to waste time on, so you have to look for the "TV-MA" code that graces about 90% of their offerings to skip over them. And if you do see one that might be interesting, you hardly dare keep looking for something better, because it won't be there when you try to come back to it. The whole thing is as unstable as the Unix OS it is probably built on, basically shifting sand.
I guess somebody likes it, or they couldn't stay in business. Or maybe
nobody likes it, which is why they keep trying something different. Whatever.
They won't get *my* money.
Those who put models ahead of reality are bound to make the same false calls again and again. -- Vaclav Smil
It came at the end of a delightful piece pillorying the geologist M. King Hubbert, who in the 1950s predicted that US oil output would peak in 1970, then fall to the same levels it was at in 1920. Smil accompanied this ridicule with a graph showing a nice smooth blue line that went up, then went down, like a statistical bell curve. Overlaid on this was a ragged red line that more or less followed that curve up to maybe 1965, then jumped up +20% before following the curve down (at that +20% level) until around 2010, then took off for the sky, like the alleged global warming curves. According to Smil, US oil production is now 50% higher than the peak in 1970, and headed in the wrong direction for the 80% drop predicted. We now lead the world in oil production.
The fact is -- and Smil dare not mention it, dunno whether he believes it or not -- these mathematical models cannot account for human creativity. More than two centuries ago Thomas Malthus predicted (based on his mathematical model) that massive starvation would limit world population growth. Population kept right on growing because we have stopped so much of the disease that used to kill people off. After the Malthus prediction, another Thomas (this time Crapper) popularized the flush toilet, and sanitation has been credited as the greatest medical advance in all history, in terms of lives saved. We have population growing faster than Malthus modelled, and starvation is way down -- except in Marxist countries where the government just plain got it wrong. The Marxist mathematical models are more wrong than Malthus or Hubbert.
Which brings me to a much more current mathematical model -- again Smil
dare not mention it, dunno whether he believes it or not, but the magazine
editors where he published this item certainly believe the model -- so-called
climate change. I have been saying for at least a decade (see my essay
"A Christian View of Climate
Change" last year) that it's more about politics than science. The
models are broken. Always were. "Again and again," according to Vaclav
Smil. He's right.
What caught my attention today is a quote on the first page:
"I thought if I built a better mousetrap, everyone would want one," Kennedy says. "Instead the world has decided they're okay with mice."
There's a lot of hidden insight in that line. Basically, this is a NIMBY (Not In My Back Yard) thing. Everybody is OK with mice, as long as those mice are not eating my food and making my children sick. There are a few environmentalists who care about the oil on the water, but they're not offering to pay their own money to clean it up. They don't have any money. They don't know how to make money; all they can do is hoover up the piddly donations they can get from the few people with nothing better to do with their cash. The people with the money, the large corporations and the government -- well, the government has no money, they just print it on demand, and if they print too much, the economy tanks like it did in Greece -- and the corporations are required to do what is profitable for their shareholders, and paying somebody to clean up oil spills does not improve those profits. Unless the government forces them to. It still isn't profitable, but it's less costly than fines and jail sentences.
The government, they don't care about the environment any more than the corporations and T.C.Mits (The Celebrated Man In The Street) do; it doesn't put food on their table or keep their children out of the hospital. Besides, oil spills are so last year. The politically correct catastrophe of the day this year is global warming and COVID. Climate change is rapidly becoming stale too. But the virus, that's actually taking food off the grocery shelves and threatening to put kids in the hospital. Well, the empty shelves are really the fault of the mega-chain groceries like Wal-Mart with their idiot Just-In-Time stocking policies that break down at the slightest disruption. COVID just happens to be that disruption this month.
So Kevin Kennedy, you need to find something that people want to pay for. Generally that is food and housing for themselves, and health care to keep their kids out of the hospital. If there's any left over, some entertainment would be nice. Oil on the water thousands of miles away? Not in my back yard, not my problem. Shoddy oil rigging and tankers (resulting in oil spills), that just makes my food and entertainment cheaper, why should I care?
The Christian perspective is very different. "The earth is the LORD's
and the fullness thereof," and you always take care of the Boss's interests
(even if you are not a Christian and the Boss merely pays your salary).
Jesus called it "The First and Greatest Commandment." "The Second is
like unto it," he went on: the Golden Rule. You always treat other
people the way you want to be treated. Oil on the water makes things bad
for people living on the coast near where you spilled it, and you wouldn't
want to live there, so don't do it. The trouble is, the government took
the Ten Commandments off the school walls 60 years ago, thereby giving
everybody permission to behave dishonorably (just don't get caught). Oil
spills on the shore and empty grocery shelves are part of the consequences.
Thank the ACLU and the government. But mostly it's
selfish people willing to ignore God for a slightly better standard of
living.
They seem to be doing themes, this time climate change. In the opening editorial, the editor admits to filling his pre-teen kids with an unnatural fear of global warming. I guess he hasn't heard that the earth has been cooling the last couple years. Or maybe his parents filled him with an unnatural fear of global nuclear war, and he sees it as his obligation to carry on the tradition. Nuclear war didn't happen -- Ronald Reagan's "Star Wars" actually did what it was supposed to do, but not as originally intended: as the critics predicted, it could never have worked to prevent a Soviet first strike, but instead it scared the red shirt off the Soviets. "Mr. Gorbachev, tear down this wall!" actually resulted in tearing down that wall, but the left-wing bigots don't like to remember that. So they invented another crisis that a grandstanding President can't stop -- but then, neither will a carbon cap, because God has everything under control (see my essay "A Christian View of Climate Change" a half year ago). This whole issue is built on a lie, but they can't know that, because the gatekeepers are all under the leftist thumb of the Democrat-appointed research funding agencies. Go against the Established Religion and you lose your government grant, as Robert Gentry learned. Everybody else saw that and they don't dare tell the whole truth.
Depending on how you count them, most of the articles in this issue are by female authors. That is significant from a conservative Christian perspective, because they are less likely to question the sources of the hoax [1Tim.2:14]. One of them has an author bio at the end crediting her with writing "about climate, justice, and emotion." They got that right. Seeing that the American people as a whole are not as bamboozled as the political party promoting the hoax, she now leans farther to the left -- I did say this was political, not science -- she says "your power in this fight lies not in what you can do as an individual but in your ability to be part of a collective..." That's Marxist-speak, the propaganda of the political far left that took their capstone empire, the now former Soviet Union, crashing into the ash-heap of history. Only ivory-tower academics and women believe it any more.
The next article (another female author) writes about some guy who developed a computer program to calculate the carbon flow of humans and nature, who said "But when I entered all that information into a program that would calculate our total carbon footprint, I was shocked: It estimated that, poof!, the carbon sequestration provided by the forest's cover... cancelled out everything else we did." I've been saying that all along, the whole thing is independent of human activity. Of course this guy is still a True Believer (he must be, in order to retain his Federal grants) and she's only a writer, without even the science chops to ask the hard questions.
A short sidebar piece a few pages later ends "The key to saving the world? It's all politics." At least the so-called climate change thing is, not true science at all. It's all politics.
The next major article (another female name for an author) starts off uncritically reporting a video of some guy fueling his car on air. Now if women could get themselves interested in learning a little science, they might realize that such a notion is a violation of entropy, and that her whole article is very likely more of the same. Right. I stopped reading that drivel.
Never mind the following article -- it's a cute idea that won't make much difference even if they do it -- but notice the little factoid over the top of each article. This one in particular breathlessly announces "18% [or maybe it's 10%, because they used one of those silly fonts where the eights and zeroes look the same] portion of agriculture-related global heating caused by rice cultivation." Flip over the next page to see "24% Portion of global greenhouse gas emissions generated by agriculture and forestry." Not only are the editors of WIRED scientifically illiterate, they are also innumerate. Or maybe they only suppose -- probably correctly -- that their readers are: "Figures don't lie, but liars figure." Do the math: 10% of 24% is 2.4%, which is statistically insignificant. Poof!, the forest's cover cancelled out everything else we did (or will do).
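That arithmetic is simple enough to check. The two percentages are the magazine's own factoids; combining them into a global share is my step, sketched here for both readings of that ambiguous font:

```python
# Rice's share of agriculture-related emissions, per the magazine's factoid.
# The font makes the figure ambiguous: it could be 10% or 18%.
ag_share_of_global = 0.24      # agriculture + forestry share of global emissions

for rice_share_of_ag in (0.10, 0.18):
    # Rice's share of *global* emissions is the product of the two percentages.
    rice_share_of_global = rice_share_of_ag * ag_share_of_global
    print(f"{rice_share_of_ag:.0%} reading -> {rice_share_of_global:.1%} of global emissions")
# prints:
# 10% reading -> 2.4% of global emissions
# 18% reading -> 4.3% of global emissions
```

Either way the answer is a single-digit sliver of the total, which is the point of the paragraph above.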
Next article, another female author, this time blatantly bubble-headed
thinking. She accepts uncritically that "you could survive on just peanut
butter sandwiches and oranges." She admits she didn't do the analysis,
but it's probably true: the proteins in peanuts and the grains in bread
complement each other to supply all the essential amino acids humans need,
and the fresh oranges provide fiber and vitamin C and what-all else. So
she packs a lunch to drive from Oregon to central California, and
includes not only peanut butter sandwiches and oranges, but also coffee.
This in a special issue devoted to human-caused global warming caused by
burning fossil fuel, probably two or more tanks of gas to drive that distance
and back, when a phone call or teleconference app would do it in a tiny
fraction of that much fossil fuel -- or in this case, because both Oregon
and northern Calif get much of their electric power from hydroelectric sources,
zero. Wait, there's more: do you know where they grow coffee? Certainly
not in Ore-gone, all they grow here any more is pot. I think all the coffee
comes from Central or South America, shipped on cargo liners that burn...
fossil fuel! Don't forget the oranges and the peanuts and the wheat in
her bread; they don't grow those in leftist Ore-gone either. Shakespeare
once said something about being "full of sound and fury, signifying nothing."
That's climate change talk, and this magazine in particular.
Then there's the piece where this guy is trying to crowd cities closer
together. Ore-gone already does that in a left-wing elitist government
policy that favors crowded ghettos for low-income people (I live in one
of those) where the air pollution (that includes both sonic and electronic,
plus the particulate matter resulting from burning the state's largest
cash crop in close quarters) is insufferable, worse than sprawling LA.
Anyway, he claims that 70% of the global emissions comes from cities, and
15 pages earlier, 24% comes from ag and forests; that would pretty much
account for 94%. Duh. He thinks crowding the people in cities closer together
will improve the climate. Perhaps, but not the air pollution for those
poor schmucks stuck in the ghetto. The people proposing and implementing
these idiotic policies are rich enough to escape the ghetto. Look at the
guy in the picture, he doesn't live in a downtown slum walkup.
A tiny sidebar on the next page, "Tech We Need" includes a way for "influencers" to "post pics of themselves riding public transit instead of jetting to Reykjavik." Of course the "influencers" who jet to Reykjavik are riding public transit. People who can afford private jets do not waste time posting pics of themselves. Several pages later, somebody "estimates that Amtrak is 33 percent more energy efficient than flying on a per-passenger-mile basis." That's not a big difference, and after you factor in all the difficulties of connecting with trains, it probably disappears, except in commuter runs. But everybody who can benefit from that savings already does it.
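To see how modest that Amtrak figure really is, here is a back-of-the-envelope sketch. The 33 percent comes from the magazine; the absolute energy number below is an arbitrary placeholder of mine, not a sourced figure:

```python
# If rail is 33% more energy *efficient* per passenger-mile, then rail's
# energy use per passenger-mile is the air figure divided by 1.33,
# i.e. about three-quarters of it, a savings of only about a quarter.
air_energy_per_pax_mile = 100.0                       # arbitrary units, NOT a sourced value
rail_energy_per_pax_mile = air_energy_per_pax_mile / 1.33

savings = 1 - rail_energy_per_pax_mile / air_energy_per_pax_mile
print(f"rail uses {rail_energy_per_pax_mile:.0f} units, saving {savings:.0%}")
# prints: rail uses 75 units, saving 25%
```

A one-quarter energy saving per passenger-mile is real but small, which is why the connection overhead mentioned above could plausibly eat it.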
Me, I think the "climate change" hoax is a crock, but when the oil and coal are gone, they will be gone. It makes sense to switch over to renewable energy sources for that reason alone. The article poking fun at the Wyoming government for stonewalling wind farms was a hoot. I must have gotten tired of the whole climate thing, there are fewer markups as I worked my way to the end of this tiresome issue.
Probably in another context, somebody said that most people pick their
issues out of tribal loyalty without regard to the facts. I suspect that's
generally true, and this issue of WIRED is clearly
a case of it. Whatever. The USA is so wealthy, compared to the rest of
the world, and in all history, we can afford to waste national resources
on scientific nonsense like "climate change" and still not feel the pinch.
Certainly no politician will ever do anything close to what the True Believer
scientists say would be necessary, let alone what would actually
be necessary to overcome what God is already doing. Like so many other
things, it's Not My Problem.
So here I am reading this "Pop-Sci for Dummies" (WIRED) magazine, which tends to be weak on science, strong on anti-Trump politics. The May issue had a Covid focus, complete with a front-page editorial promoting the magazine's Religion (= believing what you know ain't so) and even used the word "faith -- in each other and in the scientists..." At the bottom of the page he gets to his political jab:
A government can do things to make that [people helping people] happen, and in a better timeline [that is, not this year] it would. Sadly, we don't get to choose a timeline. Luckily, we do get to choose a government.
He got that one wrong. The "we" that he considers himself part of does not seem to have gotten to choose the government he wants; it was the rest of the country, the "we" that he pretends doesn't exist, who actually did choose this government. And it appears that Trump did exactly The Right Thing to get people to do the helping.
Clive Thompson is a regular columnist, and his contribution in this issue a few pages later is similarly blindsided. After spending most of his ink praising the people who creatively used their own resources to make up for what the manufacturers did not, he gets around to condemning the left-wing bigot's favorite whipping boys, capitalism and their Prez:
It also shows a failure of capitalism. Part of the reason we're short on essential med-tech is closed-source designs, often created to maximize vendor profits...
Of course they're created to maximize vendor profits. Creating stuff takes time and effort, and if that time does not result in financial reward, the creative people will not be able to pay their mortgages and buy their food and their tech toys so they can create innovative med-tech products. Clive Thompson himself creates "closed-source designs," so that he too can maximize profits on his own products. Do you think he wrote this piece for free? WIRED usually hides their text behind a "paywall" (their word, this issue being an explicit exception) so they can pay Thompson the big bucks he gets for writing this stuff. So-called "open-source" products tend to be imitative knock-off copies of the true innovative products that somebody else got paid real dollars to create and perfect. The "free as in beer" open-source products tend to be buggy and very hard to use, sort of like WIRED information, even though WIRED gets paid for their mag -- and rightfully insists on it!
Capitalism works because the capitalists make products that people are willing to pay for. If you don't pay them for their work, then they cannot afford to be making that stuff, and you are far worse off, like it was in the country that no longer exists, the (former) Soviet Union. If you try to insist that the vendors give away their pandemic cures and vaccines for free, then they won't bother to be ready to produce them in the large quantities that will be needed (because that kind of preparation costs money). That is indeed something the government did to us, but it was his predecessor, not Trump, who perpetrated the kind of wealth-destroying policies that drove the (former) Soviet and (present) Venezuelan economies into the ground. It takes time to build for future profits, don't lay this one on Trump.
Thompson's clincher: "It was the US government's job to prepare for a pandemic..." Clive Thompson probably isn't old enough to remember the words of a famous President of the other party, "Ask not what your country [government] can do for you, ask what you can do for your country." If this had happened four or eight years earlier, Obama would certainly have bungled it far worse (as he did for many other things), but the press -- obviously including Thompson and the magazine he publishes in -- would have found somebody else to blame. Obama and Trump were/are both incompetent, but in different ways. Trump's biggest problem is that nobody wants to take Thompson's advice and help out. Except in a pandemic, but of course they were already doing it when he wrote this.
Later in this same issue is a long 16-page whine about the repressive tactics of the Hindu-only party governing India. I read this stuff all the time, in a monthly rag published by Christians trying to do something about it, but WIRED does not support what the Christians are doing there, this article is by a Muslim. Which is rather ironic, because the Muslims are doing exactly the same (or worse) to the minority religions in the countries they dominate. I have a hard time giving much attention to hypocrites.
What was that neologism I learned in an earlier issue of WIRED? "TL;DR"