Entry Submitted by Fireswan at 5:19 PM EDT on July 14, 2017
NESARA must be very close.
The NPTB are testing the new financial system on the general public.
How do I know this?
I work in Information Technology (IT). Before we release a new service/product/application, we test it with knowing or unknowing beta customers. The point is to get real workloads and look for unexpected behaviors, to catch bugs in the system. Even the super-duper computer cannot "see" the unforeseen.
Humans have a random component that makes them unpredictable. Maddening for the controllers, but it also gives us our juicy "secret sauce" that makes humans so interesting. We are not very predictable.
So, when testing a new system on the unsuspecting public, especially with a very "activating" agenda like a new financial system, what does an IT director do?
Release features one-at-a-time in a very controlled way, with a lot of machine learning features turned on to try to capture how the system responds "with actual customer use cases" to "actual loads"... as it's known in the biz...
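For what it's worth, the controlled one-feature-at-a-time release described above is usually done in the biz with "feature flags" and percentage rollouts. Here's a minimal sketch of the idea (all names and numbers are illustrative, not from any real system):

```python
import hashlib

# A minimal percentage-rollout feature flag: each user is hashed into a
# stable bucket from 0-99, and a feature is "on" for that user when the
# rollout percentage covers their bucket.  Raising the percentage turns
# the feature on for more users without flipping anyone back off.
def bucket(user_id: str) -> int:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(user_id: str, rollout_percent: int) -> bool:
    return bucket(user_id) < rollout_percent

# At 100% everyone gets the feature; at 0% no one does.
assert is_enabled("alice", 100)
assert not is_enabled("alice", 0)
```

Because the bucket is stable, the same user always gets the same experience at a given rollout level, which is what lets the analytics compare "actual customer use cases" between the old and new behavior.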
First. Release info. "You can access your secret FRB account... here's how..."
Test "What/Why" - can the unsuspecting public figure out which FRB account is associated with their SSN, what happens when they try to pay off bills, and which bills they pay off. Great data set. Does anyone attempt to buy a Bentley? Or are most of the attempts just to provide relief...
Next. Test "How/Where/When" - watch what happens as the unsuspecting users go into their various banking systems and attempt to improve their situations. Some follow the instructions exactly, some don't, but it's all good from a tester's perspective because the data analytics needs both kinds of "data flows".
Next. Reverse some of the payments.
If there are "problems", the unsuspecting users contact customer service (or not), reach out for help from the folks who released the info, or do other things to "get it to work". Usually the unsuspecting user has some "skin in the game" and will keep trying alternatives. They will act out varying levels of entitlement, insult, patience, trust, gratitude, disbelief, etc.
Once a user has "gone down the road" once, they're somewhat invested and committed to getting it to work. I know. It's a human thing. Programmers and UX (user experience) designers are always evaluating the fine line between hope and giving up out of hopeless frustration. How creative are the unsuspecting users in coming up with new and unforeseen workflows? How many attempts do they make before they give up? How do they try to get around the obstacles and roadblocks? What do they do? On-and-on.
The programmers will change their models. The workflow analysts will update their processes.
And then what do they do? Turn the test system off and UPDATE to get everything into a "known state" and release it into "production".
At some point the test system switches over to the production system, when the risks of catastrophic failure from unforeseen data input or workflows are deemed acceptable, and it is more costly to "gold plate"... again, term from the biz... the system... than to "ship it" and make corrections as fine tuning happens in the "wild" or "the field".
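That "ship it vs. gold plate" call is basically a cost comparison. Here's a toy sketch of the kind of decision described above (the numbers and function names are purely illustrative):

```python
# Illustrative only: a toy "ship or keep testing" decision.  Ship when
# the expected cost of fixing the remaining bugs out in the field drops
# below the cost of another round of "gold plating" in test.
def should_ship(est_residual_bugs: float,
                cost_per_field_fix: float,
                cost_of_next_test_round: float) -> bool:
    expected_field_cost = est_residual_bugs * cost_per_field_fix
    return expected_field_cost < cost_of_next_test_round

# e.g. roughly 3 likely bugs at $2,000 each to patch in production,
# versus a $10,000 extra test cycle: ship it and fine tune in the wild.
assert should_ship(3, 2000, 10000)
```

Real release decisions weigh a lot more than two numbers, of course, but this is the shape of the trade-off.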
OK. So that's all adding up. Now what?
We've tested the FRB system (thank you unsuspecting), now have them test the NPTB system.
Out comes the golden ticket. Only one number.
Users are relieved that there is only one routing number.
Simpler. The advantage of this new test is that the tester will accept a "clean slate" and start over.
If their transactions were returned, they'll try again with the new info. New input. Are they going to do it in exactly the same way, or now try a different approach? What is the learning curve like? What do they do if this also has problems, but different problems than with the first codes? How do they respond, adjust? Do they have more hope (investment) or are they more easily frustrated? What do they do when frustrated?
Notice, now the guidance is to report the codes that are given when the system fails. There are leaked places to report these codes...
How able are ordinary people to "game" or reverse engineer the system? What happens when they find the error codes? Are they motivated to fix the problem "with the bank", taking some ownership and responsibility, or are they helpless and hopeless? Do they give up and not report?
Or do they "wait for NESARA"?
Do the reports of success "go viral" faster than the reports of failure? Gotta have some successes for the users to provide their free testing services for the system owners. The carrot has to be more juicy than the stick is painful.
Usually beta testing with actual users/customers doesn't go on too long, because they'll need to switch over to production before there's a catastrophic breach of trust and the situation spins out of control. Gotta upgrade the chomping-at-the-bit users to a real/working solution before they start going "AWOL".
Therefore, with all of this analysis from an IT perspective, we must be very close.