In what is easily the most shocking and bewildering turn of events in the history of humanity, it turns out Jeffrey Epstein may not have voluntarily given up on his plans to "seed the human race with his DNA" and may have actually just gotten whacked:

The body of disgraced money man and sex offender Jeffrey Epstein, who was found dead in his Manhattan federal prison cell in August, bore telltale signs of homicide despite an official ruling that he killed himself, a pioneering forensic pathologist revealed to “Fox & Friends” in an exclusive interview Wednesday.

Well, I guess despite my rather sarcastic title, I am somewhat surprised this is getting play in the mainstream press. Perhaps it was just too obvious.
Jeffrey Epstein's """"""""""suicide"""""""""" led to a lot of conspiracy spiraling on my part. At this point, however, I think believing Epstein actually did off himself would require a farcical conspiracy theory the likes of which would make Alex Jones (or Rachel Maddow for that matter) blush.
My latest article for BiggerPockets goes over eight signs you have a bad real estate agent. Or more accurately, five signs you have a bad real estate agent and three signs you have a real estate agent who doesn't have an "investor mindset." Now if you're looking to buy a home to live in, that's fine. But if you're looking to invest, that just won't do.
The five signs you have a bad agent are:
And then the three signs your agent isn't the right fit for an investor are:
The article goes into much more detail. So if you're looking to invest in real estate with a buyer's agent, check it out!
Check out the second episode of The Good Stewards podcast, this one going over a "BRRRR rehab." These are rehabs for buy-and-hold rental properties, and there is a large tendency for investors to under-budget and over-rehab these types of houses. Remember, you're not rehabbing a house to live in. You don't need granite countertops or Brazilian hardwoods (unless it's a luxury rental, of course).
If rehabbing rentals is on your horizon, check out this 30-minute discussion between Ryan, my father Bill, Amanda and myself on the topic.

My most recent article for BiggerPockets is up. This one goes over when it does and when it doesn't make sense to purchase a property that is in a Homeowners Association (HOA). My thesis can be summed up by my line early on: "My own position differs from Mr. Frugalwoods. I believe that HOAs are a detriment to purchasing a property but by no means a non-starter." I go over all sorts of disadvantages, advantages and things to look for with HOAs. That being said, I focused heavily on HOAs for condos and not as much for houses. Usually I don't see HOAs for houses other than in nicer communities we don't buy rentals in. Still, some investors do, and it can be helpful, as my friend Russell Brazil reminded me: No yellow polka dots = No Deal!
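To put a rough number on that "detriment but not a non-starter" point, here is a minimal sketch (my own illustration, not something from the article, with entirely made-up numbers) of how a monthly HOA fee flows straight through to a rental's cash flow:

```python
# Toy rental cash-flow sketch: how an HOA fee eats into monthly cash flow.
# All of these numbers are hypothetical, purely for illustration.

def monthly_cash_flow(rent, mortgage, taxes, insurance, maintenance, hoa_fee=0.0):
    """Net monthly cash flow after fixed expenses and any HOA fee."""
    return rent - (mortgage + taxes + insurance + maintenance + hoa_fee)

rent = 1400.0        # hypothetical monthly rent
mortgage = 750.0     # principal and interest
taxes = 150.0
insurance = 75.0
maintenance = 125.0  # repair/capex reserve

without_hoa = monthly_cash_flow(rent, mortgage, taxes, insurance, maintenance)
with_hoa = monthly_cash_flow(rent, mortgage, taxes, insurance, maintenance, hoa_fee=200.0)

print(f"Cash flow without HOA: ${without_hoa:,.0f}/month")      # $300/month
print(f"Cash flow with a $200 HOA fee: ${with_hoa:,.0f}/month")  # $100/month
```

In that made-up example, a $200 fee cuts the cash flow by two-thirds: a genuine detriment, but not automatically a deal-killer if the rest of the deal is strong enough.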
I have three books out right now (and will be coming out with a fourth on real estate investment at the end of this year to promote our real estate podcast The Good Stewards, which just launched). The first one is the only one I've published with a publisher, in this case Thought Catalog: Awesomeness: An Amateur Potpourri of a How-to Guide, in which I give every piece of advice I can for living well from my own experiences and the many books I've read on these topics. (And yes, there have been a lot, so it's definitely worth reading.) My second book, Economic Lies, Damned Lies and Statistics, is a compilation of articles I wrote for SwiftEconomics, with a few original pieces, on how easily economic statistics (and others) can be manipulated and/or misused. Examples include income stagnation, the gender wage gap and the myth that war is good for an economy. The title comes from Benjamin Disraeli's famous dictum about the three kinds of lies. You can read most of those articles on this website here (although they are much better in book form and easily worth the 99 cents the Kindle version costs). I also co-wrote a book with Ryan Swift, Confessions of Amateur Intellectuals, which is a compilation of the best articles we had written for SwiftEconomics.com. You can still see the archives of SwiftEconomics here.
Make sure to check them out. It would be greatly appreciated and I think you'll enjoy them.
I know I have a bad (good?) habit of posting one Charisma on Command video after another, but hey, it's good content. This time, they take on happiness and echo some of the points I've made before, for example here and here. Namely, there is no "there" to get to. As I put it,
I've come to accept that we need to look at this from a more macro perspective. There will always be ebbs and flows in life. We should never plan on "getting there" because there is no "there" to get. We can always do better than we are now. Even Warren Buffett could make more money. Hell, Jeff Bezos has more than him.
Charlie from Charisma on Command lists a hierarchy of things that lead to happiness. At the bottom is "stuff," or even accomplishments, I would add. These things are fleeting and not that important. His list runs roughly as follows, from least important to most: stuff, experiences, connection and appreciation.
The more we can focus on the higher level items (connection and appreciation), the happier we'll be. On the other hand, if we focus too much on stuff and experiences, we'll limit that potential.
Hey everyone, make sure to check out the launch of The Good Stewards Podcast, which is "dedicated to seasoned real estate investors who want to maximize the cashflow potential in their business." The podcast features me, my father Bill (who started Stewardship Properties back in 1989), our Operations Officer Amanda Perkins and colleague Ryan Dossey.
The early launch has five episodes out along with the following trailer:
And check out the episode where we go over the BRRRR method of real estate investing here:
Legacy Development, a Kansas City-based property development and commercial management company which I profiled here and here, is really stepping up their game. They've now moved into the 10-figure ballpark. From The Kansas City Business Journal:

Legacy Development will lead a project that could attract nearly $1 billion in private investment in Hutto, Texas — located in a growing suburb of Austin... The highlight of the new development is a national headquarters of Perfect Game, a baseball scouting organization. Along with baseball fields, the project will include an indoor sports and events center, convention hotel, restaurants and retail space.

Sounds like quite the project. Congrats and good luck to everyone at Legacy!
Stewardship Properties will get there soon enough... we're on the way!
Another good video by Charisma on Command, this time on how to increase your willpower. Unsurprisingly, he bases much of his argument on Roy Baumeister's great book Willpower (although admittedly, some of the ideas have come under scrutiny amidst the replication crisis in psychology).
Still very sound advice.

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat is quite the wake-up call. Indeed, it almost reads like horror at certain points. The book opens with a hypothetical future where an Artificial Super Intelligence is created and immediately becomes the smartest and most powerful being on the planet. It then invents nanotechnology so advanced that it can reconfigure all matter into itself. This matter, of course, includes our own. Thus, the entire human race is consumed into extinction by microscopic robots and the Artificial Super Intelligence who created them.
As absurd as this story sounds, it appears to be much closer to science fiction than science fantasy. First, a few definitions. Artificial General Intelligence is "the intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can." Many scientists presume an Artificial General Intelligence is the first step toward Artificial Super Intelligence, which "is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds." This would come about in what Barrat refers to as an "intelligence explosion" by a machine that had the ability to improve itself. A series of "recursive self-improvements" (which with computing technology would probably mean a doubling of "intelligence" several million times per second) would create a machine with cognitive capacities as far above ours as our capacity is above that of ants.

We can argue all we want about whether or not this type of Super Intelligence would be "alive." I'm not sure I would call it "alive," but for all intents and purposes, it doesn't matter. The key point is that Barrat is convinced we will soon be able to create such a thing, and while I have some doubts, I would absolutely not bet against it. Already, we have algorithms that improve themselves through "recursive self-improvement," as bots, for example, figure out which YouTube video you are most likely to watch next given the series of videos you watched before. That way they can feed those particular videos to you and increase the amount of time you spend on the site. (A good explanation of these bots can be found here.) This makes the question jump out: does anyone even understand the most complex algorithms that these various tech companies, intelligence agencies and the like have created?

And Barrat wrote his book in 2015. Technology has improved substantially since then. Many experts were predicting Artificial General Intelligence by 2030 or 2040 or thereabouts. Some have an extremely rosy outlook on it (such as the famous futurist Ray Kurzweil, who popularized the term "singularity"), but a growing number are becoming concerned this invention will turn around and exterminate us (such as those at the Machine Intelligence Research Institute that Barrat profiles). After all, why would an Artificial Super Intelligence care about us any more than we care about ants? And how do we program into a machine the mandate to be kind to us when it can improve and change itself immensely? People like to cite Asimov's Three Laws, but they certainly won't do. Hell, they didn't even "do" in Asimov's own books!

Barrat also stresses that with so many actors seeking out Artificial Intelligence, it is probably impossible to stop. These actors include governments, rogue governments such as North Korea, intelligence agencies, tech companies like Google, "stealth companies" trying to keep a low profile and various research institutes. Banning it will likely just mean the group that discovers it is more likely to have malicious intent. And even if a friendly organization finds it, it may accidentally release it. (I don't consider the CIA to be particularly friendly, but maybe you do.) Regardless, an accidental release is what happened with Stuxnet. Stuxnet was a proto-AI malware created by the United States and Israel to damage Iran's nuclear program. That part worked.
But computer viruses don't blow up when they go off, and it appears we lost control of it; who knows who has access to it now. Barrat isn't optimistic about our ability to escape an AI-induced extinction. That being said, the positive potential is incredible: curing diseases, solving poverty, ending global warming and so on. But only if, you know, it doesn't exterminate us.

On that note, as Gary Marcus points out, we should question whether the "tendencies toward self-preservation and resource acquisition are inherent in any sufficiently complex, goal-driven system." IBM's Deep Blue and Watson have shown no sign of this. In fact, assuming resource acquisition and self-preservation are inherent goals of such a system flirts with "anthropomorphizing" artificial intelligence, something Barrat warns us against.

Barrat recommends first trying to build an apoptosis mechanism into the AI. Apoptosis is the process cells have for programmed death. If they begin multiplying without dying afterward, that's cancer. This would at least prevent the AI from spreading all over the place. My thought (which I emailed Barrat about and will add his response to this review if I am lucky enough to get one) is that anything programmed into a Superintelligence should be as objective as possible. Programming it to "be kind to humans" is subjective and open to very different interpretations (see, for example, the myriad of political opinions out there). How about programming into it the "correct" view on those two points Marcus isn't sure the AI will have:
An indifference to acquiring resources beyond what its task requires.

And more importantly,

An indifference to its own self-preservation.
Sure, I know that is easier said than done. But I would think it is easier to do that than to program it to be nice to us even once it becomes several trillion times more intelligent than us. An indifference to self-preservation would allow us to turn it off as soon as it started to become dangerous. This would also allow us to beta test it, which is kind of important, since we don't want to turn on something that might exterminate us without proper testing.

Regardless, Barrat has written a very important book that we should all take seriously. The biggest threats are often things we don't expect rather than things we are hunkered down preparing for. Right now, it doesn't appear that many people are expecting a threat from AI, if they are expecting AI at all.
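As a footnote to that kill-switch idea, here is a toy sketch (entirely my own illustration, not anything from Barrat's book, with a made-up objective and made-up limits) of what an "apoptosis"-style constraint looks like in ordinary code: the agent loop sits inside hard caps on lifetime and work performed, and nothing in its objective rewards it for staying alive or grabbing resources.

```python
import time

# Toy "apoptosis" wrapper: the optimizer gets a hard lifetime and step budget,
# and its objective says nothing about self-preservation or acquiring resources.
# Purely illustrative; real AI-safety proposals are far more involved than this.

MAX_STEPS = 1000     # hard cap on work performed (the "resource budget")
MAX_SECONDS = 5.0    # hard cap on lifetime (the "programmed death")

def objective(x):
    """Hypothetical task: find x minimizing (x - 42)^2. Nothing here rewards survival."""
    return (x - 42.0) ** 2

def run_agent():
    start = time.monotonic()
    best_x, best_score = 0.0, objective(0.0)
    for step in range(MAX_STEPS):
        # Apoptosis check: the agent halts itself when its budget runs out,
        # regardless of how well it is doing on the task.
        if time.monotonic() - start > MAX_SECONDS:
            break
        candidate = best_x + (1.0 if step % 2 == 0 else -0.5)  # crude hill-climbing
        score = objective(candidate)
        if score < best_score:
            best_x, best_score = candidate, score
    return best_x, best_score

if __name__ == "__main__":
    x, score = run_agent()
    print(f"Best x found: {x:.2f} (objective {score:.4f})")
```

The point of the toy obviously isn't the optimization; it's that the shut-off condition lives outside the objective entirely, which is roughly the property you'd want before beta testing anything smarter than this.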