"To ask which beliefs make you happy is to turn inward, not outward—it tells you something about yourself, but it is not evidence entangled with the environment. I have nothing against happiness, but it should follow from your picture of the world, rather than tampering with the mental paintbrushes."
I don't understand the justification for this. Is it a utilitarian heuristic? Or what is the reason why happiness must follow from correct beliefs about objective reality?
FALSE, it's 2:30 AM.
"And “winning” here need not come at the expense of others."
Then "winning" was the wrong choice of word.
Other ideas for entries: A: affect heuristic, akrasia, anthropics, artificial general intelligence, artificial intelligence, Aumann’s Agreement Theorem, availability heuristic, average utilitarianism, axon B: backward chaining, base rate, base rate neglect, Bayes net, Bayes’s Theorem, Bayes-structure, Bayesian, Bayesian probability, Bayesian reasoner, Bayesian reasoning, Bayesian statistics, Bayesianism, bit, black swan, Blue, Born rule C: calibration, capitalization, causal decision theory, cognitive bias, cognitive heuristic, cognitive science, collapse, complement, complex number, complexity, computation, conditional independence, confidence interval, configuration space, confirmation bias, consequentialism, conspiracy, cooperation, Copenhagen Interpretation, cosmological horizon, counterfactual, Cox’s Theorem, cryonics D: D-separation, dan, de novo, decibel, decoherence, deduction, Deep Blue, defection, deontology, dimension, directed acyclic graph, doublethink, dukkha, Dutch Book, dynamic, dysrationalia E: economies of scale, Egan’s Law, emergence, empiricism, epistemic rationality, epistemology, epsilon, eudaimonia, EURISKO, Everett branch, evidential decision theory, existential angst, existential risk, exponentiation F: factorization, falsificationism, Fermi Paradox, frequentism, Friendly AI, fun theory, fungibility, futurism, fuzzy G: g-factor, game theory, gene, General Relativity, gensym, Go, Green, grey goo, Gricean implication, group selection H: hindsight bias, holodeck, humility, hyper-real number I: induction, inductive bias, inefficient market, information theory, integer, instrumental, instrumental rationality, intelligence, intelligence explosion, intentionality, intuition pump, intuitionism, IRC, isomorphism J: joint probability K: koan, Kolmogorov complexity L: Laplace’s Rule of Succession, Less Wrong, likelihood ratio, LISP token, Litany of Gendlin, Litany of Tarski, log odds, logarithm, logic, lookup table M: machine code, Many Worlds Interpretation, marginal efficiency, marginal probability, marginal returns, market economy, Mind Projection Fallacy, modesty N: necessary condition, neural networks O: optimization, order of magnitude P: package deal fallacy, particle, Peano arithmetic, prediction market, probability mass, p-value R: real number, renormalization, reversible computer S: scalar factor, science, second-order, self-handicapping, solipsism, sufficient condition, superposition - RobbBB (talk) 12:27, 10 March 2015 (AEDT)
—from https://wiki.lesswrong.com/wiki/Talk:RAZ_Glossary
they will usually [+have] more reason
- icecream17 (17 Jan 2023)
Fixed! —Said Achmiz March 12, 2023, at 10:45 PM
faith in crisis is faith as is.
Space: Fascinating
I see nothing wrong with hitting explain over and over, with no end in sight. Each step is progress, because it gives deeper understanding and knowledge. Even if there is no final dialog box, I will still keep hitting explain. If the knowledge gained doesn't seem to be worth the effort, I will ignore it for now, until I have better methods available.
What does insulting Wulky mean in this context? And how does his status as a neo-utopian affect the probability of him being a crime boss (this part is a joke)?
"4.5 × 10−31 m/s2" should end in "...s^2" or the nicely typeset equivalent.
Fixed, thanks! —Said Achmiz January 11, 2020, at 07:04 PM
A Taxonomy for the “Abuse of Definition” Zoo
I wrote of this before. This kind of arguing tactic is like abuse of notation in the name of being more precise, but instead of stopping strategic equivocation, you shut down all meaningful communication. Time to get precise about the different kinds and how to recognise them.
Stealing Words
I talk about love and you ask me to define what I mean by “love”. You give your own, reductive definition for “love”. I cannot say “love” any more, because now it doesn’t mean what I want to say.
Don’t play games with me, I say. You ask me to define “game”. Unfortunately, I don’t have either Homo Ludens by Johan Huizinga or Games People Play by Eric Berne handy, so I look it up in an encyclopedia, and only find definitions from sports, board games, logic (game semantics), and strategic game theory. I cannot say “game” any more, because now it doesn’t mean what I want to say.
Too Many Small Concepts
I talk about feelings, and you ask me to distinguish between “affects”, “moods”, “qualia”, “emotions” and “sensory stimuli”. I throw my copy of Descartes’ Error at your face. This level of distinction doesn’t even make sense. Emotions cause body states, which cause sensory stimuli, which all together cause feelings.
I talk about the probability of rain tomorrow, and you ask me if this is a “subjective probability”, a “frequentist probability”, or a “propensity”. The math would be the same. The expected utility of carrying an umbrella is high.
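(A minimal sketch of that point, with an assumed 60% chance of rain and made-up utilities for the four outcomes: nothing in the arithmetic depends on whether the probability is read as subjective, frequentist, or a propensity, only on its value.)

```python
# Expected utility of carrying an umbrella, with illustrative numbers.
# The interpretation of p_rain (subjective, frequentist, propensity)
# never enters the calculation; only the number itself does.

p_rain = 0.6  # assumed probability of rain tomorrow

# Assumed utilities for the four (action, weather) outcomes.
utility = {
    ("carry", "rain"): 0.0,      # stayed dry, minor hassle
    ("carry", "no rain"): -1.0,  # lugged it around for nothing
    ("leave", "rain"): -10.0,    # got soaked
    ("leave", "no rain"): 1.0,   # hands free, no rain
}

def expected_utility(action: str) -> float:
    """Probability-weighted average of the outcome utilities for an action."""
    return (p_rain * utility[(action, "rain")]
            + (1 - p_rain) * utility[(action, "no rain")])

print(expected_utility("carry"))  # -0.4
print(expected_utility("leave"))  # -5.6 -> carrying the umbrella wins
```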
I talk about what is good, or what is right, and you ask me to distinguish between “ethics”, “morals” and “utility”. For the rest of the conversation, I have to use a phrase like “moral AND ethical” and I sound like Sam Harris.
I say I love you, and you ask me to decide between “storge”, “philia”, “amor” and “agape”. Something got lost in translation here.
Loading Definitions
You tell me about toxic masculinity. You tell me that “toxic masculinity” is always bad. I ask about certain positive masculine traits. You tell me that these traits are not “toxic masculinity”. You explain that “toxic masculinity” is bad by definition.
Tunnel-Vision Prescriptivism
I want to talk about common ideas of “politics”. Not about politics, but about what people commonly mean or understand when they say “politics”. You ask me to define “politics”. That’s not my point at all. You give a definition of “politics”. Is that what other people understand when you say “politics”, I ask. Who cares what clueless people think about politics, you counter.
Making Words Unwieldy
You introduce a bunch of phrases like “being-in-itself”, “the feeling of emotion”, “sex-worker-excluding radical feminist” and insist I be precise and always use the long form.
(from https://the-grey-tribe.tumblr.com/post/173889994273/a-taxonomy-for-the-abuse-of-definition-zoo )