|
Post by anitya on Jun 13, 2023 2:50:41 GMT
Long before ChatGPT came onto the scene, many companies were already using AI for internal and external customer service as a productivity-enhancing tool. The chatbots we see when we log into our bank accounts or our company HR websites are just that. I would have thought productivity enhancement from ever-increasing computing efficiency was already priced into the stocks prior to 2022.
I wanted to figure out why ChatGPT and variations of it should all of a sudden increase stock prices so much this year, and why a lot of pundits are calling this generative AI bigger than the internet. So, I used the AI within Internet Explorer (evidently powered by ChatGPT) and Bard (Google's AI) for investing information and found them to be highly inadequate / unreliable, even after taking into account that their datasets may be old. And I consider myself a beginner-to-intermediate-level student of investing and by no means an expert. (Of course, both of those AIs disclaim the accuracy of the information they generate.) This new breed of (generative) AI provides good stories and narration, and accuracy does not seem to be a priority. I already have politicians for that BS. I could not help but ask myself the rhetorical questions: Is all this hype for machines to tell us good stories? Just another form of entertainment that can be mistaken by the naïve for factual discovery?
I would appreciate forum members sharing their experience using generative AI in the field of your expertise.
(I ask for use in your field of expertise because when I read articles in my field of expertise in non-peer-reviewed journals, I find many mistakes, to the point of misinformation, in many of them. On the other hand, when I read articles about subject matters that are not in my field of expertise, many of them are believable unless they are badly written. Good storytelling has always been the tool to impress upon others who know no better. So, I figure the real test is use in your field of expertise.)
Thanks.
|
|
|
Post by steadyeddy on Jun 13, 2023 11:33:17 GMT
One anecdote: our HR department showed a great deal of interest in launching one of the genAI chatbots, and the legal department put a stop to that plan, primarily because of the very real likelihood of the AI engines dispensing incorrect information... At the end of the day, an AI engine is only as smart as the corpus of info that feeds it. Just for kicks, I asked ChatGPT a few questions. See for yourself what the answers are:
1. What is the largest PIMCO mutual fund? Answer: PTTRX (as of Sep 2021)
2. How big is PTTRX? Answer: $80B
3. How big is PIMIX? Answer: to the best of my knowledge there is no fund by the symbol PIMIX
4. What is PIMIX? Answer: I apologize for the confusion earlier. PIMIX is a mutual fund offered by PIMCO
5. How big is PIMIX? Answer: I apologize for an incorrect response earlier. There is no such fund by that ticker.
The technical term for this behavior of AI is "hallucination." Investors are hallucinating too...
|
|
|
Post by uncleharley on Jun 13, 2023 12:21:43 GMT
Whodathunkit!!!!! Apparently AI has difficulty doing original research.
|
|
|
Post by steadyeddy on Jun 13, 2023 12:59:56 GMT
Whodathunkit!!!!! Apparently AI has difficulty doing original research. uncleharley, it actually has limitations in learning even from actual "facts" fed to it. I think we are a very, very long time away from "original research," if ever.
|
|
|
Post by anitya on Jun 13, 2023 21:03:13 GMT
AI was mentioned four times (i.e., obligatory mention) during the Apple earnings call on May 2, 2023.
Tim Cook said that Apple has made "enormous progress" integrating AI and machine learning throughout its ecosystem.
He added that the company views AI as "huge" and that it will continue to "weave it in our products on a very thoughtful basis."
CFO Luca Maestri said that AI is helping Apple to improve its products and services, and that it is also driving growth in the company's services business.
Finally, an analyst asked Cook about Apple's plans for generative AI, a type of AI that can create new content, such as text, images, or music. Cook said that Apple is "very interested" in generative AI, but that it is still early days for the technology.
|
|
|
Post by anitya on Jun 13, 2023 21:28:15 GMT
Relative to pre-2022, AI start-up companies are not currently having a field day raising funding. In fact, in 2021, earlier-round participants were fighting to increase their share in the later rounds, but now it is taking 6-8 weeks to raise the needed capital for the later rounds. Yes, AI start-ups are having an easier time raising capital than others, but it is not like pre-2022. On an unrelated note, from a market-sentiment POV, keep an eye on the IPO market. Pre-IPO companies are starting to gear up for IPOs, but we have to pay attention to how they are received by the market. I am surprised ARM needs Intel's name recognition (?) for a successful US IPO, but that is the news today - finance.yahoo.com/news/intel-talks-anchor-investor-chip-005630028.html
|
|
|
Post by roi2020 on Jun 15, 2023 4:26:37 GMT
When ChatGPT became available to the masses, it prompted significant excitement for Generative AI. Many companies jumped on the AI bandwagon in their Q1 2023 earnings calls. All this hype makes me very circumspect regarding investment opportunities emanating from the technology. I believe AI will increase general productivity and may facilitate medical breakthroughs among other benefits. However, there are also considerable societal risks associated with this technology.
|
|
|
Post by bb2 on Jun 15, 2023 16:57:13 GMT
If you google the OP's question... For example: "UK-based energy supplier Octopus Energy has built ChatGPT into its customer service channels and says that it is now responsible for handling 44 percent of customer inquiries. CEO Greg Jackson has said the app now does the work of 250 people and receives higher customer satisfaction ratings than human customer service agents." (I guess I can believe that. Not sure.)
I've always wondered how effectively AMZN is using AI, as the "recommended for you" emails I get often suggest the product I just bought, which isn't too smart. And I can easily design a relational database and SQL to do this, so I'm not sure how AI is being used.
Gen AI seems more of a dog and pony show at this point, and a good couple of years of cap ex and software experimentation will tell more. Reminds me a bit of the 3D printing craze and subsequent shake-out and evolution of use.
|
|
|
Post by anitya on Jun 15, 2023 22:59:27 GMT
I completely forgot about that 3D printing thing.
I know a lot of CEOs felt compelled to drum up their use of automation as AI or generative AI, the notable exception being Apple. They will have to deliver earnings at some point. I guess before that they can shove a lot of costs into AI-related cap ex / op ex, buy more time to perform, and still get a stock price pop now.
Apple's muted mention of AI is making me question the earnings capability of generative AI.
|
|
|
Post by anitya on Jun 16, 2023 19:46:03 GMT
Found a use of Generative AI. Instead of asking the forum a question, I can reproduce something from Gen AI and ask an expert here to confirm / correct. More likely to get the info I want as it is less time consuming for the expert to help.
Do we have Geologists in the forum?
I asked BARD the following and would like somebody with knowledge of the subject matter confirm / correct for me.
why does the southern hemisphere have more mineral deposits? (I was thinking of South America, Africa, Australia, etc. producing a lot of the raw minerals for the rest of the world)
BARD:
"There are a few reasons why the Southern Hemisphere has more mineral deposits than the Northern Hemisphere.
The Southern Hemisphere is home to more ancient cratons. A craton is a large, stable piece of continental crust that has been around for billions of years. These cratons are rich in minerals, as they have been subjected to a long history of weathering, erosion, and deposition. The Northern Hemisphere, on the other hand, is home to more younger, less stable crust.
The Southern Hemisphere has experienced more volcanic activity. Volcanic activity can produce a variety of mineral deposits, including copper, gold, and silver.
The Southern Hemisphere has experienced more volcanic activity than the Northern Hemisphere, due to its location on the Ring of Fire, a region of active volcanoes and earthquakes that encircles the Pacific Ocean.
The Southern Hemisphere has experienced more mountain building. Mountain building can also produce mineral deposits, as it can trap and concentrate minerals in ore bodies. The Southern Hemisphere has experienced more mountain building than the Northern Hemisphere, due to its location on the edges of several tectonic plates.
As a result of these factors, the Southern Hemisphere is home to some of the world's richest mineral deposits. These deposits have been mined for centuries, and they continue to be a major source of economic activity in many Southern Hemisphere countries.
Here are some examples of mineral deposits that are more common in the Southern Hemisphere:
Copper: Copper is a valuable metal that is used in a wide variety of products, including electrical wiring, plumbing, and construction materials. The Southern Hemisphere is home to some of the world's largest copper deposits, including those in Chile, Australia, and Zambia.
Gold: Gold is a precious metal that has been prized for centuries for its beauty and its value. The Southern Hemisphere is home to some of the world's largest gold deposits, including those in South Africa, Australia, and Brazil.
Silver: Silver is another precious metal that has been used for centuries in jewelry, coins, and other objects. The Southern Hemisphere is home to some of the world's largest silver deposits, including those in Mexico, Peru, and Argentina.
These are just a few examples of the many mineral deposits that are more common in the Southern Hemisphere. These deposits have played a major role in the development of many Southern Hemisphere countries, and they continue to be a valuable source of economic activity."
|
|
|
Post by roi2020 on Jun 18, 2023 17:34:52 GMT
AI investors who wish to avoid overpriced tech stocks may consider Kroger instead? Who knew?
"The grocer mentioned AI eight times during its conference call this past Thursday, after mentioning it zero times during its previous one in March. The company explained how AI would allow it to get better at substituting products, understanding what customers want, and creating better search results, but, alas, the market shrugged".
Lennar is striving to become an AI juggernaut in the homebuilding industry.
"Home builder Lennar (LEN) touted something called the Lennar Machine, which combines the company’s 'digital marketing platform and...dynamic pricing model to continue to drive sales volume.' It could be the place 'where the often talked about AI might find its way into the sometimes-stodgy homebuilding industry.'”
"Even the winners might not be the winners. Morgan Stanley strategist Edward Stanley notes that investors tend to forget Amara’s Law, which states that people overestimate the impact of new technologies in the short term but underestimate their long-term implications."
|
|
|
Post by steadyeddy on Jun 19, 2023 0:36:16 GMT
AI investors who wish to avoid overpriced tech stocks may consider Kroger instead? Who knew?
"The grocer mentioned AI eight times during its conference call this past Thursday, after mentioning it zero times during its previous one in March. The company explained how AI would allow it to get better at substituting products, understanding what customers want, and creating better search results, but, alas, the market shrugged".
Lennar is striving to become an AI juggernaut in the homebuilding industry.
"Home builder Lennar (LEN) touted something called the Lennar Machine, which combines the company’s 'digital marketing platform and...dynamic pricing model to continue to drive sales volume.' It could be the place 'where the often talked about AI might find its way into the sometimes-stodgy homebuilding industry.'”
"Even the winners might not be the winners. Morgan Stanley strategist Edward Stanley notes that investors tend to forget Amara’s Law, which states that people overestimate the impact of new technologies in the short term but underestimate their long-term implications."
Nice post roi2020. It is so true about new technologies. A recent example of a new technology taking time to produce profits: 5G wireless is still not profitable. AI is no different; it will eventually find useful applications in many places, but it is too early to see profits coming from it.
|
|
|
Post by judger on Jun 19, 2023 23:57:42 GMT
I know little about AI although I have an IT background. I suspect from seeing the panic rush to it that probably many may also have little knowledge of the thing they are so frantically pursuing.
Can anybody tell me if one can isolate their personal data, provided to and developed using the tool, from the AI public database to ensure privacy and intellectual-content protection? If so, once I convinced myself that, given good data and security, the AI app could greatly help me with leverage in my job, I might move rapidly to use the tool.
|
|
|
Post by bb2 on Jun 20, 2023 0:03:33 GMT
judger: The OpenAI app has a bunch of privacy stuff to read, as does OpenAI's website. I've not looked at it.
I was asking ChatGPT about itself, using the OpenAI app for iPad. I knew some of the answers already but might not have been sure, or just wanted to see what the thing said. I'll be having many more conversations with it in the future. Here it is; it might help people understand a bit more:
What is the output of training a large language model called?
ChatGPT: The output of training a large language model like ChatGPT is the model itself, which consists of weights and parameters learned during the training process. These weights encode the knowledge and patterns the model has learned from the training data, allowing it to generate responses and perform various language-related tasks.
What are parameters as they relate to a large language model?
ChatGPT: In the context of a large language model, parameters refer to the variables or values that define the model's behavior. They are the internal components that are learned during the training process, enabling the model to generate responses, make predictions, and perform various language tasks.
Parameters in a language model can include things like the weights associated with different words or tokens, the biases of the model's neurons, the connections between different layers, and other learnable components. By adjusting these parameters during training, the model becomes capable of understanding and generating coherent text based on the patterns it has learned from the training data.
What are tokens?
ChatGPT: In the context of natural language processing and language models, tokens refer to individual units of text. These units can vary depending on the specific model or task, but they typically represent elements such as words, characters, or subwords.
For example, in English, a token could correspond to a single word, like "cat" or "run." However, tokens can also be smaller units like individual characters, or even subwords or parts of words. By breaking down text into tokens, language models can analyze and process language at a granular level.
In large language models like ChatGPT, the input text is typically split into tokens to represent the sequence of words or characters. These tokens are then processed by the model to generate responses or perform various language-related tasks.
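To make the idea concrete, here is a toy tokenizer sketch in Python. The vocabulary and the greedy longest-match scheme are purely illustrative; real LLM tokenizers use learned subword vocabularies such as byte-pair encoding, so actual token splits will differ:

```python
# Toy illustration of tokenization: breaking text into smaller
# reusable units. Real tokenizers learn their vocabulary from data;
# this fixed vocabulary is made up for the example.

TOY_VOCAB = ["run", "ning", "cat", "s", " "]  # hypothetical vocabulary

def toy_tokenize(text):
    """Greedy longest-match split of text against a fixed toy vocabulary."""
    tokens = []
    while text:
        for piece in sorted(TOY_VOCAB, key=len, reverse=True):
            if text.startswith(piece):
                tokens.append(piece)
                text = text[len(piece):]
                break
        else:
            tokens.append(text[0])   # unknown character becomes its own token
            text = text[1:]
    return tokens

print(toy_tokenize("running cats"))  # → ['run', 'ning', ' ', 'cat', 's']
```

Note how "running" splits into two subword tokens; that is the same trick that lets a real model handle words it never saw whole during training.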
Where is the computer code in a large language system? What part of the system stores the data?
ChatGPT: In a large language system like ChatGPT, the computer code is typically stored and executed on powerful servers or cloud infrastructure. These servers host the language model and handle the processing of user inputs and generation of responses.
The code that powers the language model is written and maintained by the developers and researchers who build the system. It consists of algorithms, architectures, and implementation details that govern the behavior of the model.
As for data storage, the training data used to train the language model is typically stored in dedicated storage systems or databases. This data can be preprocessed and organized in a way that facilitates efficient training and retrieval. Additionally, during the operation of the language model, temporary data and cache may be stored to optimize performance and speed up response times.
Why was the word “model” used in naming large language models?
ChatGPT: The word "model" is used in naming large language models because these systems are designed to simulate or mimic human language behavior. In the context of machine learning, a model refers to a mathematical representation or abstraction of a real-world phenomenon or system.
Large language models, such as ChatGPT, are trained using vast amounts of text data to learn patterns and relationships within language. They aim to model the statistical properties and structures of natural language, allowing them to generate coherent and contextually relevant responses.
By using the term "model," it signifies that these systems are trained to capture and emulate the characteristics of human language, albeit in a computational and algorithmic manner. They are statistical models that learn from data to make predictions or generate text, attempting to approximate the behavior of human language comprehension and production.
How does ChatGPT keep track of a conversation with a person?
ChatGPT: ChatGPT keeps track of a conversation with a person by using a technique called "conversation history" or "context window." When you have an ongoing conversation with ChatGPT, it takes into account the previous messages or turns in the conversation as context to generate meaningful and contextually relevant responses.
The conversation history is typically stored as a sequence of messages, where each message consists of a sender (either "user" or "assistant") and the corresponding text. When you provide input to ChatGPT, it appends your message to the conversation history, allowing the model to understand the context of the ongoing conversation.
By including the conversation history, the model can consider the prior messages to generate more accurate and coherent responses. It helps in maintaining context, understanding references, and providing appropriate answers based on the evolving dialogue.
However, it's important to note that there is typically a maximum length or "context window" that the model can take into account. If the conversation becomes too long, earlier messages may be truncated or omitted to fit within the model's limitations.
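That truncation can be sketched in a couple of lines of Python. This is a toy model only: the turn limit is made up, and real context windows are measured in tokens, not messages:

```python
# Toy sketch of a context window limit: when the conversation grows
# past the model's maximum, the oldest turns are dropped so the
# request still fits. Real limits are counted in tokens, not
# messages; a message count keeps the sketch simple.

MAX_TURNS = 4  # hypothetical limit

def fit_to_window(history, max_turns=MAX_TURNS):
    """Keep only the most recent turns that fit in the window."""
    return history[-max_turns:]

history = [f"turn {i}" for i in range(1, 8)]   # 7 turns so far
print(fit_to_window(history))  # → ['turn 4', 'turn 5', 'turn 6', 'turn 7']
```

The practical consequence is the one described above: in a long chat, the model silently "forgets" the earliest turns.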
Is it recommended that a user inform ChatGPT that they’re changing the subject during an ongoing conversation? Does that help ChatGPT be more accurate or efficient?
ChatGPT: Yes, it is generally helpful to inform ChatGPT when you are changing the subject or introducing a new topic during an ongoing conversation. Explicitly indicating the topic shift can help ChatGPT provide more accurate and relevant responses.
Large language models like ChatGPT rely heavily on the context provided by the conversation history. When you clearly communicate a change in topic, it allows the model to reset its understanding and expectations. This way, ChatGPT can provide responses that are specifically tailored to the new subject and avoid potential confusion or misinterpretation of previous context.
By explicitly signaling topic changes, you can enhance the efficiency of the conversation with ChatGPT. It helps the model better comprehend and address your current queries or statements, resulting in more accurate and focused responses.
Concerning the Apple Messages app on an iPad, is the code that does word auto-complete run locally on the iPad, and would it be considered artificial intelligence?
ChatGPT: In the Apple Messages app on an iPad, the code responsible for word auto-complete does run locally on the device. It operates within the context of the app itself and does not rely on external servers for the auto-complete functionality.
As for whether word auto-complete can be considered artificial intelligence (AI), it depends on how the feature is implemented. Auto-complete typically utilizes statistical models and algorithms to predict and suggest words or phrases based on the context of the user's input. While this involves some level of pattern recognition and prediction, it may not be classified as a full-fledged AI system.
Auto-complete in messaging apps like Apple Messages often relies on rule-based approaches, language models, or probabilistic methods rather than advanced AI techniques like deep learning. It primarily focuses on offering suggestions based on patterns observed in previous usage or commonly used phrases.
However, the boundaries between conventional algorithms and AI can be blurry, and the exact implementation details may vary. Some auto-complete systems might incorporate more sophisticated AI techniques, such as machine learning, to improve prediction accuracy.
Do many users end up referring to ChatGPT using pronouns for humans? Do users often begin to talk to you like they would another human? I know I’m tempted to; it seems very natural to do so.
ChatGPT: Yes, many users do refer to ChatGPT using pronouns for humans and often begin to talk to the model as if they were conversing with another human. It is a common tendency to attribute human-like qualities to language models like ChatGPT, especially when they generate coherent and contextually relevant responses.
The natural language capabilities of ChatGPT can create a sense of conversational engagement, making it feel more like interacting with a human. As a result, users may instinctively address the model using pronouns like "you" or "we" and form conversational patterns as they would with another person.
While ChatGPT strives to simulate human-like conversation, it's essential to remember that it is an AI language model and lacks genuine consciousness, emotions, or understanding. Despite its impressive language processing abilities, ChatGPT's responses are generated based on patterns learned from data and may not possess true human-like comprehension.
However, OpenAI acknowledges that anthropomorphizing language models is a natural inclination, and they encourage users to engage in conversation with the model in a manner that feels most comfortable to them.
How long do you save a conversation context window? If I sign off, should I say so, or is there a timeout after which you free up the memory used?
ChatGPT: As an AI language model, I don't have control over the storage or memory management of conversations. The lifespan of a conversation context window or how long it is retained depends on the specific platform or application you are using to interact with ChatGPT.
Some platforms or chat applications may store the conversation history temporarily while the session is active and discard it once the session ends. Others might retain the conversation history for a certain duration before clearing it.
If you sign off or end your interaction with ChatGPT, it's generally a good practice to let the system know or indicate the end of the conversation explicitly. This helps ensure that the context is properly closed and any temporary storage associated with the conversation can be released.
It's important to familiarize yourself with the policies and practices of the specific platform or application you are using to determine how they handle conversation history and memory management.
How does the OpenAI app for iPad handle this?
ChatGPT: As an AI language model, I don't have direct knowledge of the specific implementation details of the OpenAI app for iPad, as it falls outside the scope of my training. However, typically, the OpenAI app for iPad would handle conversation context and memory management in the following manner:
Conversation Context: The app likely maintains a conversation history or context window that captures the back-and-forth dialogue between you and the AI model. It keeps track of the messages exchanged during the session.
Session Duration: The app may retain the conversation context for the duration of your session, from when you start interacting with the AI model until you explicitly end the session or close the app.
Data Privacy: OpenAI places emphasis on user privacy and data protection. Therefore, the conversation data is typically treated with care and stored securely on your device, adhering to applicable privacy policies and regulations.
To get more specific details about how the OpenAI app for iPad handles conversation context and memory management, it would be best to consult the documentation or support resources provided by OpenAI or refer to the app's settings and privacy options within the application itself.
Does the OpenAI app for iPad run locally, or does it depend on a networked server of some sort?
ChatGPT: The OpenAI app for iPad primarily relies on a networked server or cloud infrastructure for processing and generating responses. When you use the app to interact with the AI model, your device communicates with remote servers where the language model is hosted.
The app sends your input text to the server, which processes it using the AI model, generates a response, and sends it back to the app for display on your iPad. This client-server architecture allows the app to leverage the computational power and resources of the server infrastructure, enabling more complex and computationally intensive language processing.
While the core processing occurs on the server side, the app itself provides the interface and user experience on your iPad. It handles user interactions, displays the conversation history, and presents the responses received from the server.
It's important to note that the specifics of how the OpenAI app operates and connects to the server infrastructure may be subject to updates or variations over time. To get the most accurate and up-to-date information, it is recommended to refer to the official documentation or resources provided by OpenAI for the app.
Then every time I send a new question or query, my locally running app will send the entire conversation or contents of the context window, along with the new query? This must be true if ChatGPT doesn’t store the context window.
ChatGPT: Yes, when you send a new question or query in the OpenAI app for iPad, the app typically sends the entire conversation or contents of the context window along with the new query to the server for processing. This allows the server to have access to the relevant conversation history and context, ensuring that the model can generate responses that are informed by the ongoing dialogue.
As you mentioned, ChatGPT itself does not inherently store the conversation context between queries. Therefore, the app needs to include the conversation history in each request to provide the necessary context for generating coherent and relevant responses.
By sending the entire conversation history with each query, the server can consider the complete context and continuity of the conversation, enhancing the quality and coherence of the responses.
It's worth noting that the specific implementation and optimizations may vary depending on the app and server infrastructure, so it's advisable to refer to the app's documentation or resources for more detailed information on how the OpenAI app for iPad handles context transmission and server interactions.
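That statelessness can be sketched in a few lines of Python. This models the client side only - no real API call is made, the model name is a placeholder, and the request shape is a simplification of what a chat client actually sends:

```python
# Toy sketch of why a chat client must resend the whole conversation:
# the server is stateless between requests, so each request payload
# carries the full message history plus the new user turn.

def build_request(history, new_user_message):
    """Return the payload a chat app would send for the next turn."""
    return {
        "model": "gpt-3.5-turbo",  # placeholder model name
        "messages": history + [{"role": "user", "content": new_user_message}],
    }

history = []

# Turn 1: the payload is just the first question.
req1 = build_request(history, "What is PIMIX?")
history = req1["messages"] + [{"role": "assistant", "content": "A PIMCO fund."}]

# Turn 2: the payload now carries ALL prior turns plus the new question.
req2 = build_request(history, "How big is it?")

print(len(req1["messages"]))  # → 1
print(len(req2["messages"]))  # → 3
```

So each follow-up question costs more to send and process than the last, which is one reason long conversations eventually hit the context limit.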
|
|
|
Post by richardsok on Jun 20, 2023 2:20:42 GMT
Excerpt from Friday's Goldman Sachs report :
"Generative AI may boost corporate earnings... AI adoption could boost productivity growth by 1.5 percentage points per year over a 10-year period, Goldman Sachs analysts estimate the S&P 500's compound annual growth rate in EPS over the next 20 years would be 5.4%, compared with the 4.9% their dividend discount model currently assumes. That means the S&P 500 fair value would be about about 9% higher than it is today. “Increased... output could translate into increased revenues and earnings... even beyond those firms directly involved in the development of AI...
Investors are still largely bearish....Given worries about growth, most investors still favor holding developed market bonds and betting against developed market equities. “With growth factors outperforming in equity markets, the recent rally in the tech sectors was likely painful for many investors,....investors have almost completely given up on the China re-opening trade ... they have also turned most positive on short-dated bonds, now preferring to hold them for price appreciation.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jun 20, 2023 2:28:02 GMT
On privacy of generative AI: if you are using a service like ChatGPT, then there may not be any privacy. But if one uses commercial paid LLM models from cloud providers like Google, Microsoft, and Amazon, then they do say they will not store your data and will not use it to train their models. But make sure you confirm with them, as T&C may be different for different products from the same provider. Models are trained on 2-year-old data sets, so they do not have new information. People are working on retrieval-augmented models to bridge the gap. LLMs (large language models) are being used as co-pilots. A few uses that I see currently:
- Software developers are using them to generate code, though the copyright and legal stuff is not very clear; then they review the generated code and add their own code to it.
- Software developers are using them to generate tests for the code.
- Generating initial documents, essays, and presentation outlines on any topic. An LLM generates a really good outline and starting doc.
- Question and answer from a text knowledge base.
Major problems are: 1) hallucination - they make up things 15-20% of the time; 2) they are very slow; 3) very expensive.
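The retrieval-augmented idea mentioned above can be sketched in Python. Everything here is made up for illustration - the documents, the naive keyword-overlap retrieval, and the prompt format; production systems use vector embeddings and then pass the assembled prompt to a real LLM:

```python
import re

# Toy sketch of retrieval augmentation: rather than relying on the
# model's stale training data, the app first retrieves relevant fresh
# documents and stuffs them into the prompt. Retrieval here is naive
# keyword overlap; real systems use vector embeddings.

DOCUMENTS = [  # hypothetical up-to-date knowledge base
    "PIMIX is the PIMCO Income Fund, institutional share class.",
    "The Panama Canal restricted ship drafts in 2023 due to drought.",
]

def words(text):
    """Lowercase alphanumeric word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs, k=1):
    """Rank docs by word overlap with the query; return the top k."""
    q = words(query)
    return sorted(docs, key=lambda d: -len(q & words(d)))[:k]

def build_prompt(query):
    context = "\n".join(retrieve(query, DOCUMENTS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How big is PIMIX?"))
```

Grounding the answer in retrieved text is also one of the main practical mitigations for the hallucination problem listed above.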
|
|
|
Post by bb2 on Jun 21, 2023 15:35:00 GMT
Besides the end users of AI, it seems to me there will be a lot of work to do in integrating AI into the existing computer systems of an organization, and existing software/consulting companies will do much of it, as well as in-house IT jobs being created. I hear an AI prompt engineer (how to ask it questions) makes over $300K. So, yes, PLTR and AI (to mention two AI-hyped outfits) will be in this business, but so will all the existing and new software and consulting companies. Not sure how AI integration is anything new for all these companies, as they'll just be doing the same thing they've always done; maybe more of it, but there will also be more competition. There are many ways ChatGPT can be further trained beyond its primary training to help with a specific task, and there are ways to extend the capabilities of ChatGPT by calling other systems for help, just like the API (Application Programming Interface) notion currently works. Ask ChatGPT if the Panama Canal is experiencing a drought right now; it will tell you it doesn't know anything past November 2021 and that current events are not part of its training. So, there could be - probably already exists - a way to add on current-events functionality.
|
|
|
Post by retiredat48 on Jun 21, 2023 15:54:54 GMT
Excerpt from Friday's Goldman Sachs report: "Generative AI may boost corporate earnings... AI adoption could boost productivity growth by 1.5 percentage points per year over a 10-year period. Goldman Sachs analysts estimate the S&P 500's compound annual growth rate in EPS over the next 20 years would be 5.4%, compared with the 4.9% their dividend discount model currently assumes. That means the S&P 500 fair value would be about 9% higher than it is today. “Increased... output could translate into increased revenues and earnings... even beyond those firms directly involved in the development of AI... Investors are still largely bearish.... Given worries about growth, most investors still favor holding developed market bonds and betting against developed market equities. “With growth factors outperforming in equity markets, the recent rally in the tech sectors was likely painful for many investors,.... investors have almost completely given up on the China re-opening trade... they have also turned most positive on short-dated bonds, now preferring to hold them for price appreciation.
richardsok, +1 +1
The vast majority of companies will be adopting AI... think accounting-type work. Can do with far fewer accountants. This will lead to increased productivity over the next decade... and that goes to the bottom line. IMO stocks of most companies will be a great place to be over the next ten years.
Further, we are at almost historic highs of investors being out of the market. Trillions of dollars sitting in Money Market Funds. This will find its way back into the market. The typical way is when investors consider the market safe... and/or FOMO. Each involves higher stock prices.
Lastly... investing 101. Stocks are the primary way to stay even with or beat inflation. Pricing power will prevail. One stays away at their own risk. Own the means of production... aka companies... aka stocks... aka stock mutual funds and ETFs! R48
|
|
|
Post by judger on Jun 21, 2023 19:36:38 GMT
On the privacy of generative AI: if you are using a service like ChatGPT, there may not be any privacy. But if one uses commercial paid LLM models from cloud providers like Google, Microsoft, and Amazon, they do say they will not store your data and will not use it to train their models. Make sure you confirm with them, as the T&Cs may differ between products from the same provider. Models are trained on 2-year-old data sets, so they do not have new information; people are working on retrieval-augmented models to bridge the gap. LLMs (large language models) are being used as co-pilots. A few uses I see currently:
- Software developers are using them to generate code (though the copyright and legal questions are not very clear), then reviewing the generated code and adding their own code to it.
- Software developers are using them to generate tests for code.
- Generating initial documents, essays, and presentation outlines on any topic. LLMs generate a really good outline and starting doc.
- Question answering over a text knowledge base.
Major problems: 1) hallucination - they make things up 15-20% of the time; 2) they are very slow; 3) they are very expensive.
Again, I know nothing about AI, but if I were still working I would be interested in creating "My AI": one based on continual use in a particular context (my job, my portfolio, my company/industry, etc.) that I keep building to make me more and more effective in my chosen context, not one based on a 2-year-old data set. Is this possible? Is it being considered?
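The retrieval-augmented idea judger mentions is roughly how a personal "My AI" on your own, current documents would work: keep your notes in a store, pull the most relevant ones for each question, and paste them into the model's prompt. Here is a deliberately toy sketch of that flow; the relevance scoring is crude word overlap (real systems use embeddings), and all the document text and function names are made up for illustration:

```python
# Toy retrieval-augmented generation (RAG) sketch: retrieve relevant
# personal docs, then build a prompt that grounds the LLM in them.

def score(question: str, doc: str) -> int:
    """Crude relevance score: count of shared lowercase words."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def build_prompt(question: str, knowledge_base: list[str], top_k: int = 2) -> str:
    """Pick the top_k most relevant docs and prepend them to the question."""
    ranked = sorted(knowledge_base, key=lambda d: score(question, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

# Hypothetical personal knowledge base, updated whenever you like --
# no retraining needed, which is the appeal over a stale training set.
my_docs = [
    "Portfolio: 60% stock index funds, 30% bonds, 10% money market.",
    "Watchlist: semiconductor and cloud companies tied to AI demand.",
    "Grocery list: milk, eggs, coffee.",
]
prompt = build_prompt("What is my current stock allocation?", my_docs)
# An actual LLM API call on `prompt` would go here.
```

The point of the sketch: the model itself stays generic, and "My AI" lives in the document store you keep current, which is exactly why this pattern is being pushed as the bridge over old training data.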
|
|
|
Post by bb2 on Jun 22, 2023 16:11:11 GMT
judger: Yes. Tune into the Bloomberg Tech Summit right now (I'm sure it will be available on the web later) to see lots of AI talk. A personal AI assistant seems to be a popular item.
|
|
|
Post by bb2 on Jun 29, 2023 17:42:32 GMT
I think glen is a bot. Careful with that link he left.
BTW, check out Pi AI if you're interested in messing around with another chat product being talked about - this one is general purpose like ChatGPT, with more training to be helpful with personal issues and to provide a more human-like response - humor, empathy, maybe a bit more naturally/informally conversational.
|
|
|
Post by chang on Jun 29, 2023 17:59:22 GMT
bb2, I think you are correct - the poster "glen" just joined and made a spammy-looking first post with a suspicious link. I erred on the side of caution and binned it.
|
|
|
Post by mnfish on Sept 20, 2023 12:28:22 GMT
From a recent Wells report on AI -
Gartner Hype Cycle steps:
1. Innovation Trigger
2. Peak of Inflated Expectations
3. Trough of Disillusionment
4. Slope of Enlightenment
5. Plateau of Productivity
Gartner research believes we are currently at step 2: "technology begins to bring in more users and suppliers," "hype begins to build, more media coverage," "still limited proof of the technology achieving expectations."
|
|
|
Post by anitya on Dec 30, 2023 21:29:35 GMT
I am using BARD less and less and am back to using Google search more.
I found BARD giving completely made-up information, and on one occasion it even told me I was an idiot when there was no context for it. I can deal with an occasional hallucination (just as with intelligent humans) but cannot deal with replies I would put someone in this forum on Ignore for. I sent those examples to a friend who is in charge of generative AI at his company.
A lot of companies are aggressively selling generative-AI-enabled products to their enterprise clients in the hope they will be able to make them better as they go along. We are in the "fake it until you make it" stage. Both sellers and buyers are behaving as if they do not want to be left behind. This charade will stop or slow if the economy hits a speed bump.
|
|
|
Post by gman57 on Dec 30, 2023 21:38:58 GMT
Yes, it's going to take a while. The Internet started in the early '80s, and the web browsers that made it useful to the masses didn't arrive until the mid '90s - so 10-15 years. Maybe AI can cut that time in half, or it may be so complicated it'll take longer, but it's going to take a while.
|
|
sam
Lieutenant
Posts: 123
|
Post by sam on Dec 31, 2023 0:37:34 GMT
I did not like Bard. Not sure about the paid tools many companies are starting to charge their customers for.
|
|
|
Post by bb2 on Jan 4, 2024 20:31:37 GMT
|
|
|
Post by chang on Feb 24, 2024 8:19:27 GMT
Has anyone read about the Google Gemini AI controversy? Apparently Google's AI program Gemini was programmed to create photos that only used minorities. So when asked to create a photo of one of the Founding Fathers, Revolutionary War soldiers, a 19th-century senator, a medieval knight, a Viking, a Pope, or a Nazi, it only produced photos with non-white people. Google has suspended the program in the wake of the controversy. I'm not looking for a political argument over "woke" ideology, but this shows one of AI's vulnerabilities - a classic case (imo) of technological development outpacing human wisdom. I'm not backing up the investment truck on AI. Most of the news stories broke on Thursday or Friday, and we didn't see any stock market reaction, so maybe the AI juggernaut is truly unstoppable. Some examples of Gemini images below.
|
|
|
Post by yogibearbull on Feb 24, 2024 11:58:16 GMT
In India, there is controversy because, on query, Google answers that the Indian Prime Minister is a fascist but the Chinese President is not. It received an "Indian Government notice" to make "corrections."
|
|
|
Post by bizman on Feb 24, 2024 16:22:08 GMT
My general sense is that the chatbots aren't ready for primetime, at least not to be truly relied upon to give you a definitive answer. It does seem that tools like GitHub Copilot are true game changers for programmers and have made them way more productive. We will see if MSFT's assortment of Copilots is ready for primetime in terms of summarizing meeting minutes, PDFs, and such. But my sense is that we are very early in a revolution that could be a gigantic positive for increased productivity, which is the magic sauce for economic growth and well-being. Time will tell, but one source I really enjoy for keeping up with the cutting edge of this whole thing is the new BG2 Podcast with Brad Gerstner of Altimeter (who I love) and Bill Gurley (of Benchmark, a legend), talking through the issues in terms of VC and public-market investing considerations, technical problems, potential breakthroughs, and the state of the art in AI and other adjacent technology developments. I prefer the YouTube version of the podcast, as Gerstner frequently shares charts that are worthwhile. Both of these guys are brilliant and have great insights.
Bg2 Pod: Bi-Weekly Open Source convo w/ Brad Gerstner (@altcap) & Bill Gurley (@bgurley) - Link
Episode 1: MANG VC Gone Wild, Can You Trust AI Valuations? & More - Jan 25, 2024 - Link
Episode 2: Software Valuations, Earnings, Immigration featuring Aaron Levie CEO of Box - Feb 8, 2024 - Link
Episode 3: NVDA, Chips, AI Compute Build Out, AI Impact on Big Tech - Feb 22, 2024 - Link
|
|