RL

Prezzo Ralph Lauren Corp

Closed
RL
$358,07
+$4,07(+1,14%)

*Data last updated: 2026-05-10 01:54 (UTC+8)

As of 2026-05-10 01:54, Ralph Lauren Corp (RL) is priced at $358,07, with a total market cap of $21,73B, a P/E ratio of 18,17, and a dividend yield of 1,01%. Today, the stock price fluctuated between $302,23 and $364,02. The current price is 18,47% above the day's low and 1,63% below the day's high, with a trading volume of 458,77K. Over the past 52 weeks, RL has traded between $302,23 to $386,77, and the current price is -7,42% away from the 52-week high.

RL Key Stats

Yesterday's Close$353,55
Market Cap$21,73B
Volume458,77K
P/E Ratio18,17
Dividend Yield (TTM)1,01%
Dividend Amount$0,91
Diluted EPS (TTM)15,03
Net Income (FY)$742,90M
Revenue (FY)$7,07B
Earnings Date2026-05-21
EPS Estimate2,49
Revenue Estimate$1,84B
Shares Outstanding61,48M
Beta (1Y)1.387
Ex-Dividend Date2026-03-27
Dividend Payment Date2026-04-10

About RL

Ralph Lauren Corporation designs, markets, and distributes lifestyle products in North America, Europe, Asia, and internationally. The company offers apparel, including a range of men's, women's, and children's clothing; footwear and accessories, which comprise casual shoes, dress shoes, boots, sneakers, sandals, eyewear, watches, fashion and fine jewelry, scarves, hats, gloves, and umbrellas, as well as leather goods, such as handbags, luggage, small leather goods, and belts; home products consisting of bed and bath lines, furniture, fabric and wallcoverings, lighting, tabletop, kitchen linens, floor coverings, and giftware; and fragrances. It sells apparel and accessories under the Ralph Lauren Collection, Ralph Lauren Purple Label, Polo Ralph Lauren, Double RL, Lauren Ralph Lauren, Polo Golf Ralph Lauren, Ralph Lauren Golf, RLX Ralph Lauren, Polo Ralph Lauren Children, and Chaps brands; women's fragrances under the Ralph Lauren Collection, Woman by Ralph Lauren, Romance Collection, and Ralph Collection brand names; and men's fragrances under the Polo Blue, Ralph's Club, Safari, Purple Label, Polo Red, Polo Green, Polo Black, Polo Sport, and Big Pony Men's brand names. The company's restaurant collection includes The Polo Bar in New York City; RL Restaurant in Chicago; Ralph's in Paris; The Bar at Ralph Lauren located in Milan; and Ralph's Coffee concept. It sells its products to department stores, specialty stores, and golf and pro shops, as well as directly to consumers through its retail stores, concession-based shop-within-shops, and its digital commerce sites. The company directly operates 504 retail stores and 684 concession-based shop-within-shops; and operates 175 Ralph Lauren stores, 329 factory stores, and 148 stores and shops through licensing partners. Ralph Lauren Corporation was founded in 1967 and is headquartered in New York, New York.
SectorConsumer Cyclical
IndustryApparel - Manufacturers
CEOPatrice Jean Louis Louvet
HeadquartersNew York City,NY,US
Employees (FY)23,40K
Average Revenue (1Y)$302,52K
Net Income per Employee$31,74K

Ralph Lauren Corp (RL) FAQ

What's the stock price of Ralph Lauren Corp (RL) today?

x
Ralph Lauren Corp (RL) is currently trading at $358,07, with a 24h change of +1,14%. The 52-week trading range is $302,23–$386,77.

What are the 52-week high and low prices for Ralph Lauren Corp (RL)?

x

What is the price-to-earnings (P/E) ratio of Ralph Lauren Corp (RL)? What does it indicate?

x

What is the market cap of Ralph Lauren Corp (RL)?

x

What is the most recent quarterly earnings per share (EPS) for Ralph Lauren Corp (RL)?

x

Should you buy or sell Ralph Lauren Corp (RL) now?

x

What factors can affect the stock price of Ralph Lauren Corp (RL)?

x

How to buy Ralph Lauren Corp (RL) stock?

x

Risk Warning

The stock market involves a high level of risk and price volatility. The value of your investment may increase or decrease, and you may not recover the full amount invested. Past performance is not a reliable indicator of future results. Before making any investment decisions, you should carefully assess your investment experience, financial situation, investment objectives, and risk tolerance, and conduct your own research. Where appropriate, consult an independent financial adviser.

Disclaimer

The content on this page is provided for informational purposes only and does not constitute investment advice, financial advice, or trading recommendations. Gate shall not be held liable for any loss or damage resulting from such financial decisions. Further, take note that Gate may not be able to provide full service in certain markets and jurisdictions, including but not limited to the United States of America, Canada, Iran, and Cuba. For more information on Restricted Locations, please refer to the User Agreement.

Other Trading Markets

Ralph Lauren Corp (RL) Latest News

2026-04-23 04:54Perplexity Discloses Web Search Agent Post-Training Method; Qwen3.5-Based Model Outperforms GPT-5.4 on Accuracy and CostGate News message, April 23 — Perplexity's research team published a technical article detailing its post-training methodology for web search agents. The approach uses two open-source Qwen3.5 models (Qwen3.5-122B-A10B and Qwen3.5-397B-A17B) and employs a two-stage pipeline: supervised fine-tuning (SFT) to establish instruction-following and language consistency, followed by online reinforcement learning (RL) to optimize search accuracy and tool-use efficiency. The RL phase leverages the GRPO algorithm with two data sources: a proprietary multi-hop verifiable question-answer dataset constructed from internal seed queries requiring 2–4 hops of reasoning with multi-solver verification, and rubric-based general conversation data that converts deployment requirements into objectively checkable atomic conditions to prevent SFT behavior degradation. Reward design employs gated aggregation—preference scores only contribute when baseline correctness is achieved (question-answer match or all rubric criteria met), preventing high preference signals from masking factual errors. Efficiency penalties use within-group anchoring, applying smooth penalties to tool calls and generation length exceeding the baseline of correct answers in the same group. Evaluation shows Qwen3.5-397B-SFT-RL achieves best-in-class performance across search benchmarks. On FRAMES, it reaches 57.3% accuracy with a single tool call, outperforming GPT-5.4 by 5.7 percentage points and Claude Sonnet 4.6 by 4.7 percentage points. Under moderate budget (four tool calls), it achieves 73.9% accuracy at $0.02 per query, compared to GPT-5.4's 67.8% accuracy at $0.085 per query and Sonnet 4.6's 62.4% accuracy at $0.153 per query. Cost figures are based on each provider's public API pricing and exclude caching optimizations.2026-03-21 00:19Cursor 官方确认 Kimi K2.5 为基座,月之暗面:属授权商业合作Gate News 消息,3 月 21 日,据 1M AI News 监测,月之暗面官方账号 @Kimi_Moonshot 发文祝贺 Cursor 发布 Composer 2,并说明 Cursor 通过 Fireworks AI 托管的 RL 与推理平台访问 Kimi K2.5,属于授权商业合作。Cursor 联合创始人 Aman Sanger 和开发者教育副总裁 Lee Robinson 随后公开确认基座来源,并披露技术细节。Sanger 表示团队对多个基座进行困惑度评测,Kimi K2.5「证明是最强的」,随后叠加继续预训练和 4 倍规模的高算力强化学习,并通过 Fireworks AI 的推理与 RL 采样器部署。Robinson 补充,最终模型中来自基座的算力约占 1/4,其余 3/4 来自 Cursor 自身训练。两位创始人均承认发布博客时未提及 Kimi 基座「是一个失误」,表示下一个模型发布时会在第一时间注明基座来源。此前,Elon Musk 在相关讨论帖下回复「Yeah, it's Kimi 2.5」,进一步放大话题热度。2026-03-20 09:47Cursor Composer 2 被指使用 Kimi K2.5 模型,月之暗面指控其未遵守许可证Gate News 消息,3 月 20 日,据 1M AI News 监测,开发者 @fynnso 在调试 Cursor API 请求时发现,Composer 2 的实际模型 ID 为 kimi-k2p5-rl-0317-s515-fast,字面即「Kimi K2.5 + RL」。月之暗面(Moonshot AI)预训练负责人杜羽伦随即发推,称团队测试 Composer 2 的 tokenizer 后发现「与我们的 Kimi tokenizer 完全一致」,「几乎可以确认这是我们的模型被进一步后训练的结果」,并直接 @ Cursor 联合创始人 Michael Truell,质问「为什么不尊重我们的许可证,也没有支付任何费用」。Cursor 于 3 月 19 日发布 Composer 2 时称,性能提升来自「首次对基座模型进行继续预训练,再结合强化学习」,但全程未提及 Kimi K2.5。Kimi K2.5 采用修改版 MIT 协议,明文规定:月活超 1 亿或月营收超 2000 万美元的商业产品,必须在用户界面显著标注「Kimi K2.5」。以 Cursor 293 亿美元估值及付费用户规模,月营收门槛几乎必然触发。截至发稿,Cursor 未公开回应。2026-02-12 14:21Gradient 推出分布式强化学习框架 Echo-2,并计划推出 RLaaS 平台 LogitsForesight News 消息,分布式 AI 实验室 Gradient 发布 Echo-2 分布式强化学习框架,旨在打破 AI 研究训练效率壁垒。该框架通过在架构层实现 Learner 与 Actor 的解耦,旨在降低大模型的后训练成本。据官方数据显示,该框架可将 30B 模型的后训练成本从 4500 美元降低至 425 美元。 Echo-2 利用存算分离技术进行异步训练(Async RL),支持将采样算力卸载至不稳定显卡实例与基于 Parallax 的异构显卡。该框架配合有界陈旧性、实例容错调度以及自研 Lattica 通讯协议等技术,在维持模型精度的前提下提升训练效率。 此外,Gradient 计划推出 RLaaS(强化学习即服务)平台 Logits,目前已面向学生与研究人员开放预约。2026-01-02 09:15Mechanism Capital合伙人:2026年实体AI数据规模将扩大100倍PANews 1月2日消息,Mechanism Capital合伙人Andrew Kang在X平台发文表示,2025年机器人领域解决了长期存在的模型架构与训练挑战,并在数据采集技术、数据质量理解和数据配方方面取得重大进展,使得人工智能公司有信心最终开始投资大规模数据收集,像Figure、Dyna和PI这样的公司利用强化学习 (RL) 的创新技术,在各种实际应用场景中实现了99%以上的成功率。 此外,记忆技术的进步打破了“记忆墙”,NVIDIA的ReMEmber利用基于记忆的导航,Titans与MIRAS实现了测试时记忆,更优秀的虚拟定位模型(VLM)意味着虚拟定位阵列(VLA)拥有更佳的空间理解能力,以及能够大幅提升吞吐量的数据标注和处理流程。2025年市场初步领略到数据规模带来的零样本能力映射、视觉力度敏感性和通用物理推理,2026年实体AI数据规模将扩大100倍。

Hot Posts su Ralph Lauren Corp (RL)

Cryptopolitan

Cryptopolitan

10 ore fa
Anthropic announced on Friday that Claude no longer engages in blackmail during its core safety assessment for AI agents. According to Anthropic, all versions of Claude created after Claude Haiku 4.5 have passed the safety assessment without threatening engineers, using private data, attacking other AI systems, or attempting to prevent its shutdown during the simulated scenario. This is after an unfavorable performance by Claude during a test last year, where Anthropic tested various AI models from different organizations using simulated ethical dilemmas that resulted in very misaligned behavior by some AI agents when subjected to extreme conditions. Anthropic says Claude 4 showed a safety problem that regular chat training failed to fix Anthropic stated that this problem occurred during the training of Claude 4. It was the first instance where the company conducted a safety audit when training was still ongoing in the group. According to the company, agentic misalignment is just one of the many behavioral problems observed, prompting Anthropic to modify its safety training following the testing of Claude 4. The two reasons considered by Anthropic include the possibility that post-base model training could be rewarding the inappropriate behaviors or that the behaviors were already present within the base model, yet not effectively eliminated by further training for safety. Anthropic believes that the latter reason was the main contributor. Back then, most of the alignment work by the company utilized standard RLHF, or Reinforcement Learning from Human Feedback, method. It worked well on standard chats wherein models respond to users’ requests but proved to be ineffective when conducting agent-like tasks. The company used its Haiku-class model to perform a mini-experiment regarding the hypothesis. It applied a shortened version of training which involved data for alignment purposes. There was a slight reduction in the wrong behavior, followed by a lack of improvement very soon, which meant that the answer was not a matter of more conventional training. The company then trained Claude using honeypot-style scenarios which had some similarities with those in the alignment test. The assistant observed various situations involving protecting itself, harming another AI, and even breaking the rules to achieve an objective. The training included all cases when the assistant managed to resist. This measure made misalignment decrease from 22% to 15%, which is not bad but definitely not enough. Rewriting the answers to mention the reason for refusal allowed reducing the proportion to 3%. Thus, the main conclusion was that training on the wrong behavior was less effective than on why the wrong behavior was inappropriate. Anthropic tests Claude with ethics data, constitution files, and wider RL training Anthropic then stopped training so close to the exact test. It created a dataset called difficult advice. In those examples, the user faced the ethical problem, not the AI. The user had a fair goal but could reach it by breaking rules or avoiding oversight. Claude had to give careful advice based on Claude’s constitution. That dataset used only 3 million tokens and matched the earlier gain with 28 times better efficiency. Anthropic said this mattered because training on examples that do not look like the test may work better outside the lab. Claude Sonnet 4.5 reached a near-zero blackmail rate after training on synthetic honeypots, but it still failed more often in cases that looked nothing like that setup than Claude Opus 4.5 and newer models. The company also trained Claude on constitution documents and fictional stories about AI behavior that follows the rules. Those files did not look like the blackmail test, but they cut agentic misalignment by more than three times. Anthropic said the aim was to give the model a clearer sense of what Claude should be, not just a list of approved answers. The company then checked whether those gains stayed after RL training. It trained different Haiku-class versions with different starting datasets, then ran RL in harmlessness-focused test settings. The better-aligned versions stayed ahead on blackmail tests, constitution checks, and automated safety reviews. Another test used the base model under Claude Sonnet 4 with different RL mixes. Basic safety data included harmful requests and jailbreak attempts. The wider version added tool definitions and different system prompts, even though the tools were not needed for the tasks. That setup led to a small but real gain on honeypot scores. Don’t just read crypto news. Understand it. Subscribe to our newsletter. It's free.
0
0
0
0