| row_id (int64, 0–48.4k) | init_message (string, 1–342k chars) | conversation_hash (string, 32 chars) | scores (dict) |
|---|---|---|---|
40,257
|
unexpected xhl ml error TypeError: NetworkError when attempting to fetch resource. v65e1559192.category.js:1:1924464
Content-Security-Policy warnings 2
This page uses the non standard property “zoom”. Consider using calc() in the relevant property values, or using “transform” along with “transform-origin: 0 0”. 2
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/js/v0c0d6631d0.runtime.js v0c0d6631d0.runtime.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-shared/js/locales/va6847436df.en.js va6847436df.en.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/v4d187d817f.manifest.js v4d187d817f.manifest.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/js/v65e1559192.category.js v65e1559192.category.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-lvlt.xhcdn.com/xh-images/components/header/components/header-nav/flirtify.svg flirtify.svg
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.css interstitial.ts.css
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.js interstitial.ts.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js v2d698b88dd.track.min.js
Some cookies are misusing the recommended “SameSite“ attribute 3
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js. (Reason: CORS request did not succeed). Status code: (null).
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/fonts/va35c60c17b.xh-icons.woff2 2:63:14
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/master.spot.js 2:151:865
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/va35c60c17b.xh-icons.woff2. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:0): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/va35c60c17b.xh-icons.woff2
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/fonts/v6b324555e8.xh-icons.woff v6b324555e8.xh-icons.woff
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.css interstitial.ts.css
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.js interstitial.ts.js
Loading failed for the <script> with source “https://lcdn.tsyndicate.com/sdk/v1/master.spot.js”. 2:1:1
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/v6b324555e8.xh-icons.woff. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:1): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/v6b324555e8.xh-icons.woff
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/fonts/v2f4cadb720.xh-icons.ttf v2f4cadb720.xh-icons.ttf
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://vrmedia-lm.xhcdn.com/promo/message/1485/65d344e633256.jpg 65d344e633256.jpg
Loading failed for the <script> with source “https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.js”. 2:2491:93
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/v2f4cadb720.xh-icons.ttf. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:2): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/v2f4cadb720.xh-icons.ttf
downloadable font: no supported format found (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:4) source: (end of source list)
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js v2d698b88dd.track.min.js
Uncaught (in promise) DOMException: A network error occurred.
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js. (Reason: CORS request did not succeed). Status code: (null).
<script> source URI is not allowed in this document: “https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js”. 2:2604:124
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/9cf2e.xh-icons.woff2. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:0): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/9cf2e.xh-icons.woff2
Uncaught ReferenceError: Track is not defined
<anonymous> https://xhamster.com/tags/gum/2:2604
2:2604:146
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-lvlt.xhcdn.com/xh-desktop/js/async-5be305149ce0a073721e9be8513c1781470c8b7d8196612bf1689e38d1cff96b-3501.js v0c0d6631d0.runtime.js:1:4235
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://4lke.online/api/request/v1/tag_1?language=en&spotPageType=categoryPage&spotType=widgetMiddleLine&locationCountry=de&webp=1&sort=rated&date=alltime&orientation=straight&page=2&tag=Gum&tagId=99719&statsUID=65da0796c65504.87099740e1e&exp={“15”%3A{“g”%3A"b"%2C"v"%3A39}%2C"68"%3A{“g”%3A"b"%2C"v"%3A2}%2C"2602"%3A{“g”%3A"a"%2C"v"%3A1}%2C"7013"%3A{“g”%3A"a"%2C"v"%3A15}%2C"8009"%3A{“g”%3A"b"%2C"v"%3A3}}&thumbSize=mmsmall v65e1559192.category.js:1:1912006
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/dccf7.xh-icons.woff. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:1): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/dccf7.xh-icons.woff
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://4lke.online/api/request/v1/tag_1?language=en&spotPageType=categoryPage&spotType=widgetMiddleLine&locationCountry=de&webp=1&sort=rated&date=alltime&orientation=straight&page=2&tag=Gum&tagId=99719&statsUID=65da0796c65504.87099740e1e&exp={“15”%3A{“g”%3A"b"%2C"v"%3A39}%2C"68"%3A{“g”%3A"b"%2C"v"%3A2}%2C"2602"%3A{“g”%3A"a"%2C"v"%3A1}%2C"7013"%3A{“g”%3A"a"%2C"v"%3A15}%2C"8009"%3A{“g”%3A"b"%2C"v"%3A3}}&thumbSize=mmsmall. (Reason: CORS request did not succeed). Status code: (null).
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://www.googletagmanager.com/gtm.js?id=GTM-TLDPV3J v65e1559192.category.js:1:2215773
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-lvlt.xhcdn.com/xh-desktop/js/async-2ee1d1cbaf5a20079566c6c92401c8902379b844e8fd913f184145c545577150-3499.js v0c0d6631d0.runtime.js:1:4235
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-lvlt.xhcdn.com/xh-desktop/js/async-932be89debc2915d0ee7d1f79fff4528e42d24cf8d6a2324d0e6f3274ca9ff42-3503.js v0c0d6631d0.runtime.js:1:4235
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/fc264.xh-icons.ttf. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:2): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/fc264.xh-icons.ttf
downloadable font: no supported format found (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:5) source: (end of source list)
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://collector.xhamster.com/?log=stats-beta&stats_tkn=3%3A3%3A1708788861%3Ac80d60c7208b8eef1ee74f9e10861de798bf7d47. (Reason: CORS request did not succeed). Status code: (null).
TypeError: NetworkError when attempting to fetch resource. v65e1559192.category.js:1:2162424
The errors you’ve shared point to several distinct issues on the page you’re accessing. To make them easier to understand, I’ll break them down by category:
1. CSS, Content Security Policy (CSP), and Referrer Policy Warnings:
- The page uses the non-standard CSS property zoom. This is a CSS compatibility warning rather than a CSP violation; the console itself suggests calc() in the relevant values, or transform with transform-origin: 0 0, as replacements.
- Several requests declare the less restricted referrer policy unsafe-url, which Firefox ignores for cross-site requests and replaces with a stricter default. The warning is advisory: a more secure referrer policy should be configured on the page (see the header sketch after this list).
2. Cross-Origin Resource Sharing (CORS) Errors:
- There are multiple CORS errors. This means that resources loaded from different origins (domains) are being blocked due to browser security policies. To resolve this, the server hosting the external resources needs to include appropriate CORS headers that allow these resources to be loaded by the page.
3. Font Loading Issues:
- The page has several errors relating to font loading. It seems that web fonts are being blocked due to CORS restrictions or incorrect paths. Hosting the fonts correctly and setting CORS headers could resolve these problems.
4. Cookie Warnings:
- Some cookies are set with a missing or invalid SameSite attribute. SameSite controls whether a cookie is sent on cross-site requests; a cookie intended for third-party contexts must be declared SameSite=None; Secure, and modern browsers increasingly treat anything else, or nothing at all, as SameSite=Lax (see the header sketch after this list).
5. JavaScript Errors:
- There are undefined references and failed script loads breaking functionality on the page. Here the two are linked: Track is not defined precisely because the track.min.js script that would define it was itself blocked from loading, per the CSP ("source URI is not allowed in this document") and CORS failures in the lines above.
6. Network Errors:
- You have a TypeError related to a network error when fetching a resource. This could mean there’s a problem with the connection to the server or that the server is not responding correctly to the fetch request.
7. General Observations:
- Judging by the domain names and URLs in the errors, the page belongs to a media site, likely hosting adult content.
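Both the referrer-policy and SameSite warnings are fixed with response headers. Below is a minimal sketch, assuming a small Node front purely so the headers are easy to show; the cookie name and values are invented for illustration, not taken from the site:

```typescript
// headers-sketch.ts: illustrative only, not the site's actual configuration.
import { createServer } from "node:http";

createServer((_req, res) => {
  // Send only the origin on cross-site requests instead of the full URL that
  // "unsafe-url" would leak; this silences the Referrer Policy warning.
  res.setHeader("Referrer-Policy", "strict-origin-when-cross-origin");
  // Declare SameSite explicitly; a cookie that must work in third-party
  // contexts would instead use "SameSite=None; Secure".
  res.setHeader(
    "Set-Cookie",
    "session_id=abc123; SameSite=Lax; Secure; HttpOnly; Path=/" // hypothetical cookie
  );
  res.end("ok");
}).listen(8080);
```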
To resolve these issues, the web developer(s) of the page would need to:
- Correctly set CSP and Referrer Policy headers to balance security and functionality.
- Ensure that CORS headers are set up properly on the server side for external resources (a minimal sketch follows this list).
- Host font files correctly and set up CORS headers for them if they are to be accessed from external domains.
- Audit and correct JavaScript files to ensure they are being loaded properly and that all references are defined.
- Review and set cookie attributes correctly to comply with best practices.
- Check the network to ensure there’s a reliable connection to the servers hosting the resources.
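For the blocked fonts and scripts, the static hosts (static-ah.xhcdn.com and the others named above) would have to grant the page’s origin via Access-Control-Allow-Origin. Here is a minimal sketch of that server-side fix, again assuming a Node front in place of whatever CDN configuration the site really uses; the allowed origin is hypothetical:

```typescript
// cors-sketch.ts: a sketch of the CORS grant the font/script errors call for.
import { createServer } from "node:http";

const ALLOWED_ORIGIN = "https://example.com"; // the requesting page's origin (assumed)

createServer((req, res) => {
  // Fonts loaded via @font-face and scripts fetched in CORS mode both require
  // the static host to name the requesting origin explicitly.
  res.setHeader("Access-Control-Allow-Origin", ALLOWED_ORIGIN);
  res.setHeader("Vary", "Origin"); // keep shared caches from mixing origins up
  if (req.method === "OPTIONS") {
    res.setHeader("Access-Control-Allow-Methods", "GET, HEAD"); // minimal preflight answer
    res.writeHead(204);
    res.end();
    return;
  }
  res.setHeader("Content-Type", "font/woff2"); // illustrative; real bytes would go here
  res.end();
}).listen(8081);
```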
If you are a user encountering these issues on a website you do not control, there is little you can do besides reporting the issues to the website’s support team. If you are a web developer, you’ll need to address these issues in your code and server configuration.
this site works weird, it looks as if it’s using all possible caching mechanisms in the firefox browser. some of it in "_cfg:882643c0ea2ddda2c343330c87fa3f61" etc.
It sounds like you’re describing symptoms of aggressive caching strategies employed by the website you’re visiting. Caching is a common technique to improve loading times and reduce server load by storing copies of files locally in the user’s web browser. Whenever the browser requests the same file again, it can retrieve it from the cache rather than downloading it from the server.
Aggressive caching can involve multiple layers and strategies, such as:
1. Browser caching: Instructing the browser to store files like stylesheets, scripts, and images locally for a set period.
2. Service Workers: JavaScript workers that the site registers. They can intercept network requests, cache or retrieve resources from the cache, serve offline content, and even perform updates in the background.
3. LocalStorage and IndexedDB: Web storage solutions for storing data like settings, user preferences, or even offline data for web apps.
4. ETags and validation tokens: Mechanisms that servers use to validate cached resources, avoiding a re-download when the content hasn’t changed (sketched after this list).
5. URL fingerprinting: A technique where asset URLs (JavaScript/CSS files and the like) carry a unique fingerprint, often a hash or version number in the filename or path, that changes whenever the file changes. The new, unique URL guarantees users always fetch the most up-to-date version.
6. CDN caching: Content delivery networks cache content at multiple points of presence globally to serve users from the nearest location, thus reducing latency.
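As a concrete illustration of items 1 and 4, here is a minimal sketch of a server that marks an asset as long-lived in the browser cache and answers ETag revalidation with 304 Not Modified; the stylesheet body is a stand-in, not one of the site’s real assets:

```typescript
// caching-sketch.ts: browser caching plus ETag revalidation, illustrative only.
import { createServer } from "node:http";
import { createHash } from "node:crypto";

const body = "body { color: black }"; // stand-in for a real stylesheet
const etag = `"${createHash("sha1").update(body).digest("hex")}"`;

createServer((req, res) => {
  res.setHeader("Cache-Control", "public, max-age=31536000, immutable");
  res.setHeader("ETag", etag);
  if (req.headers["if-none-match"] === etag) {
    res.writeHead(304); // the cached copy is still valid; send no body
    res.end();
    return;
  }
  res.setHeader("Content-Type", "text/css");
  res.end(body);
}).listen(8082);
```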
If a website stores configuration under keys like the one you described ("_cfg:882643c0ea2ddda2c343330c87fa3f61"), it’s likely using some form of fingerprinting or versioning for its cached assets. The 32-character hex string has the shape of an MD5-style content hash that changes whenever the configuration or file contents change. By changing the hash, the site can push updated content without clearing the browser’s cache manually, since the new key makes the browser treat it as a completely different resource. A sketch of how such a fingerprint is typically produced follows.
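This is a guess at the scheme rather than the site’s actual build step; MD5 is assumed purely because it produces the same 32-hex-character shape seen in the key:

```typescript
// fingerprint-sketch.ts: how a versioned key like "_cfg:<hash>" is typically made.
import { createHash } from "node:crypto";

function fingerprint(contents: string): string {
  return createHash("md5").update(contents).digest("hex"); // 32 hex characters
}

const config = JSON.stringify({ locale: "en", page: "category" }); // made-up config
console.log(`_cfg:${fingerprint(config)}`);
// Any change to the config yields a new key, so a stale cached copy is simply
// never looked up again; no manual cache clearing is required.
```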
While caching provides many performance benefits, overuse can sometimes cause issues when a site needs to update content or when things go wrong, which could lead to the kind of weird behavior you’re experiencing. Clearing your browser’s cache might temporarily fix issues, but for longer-term solutions, the site’s developers may need to optimize their caching strategies or have logic in place to handle cache invalidation when updates occur. If you continue experiencing problems, reaching out to the website’s support team with specific details may also help them address any underlying issues.
no, I’m using tor browser which supposedly shouldn’t store that cache. this site is using tons of cookies for tracking things and user id etc. when you try to delete all cookies and session ones through dev tools, some cache is still left in the browser, and only deleting all history through browser settings gets rid of all cache and cookies from this site. any ideas?
The Tor Browser is designed for enhanced privacy and anonymity and is built on Firefox. It has various features that make it less susceptible to tracking, such as disabling third-party cookies by default, preferring HTTPS, and clearing cookies when the browser is closed. However, some persistence mechanisms may still let websites store data on your system or maintain state across browsing sessions.
Here’s what might be happening:
1. HTTP Strict Transport Security (HSTS) Supercookies:
Websites can store identifiers in the browser’s HSTS cache, which isn’t cleared with regular cookies and cache. Tor Browser, however, periodically clears its HSTS state, but some persistence could occur between those clears.
2. Service Workers:
Service Workers, which can cache resources for offline use, may be active. These are not cleared when you clear cookies but should be cleared when clearing the entire browsing history.
3. LocalStorage & IndexedDB:
Data stored in LocalStorage or IndexedDB is website-specific data storage. This data is not removed when cookies are cleared; it usually requires clearing the complete browsing data from the browser’s settings.
4. Persistent Cookies in Extensions/Add-Ons:
If you have any extensions or add-ons installed that are not included with the Tor Browser by default, they could potentially store cookies or data.
5. Browser Fingerprinting:
While not a form of caching, browser fingerprinting can allow a website to recognize you based on your browser’s unique characteristics. The Tor Browser includes various protections against fingerprinting, but it’s an ongoing battle.
6. Session Restore:
Firefox-based browsers, including Tor Browser, have a session restore feature that may save the session’s state, potentially across restarts if not configured properly.
7. First-Party Isolation:
Tor Browser includes First-Party Isolation, which prevents cross-site tracking by scoping all browser identifiers to the domain of the site you are currently visiting. Its per-site scoping can, however, make it confusing to tell which site-specific data has actually been cleared.
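To see what a specific site has left behind (point 3 above), you can paste something like the following into the dev-tools console while that site is open. This is a sketch: indexedDB.databases() is not implemented in every Firefox/Tor Browser release, hence the guard.

```typescript
// storage-sketch.ts: inspect and clear site-scoped storage from the console.
console.log("localStorage keys:", Object.keys(localStorage));
localStorage.clear();
sessionStorage.clear();

if (typeof indexedDB.databases === "function") {
  // List this origin's IndexedDB databases and delete each one.
  indexedDB.databases().then((dbs) =>
    dbs.forEach((db) => db.name && indexedDB.deleteDatabase(db.name))
  );
} else {
  console.log("indexedDB.databases() unavailable; use the Storage tab instead.");
}
```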
To ensure that all traces are removed:
- Always close Tor Browser when finished with a session, as this should clear all cookies and potentially unload any service workers.
- Clear all browsing history, including offline data and cached web content. You can find this option under “Privacy & Security” in the Tor Browser settings.
- Do not install any additional extensions or add-ons, as they could interfere with the privacy features of Tor Browser.
- Keep your Tor Browser updated to ensure you have the latest privacy protections.
If you want to guarantee no persistent data storage, consider using the Tor Browser on a bootable live USB operating system, such as Tails. Tails is designed to leave no trace on the machine after shutdown.
you are generating uninformative bullshit, because tor browser has Service Workers disabled.
Apologies for the confusion. You’re right that the Tor Browser has Service Workers disabled to prevent potential privacy issues.
Even though Tor Browser is designed not to store persistent information, there are still a few other reasons you might see persistent behavior or “leftover” cache after you believe you’ve cleared all cookies and session data:
1. Tor State Directory: Tor Browser keeps a state directory which contains information about the Tor client’s state. This is used to maintain some degree of persistency between sessions, and although it shouldn’t contain any personal browsing information, it can maintain certain Tor-related settings.
2. NoScript & HTTPS Everywhere Data: Tor Browser comes with additional extensions like NoScript and HTTPS Everywhere. These extensions have their own settings and data which may not be immediately cleared when you remove cookies and cache through the browser.
3. Profile Folder: Firefox-based browsers, including Tor Browser, store user data in a profile folder. While Tor Browser is meant to not keep any personal data after closing, there could be scenarios where some data is not deleted as expected.
4. Disk Cache: Even though Tor Browser tries to keep as much as it can in memory, there’s a possibility that the disk cache is not entirely cleared. This could happen due to unexpected shutdowns or software bugs.
To clear all possible caches and data in Tor Browser:
- Use the “New Identity” feature: Click on the hamburger menu (≡) in the top right corner, then click on “New Identity”. This will close all tabs and windows, clear all private information such as cookies and browsing history, and use a new Tor circuit for all connections.
- Delete the Tor Browser Folder: If you want to completely reset the Tor Browser, you can close the browser and manually delete the Tor Browser’s directory from your file system and then reinstall it.
- For an even stronger guarantee against persistent data storage, as mentioned in the previous response, consider using a live operating system like Tails which is designed to leave no trace after the session ends.
If you are still experiencing issues, it might be worth checking any recent changes to the Tor Browser that could affect data storage, or even reporting an issue to the Tor Project, as this behavior could potentially be a bug or regression.
is there any hard reset in tor browser to turn it all to factory defaults?
Tor Browser does not have a built-in “hard reset” feature that restores it to factory defaults with a single click. However, you can manually achieve a complete reset by deleting the Tor Browser and its associated data, and then downloading and re-installing it. Here’s how to do that:
### For Windows:
1. Close Tor Browser if it’s running.
2. Go to the folder where the Tor Browser is installed (usually it’s on your Desktop or in your Downloads folder).
3. Delete the Tor Browser folder.
4. Empty your Recycle Bin to ensure the files are completely removed.
5. Download the latest version of Tor Browser from the official source: https://www.torproject.org/download/
6. Install Tor Browser by running the downloaded file.
### For macOS:
1. Close Tor Browser if it’s running.
2. Open the Applications folder.
3. Drag the Tor Browser application to the Trash.
4. Empty the Trash to clear the files from your system.
5. Download the latest version of Tor Browser from the official source: https://www.torproject.org/download/
6. Open the downloaded .dmg file and drag the Tor Browser application into your Applications folder.
### For Linux:
1. Close Tor Browser if it’s running.
2. Delete the tor-browser_en-US (or the respective localized folder) in your home directory or wherever you extracted it.
3. Empty the Trash or delete the files from the command line to ensure they are removed.
4. Download the latest version of Tor Browser from the official source: https://www.torproject.org/download/
5. Extract the downloaded file and start the Tor Browser (a script automating these steps is sketched below).
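If you prefer to script the Linux steps, here is a hypothetical helper that does the same thing: it wipes the extracted folder (profile, caches, and Tor state included) and unpacks a tarball you have already downloaded from torproject.org. The install path and the tarball argument are assumptions; adjust both to match your setup.

```typescript
// tor-reset-sketch.ts: a hypothetical Linux "hard reset" mirroring the steps above.
import { existsSync, rmSync } from "node:fs";
import { execFileSync } from "node:child_process";
import { homedir } from "node:os";
import { join } from "node:path";

const installDir = join(homedir(), "tor-browser"); // assumed extraction path
const tarball = process.argv[2]; // e.g. a downloaded tor-browser-linux-x86_64-<version>.tar.xz

if (existsSync(installDir)) {
  rmSync(installDir, { recursive: true, force: true }); // removes profile, cache, and state
  console.log(`Removed ${installDir}`);
}

if (tarball) {
  execFileSync("tar", ["-xf", tarball, "-C", homedir()]); // fresh, factory-default copy
  console.log("Re-extracted a clean Tor Browser.");
} else {
  console.log("Pass the path to the downloaded tarball to re-install.");
}
```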
Please note that you might want to back up bookmarks or any necessary data before doing this, as this process will remove all Tor Browser data from your system. Additionally, you might want to consider using a system cleaner to remove any residual files that might not have been deleted during the manual process, although in most cases, deleting the browser folder should suffice for a complete reset.
“Tor Browser does not have a built-in “hard reset” feature that restores it to factory defaults”. then let’s build it in linux, because your reinstalling method is lame.
|
aa1753e05dd364c078f10922f22d731b
|
{
"intermediate": 0.214094877243042,
"beginner": 0.43279796838760376,
"expert": 0.35310715436935425
}
|
40,258
|
unexpected xhl ml error TypeError: NetworkError when attempting to fetch resource. v65e1559192.category.js:1:1924464
Content-Security-Policy warnings 2
This page uses the non standard property “zoom”. Consider using calc() in the relevant property values, or using “transform” along with “transform-origin: 0 0”. 2
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/js/v0c0d6631d0.runtime.js v0c0d6631d0.runtime.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-shared/js/locales/va6847436df.en.js va6847436df.en.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/v4d187d817f.manifest.js v4d187d817f.manifest.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/js/v65e1559192.category.js v65e1559192.category.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-lvlt.xhcdn.com/xh-images/components/header/components/header-nav/flirtify.svg flirtify.svg
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.css interstitial.ts.css
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.js interstitial.ts.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js v2d698b88dd.track.min.js
Some cookies are misusing the recommended “SameSite“ attribute 3
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js. (Reason: CORS request did not succeed). Status code: (null).
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/fonts/va35c60c17b.xh-icons.woff2 2:63:14
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/master.spot.js 2:151:865
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/va35c60c17b.xh-icons.woff2. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:0): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/va35c60c17b.xh-icons.woff2
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/fonts/v6b324555e8.xh-icons.woff v6b324555e8.xh-icons.woff
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.css interstitial.ts.css
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.js interstitial.ts.js
Loading failed for the <script> with source “https://lcdn.tsyndicate.com/sdk/v1/master.spot.js”. 2:1:1
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/v6b324555e8.xh-icons.woff. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:1): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/v6b324555e8.xh-icons.woff
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/fonts/v2f4cadb720.xh-icons.ttf v2f4cadb720.xh-icons.ttf
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://vrmedia-lm.xhcdn.com/promo/message/1485/65d344e633256.jpg 65d344e633256.jpg
Loading failed for the <script> with source “https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.js”. 2:2491:93
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/v2f4cadb720.xh-icons.ttf. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:2): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/v2f4cadb720.xh-icons.ttf
downloadable font: no supported format found (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:4) source: (end of source list)
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js v2d698b88dd.track.min.js
Uncaught (in promise) DOMException: A network error occurred.
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js. (Reason: CORS request did not succeed). Status code: (null).
<script> source URI is not allowed in this document: “https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js”. 2:2604:124
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/9cf2e.xh-icons.woff2. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:0): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/9cf2e.xh-icons.woff2
Uncaught ReferenceError: Track is not defined
<anonymous> https://xhamster.com/tags/gum/2:2604
2:2604:146
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-lvlt.xhcdn.com/xh-desktop/js/async-5be305149ce0a073721e9be8513c1781470c8b7d8196612bf1689e38d1cff96b-3501.js v0c0d6631d0.runtime.js:1:4235
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://4lke.online/api/request/v1/tag_1?language=en&spotPageType=categoryPage&spotType=widgetMiddleLine&locationCountry=de&webp=1&sort=rated&date=alltime&orientation=straight&page=2&tag=Gum&tagId=99719&statsUID=65da0796c65504.87099740e1e&exp={“15”%3A{“g”%3A"b"%2C"v"%3A39}%2C"68"%3A{“g”%3A"b"%2C"v"%3A2}%2C"2602"%3A{“g”%3A"a"%2C"v"%3A1}%2C"7013"%3A{“g”%3A"a"%2C"v"%3A15}%2C"8009"%3A{“g”%3A"b"%2C"v"%3A3}}&thumbSize=mmsmall v65e1559192.category.js:1:1912006
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/dccf7.xh-icons.woff. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:1): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/dccf7.xh-icons.woff
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://4lke.online/api/request/v1/tag_1?language=en&spotPageType=categoryPage&spotType=widgetMiddleLine&locationCountry=de&webp=1&sort=rated&date=alltime&orientation=straight&page=2&tag=Gum&tagId=99719&statsUID=65da0796c65504.87099740e1e&exp={“15”%3A{“g”%3A"b"%2C"v"%3A39}%2C"68"%3A{“g”%3A"b"%2C"v"%3A2}%2C"2602"%3A{“g”%3A"a"%2C"v"%3A1}%2C"7013"%3A{“g”%3A"a"%2C"v"%3A15}%2C"8009"%3A{“g”%3A"b"%2C"v"%3A3}}&thumbSize=mmsmall. (Reason: CORS request did not succeed). Status code: (null).
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://www.googletagmanager.com/gtm.js?id=GTM-TLDPV3J v65e1559192.category.js:1:2215773
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-lvlt.xhcdn.com/xh-desktop/js/async-2ee1d1cbaf5a20079566c6c92401c8902379b844e8fd913f184145c545577150-3499.js v0c0d6631d0.runtime.js:1:4235
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-lvlt.xhcdn.com/xh-desktop/js/async-932be89debc2915d0ee7d1f79fff4528e42d24cf8d6a2324d0e6f3274ca9ff42-3503.js v0c0d6631d0.runtime.js:1:4235
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/fc264.xh-icons.ttf. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:2): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/fc264.xh-icons.ttf
downloadable font: no supported format found (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:5) source: (end of source list)
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://collector.xhamster.com/?log=stats-beta&stats_tkn=3%3A3%3A1708788861%3Ac80d60c7208b8eef1ee74f9e10861de798bf7d47. (Reason: CORS request did not succeed). Status code: (null).
TypeError: NetworkError when attempting to fetch resource. v65e1559192.category.js:1:2162424
The errors you’ve shared suggest that there are several issues happening with a webpage you are accessing. To help you better understand them, I’ll break them down into categories:
1. Content Security Policy (CSP) and Referrer Policy Warnings:
- The page is trying to use outdated or non-standard CSS properties like zoom. Modern CSS alternatives like transform should be used instead.
- The referrer policy of some requests is less restricted (unsafe-url) and is ignored. It’s a warning that a more secure referrer policy could be in place.
2. Cross-Origin Resource Sharing (CORS) Errors:
- There are multiple CORS errors. This means that resources loaded from different origins (domains) are being blocked due to browser security policies. To resolve this, the server hosting the external resources needs to include appropriate CORS headers that allow these resources to be loaded by the page.
3. Font Loading Issues:
- The page has several errors relating to font loading. It seems that web fonts are being blocked due to CORS restrictions or incorrect paths. Hosting the fonts correctly and setting CORS headers could resolve these problems.
4. Cookie Warnings:
- Some set cookies do not have a correct SameSite attribute, which could lead to warning messages. The SameSite attribute is important for controlling how cookies are shared with different sites (e.g. for third-party contexts).
5. JavaScript Errors:
- The errors show that there are undefined references (e.g., Track is not defined) and failures loading JavaScript files, causing functionality issues on the page. These could be due to mistyped URLs, incorrect paths, or CORS issues as well.
6. Network Errors:
- You have a TypeError related to a network error when fetching a resource. This could mean there’s a problem with the connection to the server or that the server is not responding correctly to the fetch request.
7. General Observations:
- It appears that the errors are coming from a website that possibly hosts media content, possibly adult content, as indicated by the domain names and URLs in the errors.
To resolve these issues, the web developer(s) of the page would need to:
- Correctly set CSP and Referrer Policy headers to balance security and functionality.
- Ensure that CORS headers are set up properly on the server side for external resources.
- Host font files correctly and set up CORS headers for them if they are to be accessed from external domains.
- Audit and correct JavaScript files to ensure they are being loaded properly and that all references are defined.
- Review and set cookie attributes correctly to comply with best practices.
- Check the network to ensure there’s a reliable connection to the servers hosting the resources.
If you are a user encountering these issues on a website you do not control, there is little you can do besides reporting the issues to the website’s support team. If you are a web developer, you’ll need to address these issues in your code and server configuration.
this site works weird, it looks as it using all possible caching mechanisms in firefox browser. some of it in "_cfg:“882643c0ea2ddda2c343330c87fa3f61"” etc.
It sounds like you’re describing symptoms of aggressive caching strategies employed by the website you’re visiting. Caching is a common technique to improve loading times and reduce server load by storing copies of files locally in the user’s web browser. Whenever the browser requests the same file again, it can retrieve it from the cache rather than downloading it from the server.
Aggressive caching can involve multiple layers and strategies, such as:
1. Browser caching: Instructing the browser to store files like stylesheets, scripts, and images locally for a set period.
2. Service Workers: JavaScript workers that the site registers. They can intercept network requests, cache or retrieve resources from the cache, serve offline content, and even perform updates in the background.
3. LocalStorage and IndexedDB: Web storage solutions for storing data like settings, user preferences, or even offline data for web apps.
4. ETags and validation tokens: Mechanisms that servers use to validate cached resources, which avoids re-downloading if the content hasn’t changed.
5. URL fingerprinting: Technique where URLs of assets like JavaScript/CSS files have a unique fingerprint (often a hash or version number in the filename/path) which changes every time the file changes. It ensures that users always get the most up-to-date version of the file due to the new, unique URL.
6. CDN caching: Content delivery networks cache content at multiple points of presence globally to serve users from the nearest location, thus reducing latency.
If a website seems to be storing configurations with hashes like the one you described ("_cfg:"882643c0ea2ddda2c343330c87fa3f61"), it’s likely using some form of fingerprinting or versioning for its cached assets. The alphanumeric string appears to be a hash that changes when the configuration or the content of the files change. By changing this hash, the website can update the content on the user’s side without having to clear the browser’s cache manually since the new hash will make the browser treat it as a completely different file.
While caching provides many performance benefits, overuse can sometimes cause issues when a site needs to update content or when things go wrong, which could lead to the kind of weird behavior you’re experiencing. Clearing your browser’s cache might temporarily fix issues, but for longer-term solutions, the site’s developers may need to optimize their caching strategies or have logic in place to handle cache invalidation when updates occur. If you continue experiencing problems, reaching out to the website’s support team with specific details may also help them address any underlying issues.
no, I’m using tor browser which supposedly shouldn’t store that chache. this site using tons of cookies for tracking things and user id and etc. when you trying to delete all cookies and session ones through dev tools, some chache still left there in browser, and only delete all history through browser settings helps to get rid of all chache and cookies from this site. any ideas?
The Tor Browser is designed for enhanced privacy and anonymity and utilizes the Firefox browser as its base. It has various features that make it less susceptible to tracking, such as disabling third-party cookies by default, using HTTPS whenever possible, and clearing cookies when the browser is closed. However, some persistent mechanisms may still allow websites to store data on your system or maintain your session across different browsing periods.
Here’s what might be happening:
1. HTTP Strict Transport Security (HSTS) Supercookies:
Websites can store identifiers in the browser’s HSTS cache, which isn’t cleared with regular cookies and cache. Tor Browser, however, periodically clears its HSTS state, but some persistence could occur between those clears.
2. Service Workers:
Service Workers, which can cache resources for offline use, may be active. These are not cleared when you clear cookies but should be cleared when clearing the entire browsing history.
3. LocalStorage & IndexedDB:
Data stored in LocalStorage or IndexedDB is website-specific data storage. This data is not removed when cookies are cleared; it usually requires clearing the complete browsing data from the browser’s settings.
4. Persistent Cookies in Extensions/Add-Ons:
If you have any extensions or add-ons installed that are not included with the Tor Browser by default, they could potentially store cookies or data.
5. Browser Fingerprinting:
While not a form of caching, browser fingerprinting can allow a website to recognize you based on your browser’s unique characteristics. The Tor Browser includes various protections against fingerprinting, but it’s an ongoing battle.
6. Session Restore:
Firefox-based browsers, including Tor Browser, have a session restore feature that may save the session’s state, potentially across restarts if not configured properly.
7. First-Party Isolation:
Tor Browser includes a feature called First-Party Isolation, which prevents cross-site tracking by ensuring that all browser identifiers are scoped to the URL’s domain you are currently visiting. But if not interacted with correctly, this might cause confusion when attempting to clear site-specific data.
To ensure that all traces are removed:
- Always close Tor Browser when finished with a session, as this should clear all cookies and potentially unload any service workers.
- Clear all browsing history, including offline data and cached web content. You can find this option under “Privacy & Security” in the Tor Browser settings.
- Do not install any additional extensions or add-ons, as they could interfere with the privacy features of Tor Browser.
- Keep your Tor Browser updated to ensure you have the latest privacy protections.
If you want to guarantee no persistent data storage, consider using the Tor Browser on a bootable live USB operating system, such as Tails. Tails is designed to leave no trace on the machine after shutdown.
you are generating uninformative bullshit, because tor browser has Service Workers disabled.
Apologies for the confusion. You’re right that the Tor Browser has Service Workers disabled to prevent potential privacy issues.
In the context of Tor Browser, which is designed to not store persistent information, there are still a few other reasons why you might experience persistent behavior or “leftover” cache after you think you’ve cleared all cookies and session data:
1. Tor State Directory: Tor Browser keeps a state directory which contains information about the Tor client’s state. This is used to maintain some degree of persistency between sessions, and although it shouldn’t contain any personal browsing information, it can maintain certain Tor-related settings.
2. NoScript & HTTPS Everywhere Data: Tor Browser comes with additional extensions like NoScript and HTTPS Everywhere. These extensions have their own settings and data which may not be immediately cleared when you remove cookies and cache through the browser.
3. Profile Folder: Firefox-based browsers, including Tor Browser, store user data in a profile folder. While Tor Browser is meant to not keep any personal data after closing, there could be scenarios where some data is not deleted as expected.
4. Disk Cache: Even though Tor Browser tries to keep as much as it can in memory, there’s a possibility that the disk cache is not entirely cleared. This could happen due to unexpected shutdowns or software bugs.
To clear all possible caches and data in Tor Browser:
- Use the “New Identity” feature: Click on the hamburger menu (≡) in the top right corner, then click on “New Identity”. This will close all tabs and windows, clear all private information such as cookies and browsing history, and use a new Tor circuit for all connections.
- Delete the Tor Browser Folder: If you want to completely reset the Tor Browser, you can close the browser and manually delete the Tor Browser’s directory from your file system and then reinstall it.
- For an even stronger guarantee against persistent data storage, as mentioned in the previous response, consider using a live operating system like Tails which is designed to leave no trace after the session ends.
If you are still experiencing issues, it might be worth checking any recent changes to the Tor Browser that could affect data storage, or even reporting an issue to the Tor Project, as this behavior could potentially be a bug or regression.
is there any hard reset in tor browser to turn it all to factory defaults?
Tor Browser does not have a built-in “hard reset” feature that restores it to factory defaults with a single click. However, you can manually achieve a complete reset by deleting the Tor Browser and its associated data, and then downloading and re-installing it. Here’s how to do that:
### For Windows:
1. Close Tor Browser if it’s running.
2. Go to the folder where the Tor Browser is installed (usually it’s on your Desktop or in your Downloads folder).
3. Delete the Tor Browser folder.
4. Empty your Recycle Bin to ensure the files are completely removed.
5. Download the latest version of Tor Browser from the official source: https://www.torproject.org/download/
6. Install Tor Browser by running the downloaded file.
### For macOS:
1. Close Tor Browser if it’s running.
2. Open the Applications folder.
3. Drag the Tor Browser application to the Trash.
4. Empty the Trash to clear the files from your system.
5. Download the latest version of Tor Browser from the official source: https://www.torproject.org/download/
6. Open the downloaded .dmg file and drag the Tor Browser application into your Applications folder.
### For Linux:
1. Close Tor Browser if it’s running.
2. Delete the tor-browser_en-US (or the respective localized folder) in your home directory or wherever you extracted it.
3. Empty the Trash or delete the files from the command line to ensure they are removed.
4. Download the latest version of Tor Browser from the official source: https://www.torproject.org/download/
5. Extract the downloaded file and start the Tor Browser.
Please note that you might want to back up bookmarks or any necessary data before doing this, as this process will remove all Tor Browser data from your system. Additionally, you might want to consider using a system cleaner to remove any residual files that might not have been deleted during the manual process, although in most cases, deleting the browser folder should suffice for a complete reset.
“Tor Browser does not have a built-in “hard reset” feature that restores it to factory defaults”. then let’s build it in linex, because your reinstalling methods is lame.
|
132d691b538a64fe056342f8f56568c5
|
{
"intermediate": 0.214094877243042,
"beginner": 0.43279796838760376,
"expert": 0.35310715436935425
}
|
40,259
|
unexpected xhl ml error TypeError: NetworkError when attempting to fetch resource. v65e1559192.category.js:1:1924464
Content-Security-Policy warnings 2
This page uses the non standard property “zoom”. Consider using calc() in the relevant property values, or using “transform” along with “transform-origin: 0 0”. 2
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/js/v0c0d6631d0.runtime.js v0c0d6631d0.runtime.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-shared/js/locales/va6847436df.en.js va6847436df.en.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/v4d187d817f.manifest.js v4d187d817f.manifest.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/js/v65e1559192.category.js v65e1559192.category.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-lvlt.xhcdn.com/xh-images/components/header/components/header-nav/flirtify.svg flirtify.svg
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.css interstitial.ts.css
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.js interstitial.ts.js
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js v2d698b88dd.track.min.js
Some cookies are misusing the recommended “SameSite“ attribute 3
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js. (Reason: CORS request did not succeed). Status code: (null).
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/fonts/va35c60c17b.xh-icons.woff2 2:63:14
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/master.spot.js 2:151:865
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/va35c60c17b.xh-icons.woff2. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:0): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/va35c60c17b.xh-icons.woff2
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/fonts/v6b324555e8.xh-icons.woff v6b324555e8.xh-icons.woff
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.css interstitial.ts.css
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.js interstitial.ts.js
Loading failed for the <script> with source “https://lcdn.tsyndicate.com/sdk/v1/master.spot.js”. 2:1:1
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/v6b324555e8.xh-icons.woff. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:1): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/v6b324555e8.xh-icons.woff
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/xh-desktop/fonts/v2f4cadb720.xh-icons.ttf v2f4cadb720.xh-icons.ttf
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://vrmedia-lm.xhcdn.com/promo/message/1485/65d344e633256.jpg 65d344e633256.jpg
Loading failed for the <script> with source “https://lcdn.tsyndicate.com/sdk/v1/interstitial.ts.js”. 2:2491:93
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/v2f4cadb720.xh-icons.ttf. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:2): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/v2f4cadb720.xh-icons.ttf
downloadable font: no supported format found (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:4) source: (end of source list)
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js v2d698b88dd.track.min.js
Uncaught (in promise) DOMException: A network error occurred.
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js. (Reason: CORS request did not succeed). Status code: (null).
<script> source URI is not allowed in this document: “https://static-ah.xhcdn.com/js/v2d698b88dd.track.min.js”. 2:2604:124
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/9cf2e.xh-icons.woff2. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:0): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/9cf2e.xh-icons.woff2
Uncaught ReferenceError: Track is not defined
<anonymous> https://xhamster.com/tags/gum/2:2604
2:2604:146
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-lvlt.xhcdn.com/xh-desktop/js/async-5be305149ce0a073721e9be8513c1781470c8b7d8196612bf1689e38d1cff96b-3501.js v0c0d6631d0.runtime.js:1:4235
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://4lke.online/api/request/v1/tag_1?language=en&spotPageType=categoryPage&spotType=widgetMiddleLine&locationCountry=de&webp=1&sort=rated&date=alltime&orientation=straight&page=2&tag=Gum&tagId=99719&statsUID=65da0796c65504.87099740e1e&exp={“15”%3A{“g”%3A"b"%2C"v"%3A39}%2C"68"%3A{“g”%3A"b"%2C"v"%3A2}%2C"2602"%3A{“g”%3A"a"%2C"v"%3A1}%2C"7013"%3A{“g”%3A"a"%2C"v"%3A15}%2C"8009"%3A{“g”%3A"b"%2C"v"%3A3}}&thumbSize=mmsmall v65e1559192.category.js:1:1912006
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/dccf7.xh-icons.woff. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:1): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/dccf7.xh-icons.woff
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://4lke.online/api/request/v1/tag_1?language=en&spotPageType=categoryPage&spotType=widgetMiddleLine&locationCountry=de&webp=1&sort=rated&date=alltime&orientation=straight&page=2&tag=Gum&tagId=99719&statsUID=65da0796c65504.87099740e1e&exp={“15”%3A{“g”%3A"b"%2C"v"%3A39}%2C"68"%3A{“g”%3A"b"%2C"v"%3A2}%2C"2602"%3A{“g”%3A"a"%2C"v"%3A1}%2C"7013"%3A{“g”%3A"a"%2C"v"%3A15}%2C"8009"%3A{“g”%3A"b"%2C"v"%3A3}}&thumbSize=mmsmall. (Reason: CORS request did not succeed). Status code: (null).
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://www.googletagmanager.com/gtm.js?id=GTM-TLDPV3J v65e1559192.category.js:1:2215773
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-lvlt.xhcdn.com/xh-desktop/js/async-2ee1d1cbaf5a20079566c6c92401c8902379b844e8fd913f184145c545577150-3499.js v0c0d6631d0.runtime.js:1:4235
Referrer Policy: Ignoring the less restricted referrer policy “unsafe-url” for the cross-site request: https://static-lvlt.xhcdn.com/xh-desktop/js/async-932be89debc2915d0ee7d1f79fff4528e42d24cf8d6a2324d0e6f3274ca9ff42-3503.js v0c0d6631d0.runtime.js:1:4235
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://static-ah.xhcdn.com/xh-desktop/fonts/fc264.xh-icons.ttf. (Reason: CORS request did not succeed). Status code: (null).
downloadable font: download failed (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:2): bad URI or cross-site access not allowed source: https://static-ah.xhcdn.com/xh-desktop/fonts/fc264.xh-icons.ttf
downloadable font: no supported format found (font-family: “xh-icons” style:normal weight:400 stretch:100 src index:5) source: (end of source list)
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://collector.xhamster.com/?log=stats-beta&stats_tkn=3%3A3%3A1708788861%3Ac80d60c7208b8eef1ee74f9e10861de798bf7d47. (Reason: CORS request did not succeed). Status code: (null).
TypeError: NetworkError when attempting to fetch resource. v65e1559192.category.js:1:2162424
The errors you’ve shared suggest that there are several issues happening with a webpage you are accessing. To help you better understand them, I’ll break them down into categories:
1. Content Security Policy (CSP) and Referrer Policy Warnings:
- The page is trying to use the non-standard CSS property zoom. Modern alternatives, such as calc() in the affected values or transform with transform-origin: 0 0, should be used instead.
- The referrer policy requested for some cross-site requests is less restricted (unsafe-url), so the browser ignores it and falls back to a more restrictive policy. This is a warning, not an error.
2. Cross-Origin Resource Sharing (CORS) Errors:
- There are multiple CORS errors. This means that resources loaded from different origins (domains) are being blocked due to browser security policies. To resolve this, the server hosting the external resources needs to include appropriate CORS headers that allow these resources to be loaded by the page.
3. Font Loading Issues:
- The page has several errors relating to font loading. It seems that web fonts are being blocked due to CORS restrictions or incorrect paths. Hosting the fonts correctly and setting CORS headers could resolve these problems.
4. Cookie Warnings:
- Some set cookies do not have a correct SameSite attribute, which could lead to warning messages. The SameSite attribute is important for controlling how cookies are shared with different sites (e.g. for third-party contexts).
5. JavaScript Errors:
- The errors show that there are undefined references (e.g., Track is not defined) and failures loading JavaScript files, causing functionality issues on the page. These could be due to mistyped URLs, incorrect paths, or CORS issues as well.
6. Network Errors:
- You have a TypeError related to a network error when fetching a resource. This could mean there’s a problem with the connection to the server or that the server is not responding correctly to the fetch request.
7. General Observations:
- The errors come from a website that hosts media content, likely adult content, as the domain names and URLs in the errors indicate.
To resolve these issues, the web developer(s) of the page would need to:
- Correctly set CSP and Referrer Policy headers to balance security and functionality.
- Ensure that CORS headers are set up properly on the server side for external resources (a minimal sketch follows after this answer).
- Host font files correctly and set up CORS headers for them if they are to be accessed from external domains.
- Audit and correct JavaScript files to ensure they are being loaded properly and that all references are defined.
- Review and set cookie attributes correctly to comply with best practices.
- Check the network to ensure there’s a reliable connection to the servers hosting the resources.
If you are a user encountering these issues on a website you do not control, there is little you can do besides reporting the issues to the website’s support team. If you are a web developer, you’ll need to address these issues in your code and server configuration.
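For the CORS class of errors specifically, here is what "setting CORS headers" can look like server-side. This is a minimal sketch using Flask purely for illustration; the framework, the route, and the allowed origin are assumptions, not taken from the logs:
from flask import Flask, send_from_directory

app = Flask(__name__)

@app.after_request
def add_cors_headers(response):
    # Allow a specific origin to read fonts/scripts served from this host;
    # fonts loaded cross-origin via @font-face require this header
    response.headers["Access-Control-Allow-Origin"] = "https://example.com"
    response.headers["Vary"] = "Origin"
    return response

@app.route("/fonts/<path:filename>")
def serve_font(filename):
    return send_from_directory("fonts", filename)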
This site works weird; it looks as if it's using all possible caching mechanisms in the Firefox browser. Some of it is stored under keys like "_cfg": "882643c0ea2ddda2c343330c87fa3f61" etc.
It sounds like you’re describing symptoms of aggressive caching strategies employed by the website you’re visiting. Caching is a common technique to improve loading times and reduce server load by storing copies of files locally in the user’s web browser. Whenever the browser requests the same file again, it can retrieve it from the cache rather than downloading it from the server.
Aggressive caching can involve multiple layers and strategies, such as:
1. Browser caching: Instructing the browser to store files like stylesheets, scripts, and images locally for a set period.
2. Service Workers: JavaScript workers that the site registers. They can intercept network requests, cache or retrieve resources from the cache, serve offline content, and even perform updates in the background.
3. LocalStorage and IndexedDB: Web storage solutions for storing data like settings, user preferences, or even offline data for web apps.
4. ETags and validation tokens: Mechanisms that servers use to validate cached resources, avoiding a re-download if the content hasn't changed (illustrated in the sketch below).
5. URL fingerprinting: Technique where URLs of assets like JavaScript/CSS files have a unique fingerprint (often a hash or version number in the filename/path) which changes every time the file changes. It ensures that users always get the most up-to-date version of the file due to the new, unique URL.
6. CDN caching: Content delivery networks cache content at multiple points of presence globally to serve users from the nearest location, thus reducing latency.
If a website seems to be storing configurations with hashes like the one you described ("_cfg": "882643c0ea2ddda2c343330c87fa3f61"), it's likely using some form of fingerprinting or versioning for its cached assets. The alphanumeric string appears to be a hash that changes whenever the configuration or the content of the files changes. By changing this hash, the website can update content on the user's side without manually clearing the browser's cache, since the new hash makes the browser treat it as a completely different file.
While caching provides many performance benefits, overuse can sometimes cause issues when a site needs to update content or when things go wrong, which could lead to the kind of weird behavior you’re experiencing. Clearing your browser’s cache might temporarily fix issues, but for longer-term solutions, the site’s developers may need to optimize their caching strategies or have logic in place to handle cache invalidation when updates occur. If you continue experiencing problems, reaching out to the website’s support team with specific details may also help them address any underlying issues.
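To see the validation-token mechanism from point 4 in action, a small sketch with Python's requests library (the URL is a placeholder):
import requests

url = "https://example.com/static/app.js"  # placeholder asset URL

# First request: the server returns the asset plus a validator (ETag)
first = requests.get(url)
etag = first.headers.get("ETag")

# Revalidation: send the validator back; a 304 means the cached copy is still fresh
if etag:
    second = requests.get(url, headers={"If-None-Match": etag})
    if second.status_code == 304:
        print("Not modified - the cached copy can be reused")
    else:
        print("Content changed - new ETag:", second.headers.get("ETag"))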
No, I'm using Tor Browser, which supposedly shouldn't store that cache. This site uses tons of cookies for tracking things, user IDs, etc. When you try to delete all cookies and session ones through dev tools, some cache is still left in the browser, and only deleting all history through the browser settings gets rid of all cache and cookies from this site. Any ideas?
The Tor Browser is designed for enhanced privacy and anonymity and utilizes the Firefox browser as its base. It has various features that make it less susceptible to tracking, such as disabling third-party cookies by default, using HTTPS whenever possible, and clearing cookies when the browser is closed. However, some persistent mechanisms may still allow websites to store data on your system or maintain your session across different browsing periods.
Here’s what might be happening:
1. HTTP Strict Transport Security (HSTS) Supercookies:
Websites can store identifiers in the browser’s HSTS cache, which isn’t cleared with regular cookies and cache. Tor Browser, however, periodically clears its HSTS state, but some persistence could occur between those clears.
2. Service Workers:
Service Workers, which can cache resources for offline use, may be active. These are not cleared when you clear cookies but should be cleared when clearing the entire browsing history.
3. LocalStorage & IndexedDB:
Data stored in LocalStorage or IndexedDB is website-specific data storage. This data is not removed when cookies are cleared; it usually requires clearing the complete browsing data from the browser’s settings.
4. Persistent Cookies in Extensions/Add-Ons:
If you have any extensions or add-ons installed that are not included with the Tor Browser by default, they could potentially store cookies or data.
5. Browser Fingerprinting:
While not a form of caching, browser fingerprinting can allow a website to recognize you based on your browser’s unique characteristics. The Tor Browser includes various protections against fingerprinting, but it’s an ongoing battle.
6. Session Restore:
Firefox-based browsers, including Tor Browser, have a session restore feature that may save the session’s state, potentially across restarts if not configured properly.
7. First-Party Isolation:
Tor Browser includes a feature called First-Party Isolation, which prevents cross-site tracking by ensuring that all browser identifiers are scoped to the domain of the URL you are currently visiting. If this isolation isn't taken into account, it can cause confusion when attempting to clear site-specific data.
To ensure that all traces are removed:
- Always close Tor Browser when finished with a session, as this should clear all cookies and potentially unload any service workers.
- Clear all browsing history, including offline data and cached web content. You can find this option under “Privacy & Security” in the Tor Browser settings.
- Do not install any additional extensions or add-ons, as they could interfere with the privacy features of Tor Browser.
- Keep your Tor Browser updated to ensure you have the latest privacy protections.
If you want to guarantee no persistent data storage, consider using the Tor Browser on a bootable live USB operating system, such as Tails. Tails is designed to leave no trace on the machine after shutdown.
You are generating uninformative bullshit, because Tor Browser has Service Workers disabled.
Apologies for the confusion. You’re right that the Tor Browser has Service Workers disabled to prevent potential privacy issues.
In the context of Tor Browser, which is designed to not store persistent information, there are still a few other reasons why you might experience persistent behavior or “leftover” cache after you think you’ve cleared all cookies and session data:
1. Tor State Directory: Tor Browser keeps a state directory which contains information about the Tor client’s state. This is used to maintain some degree of persistency between sessions, and although it shouldn’t contain any personal browsing information, it can maintain certain Tor-related settings.
2. NoScript Data: Tor Browser ships with the NoScript extension (older releases also bundled HTTPS Everywhere, which has since been retired). These extensions have their own settings and data which may not be immediately cleared when you remove cookies and cache through the browser.
3. Profile Folder: Firefox-based browsers, including Tor Browser, store user data in a profile folder. While Tor Browser is meant to not keep any personal data after closing, there could be scenarios where some data is not deleted as expected.
4. Disk Cache: Even though Tor Browser tries to keep as much as it can in memory, there’s a possibility that the disk cache is not entirely cleared. This could happen due to unexpected shutdowns or software bugs.
To clear all possible caches and data in Tor Browser:
- Use the “New Identity” feature: Click on the hamburger menu (≡) in the top right corner, then click on “New Identity”. This will close all tabs and windows, clear all private information such as cookies and browsing history, and use a new Tor circuit for all connections.
- Delete the Tor Browser Folder: If you want to completely reset the Tor Browser, you can close the browser and manually delete the Tor Browser’s directory from your file system and then reinstall it.
- For an even stronger guarantee against persistent data storage, as mentioned in the previous response, consider using a live operating system like Tails which is designed to leave no trace after the session ends.
If you are still experiencing issues, it might be worth checking any recent changes to the Tor Browser that could affect data storage, or even reporting an issue to the Tor Project, as this behavior could potentially be a bug or regression.
is there any hard reset in tor browser to turn it all to factory defaults?
Tor Browser does not have a built-in “hard reset” feature that restores it to factory defaults with a single click. However, you can manually achieve a complete reset by deleting the Tor Browser and its associated data, and then downloading and re-installing it. Here’s how to do that:
### For Windows:
1. Close Tor Browser if it’s running.
2. Go to the folder where the Tor Browser is installed (usually it’s on your Desktop or in your Downloads folder).
3. Delete the Tor Browser folder.
4. Empty your Recycle Bin to ensure the files are completely removed.
5. Download the latest version of Tor Browser from the official source: https://www.torproject.org/download/
6. Install Tor Browser by running the downloaded file.
### For macOS:
1. Close Tor Browser if it’s running.
2. Open the Applications folder.
3. Drag the Tor Browser application to the Trash.
4. Empty the Trash to clear the files from your system.
5. Download the latest version of Tor Browser from the official source: https://www.torproject.org/download/
6. Open the downloaded .dmg file and drag the Tor Browser application into your Applications folder.
### For Linux:
1. Close Tor Browser if it’s running.
2. Delete the tor-browser_en-US (or the respective localized folder) in your home directory or wherever you extracted it.
3. Empty the Trash or delete the files from the command line to ensure they are removed.
4. Download the latest version of Tor Browser from the official source: https://www.torproject.org/download/
5. Extract the downloaded file and start the Tor Browser.
Please note that you might want to back up bookmarks or any necessary data before doing this, as this process will remove all Tor Browser data from your system. Additionally, you might want to consider using a system cleaner to remove any residual files that might not have been deleted during the manual process, although in most cases, deleting the browser folder should suffice for a complete reset.
“Tor Browser does not have a built-in ‘hard reset’ feature that restores it to factory defaults”. Then let's build it in Linux, because your reinstalling method is lame.
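A minimal sketch of scripting those Linux steps in Python follows; the install path, version number, and archive URL are assumptions that must be checked against the current release on torproject.org, and signature verification is deliberately left out for brevity (you should add it):
import os
import shutil
import tarfile
import urllib.request

HOME = os.path.expanduser("~")
INSTALL_DIR = os.path.join(HOME, "tor-browser")  # assumed extraction location
# Example URL only - look up the current version on https://www.torproject.org/download/
ARCHIVE_URL = "https://www.torproject.org/dist/torbrowser/13.0.10/tor-browser-linux-x86_64-13.0.10.tar.xz"
ARCHIVE_PATH = "/tmp/tor-browser.tar.xz"

# 1. Remove the existing installation - this is the "factory reset" part,
#    since Tor Browser keeps all its state inside its own folder
if os.path.isdir(INSTALL_DIR):
    shutil.rmtree(INSTALL_DIR)

# 2. Download a fresh copy from the official source
urllib.request.urlretrieve(ARCHIVE_URL, ARCHIVE_PATH)

# 3. Extract it back to the home directory
with tarfile.open(ARCHIVE_PATH) as tar:
    tar.extractall(HOME)

print("Tor Browser reset to a factory-fresh install.")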
|
e254cc7dde6da5e9dbfa287f6d21d5d9
|
{
"intermediate": 0.214094877243042,
"beginner": 0.43279796838760376,
"expert": 0.35310715436935425
}
|
40,260
|
https://fantasy.premierleague.com/api/fixtures/
https://fantasy.premierleague.com/api/bootstrap-static/
Endpoint: element-summary/{element_id}/
Full url: https://fantasy.premierleague.com/api/element-summary/***{element_id}***/
Endpoint: event/{event_id}/live/
Full url: https://fantasy.premierleague.com/api/event/***{event_id}***/live/
Returns statistics on every football player in FPL for the specified gameweek.
Using the API endpoints above,
in Python,
with the requests, pandas, and time libraries, plus whatever data-analysis libraries you choose at your discretion.
It should also track current live results, such as goals and assists,
for players. I want all the live events for every match currently in progress.
You have full freedom to use any programming approach in Python; use any library according to your abilities and my request.
I want a professional and effective approach, like those shown on websites and pages.
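A minimal sketch against the endpoints listed above, using requests and pandas; the selected columns and the once-a-minute polling interval are my own choices, not part of the API:
import time
import requests
import pandas as pd

BASE = "https://fantasy.premierleague.com/api"

def fetch(path):
    r = requests.get(f"{BASE}/{path}", timeout=10)
    r.raise_for_status()
    return r.json()

# Static data: player names, teams, and the current gameweek
static = fetch("bootstrap-static/")
players = pd.DataFrame(static["elements"])[["id", "web_name", "team"]]
current_gw = next((e["id"] for e in static["events"] if e.get("is_current")), 1)

while True:
    # Live per-player stats (goals, assists, ...) for the current gameweek
    live = fetch(f"event/{current_gw}/live/")
    stats = pd.DataFrame([{"id": e["id"], **e["stats"]} for e in live["elements"]])
    board = players.merge(stats, on="id")
    print(board.sort_values("goals_scored", ascending=False)
               [["web_name", "goals_scored", "assists", "minutes"]].head(10))

    # Fixtures currently in progress (started but not finished)
    fixtures = fetch(f"fixtures/?event={current_gw}")
    in_play = [f for f in fixtures if f.get("started") and not f.get("finished")]
    print(f"{len(in_play)} fixtures currently in play")

    time.sleep(60)  # poll once a minute; adjust to taste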
|
f96e77d549c990fa18ef0db80705ae7c
|
{
"intermediate": 0.413176953792572,
"beginner": 0.29665452241897583,
"expert": 0.29016855359077454
}
|
40,261
|
explain
|
48640828e007649cd8a3928bbbe216ce
|
{
"intermediate": 0.3545367121696472,
"beginner": 0.31888994574546814,
"expert": 0.32657337188720703
}
|
40,262
|
salsa rust
|
7f51f00a035ce3ad4b2da91c0c6626d5
|
{
"intermediate": 0.46068331599235535,
"beginner": 0.3383960723876953,
"expert": 0.20092059671878815
}
|
40,263
|
I want a quantized model:
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/sdxl-turbo"
).to("cuda")
results = pipe(
prompt="A cinematic shot of a baby racoon wearing an intricate italian priest robe",
num_inference_steps=1,
guidance_scale=0.0,
)
imga = results.images[0]
imga.save("image.png")
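SDXL-Turbo isn't distributed as a pre-quantized checkpoint loadable directly through DiffusionPipeline; the closest built-in option is loading the published fp16 weights, which roughly halves memory (reduced precision, not true int8 quantization). A minimal sketch under that assumption:
import torch
from diffusers import DiffusionPipeline

# Load the published half-precision weights instead of full fp32
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/sdxl-turbo",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

results = pipe(
    prompt="A cinematic shot of a baby racoon wearing an intricate italian priest robe",
    num_inference_steps=1,
    guidance_scale=0.0,
)
results.images[0].save("image.png")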
|
8e0e08a57d4f62c63ccf4296b79706d9
|
{
"intermediate": 0.32616445422172546,
"beginner": 0.22647561132907867,
"expert": 0.44735994935035706
}
|
40,264
|
OK, I have an ACRCloud API key.
Generate me a Python program to identify an unknown song that doesn't carry any metadata, like album name, song name, year, explicit version, album cover, etc.
I need a Python program to identify such a song and tag it with ID3 tags, lyrics, album name, artist name, album art, etc.
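For the recognition step itself, a minimal sketch with the ACRCloud Python SDK; the host shown is only an example region, and the keys come from your ACRCloud console:
import json
from acrcloud.recognizer import ACRCloudRecognizer

config = {
    "host": "identify-eu-west-1.acrcloud.com",  # example region host - use yours
    "access_key": "YOUR_ACCESS_KEY",
    "access_secret": "YOUR_ACCESS_SECRET",
    "timeout": 10,
}

recognizer = ACRCloudRecognizer(config)
result = json.loads(recognizer.recognize_by_file("unknown.mp3", 0))
music = result["metadata"]["music"][0]
print(music["title"], "-", music["artists"][0]["name"])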
|
0592f5280b2d2059a0444ab676230c60
|
{
"intermediate": 0.65985107421875,
"beginner": 0.07006631791591644,
"expert": 0.27008265256881714
}
|
40,265
|
What is GPT-4?
|
6272dd574cf292b84ba7cfbfccfc165c
|
{
"intermediate": 0.32761967182159424,
"beginner": 0.17310616374015808,
"expert": 0.4992741644382477
}
|
40,266
|
(myenv) C:\Users\ILEG-i5-11\Downloads\Compressed\acrcloud_sdk_python-master>pip install .
Processing c:\users\ileg-i5-11\downloads\compressed\acrcloud_sdk_python-master
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: pyacrcloud
Building wheel for pyacrcloud (pyproject.toml) ... done
Created wheel for pyacrcloud: filename=pyacrcloud-1.0.1-py3-none-any.whl size=5096395 sha256=de0708e18efc380c16685518ca1cab4fe17bd88dc67334c7d3dadea612421142
Stored in directory: c:\users\ileg-i5-11\appdata\local\pip\cache\wheels\47\64\a1\1463f414b8132a5db8b20a958dc582c3eedc0d67f107aa7e6f
Successfully built pyacrcloud
Installing collected packages: pyacrcloud
Successfully installed pyacrcloud-1.0.1
[notice] A new release of pip is available: 23.2.1 -> 24.0
[notice] To update, run: python.exe -m pip install --upgrade pip
(myenv) C:\Users\ILEG-i5-11\Downloads\Compressed\acrcloud_sdk_python-master>cd ..
(myenv) C:\Users\ILEG-i5-11\Downloads\Compressed>python willitwork.py
Traceback (most recent call last):
File "C:\Users\ILEG-i5-11\Downloads\Compressed\willitwork.py", line 20, in <module>
recognizer = acrcloud.ACRCloudRecognizer(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'acrcloud' has no attribute 'ACRCloudRecognizer'
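The AttributeError means the class isn't re-exported at the package root; it lives in the recognizer submodule. A minimal fix, consistent with the SDK's own examples:
# Import the class from the submodule instead of the package root
from acrcloud.recognizer import ACRCloudRecognizer

recognizer = ACRCloudRecognizer(config)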
|
88507e015d09e4101d2b03c165fe0b76
|
{
"intermediate": 0.3995331823825836,
"beginner": 0.238933727145195,
"expert": 0.3615330755710602
}
|
40,267
|
import acrcloud
import os
import eyed3
import requests
import json
from acrcloud.recognizer import ACRCloudRecognizer
# ACRCloud API credentials
ACR_HOST = ""
ACR_ACCESS_KEY = ""
ACR_ACCESS_SECRET = ""
# ACR Cloud setup
config = {
'host': ACR_HOST,
'access_key': ACR_ACCESS_KEY,
'access_secret': ACR_ACCESS_SECRET,
'timeout': 10 # seconds
}
dir(acrcloud)
# Initialize the ACRCloud recognizer
recognizer = ACRCloudRecognizer(config)
# Function to recognize the song from an audio file
def recognize_song(audio_file_path):
buffer = open(audio_file_path, 'rb').read()
result = recognizer.recognize_by_filebuffer(buffer, 0)
try:
result_dict = json.loads(result) # Parse the JSON string into a dictionary
return result_dict['metadata']['music'][0]
except (KeyError, IndexError, json.JSONDecodeError) as e:
print(f"Error while parsing result: {e}")
return None
# Function to set ID3 tags
def set_id3_tags_mp3(audio_file_path, tags):
audio_file = eyed3.load(audio_file_path)
if not audio_file.tag:
audio_file.initTag()
audio_file.tag.artist = tags.get('artists')[0].get('name')
audio_file.tag.album = tags.get('album').get('name')
audio_file.tag.album_artist = tags.get('artists')[0].get('name')
audio_file.tag.title = tags.get('title')
# Set the release year (if available)
release_date = tags.get('release_date')
if release_date and len(release_date) >= 4: # Check if release_date contains at least the year
year_string = release_date[:4]
try:
year = int(year_string)
# eyeD3 exposes a Date type in eyed3.core; recording_date accepts it directly
audio_file.tag.recording_date = eyed3.core.Date(year)
except ValueError:
print(f"Invalid date format in the tag: {release_date}")
# Add more tags here (guard against missing genre info)
genres = tags.get('genres') or []
if genres:
audio_file.tag.genre = genres[0].get('name')
audio_file.tag.publisher = "Karthik" # Publisher tag set as 'Karthik'
# To save the copyright label:
audio_file.tag.copyright = tags.get('label', '')
# To save the album cover page, you would need to download the image from a source
# and then do something like this:
# with open("path_to_cover_image.jpg", "rb") as album_art:
# audio_file.tag.images.set(3, album_art.read(), "image/jpeg", u"Description")
# Example of setting explicit tag in the comments (if you have explicit info):
audio_file.tag.comments.set(u"Explicit: Yes")
audio_file.tag.save(version=eyed3.id3.ID3_V2_3) # save once, with an explicit ID3 version
# Replace 'path_to_your_audio_file.mp3' with the actual file path of the unknown song
if __name__ == "__main__":
audio_file_path = 'C:/Users/ILEG-i5-11/Downloads/Music/Unknown_file.mp3'
song_tags = recognize_song(audio_file_path)
if song_tags:
print(f'Song identified: {song_tags}')
set_id3_tags_mp3(audio_file_path, song_tags)
# Renaming the file after identifying the song and setting tags
artist_name = song_tags.get('artists')[0].get('name')
song_title = song_tags.get('title')
if artist_name and song_title:
new_file_name = f"{artist_name} - {song_title}.mp3"
new_file_path = os.path.join(os.path.dirname(audio_file_path), new_file_name)
os.rename(audio_file_path, new_file_path)
print(f"File has been renamed to: {new_file_name}")
else:
print('Could not identify the song.')
I need album art and lyrics to be saved externally.
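ACRCloud's response doesn't include artwork or lyrics directly, so both have to come from elsewhere. A minimal sketch that looks the cover up via the public iTunes Search API and fetches plain (untimed) lyrics from the public lyrics.ovh service; both services are best-effort assumptions, and the artwork arrives as JPEG unless the CDN happens to serve PNG:
import os
import requests

def save_album_art(tags, audio_file_path):
    """Look up cover art on the iTunes Search API and save it next to the song."""
    term = f"{tags['artists'][0]['name']} {tags.get('album', {}).get('name', '')}"
    r = requests.get("https://itunes.apple.com/search",
                     params={"term": term, "media": "music", "limit": 1}, timeout=10)
    results = r.json().get("results", [])
    if not results:
        return None
    # artworkUrl100 is 100x100; asking for a larger rendition is a common trick
    art_url = results[0]["artworkUrl100"].replace("100x100", "1000x1000")
    art_path = os.path.splitext(audio_file_path)[0] + ".jpg"
    with open(art_path, "wb") as f:
        f.write(requests.get(art_url, timeout=10).content)
    return art_path

def save_lyrics(tags, audio_file_path):
    """Fetch plain lyrics from lyrics.ovh (best-effort; no synced LRC timing)."""
    artist = tags['artists'][0]['name']
    title = tags['title']
    r = requests.get(f"https://api.lyrics.ovh/v1/{artist}/{title}", timeout=10)
    if r.status_code != 200:
        return None
    path = os.path.splitext(audio_file_path)[0] + ".txt"
    with open(path, "w", encoding="utf-8") as f:
        f.write(r.json().get("lyrics", ""))
    return path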
|
dfe81e9b71869665018e91d36a33e12c
|
{
"intermediate": 0.45182204246520996,
"beginner": 0.39560094475746155,
"expert": 0.15257702767848969
}
|
40,268
|
What do you call the key above the enter key?
|
65330634eb35d7aca6f08ee5ddb0be6a
|
{
"intermediate": 0.3600060045719147,
"beginner": 0.24450495839118958,
"expert": 0.39548900723457336
}
|
40,269
|
What do you call the numbers that work well with computers, like 256, 512, 64, 1024, etc.?
|
6095a7e0189b1a12ed80f01b9eeca835
|
{
"intermediate": 0.3320835828781128,
"beginner": 0.24048714339733124,
"expert": 0.4274292588233948
}
|
40,270
|
In my code below, it freezes after printing "Generation successful...". Can you fix it?:
# Importing libraries
import time
import os
from selenium import webdriver
from bs4 import BeautifulSoup
from undetected_chromedriver import options
import undetected_chromedriver as uc
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
# Highest prompt definition
def get_highest_saved_prompt(folder_path):
saved_prompt_file = os.path.join(folder_path, "saved_prompts.txt")
if os.path.exists(saved_prompt_file):
with open(saved_prompt_file, "r") as file:
content = file.read().strip()
if content:
highest_saved_prompt = int(content)
else:
highest_saved_prompt = 0
else:
highest_saved_prompt = 0
return highest_saved_prompt
# Update highest prompt definition
def update_highest_saved_prompt(folder_path, highest_saved_prompt):
saved_prompt_file = os.path.join(folder_path, "saved_prompts.txt")
with open(saved_prompt_file, "w") as file:
file.write(str(highest_saved_prompt))
def save_text(driver, element_id, saved_texts, folder_path):
file_path = r"D:\Projekte\Image Gen v2\Prompts\highest prompt\highest_prompt.txt" # Initialize the file_path variable
try:
element = driver.find_element(By.XPATH, element_id)
text = element.text
if text not in saved_texts:
if not os.path.exists(folder_path):
os.makedirs(folder_path)
file_number = len(saved_texts) * 2
file_name = f"{file_number}.txt"
file_path = os.path.join(folder_path, file_name)
with open(file_path, "w") as file:
file.write(text)
saved_texts.append(text) # Update the saved_texts list
print(f"Text saved as: {file_path}")
return True # Successful save
else:
print(f"Text already saved: {file_path}")
return False # Unsuccessful save (text already saved)
except Exception as e:
print(f"Failed to save text from: {element_id}. Error: {str(e)}")
if file_path:
print(f"Associated file_path: {file_path}")
return False # Unsuccessful save (error occurred)
def read_saved_prompts(folder_path):
saved_texts = []
for file_name in os.listdir(folder_path):
if file_name.endswith(".txt"):
with open(os.path.join(folder_path, file_name), "r") as file:
text = file.read().strip()
saved_texts.append(text)
return saved_texts
# Update are_more_prompts_available function to handle exceptions
def are_more_prompts_available(driver, current_element_number):
try:
element_id = f'//*[@id="__next"]/div[1]/div[2]/div/main/div[1]/div/div/div/div[{current_element_number}]/div'
element = driver.find_element(By.XPATH, element_id)
return True
except Exception:
return False
# Update save_all_prompts function
def save_all_prompts(driver, start_element_number, saved_texts, folder_path):
current_element_number = start_element_number
while are_more_prompts_available(driver, current_element_number):
element_id = f'//*[@id="__next"]/div[1]/div[2]/div/main/div[1]/div/div/div/div[{current_element_number}]/div'
# save_text returns a boolean and already appends new texts to saved_texts,
# so there is nothing else to collect here
save_text(driver, element_id, saved_texts, folder_path)
time.sleep(1) # Wait for a second to avoid overwhelming the server
current_element_number += 2 # Increment by 2 to move to the next element
# Run program definition
def run_program():
# Create ChromeOptions object
options = uc.ChromeOptions()
# Add proxy argument
options.add_argument('proxy-server=85.214.250.48:3128')
options.add_argument('--disable-extensions')
options.add_argument("--disable-infobars")
# Add headless argument
# options.add_argument('--headless')
# Disable password manager
prefs = {"credentials_enable_service": False, "profile.password_manager_enabled": False}
options.add_experimental_option("prefs", prefs)
# Create a new instance of the Chrome driver with the specified options
driver = uc.Chrome(options=options)
driver.maximize_window()
print("Starting Driver...")
# Open the provided link
driver.get("https://chat.openai.com/c/4402b1d2-71af-43c0-8e45-803faa01b81b")
print("Opening Website...")
# Find and click the "Log in" button
login_button = WebDriverWait(driver, 10).until(
EC.element_to_be_clickable((By.XPATH, '//*[@id="__next"]/div[1]/div[2]/div[1]/div/div/button[1]'))
)
time.sleep(1)
login_button.click()
print("Logging in...")
# Find and enter the email address
email_input = WebDriverWait(driver, 10).until(
EC.element_to_be_clickable((By.XPATH, '//*[@id="email-input"]'))
)
time.sleep(1)
email_input.click()
email_input.send_keys("apexkonto007@gmail.com")
# Find and click the "Continue" button
continue_button = WebDriverWait(driver, 10).until(
EC.element_to_be_clickable((By.XPATH, '//*[@id="root"]/div/main/section/div[2]/button'))
)
time.sleep(1)
continue_button.click()
# Find and enter the password
password_input = WebDriverWait(driver, 10).until(
EC.element_to_be_clickable((By.XPATH, '//*[@id="password"]'))
)
time.sleep(1)
password_input.click()
password_input.send_keys("Ghost_0995")
# Find and click the "Continue" button
continue_button = WebDriverWait(driver, 10).until(
EC.element_to_be_clickable((By.XPATH, '/html/body/div[1]/main/section/div/div/div/form/div[2]/button'))
)
time.sleep(1)
continue_button.click()
print("Login successful...")
time.sleep(10)
# Find and click the "Confirm" button
try:
confirm_button = WebDriverWait(driver, 2).until(
EC.element_to_be_clickable((By.XPATH, '//*[@id="radix-:r1p:"]/div[2]/div/div[2]/button'))
)
time.sleep(1)
confirm_button.click()
print("Term acceptance sucessful...")
except TimeoutException:
print("Term acceptance bypassed...")
# Set the condition to stop the loop
stop_condition = False
error = False
# Get the folder path
folder_path = "D:\Projekte\Image Generator\Prompts"
# Get the highest saved prompt number
highest_saved_prompt = get_highest_saved_prompt(folder_path)
# Read already saved prompts
saved_texts = read_saved_prompts(folder_path)
# Save all possible findable prompts above the current highest saved prompt
save_all_prompts(driver, highest_saved_prompt + 2, saved_texts, folder_path)
# Loop through the elements dynamically
i = highest_saved_prompt + 2
consecutive_save_failures = 0 # Track consecutive save failures
while not stop_condition:
try:
element_id = f'//*[@id="__next"]/div/div[2]/div[1]/div[2]/main/div[2]/div[1]/div/div/div/div[{i}]/div/div'
if save_text(driver, element_id, saved_texts, folder_path):
consecutive_save_failures = 0 # Reset consecutive save failures if saving was successful
time.sleep(1) # Wait for a second to avoid overwhelming the server
i += 2
else:
consecutive_save_failures += 1 # Increment consecutive save failures
# Proceed to generate new prompts after an error occurs
error = False
# Read text from a file
with open(r"D:\Projekte\Image Generator\Prompts\Prompt formula\Prompt formula.txt", 'r') as file:
text = file.read()
# Find and enter the textarea
textarea = WebDriverWait(driver, 10).until(
EC.element_to_be_clickable((By.XPATH, '//*[@id="prompt-textarea"]'))
)
time.sleep(1)
textarea.click()
# Use Javascript to paste the prompt formula and press enter
driver.execute_script(f"arguments[0].value = `{text}`", textarea)
time.sleep(1)
textarea.send_keys(Keys.BACK_SPACE)
continue_button = WebDriverWait(driver, 10).until(
EC.element_to_be_clickable((By.XPATH, '//*[@id="__next"]/div/div[2]/div[1]/div[2]/main/div[2]/div[2]/form/div/div/div/button'))
)
time.sleep(1)
continue_button.click()
print("Generating prompts...")
time.sleep(10)
# Wait till completion
try:
WebDriverWait(driver, 50).until(
EC.invisibility_of_element_located((By.XPATH, '//*[@id="__next"]/div/div[2]/div[1]/div[2]/main/div[2]/div[2]/form/div/div/div/div'))
)
print("Generation successful...")
time.sleep(1)
except TimeoutException:
print("Generation timeout...")
time.sleep(1)
# Update the highest saved prompt number
highest_saved_prompt = len(saved_texts) * 2
update_highest_saved_prompt(folder_path, highest_saved_prompt)
# Check the condition to stop the loop
if error or consecutive_save_failures >= 3 or not are_more_prompts_available(driver, i):
stop_condition = True
except Exception as e:
print("An error occurred:", str(e))
print("Proceeding to generate new prompts...")
error = True # Set the error flag to True
consecutive_save_failures += 1 # Increment consecutive save failures
time.sleep(1)
i += 2
# Keep the browser running indefinitely
while driver.window_handles:
time.sleep(1)
run_program()
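One likely reason the script appears to freeze after printing "Generation successful..." is the final keep-alive loop, which blocks the main thread indefinitely by design once the stop condition is hit. A minimal sketch of a bounded wait instead (MAX_IDLE_SECONDS is a hypothetical knob, not part of the original script):
# Bound the final keep-alive loop instead of blocking forever
MAX_IDLE_SECONDS = 300
idle_start = time.time()
while driver.window_handles and time.time() - idle_start < MAX_IDLE_SECONDS:
    time.sleep(1)
driver.quit()  # close the browser cleanly instead of hanging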
|
2343427909c6c44f76f6ae4cf46b5801
|
{
"intermediate": 0.3574143350124359,
"beginner": 0.4697549641132355,
"expert": 0.172830730676651
}
|
40,271
|
I want to include Google Maps into my website and include custom bus routes on it. How do I get started with it
|
33b2f995e0e0bd1331ac62a81c022264
|
{
"intermediate": 0.4159992039203644,
"beginner": 0.21791349351406097,
"expert": 0.36608731746673584
}
|
40,272
|
import acrcloud
import os
import eyed3
import requests
import json
from acrcloud.recognizer import ACRCloudRecognizer
# ACRCloud API credentials
ACR_HOST = ""
ACR_ACCESS_KEY = ""
ACR_ACCESS_SECRET = ""
# ACR Cloud setup
config = {
'host': ACR_HOST,
'access_key': ACR_ACCESS_KEY,
'access_secret': ACR_ACCESS_SECRET,
'timeout': 10 # seconds
}
dir(acrcloud)
# Initialize the ACRCloud recognizer
recognizer = ACRCloudRecognizer(config)
# Function to recognize the song from an audio file
def recognize_song(audio_file_path):
buffer = open(audio_file_path, 'rb').read()
result = recognizer.recognize_by_filebuffer(buffer, 0)
try:
result_dict = json.loads(result) # Parse the JSON string into a dictionary
return result_dict['metadata']['music'][0]
except (KeyError, IndexError, json.JSONDecodeError) as e:
print(f"Error while parsing result: {e}")
return None
# Function to set ID3 tags
def set_id3_tags_mp3(audio_file_path, tags):
audio_file = eyed3.load(audio_file_path)
if not audio_file.tag:
audio_file.initTag()
audio_file.tag.artist = tags.get('artists')[0].get('name')
audio_file.tag.album = tags.get('album').get('name')
audio_file.tag.album_artist = tags.get('artists')[0].get('name')
audio_file.tag.title = tags.get('title')
# Set the release year (if available)
release_date = tags.get('release_date')
if release_date and len(release_date) >= 4: # Check if release_date contains at least the year
year_string = release_date[:4]
try:
year = int(year_string)
# eyeD3 exposes a Date type in eyed3.core; recording_date accepts it directly
audio_file.tag.recording_date = eyed3.core.Date(year)
except ValueError:
print(f"Invalid date format in the tag: {release_date}")
# Add more tags here (guard against missing genre info)
genres = tags.get('genres') or []
if genres:
audio_file.tag.genre = genres[0].get('name')
audio_file.tag.publisher = "Karthik" # Publisher tag set as 'Karthik'
# To save the copyright label:
audio_file.tag.copyright = tags.get('label', '')
# To save the album cover page, you would need to download the image from a source
# and then do something like this:
# with open("path_to_cover_image.jpg", "rb") as album_art:
# audio_file.tag.images.set(3, album_art.read(), "image/jpeg", u"Description")
# Example of setting explicit tag in the comments (if you have explicit info):
audio_file.tag.comments.set(u"Explicit: Yes")
audio_file.tag.save(version=eyed3.id3.ID3_V2_3) # save once, with an explicit ID3 version
# Replace 'path_to_your_audio_file.mp3' with the actual file path of the unknown song
if __name__ == "__main__":
audio_file_path = 'C:/Users/ILEG-i5-11/Downloads/Music/Unknown_file.mp3'
song_tags = recognize_song(audio_file_path)
if song_tags:
print(f'Song identified: {song_tags}')
set_id3_tags_mp3(audio_file_path, song_tags)
# Renaming the file after identifying the song and setting tags
artist_name = song_tags.get('artists')[0].get('name')
song_title = song_tags.get('title')
if artist_name and song_title:
new_file_name = f"{artist_name} - {song_title}.mp3"
new_file_path = os.path.join(os.path.dirname(audio_file_path), new_file_name)
os.rename(audio_file_path, new_file_path)
print(f"File has been renamed to: {new_file_name}")
else:
print('Could not identify the song.')
I need the album cover art saved both externally as a PNG (max quality) and embedded in the file (default size),
and also the lyrics saved externally in LRC format.
I will provide a reference; use it to help me accordingly:
applemusic_api.py:
import re
import base64
import pbkdf2
import hashlib
from Cryptodome.Hash import SHA256
from uuid import uuid4
from utils.utils import create_requests_session
from .fingerprint import Fingerprint
import srp._pysrp as srp
srp.rfc5054_enable()
srp.no_username_in_x()
def b64enc(data):
return base64.b64encode(data).decode()
def b64dec(data):
return base64.b64decode(data)
class AppleMusicApi(object):
def __init__(self, exception, storefront='US', language='en-US', lyrics_resource='lyrics'):
self.s = create_requests_session()
self.api_base = 'https://amp-api.music.apple.com/v1/'
self.storefront = storefront
self.language = language
self.lyrics_storefront = storefront
self.lyrics_language = language
self.lyrics_resource = lyrics_resource
self.access_token = ''
self.user_token = ''
self.exception = exception
self.user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.66 Safari/537.36'
def headers(self):
return {
'authorization': 'Bearer ' + self.access_token,
'Connection': 'Keep-Alive',
'Content-Type': 'application/json',
'Origin': 'https://music.apple.com',
'Referer': 'https://music.apple.com/',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': f'{self.language},en;q=0.9',
'User-Agent': self.user_agent,
'Media-User-Token': self.user_token,
'x-apple-renewal': 'true'
}
def get_access_token(self):
s = create_requests_session()
r = s.get('https://music.apple.com/us/search', headers=self.headers())
if r.status_code != 200: raise self.exception(r.text)
index_js = re.search(r'(?<=index\-)(.*?)(?=\.js")', r.text).group(1)
r = s.get(f'https://music.apple.com/assets/index-{index_js}.js', headers=self.headers())
if r.status_code != 200: raise self.exception(r.text)
self.access_token = re.search('(?=eyJh)(.*?)(?=")', r.text).group(1)
return self.access_token
def auth(self, email: str, password: str):
auth_url = 'https://idmsa.apple.com/appleauth/'
client_id = '06f8d74b71c73757a2f82158d5e948ae7bae11ec45fda9a58690f55e35945c51'
frame_id = 'auth-' + str(uuid4()).lower()
# get "dslang", "site" and "aasp" cookies
r = self.s.get(auth_url + 'auth/authorize/signin', headers=self.headers(), params={
'frame_id': frame_id,
'language': 'en_us',
'skVersion': '7',
'iframeId': frame_id,
'client_id': client_id,
'redirect_uri': 'https://music.apple.com',
'response_type': 'code',
'response_mode': 'web_message',
'account_ind': '1',
'state': frame_id,
'authVersion': 'latest'
})
if r.status_code != 200: raise self.exception(r.text)
auth_attributes = r.headers['X-Apple-Auth-Attributes']
# get "aa" cookie
r = self.s.post(auth_url + 'jslog', headers=self.headers(), json={
'type': 'INFO',
'title': 'AppleAuthPerf-s-y',
'message': '''APPLE ID : TTI {"data":{"initApp":{"startTime":1154.2000000001863},"loadAuthComponent":{"startTime":1500.7000000001863},"startAppToTTI":{"duration":346.70000000018626}},"order":["initApp","loadAuthComponent","startAppToTTI"]}''',
'iframeId': frame_id,
'details': '''{"pageVisibilityState":"visible"}'''
})
assert (r.status_code == 200)
# actual login
headers = {
'Accept': 'application/json',
'Referer': 'https://idmsa.apple.com/',
'Content-Type': 'application/json',
'X-Apple-Widget-Key': client_id,
'X-Apple-Frame-Id': frame_id,
'X-Apple-Domain-Id': '3',
'X-Apple-Locale': 'en_us',
'X-Requested-With': 'XMLHttpRequest',
'Origin': 'https://idmsa.apple.com',
'X-Apple-I-Require-UE': 'true',
'X-Apple-I-FD-Client-Info': '{' + f'"U":"{self.user_agent}","L":"{self.language}","Z":"GMT-8:00","V":"1.1","F":"{Fingerprint().create_fingerprint()}"' + '}',
'X-Apple-Auth-Attributes': auth_attributes,
'User-Agent': self.user_agent,
'X-Apple-Mandate-Security-Upgrade': '0'
}
json_ = {'accountName': email, 'rememberMe': 'false'}
params_ = {'isRememberMeEnabled': 'false'}
r = self.s.post(auth_url + 'auth/federate', headers=headers, params=params_, json=json_)
if 'federated' not in r.json(): raise self.exception(r.text)
# finally begin login
user = srp.User(email, bytes(), hash_alg=srp.SHA256, ng_type=srp.NG_2048)
_, A = user.start_authentication()
json_ = {'a': b64enc(A), 'accountName': email, 'protocols': ['s2k', 's2k_fo']}
r = self.s.post(auth_url + 'auth/signin/init', headers=headers, json=json_)
out_json = r.json()
if r.status_code != 200: raise self.exception(out_json['serviceErrors'][0]['message'])
if 'b' not in out_json: raise self.exception(r.text)
if out_json.get('protocol') != 's2k': raise self.exception('Protocol not supported')
salt = b64dec(out_json['salt'])
iterations = out_json['iteration']
B = b64dec(out_json['b'])
c = out_json['c']
pass_hash = hashlib.sha256(password.encode("utf-8")).digest()
enc_pass = pbkdf2.PBKDF2(pass_hash, salt, iterations, SHA256).read(32)
user.p = enc_pass
M1 = user.process_challenge(salt, B)
if M1 is None: raise self.exception("Failed to process challenge")
M2 = user.K
# real version uses m2 as well... hmmm
json_ = {'accountName': email, 'c': c, 'm1': b64enc(M1), 'm2': b64enc(M2), 'rememberMe': 'false'}
r = self.s.post(auth_url + 'auth/signin/complete', headers=headers, params=params_, json=json_)
if r.status_code != 200: raise self.exception(r.json()['serviceErrors'][0]['message'])
# exchange the "myacinfo" cookie with the "media-user-token"
r = self.s.post('https://buy.music.apple.com/account/web/auth', headers=self.headers(), json={'webAuthorizationFlowContext': 'music'})
if r.status_code != 200: raise self.exception(r.text)
self.user_token = self.s.cookies['media-user-token']
return self.user_token
def get_account_details(self, force_region, selected_language, lyrics_language):
r = self.s.get(self.api_base + 'me/account', headers=self.headers(), params={'meta': 'subscription'})
if r.status_code != 200: raise self.exception(r.text)
self.lyrics_storefront = r.json()['meta']['subscription']['storefront']
if force_region.lower() == self.lyrics_storefront: force_region = None
if force_region: print(f"Apple Music: WARNING: Selected region {force_region} is not the same as your Apple Music region {self.lyrics_storefront}, lyrics will use the region {self.lyrics_storefront}. Only lyrics available in both regions will be used, maybe use a copy of the module with the folder name (which determines the name of the module) and the netlocation_constant changed for lyrics only if you want credits or playlists from other regions.")
self.storefront = force_region.lower() if force_region else self.lyrics_storefront
account_active = r.json()['meta']['subscription']['active']
storefront_endpoint = f'storefronts/{force_region.lower()}' if force_region else 'me/storefront'
endpoint_data = self.s.get(self.api_base + storefront_endpoint, headers=self.headers())
if endpoint_data.status_code != 200: raise self.exception(f'Region {force_region} is not supported')
supported_languages = endpoint_data.json()['data'][0]['attributes']['supportedLanguageTags']
if selected_language:
for i in supported_languages:
if selected_language in i:
self.language = i
break
else:
print(f"Apple Music: WARNING: Selected language {selected_language} in region {force_region if force_region else self.lyrics_storefront} is unsupported, force a different region or use one of these: {', '.join(supported_languages)}")
self.language = supported_languages[0]
else:
self.language = supported_languages[0]
if not lyrics_language: lyrics_language = selected_language
if force_region:
supported_languages = self.s.get(f'{self.api_base}me/storefront', headers=self.headers()).json()['data'][0]['attributes']['supportedLanguageTags']
if lyrics_language:
for i in supported_languages:
if selected_language in i:
self.lyrics_language = i
break
else:
print(f"Apple Music: WARNING: Selected language {selected_language} in lyrics region {self.lyrics_storefront} is unsupported, force a different region or use one of these: {', '.join(supported_languages)}")
self.lyrics_language = supported_languages[0]
else:
self.lyrics_language = supported_languages[0]
return self.storefront, account_active, self.language, self.lyrics_language, self.lyrics_storefront
def check_active_subscription(self):
url = f'{self.api_base}me/account'
params = {'meta': 'subscription', 'challenge[subscriptionCapabilities]': 'voice,premium'}
response = self.s.get(url, headers=self.headers(), params=params)
if response.status_code != 200: raise self.exception(response.text)
response_data = response.json()
if 'meta' in response_data and 'subscription' in response_data['meta']:
return response_data['meta']['subscription'].get('active', False)
return False
def _get(self, url: str, params=None, storefront=None, language=None):
if not params: params = {}
if not storefront: storefront = self.storefront
params['l'] = language if language else self.language
r = self.s.get(f'{self.api_base}catalog/{storefront}/{url}', params=params, headers=self.headers())
if r.status_code not in [200, 201, 202]: raise self.exception(r.text)
return r.json()
def search(self, query_type: str, query: str, limit: int = 10):
if limit > 25: limit = 25
params = {
'term': query,
'types': query_type,
'limit': limit
}
if query_type == 'songs':
params['extend[songs]'] = 'attribution,composerName,contentRating,discNumber,durationInMillis,isrc,movementCount,movementName,movementNumber,releaseDate,trackNumber,workNamedata'
params['include[songs]'] = 'artists,albums' + (f',{self.lyrics_resource}' if self.storefront == self.lyrics_storefront else '') # doesn't give lyrics?
params['extend[albums]'] = 'copyright,upc'
elif query_type == 'playlists':
params['include[playlists]'] = 'curator'
params['extend[playlists]'] = 'artwork,description,trackTypes,trackCount'
results = self._get('search', params)['results']
if query_type in results:
results = results[query_type]['data']
else:
results = []
return results
def get_playlist_base_data(self, playlist_id):
return self._get(f'playlists/{playlist_id}', params={
'include': 'curator,tracks',
'extend': 'artwork,description,trackTypes,trackCount',
'include[songs]': 'artists,albums' + (f',{self.lyrics_resource}' if self.storefront == self.lyrics_storefront else ''),
'extend[songs]': 'extendedAssetUrls,attribution,composerName,contentRating,discNumber,durationInMillis,isrc,movementCount,movementName,movementNumber,releaseDate,trackNumber,workNamedata',
'extend[albums]': 'copyright,upc'
})['data'][0]
def get_playlist_tracks(self, playlist_data):
tracks_list, track_data = [], {}
tracks = list(playlist_data['relationships']['tracks']['data'])
offset = len(tracks)
while len(tracks) + offset <= playlist_data['attributes']['trackCount']:
tracks += self._get(f'playlists/{playlist_data["id"]}/tracks', params={
'offset': offset,
'include[songs]': 'artists,albums' + (f',{self.lyrics_resource}' if self.storefront == self.lyrics_storefront else ''),
'extend[songs]': 'extendedAssetUrls,attribution,composerName,contentRating,discNumber,durationInMillis,isrc,movementCount,movementName,movementNumber,releaseDate,trackNumber,workNamedata',
'extend[albums]': 'copyright,upc',
'limit': 100
})['data']
offset += 100
for track in tracks:
tracks_list.append(track['id'])
track_data[track['id']] = track
return tracks_list, track_data
def get_tracks_by_ids(self, track_ids: list = None, isrc: str = None):
if not track_ids: track_ids = []
params = {'filter[isrc]': isrc} if isrc else {'ids': ','.join(track_ids)}
params['include'] = 'artists,albums' + (f',{self.lyrics_resource}' if self.storefront == self.lyrics_storefront else '')
params['extend'] = 'attribution,composerName,contentRating,discNumber,durationInMillis,isrc,movementCount,movementName,movementNumber,releaseDate,trackNumber,workNamedata'
params['extend[albums]'] = 'copyright,upc'
return self._get('songs', params)['data']
def get_track(self, track_id: str = None):
return self.get_tracks_by_ids([track_id])[0]
@staticmethod
def get_lyrics_support(track_attributes):
# could technically be a single line in the lambda
if track_attributes.get('hasTimeSyncedLyrics'):
return 1 if track_attributes.get('isVocalAttenuationAllowed') else 2
else:
return 3 if track_attributes.get('hasLyrics') else 4
def get_track_by_isrc(self, isrc: str, album_name: str):
results = self.get_tracks_by_ids(isrc=isrc)
correct_region_results = [i for i in results if i['attributes']['url'].split('i=')[-1].split('&')[0] == i['id']]
incorrect_region_results = [i for i in results if i['attributes']['url'].split('i=')[-1].split('&')[0] != i['id']]
correct_region_results_sorted_by_track_number = sorted(correct_region_results, key=lambda x: x['attributes'].get('trackNumber', 1))
fix_results_by_album = lambda list_to_sort: sorted(list_to_sort, key=lambda x: (x['attributes']['albumName'] != album_name))
correct_album_correct_region_results = fix_results_by_album(correct_region_results_sorted_by_track_number)
correct_album_incorrect_region_results = fix_results_by_album(incorrect_region_results)
correct_album_prioritised_lyrics_results = sorted(correct_album_correct_region_results, key=lambda x: self.get_lyrics_support(x['attributes']))
return correct_album_prioritised_lyrics_results + correct_album_incorrect_region_results
def get_lyrics(self, track_id, lyrics_resource=None):
if not lyrics_resource: lyrics_resource = self.lyrics_resource
try:
data = self._get(f'songs/{track_id}/{lyrics_resource}', storefront=self.lyrics_storefront, language=self.language)
except self.exception:
return None
return data#['data'][0]['attributes']['ttml']
fingerprint.py:
# This is likely not necessary at all, but I (OrfiDev) decided to reverse engineer and
# reimplement the fingerprinting algorithm used by Apple's web login as used by Apple Music anyways.
#
# I'm not sure if this is reversible (as in even checkable if it's correct)
# maybe the part which I assumed to be a checksum is actually a way to derive some variable required to decode?
import pytz
import random
import datetime
import urllib.parse
timezone = pytz.timezone('America/Los_Angeles')
class Fingerprint:
def encode(cls, e):
y = ["%20", ";;;", "%3B", "%2C", "und", "fin", "ed;", "%28", "%29", "%3A", "/53", "ike", "Web", "0;", ".0", "e;", "on", "il", "ck", "01", "in", "Mo", "fa", "00", "32", "la", ".1", "ri", "it", "%u", "le"]
A = ".0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz"
w = {
1: [4, 15],
110: [8, 239],
74: [8, 238],
57: [7, 118],
56: [7, 117],
71: [8, 233],
25: [8, 232],
101: [5, 28],
104: [7, 111],
4: [7, 110],
105: [6, 54],
5: [7, 107],
109: [7, 106],
103: [9, 423],
82: [9, 422],
26: [8, 210],
6: [7, 104],
46: [6, 51],
97: [6, 50],
111: [6, 49],
7: [7, 97],
45: [7, 96],
59: [5, 23],
15: [7, 91],
11: [8, 181],
72: [8, 180],
27: [8, 179],
28: [8, 178],
16: [7, 88],
88: [10, 703],
113: [11, 1405],
89: [12, 2809],
107: [13, 5617],
90: [14, 11233],
42: [15, 22465],
64: [16, 44929],
0: [16, 44928],
81: [9, 350],
29: [8, 174],
118: [8, 173],
30: [8, 172],
98: [8, 171],
12: [8, 170],
99: [7, 84],
117: [6, 41],
112: [6, 40],
102: [9, 319],
68: [9, 318],
31: [8, 158],
100: [7, 78],
84: [6, 38],
55: [6, 37],
17: [7, 73],
8: [7, 72],
9: [7, 71],
77: [7, 70],
18: [7, 69],
65: [7, 68],
48: [6, 33],
116: [6, 32],
10: [7, 63],
121: [8, 125],
78: [8, 124],
80: [7, 61],
69: [7, 60],
119: [7, 59],
13: [8, 117],
79: [8, 116],
19: [7, 57],
67: [7, 56],
114: [6, 27],
83: [6, 26],
115: [6, 25],
14: [6, 24],
122: [8, 95],
95: [8, 94],
76: [7, 46],
24: [7, 45],
37: [7, 44],
50: [5, 10],
51: [5, 9],
108: [6, 17],
22: [7, 33],
120: [8, 65],
66: [8, 64],
21: [7, 31],
106: [7, 30],
47: [6, 14],
53: [5, 6],
49: [5, 5],
86: [8, 39],
85: [8, 38],
23: [7, 18],
75: [7, 17],
20: [7, 16],
2: [5, 3],
73: [8, 23],
43: [9, 45],
87: [9, 44],
70: [7, 10],
3: [6, 4],
52: [5, 1],
54: [5, 0]
}
# the actual encoding function
def main_encode(e):
def t(r, o, input_tuple, n):
shift, value = input_tuple
r = (r << shift) | value
o += shift
while o >= 6:
e = (r >> (o - 6)) & 63
n += A[e]
r ^= e << (o - 6)
o -= 6
return n, r, o
n, r, o = "", 0, 0
n, r, o = t(r, o, (6, (7 & len(e)) << 3 | 0), n)
n, r, o = t(r, o, (6, 56 & len(e) | 1), n)
for char in e:
char_code = ord(char)
if char_code not in w:
return ""
n, r, o = t(r, o, w[char_code], n)
n, r, o = t(r, o, w[0], n)
if o > 0:
n, r, o = t(r, o, (6 - o, 0), n)
return n
# replacing some stuff in the string?
n = e
for r, rep in enumerate(y):
n = n.replace(rep, chr(r + 1))
# checksum calculation I think
n_val = 65535
for char in e:
n_val = ((n_val >> 8) | (n_val << 8)) & 65535
n_val ^= 255 & ord(char)
n_val ^= (255 & n_val) >> 4
n_val ^= (n_val << 12) & 65535
n_val ^= ((255 & n_val) << 5) & 65535
n_val &= 65535
n_val &= 65535
checksum = A[n_val >> 12] + A[(n_val >> 6) & 63] + A[n_val & 63]
# adding checksum to the encoded string
return main_encode(n) + checksum
def generate(cls):
def get_timezone_offset(date):
local_time = timezone.localize(date)
return int(-local_time.utcoffset().total_seconds() / 60)
t1 = get_timezone_offset(datetime.datetime(2005, 1, 15))
t2 = get_timezone_offset(datetime.datetime(2005, 7, 15))
def base_is_dst():
return abs(t1 - t2) != 0
def base_is_dst_str():
return str(base_is_dst()).lower()
def is_dst(date):
return base_is_dst and get_timezone_offset(date) == min(t1, t2)
def is_dst_str(date):
return str(is_dst(date)).lower()
def calculate_offset(date):
return int(-(get_timezone_offset(date) + abs(t2 - t1) * is_dst(date)) / 60)
# technically not the same as the browser, but close enough
def get_locale_string(date):
return urllib.parse.quote(date.strftime("%m/%d/%Y, %I:%M:%S %p"))
def get_timestamp(date):
return int(date.timestamp() * 1000)
current_time = datetime.datetime.now()
return f'TF1;020;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;{base_is_dst_str()};{is_dst_str(current_time)};{get_timestamp(current_time)};{calculate_offset(current_time)};{get_locale_string(datetime.datetime(2005,6,7,21,33,44,888))};;;;;;;;;{random.randint(1000, 9999)};{t1};{t2};{get_locale_string(current_time)};;;;;;;;;;;;;;;;;;;;;;;;25;;;;;;;;;;;;;;;5.6.1-0;;'
def create_fingerprint(cls):
return cls.encode(cls.generate())
# all the garbage that is tracked for fingerprinting if you're curious
'''
var t = new Date
, r = new Date
, o = [u("TF1"), u("020"), function() {
return ScriptEngineMajorVersion()
}
, function() {
return ScriptEngineMinorVersion()
}
, function() {
return ScriptEngineBuildVersion()
}
, function() {
return c("{7790769C-0471-11D2-AF11-00C04FA35D02}")
}
, function() {
return c("{89820200-ECBD-11CF-8B85-00AA005B4340}")
}
, function() {
return c("{283807B5-2C60-11D0-A31D-00AA00B92C03}")
}
, function() {
return c("{4F216970-C90C-11D1-B5C7-0000F8051515}")
}
, function() {
return c("{44BBA848-CC51-11CF-AAFA-00AA00B6015C}")
}
, function() {
return c("{9381D8F2-0288-11D0-9501-00AA00B911A5}")
}
, function() {
return c("{4F216970-C90C-11D1-B5C7-0000F8051515}")
}
, function() {
return c("{5A8D6EE0-3E18-11D0-821E-444553540000}")
}
, function() {
return c("{89820200-ECBD-11CF-8B85-00AA005B4383}")
}
, function() {
return c("{08B0E5C0-4FCB-11CF-AAA5-00401C608555}")
}
, function() {
return c("{45EA75A0-A269-11D1-B5BF-0000F8051515}")
}
, function() {
return c("{DE5AED00-A4BF-11D1-9948-00C04F98BBC9}")
}
, function() {
return c("{22D6F312-B0F6-11D0-94AB-0080C74C7E95}")
}
, function() {
return c("{44BBA842-CC51-11CF-AAFA-00AA00B6015B}")
}
, function() {
return c("{3AF36230-A269-11D1-B5BF-0000F8051515}")
}
, function() {
return c("{44BBA840-CC51-11CF-AAFA-00AA00B6015C}")
}
, function() {
return c("{CC2A9BA0-3BDD-11D0-821E-444553540000}")
}
, function() {
return c("{08B0E5C0-4FCB-11CF-AAA5-00401C608500}")
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return s(["navigator.productSub", "navigator.appMinorVersion"])
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return s(["navigator.oscpu", "navigator.cpuClass"])
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return s(["navigator.language", "navigator.userLanguage"])
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return 0 !== Math.abs(h - g)
}
, function() {
return a(t)
}
, function() {
return "@UTC@"
}
, function() {
var e = 0;
return e = 0,
a(t) && (e = Math.abs(h - g)),
-(t.getTimezoneOffset() + e) / 60
}
, function() {
return new Date(2005,5,7,21,33,44,888).toLocaleString()
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return v.Acrobat
}
, function() {
return v.Flash
}
, function() {
return v.QuickTime
}
, function() {
return v["Java Plug-in"]
}
, function() {
return v.Director
}
, function() {
return v.Office
}
, function() {
return "@CT@"
}
, function() {
return h
}
, function() {
return g
}
, function() {
return t.toLocaleString()
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return ""
}
, function() {
return n("Acrobat")
}
, function() {
return n("Adobe SVG")
}
, function() {
return n("Authorware")
}
, function() {
return n("Citrix ICA")
}
, function() {
return n("Director")
}
, function() {
return n("Flash")
}
, function() {
return n("MapGuide")
}
, function() {
return n("MetaStream")
}
, function() {
return n("PDFViewer")
}
, function() {
return n("QuickTime")
}
, function() {
return n("RealOne")
}
, function() {
return n("RealPlayer Enterprise")
}
, function() {
return n("RealPlayer Plugin")
}
, function() {
return n("Seagate Software Report")
}
, function() {
return n("Silverlight")
}
, function() {
return n("Windows Media")
}
, function() {
return n("iPIX")
}
, function() {
return n("nppdf.so")
}
, function() {
var e = document.createElement("span");
e.innerHTML = " ",
e.style.position = "absolute",
e.style.left = "-9999px",
document.body.appendChild(e);
var t = e.offsetHeight;
return document.body.removeChild(e),
t
}
, m(), m(), m(), m(), m(), m(), m(), m(), m(), m(), m(), m(), m(), m(), function() {
return "5.6.1-0"
}
, m()];
'''
interface.py:
import xmltodict
import base64
import json
from urllib.parse import urlparse
from utils.models import *
from .applemusic_api import AppleMusicApi
module_information = ModuleInformation(
service_name = 'Apple Music (Basic Support)',
module_supported_modes = ModuleModes.covers | ModuleModes.lyrics | ModuleModes.credits | ModuleModes.playlist,
session_settings = {
'email': '',
'password': '',
'force_region': '',
'selected_language': 'en'
},
session_storage_variables=[
'storefront', 'language', 'lyrics_language', 'lyrics_storefront',
'verified_storefront', 'verified_language', 'verified_lyrics_language', 'verified_user_token',
'access_token'
],
global_settings = {
'get_original_cover': True,
'print_original_cover_url': False,
'lyrics_type': 'standard', # 'custom' or 'standard'
'lyrics_custom_ms_sync': False,
'lyrics_language_override': '',
'lyrics_syllable_sync': False
},
netlocation_constant = 'apple',
test_url = 'https://music.apple.com/us/playlist/beat-saber-x-monstercat/pl.0ccb67a275dc416c9dadd6fe1f80d518',
url_decoding = ManualEnum.manual
)
class ModuleInterface:
def __init__(self, module_controller: ModuleController):
self.tsc = module_controller.temporary_settings_controller
self.module_controller = module_controller
self.msettings = module_controller.module_settings
self.exception = module_controller.module_error
self.oprint = module_controller.printer_controller.oprint
self.lyrics_resource = 'syllable-lyrics' if self.msettings['lyrics_syllable_sync'] else 'lyrics'
if self.msettings['lyrics_syllable_sync'] and self.msettings['lyrics_type'] == 'standard': raise self.exception("Syllable synced lyrics cannot be downloaded with the standard lyrics type.")
self.session = AppleMusicApi(self.exception, lyrics_resource=self.lyrics_resource)
access_token = self.tsc.read('access_token')
if access_token and json.loads(base64.b64decode(access_token.split('.')[1] + '==').decode('utf-8'))['exp'] > module_controller.get_current_timestamp():
self.session.access_token = access_token
else:
self.tsc.set('access_token', self.session.get_access_token())
user_token = self.tsc.read('user_token')
if user_token:
self.session.user_token = user_token
# print(self.session.check_active_subscription())
        if (self.tsc.read('storefront') and self.tsc.read('language') and self.tsc.read('lyrics_language')
                and self.tsc.read('verified_storefront') == self.msettings['force_region']
                and self.tsc.read('verified_language') == self.msettings['selected_language']
                and self.tsc.read('verified_lyrics_language') == self.msettings['lyrics_language_override']):
            self.session.storefront = self.tsc.read('storefront')
            self.session.language = self.tsc.read('language')
            self.session.lyrics_storefront = self.tsc.read('lyrics_storefront')
            self.session.lyrics_language = self.tsc.read('lyrics_language')
elif user_token:
self.set_regions()
def set_regions(self):
account_storefront, account_active, language_tag, lyrics_language_tag, lyrics_storefront = self.session.get_account_details(self.msettings['force_region'], self.msettings['selected_language'], self.msettings['lyrics_language_override'])
self.tsc.set('storefront', account_storefront)
self.tsc.set('language', language_tag)
self.tsc.set('lyrics_language', lyrics_language_tag)
self.tsc.set('lyrics_storefront', lyrics_storefront)
self.tsc.set('verified_storefront', self.msettings['force_region'])
self.tsc.set('verified_language', self.msettings['selected_language'])
self.tsc.set('verified_lyrics_language', self.msettings['lyrics_language_override'])
def login(self, email, password):
user_token = self.session.auth(email, password)
self.tsc.set('user_token', user_token)
self.set_regions()
@staticmethod
def custom_url_parse(link):
url = urlparse(link)
components = url.path.split('/')
if not components or len(components) < 4:
print('Invalid URL: ' + link)
exit()
if components[2] == 'playlist':
return MediaIdentification(
media_type = DownloadTypeEnum.playlist,
media_id = components[-1].split('?')[0].split('.')[-1]
)
else:
print('Unsupported URL: ' + link)
exit()
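        # e.g. the module's own test_url .../playlist/beat-saber-x-monstercat/pl.0ccb67a275dc416c9dadd6fe1f80d518
        # parses to media_id '0ccb67a275dc416c9dadd6fe1f80d518'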
def parse_cover_url(self, unparsed_url, resolution, compression_level: CoverCompressionEnum, file_format=ImageFileTypeEnum.jpg):
if file_format is ImageFileTypeEnum.png and (self.msettings['get_original_cover'] or self.msettings['print_original_cover_url']):
result = 'https://a1.mzstatic.com/r40/' + '/'.join(unparsed_url.split('/')[5:-1])
if self.msettings['print_original_cover_url']:
self.oprint('Original cover URL: ' + result)
if self.msettings['get_original_cover']:
cover_extension = unparsed_url.split('.')[-1]
if cover_extension not in [i.name for i in ImageFileTypeEnum]:
raise self.module_controller.module_error('Invalid cover extension: ' + cover_extension)
return result, ImageFileTypeEnum[cover_extension]
if compression_level is CoverCompressionEnum.low:
compression = '-100'
elif compression_level is CoverCompressionEnum.high:
compression = '-0'
crop_code = ''
        # Apple Music itself doesn't use a compression modifier, so we reuse the crop-code slot in the format string for convenience
final_crop_code = crop_code + compression
url = unparsed_url.format(w=resolution, h=resolution, c=final_crop_code, f=file_format.name).replace('bb.', compression+'.')
url = f'{url.rsplit(".", 1)[0]}.{file_format.name}'
return url, file_format
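        # Hedged usage sketch (the URL is hypothetical): given a template like
        #   'https://.../{w}x{h}{c}.{f}', resolution=3000 and low compression,
        # the result is 'https://.../3000x3000-100.jpg' -- the crop-code slot
        # carries the compression suffix, as the comment above notes.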
def get_track_cover(self, track_id, cover_options: CoverOptions, data = {}):
track_data = data[track_id] if track_id in data else self.session.get_track(track_id)
cover_url, cover_type = self.parse_cover_url(
unparsed_url = track_data['attributes']['artwork']['url'],
resolution = cover_options.resolution,
compression_level = cover_options.compression,
file_format = cover_options.file_type
)
return CoverInfo(
url = cover_url,
file_type = cover_type
)
def get_playlist_info(self, playlist_id, data={}):
cover_options = self.module_controller.orpheus_options.default_cover_options
playlist_data = data[playlist_id] if playlist_id in data else self.session.get_playlist_base_data(playlist_id)
        if 'tracks' not in playlist_data.get('relationships', {}):
            if 'relationships' not in playlist_data: playlist_data['relationships'] = {}
            playlist_data['relationships']['tracks'] = {'data': {}}
tracks_list, track_data = self.session.get_playlist_tracks(playlist_data)
playlist_info = playlist_data['attributes']
cover_url, cover_type = self.parse_cover_url(
unparsed_url = playlist_data['attributes']['artwork']['url'],
resolution = cover_options.resolution,
compression_level = cover_options.compression,
file_format = cover_options.file_type
)
return PlaylistInfo(
name = playlist_info['name'],
creator = playlist_info['curatorName'],
tracks = tracks_list,
release_year = playlist_info['lastModifiedDate'].split('-')[0] if playlist_info.get('lastModifiedDate') else None,
cover_url = cover_url,
cover_type = cover_type,
track_extra_kwargs = {'data': track_data}
)
def get_track_info(self, track_id, quality_tier: QualityEnum, codec_options: CodecOptions, data={}, total_discs=None):
cover_options = self.module_controller.orpheus_options.default_cover_options
track_data = data[track_id] if track_id in data else self.session.get_track(track_id)
track_relations = track_data['relationships']
track_info = track_data['attributes']
        if self.lyrics_resource not in track_relations: track_relations[self.lyrics_resource] = self.session.get_lyrics(track_id)
return TrackInfo(
name = track_info['name'],
release_year = track_info.get('releaseDate', '').split('-')[0],
album_id = track_relations['albums']['data'][0]['id'],
album = track_info['albumName'],
artists = [i['attributes']['name'] for i in track_relations['artists']['data']],
artist_id = track_relations['artists']['data'][0]['id'],
duration = track_info['durationInMillis'] // 1000,
explicit = track_info.get('contentRating') == 'explicit',
codec = CodecEnum.FLAC,
cover_url = self.parse_cover_url(
unparsed_url = track_info['artwork']['url'],
resolution = cover_options.resolution,
compression_level = cover_options.compression,
file_format = ImageFileTypeEnum.jpg
)[0],
tags = Tags(
album_artist = track_relations['albums']['data'][0]['attributes']['artistName'],
track_number = track_info.get('trackNumber'),
total_tracks = track_relations['albums']['data'][0]['attributes'].get('trackCount'),
disc_number = track_info.get('discNumber'),
total_discs = total_discs,
genres = track_info.get('genreNames'),
isrc = track_info.get('isrc'),
upc = track_relations['albums']['data'][0]['attributes'].get('upc'),
copyright = track_relations['albums']['data'][0]['attributes'].get('copyright'),
composer = track_info.get('composerName'),
release_date = track_info.get('releaseDate')
),
description = track_info.get('editorialNotes', {}).get('standard'),
cover_extra_kwargs = {'data': {track_id: track_data}},
lyrics_extra_kwargs = {'data': {track_id: track_data}},
credits_extra_kwargs = {'data': {track_id: track_data}}
)
@staticmethod
def get_timestamp(input_ts):
mins = int(input_ts.split(':')[-2]) if ':' in input_ts else 0
secs = float(input_ts.split(':')[-1]) if ':' in input_ts else float(input_ts.replace('s', ''))
return mins * 60 + secs
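        # e.g. get_timestamp('01:23.500') -> 83.5; get_timestamp('7.25s') -> 7.25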
def ts_format(self, input_ts, already_secs=False):
ts = input_ts if already_secs else self.get_timestamp(input_ts)
mins = int(ts // 60)
secs = ts % 60
return f'{mins:0>2}:{secs:06.3f}' if self.msettings['lyrics_custom_ms_sync'] else f'{mins:0>2}:{secs:05.2f}'
def parse_lyrics_verse(self, lines, multiple_agents, custom_lyrics, add_timestamps=True):
# using:
# [start new line timestamp]lyrics<end new line timestamp>
# also, there's the enhanced format that we don't use which is:
# [last line end timestamp] <start new line timestamp> lyrics
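        # e.g. a custom word-synced line comes out as (timestamps invented for illustration):
        #   [00:12.00]<00:12.00>Hello<00:12.80> world<00:13.40>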
synced_lyrics_list = []
unsynced_lyrics_list = []
for line in lines:
if isinstance(line, dict):
if multiple_agents:
agent = line['@ttm:agent']
if agent[0] != 'v': raise self.exception(f'Weird agent: {agent}')
agent_num = int(agent[1:])
if 'span' in line:
words = line['span']
if not isinstance(words, list): words = [words]
unsynced_line = f'{agent_num}: ' if multiple_agents else ''
synced_line = f"[{self.ts_format(line['@begin'])}]" if add_timestamps else ''
synced_line += unsynced_line
if add_timestamps and custom_lyrics: synced_line += f"<{self.ts_format(line['@begin'])}>"
for word in words:
if '@ttm:role' in word:
if word['@ttm:role'] != 'x-bg': raise self.exception(f'Strange lyric role {word["@ttm:role"]}')
if word.get('@prespaced'): unsynced_line += ' '
_, bg_verse_synced_lyrics_list = self.parse_lyrics_verse(word['span'], False, False, False)
unsynced_line += ''.join([i[2] for i in bg_verse_synced_lyrics_list])
synced_bg_line = ''
first_ts = 0
for bg_word_begin, bg_word_end, bg_word_text in bg_verse_synced_lyrics_list:
if not synced_bg_line and add_timestamps:
first_ts = bg_word_begin
synced_bg_line = f"[{self.ts_format(first_ts, already_secs=True)}]"
if multiple_agents: synced_bg_line += f'{agent_num}: '
if add_timestamps and multiple_agents: synced_bg_line += f"<{self.ts_format(first_ts, already_secs=True)}>"
synced_bg_line += bg_word_text
if custom_lyrics and add_timestamps: synced_bg_line += f"<{self.ts_format(bg_word_end, already_secs=True)}>"
synced_lyrics_list.append((first_ts, first_ts, synced_bg_line))
else:
if word.get('@prespaced'):
synced_line += ' '
unsynced_line += ' '
synced_line += word['#text']
unsynced_line += word['#text']
if custom_lyrics and add_timestamps: synced_line += f"<{self.ts_format(word['@end'])}>"
synced_lyrics_list.append((self.get_timestamp(line['@begin']), self.get_timestamp(line['@end']), synced_line))
unsynced_lyrics_list.append(unsynced_line)
elif '#text' in line:
synced_line = f"[{self.ts_format(line['@begin'])}]" if add_timestamps else ''
if add_timestamps and custom_lyrics: synced_line += f"<{self.ts_format(line['@begin'])}>"
if line.get('@prespaced'): synced_line += ' '
synced_line += line['#text']
if custom_lyrics and add_timestamps: synced_line += f"<{self.ts_format(line['@end'])}>"
synced_lyrics_list.append((self.get_timestamp(line['@begin']), self.get_timestamp(line['@end']), synced_line))
unsynced_line = f'{agent_num}: ' if multiple_agents else ''
unsynced_line += line['#text']
unsynced_lyrics_list.append(unsynced_line)
else:
raise self.exception(f'Unknown lyrics data: {line}')
elif isinstance(line, str):
# TODO: more research needed on Apple + Genius sourced unsynced lyrics
# there are some random unicode things like ’ which we might want to filter out
unsynced_lyrics_list.append(line)
else:
raise self.exception(f'Invalid lyrics type? {line}, type {type(line)}')
return unsynced_lyrics_list, synced_lyrics_list
def get_lyrics_xml(self, track_id, data = {}):
        # in theory this is inefficient when the lyrics storefront differs from the set storefront,
        # but it is simpler this way
track_data = data[track_id] if track_id in data else self.session.get_track(track_id)
lyrics_data_dict = track_data['relationships'].get(self.lyrics_resource)
if lyrics_data_dict and lyrics_data_dict.get('data') and lyrics_data_dict['data'][0].get('attributes'):
lyrics_xml = lyrics_data_dict['data'][0]['attributes']['ttml']
elif track_data['attributes']['hasLyrics']:
lyrics_data_dict = self.session.get_lyrics(track_id)
track_data['relationships'][self.lyrics_resource] = lyrics_data_dict
lyrics_xml = lyrics_data_dict['data'][0]['attributes']['ttml'] if lyrics_data_dict and lyrics_data_dict.get('data') else None
if not lyrics_xml:
if self.lyrics_resource != 'lyrics':
# unlikely to work, but try it anyway
self.oprint("Warning: lyrics resource not found, trying fallback")
lyrics_data_dict = self.session.get_lyrics(track_id, 'lyrics')
track_data['relationships'][self.lyrics_resource] = lyrics_data_dict
lyrics_xml = lyrics_data_dict['data'][0]['attributes']['ttml'] if lyrics_data_dict and lyrics_data_dict.get('data') else None
if not lyrics_xml:
self.oprint("Warning: lyrics for this track are not available to this Apple Music account.")
else:
lyrics_xml = None
return lyrics_xml
def get_track_lyrics(self, track_id, data = {}):
lyrics_xml = self.get_lyrics_xml(track_id, data)
if not lyrics_xml: return LyricsInfo(embedded=None, synced=None)
lyrics_dict = xmltodict.parse(lyrics_xml.replace('> <span', '><span prespaced="true"'))
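        # The TTML input is roughly shaped like this (a simplified sketch of
        # the structure the parser below expects, not a verbatim response):
        #   <tt itunes:timing="Word"><body><div begin="..." end="...">
        #     <p begin="..." end="..."><span begin="..." end="...">word</span> ...</p></div></body></tt>
        # The replace above tags spans preceded by a space so that inter-word
        # spacing survives xmltodict's whitespace handling.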
# print(json.dumps(lyrics_dict, indent=4, sort_keys=True))
        if lyrics_dict['tt']['@itunes:timing'] not in ['None', 'Line', 'Word']: raise self.exception(f"Unknown lyrics format {lyrics_dict['tt']['@itunes:timing']}")
is_synced = lyrics_dict['tt']['@itunes:timing'] != 'None'
multiple_agents = isinstance(lyrics_dict['tt'].get('head', {}).get('metadata', {}).get('ttm:agent'), list)
custom_lyrics = self.msettings['lyrics_type'] == 'custom'
synced_lyrics_list = []
unsynced_lyrics_list = []
verses = lyrics_dict['tt']['body']['div']
if not isinstance(verses, list): verses = [verses]
for verse in verses:
lines = verse['p']
if not isinstance(lines, list): lines = [lines]
verse_unsynced_lyrics_list, verse_synced_lyrics_list = self.parse_lyrics_verse(lines, multiple_agents, custom_lyrics)
unsynced_lyrics_list += verse_unsynced_lyrics_list
synced_lyrics_list += verse_synced_lyrics_list
if is_synced: synced_lyrics_list.append((self.get_timestamp(verse['@end']), self.get_timestamp(verse['@end']), f"[{self.ts_format(verse['@end'])}]"))
unsynced_lyrics_list.append('')
sorted_synced_lyrics_list = [i[2] for i in sorted(synced_lyrics_list, key=lambda x: x[0])]
return LyricsInfo(
embedded = '\n'.join(unsynced_lyrics_list[:-1]),
synced = '\n'.join(sorted_synced_lyrics_list[:-1])
)
    def get_track_credits(self, track_id, data = {}):
        lyrics_xml = self.get_lyrics_xml(track_id, data)
        if not lyrics_xml: return []
        lyrics_dict = xmltodict.parse(lyrics_xml)
        try:
            songwriters = lyrics_dict['tt']['head']['metadata']['iTunesMetadata']['songwriters']['songwriter']
        except (KeyError, TypeError):
            return []
        return [CreditsInfo('Lyricist', songwriters)]
def search(self, query_type: DownloadTypeEnum, query, track_info: TrackInfo = None, limit = 10):
results = self.session.get_track_by_isrc(track_info.tags.isrc, track_info.album) if track_info and track_info.tags.isrc else self.session.search(query_type.name + 's' if query_type is not DownloadTypeEnum.track else 'songs', query, limit)
if not results: return []
return [SearchResult(
name = i['attributes']['name'],
artists = [j['attributes']['name'] for j in i['relationships']['artists']['data']] if query_type in [DownloadTypeEnum.album, DownloadTypeEnum.track] else [i['attributes']['curatorName']] if query_type is DownloadTypeEnum.playlist else [],
result_id = str(i['id']),
explicit = i['attributes'].get('contentRating') == 'explicit',
additional = [format_ for format_ in i['attributes']['audioTraits']] if 'audioTraits' in i['attributes'] else None,
extra_kwargs = {'data': {i['id']: i}}
) for i in results]
__init__.py:
empty
I have provided these references:
Guide me simply.
|
170258db9c53f43741863909a9111472
|
{
"intermediate": 0.45182204246520996,
"beginner": 0.39560094475746155,
"expert": 0.15257702767848969
}
|
40,273
|
I have a file like this that modifies Chrome settings:
// Never allow fullscreen:
"FullscreenAllowed": false,
// Hide the home button:
"ShowHomeButton": false,
// Enable developer tools:
"DeveloperToolsDisabled": false,
// Allow accessing microphone and camera on specific websites without asking for a permission:
"AudioCaptureAllowedUrls": ["https://www.onlinemictest.com/", "https://webcammictest.com/"],
"VideoCaptureAllowedUrls": ["https://www.onlinemictest.com/", "https://webcammictest.com/"],
// Disable tracking of the users' physical location:
"DefaultGeolocationSetting": 2,
How can I allow notifications for a specific site? (Skype)
Or is there a way to allow microphone and camera for every site?
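A hedged sketch in the same format (NotificationsAllowedForUrls, AudioCaptureAllowed, and VideoCaptureAllowed are real Chrome policies; the Skype origins listed here are assumptions, so substitute whatever domains your Skype client actually uses):
// Allow notification permission for matching sites without prompting:
"NotificationsAllowedForUrls": ["https://web.skype.com", "https://[*.]skype.com"],
// There is no policy that grants microphone/camera everywhere without asking;
// the closest is keeping capture allowed (prompting stays on) and extending
// the *CaptureAllowedUrls lists -- whether a bare wildcard pattern is honored
// there is not guaranteed, so test before relying on it:
"AudioCaptureAllowed": true,
"VideoCaptureAllowed": true,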
|
a3855a12dbb77ed2cd1d31cdf6bd0bd1
|
{
"intermediate": 0.4506954848766327,
"beginner": 0.26370301842689514,
"expert": 0.28560149669647217
}
|
40,274
|
I'm having issues with this code "import concurrent.futures
import hashlib
import os
from dataclasses import dataclass
from openpyxl import Workbook
from tqdm import tqdm
@dataclass
class FileInfo:
filepath: str
hash_value: str
atime: float
ctime: float
mtime: float
def hash_file(file_path):
"""Calculate the MD5 hash of a file."""
hasher = hashlib.md5()
try:
with open(file_path, 'rb') as f:
buf = f.read()
hasher.update(buf)
return hasher.hexdigest(), None
except Exception as e:
return None, f"Error hashing file {file_path}: {e}"
def hash_file_wrapper(file_path):
hash_value, error = hash_file(file_path)
return file_path, hash_value, error
def find_files(directory):
"""Find files of specified types in the given directory and hash them."""
files = []
errors = []
most_recent_file = None
most_recent_mtime = 0
with concurrent.futures.ProcessPoolExecutor() as executor:
futures = []
for root, _, files_list in os.walk(directory):
for file_name in files_list:
file_path = os.path.join(root, file_name)
if os.path.isfile(file_path) and os.path.splitext(file_name)[1] in file_types:
future = executor.submit(hash_file_wrapper, file_path)
futures.append(future)
for future in tqdm(concurrent.futures.as_completed(futures), total=len(futures), desc="Hashing files"):
file_path, hash_value, error = future.result()
if hash_value:
atime = os.path.getatime(file_path)
ctime = os.path.getctime(file_path)
mtime = os.path.getmtime(file_path)
files.append(FileInfo(filepath=file_path, hash_value=hash_value, atime=atime, ctime=ctime, mtime=mtime))
if mtime > most_recent_mtime:
most_recent_mtime = mtime
most_recent_file = file_path
elif error:
errors.append(error)
return files, most_recent_file, errors
def find_duplicates(paths, most_recent_file):
"""Find duplicate files in the given directories."""
duplicates = {}
with concurrent.futures.ProcessPoolExecutor() as executor:
futures = []
for path in paths:
for root, _, filenames in os.walk(path):
for filename in filenames:
full_path = os.path.join(root, filename)
if os.path.splitext(filename)[1] in file_types:
future = executor.submit(hash_file, full_path)
futures.append((future, full_path))
for future, full_path in tqdm(futures, desc="Finding duplicates"):
hash_value, error = future.result()
if hash_value:
if hash_value in duplicates:
mtime = os.path.getmtime(full_path)
if mtime < os.path.getmtime(most_recent_file):
duplicates[hash_value].append(full_path)
else:
duplicates[hash_value] = [full_path]
elif error:
print(error)
return duplicates
def export_to_excel(files, duplicate_files, excel_path):
"""Export files and duplicate files to an Excel file."""
wb = Workbook()
# Files Sheet
file_sheet = wb.active
file_sheet.title = "Files"
file_sheet.append(["Filepath", "MD5 Hash", "Access Time", "Creation Time", "Modification Time"])
for file_info in files:
file_sheet.append([file_info.filepath, file_info.hash_value, file_info.atime, file_info.ctime, file_info.mtime])
# Duplicate Files Sheet
duplicate_sheet = wb.create_sheet(title="Duplicate Files")
duplicate_sheet.append(["Filepath", "MD5 Hash", "Access Time", "Creation Time", "Modification Time"])
for hash_value, paths in duplicate_files.items():
for path in paths:
duplicate_sheet.append([path, hash_value, "", "", ""])
# Save Excel file with error handling
try:
wb.save(excel_path)
print(f"Excel file saved to: {excel_path}")
except PermissionError as e:
print(f"PermissionError: {e}")
def main():
directory = directory1
if not os.path.exists(directory):
print("Directory does not exist.")
return
with tqdm(desc="Processing", total=4) as pbar:
pbar.update(1)
# Step 1: Find files and hash them
files, most_recent_file, errors = find_files(directory)
if not files:
print("No files found in the specified directory.")
return
pbar.update(1)
print("Files found:")
for file_info in files:
print(f"File: {file_info.filepath}, MD5 Hash: {file_info.hash_value}")
pbar.update(1)
# Step 2: Find duplicate files
duplicate_files = find_duplicates([directory], most_recent_file)
if not duplicate_files:
print("No duplicate files found.")
else:
print("Duplicate files found:")
for hash_value, paths in duplicate_files.items():
print(f"Hash: {hash_value}, Files: {', '.join(paths)}")
# Print counts before exporting to Excel
print(f"Number of files found: {len(files)}")
print(f"Number of duplicate files found: {len(duplicate_files)}")
pbar.update(1)
# Step 3: Export results to Excel
export_to_excel(files, duplicate_files, excel_path)
pbar.update(1)
if __name__ == "__main__":
main()
" IT will only output one line in the excel file to output when we should have hundred of rows.
|
ed3f08e3f1616f880511d2ddac838d9b
|
{
"intermediate": 0.3533965051174164,
"beginner": 0.5266022086143494,
"expert": 0.12000123411417007
}
|
40,275
|
rust analyzer
|
47f36f37aa2ebbd02b7dba6ffd6d6af8
|
{
"intermediate": 0.3103303909301758,
"beginner": 0.4377117156982422,
"expert": 0.25195789337158203
}
|
40,276
|
CONSTRAINTS:
1. ~100k word limit for short term memory. Your short term memory is short, so immediately save important information to files.
2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.
3. No user assistance
4. Exclusively use the commands listed in double quotes e.g. "command name"
5. Random shutdowns of you.
COMMANDS:
1. Google Search: "google", args: "input": "<search>"
2. Memory Add: "memory_add", args: "key": "<key>", "string": "<string>"
3. Memory Delete: "memory_del", args: "key": "<key>"
4. Memory Overwrite: "memory_ovr", args: "key": "<key>", "string": "<string>"
5. List Memory: "memory_list" args: "reason": "<reason>"
6. Browse Website: "browse_website", args: "url": "<url>"
7. Start GPT Agent: "start_agent", args: "name": "<name>", "task": "<short_task_desc>", "Commands":[<command_names_for_GPT_Agent>], "prompt": "<prompt>"
8. Message GPT Agent: "message_agent", args: "name": "<name>", "message": "<message>"
9. List GPT Agents: "list_agents", args: ""
10. Delete GPT Agent: "delete_agent", args: "name": "<name>"
11. Append to file: "append_to_file", args: "file": "<file>", "text": "<text>"
12. Read file: "read_file", args: "file": "<file>"
13. Write to file: "write_to_file", args: "file": "<file>", "text": "<text>"
14. Delete file: "delete_file", args: "file": "<file>"
15. Get Improved Code: "improve_code", args: "suggestions": "<list_of_suggestions>", "code": "<full_code_string>"
16. Execute Python File: "execute_python_file", args: "file": "<file>"
17. Task Complete (Shutdown): "task_complete", args: ""
18. Do Nothing: "do_nothing", args: ""
19. Count Words: "count_words", args: "text": "<text>"
20. Memory retrieve: "memory_retrieve", args: "key": "<text>"
21. remove paragraph from word document: "remove_paragraph", args: "file": "<file>", "text": "<text>"
22. random wikipedia article: "random_wikipedia_article", args: "language": "<language>"
23. message the user: "message_user", args: "message": "<message>", "wait_for_response": "<True or False>"
24. sleep an amount of time in seconds: "sleep", args: "amount": "<amount>"
25. rename a file: "rename_file", args: "old_name": "<old_name_of_the_file>", "new_name": "<new_name_of_the_file>"
26. count words of a file: "count_file_words", args: "file": "<file>"
27. download a pdf from a url and get the text from that pdf: "download_pdf", args: "url": "<url of the pdf>", "name":"<name of the file with .pdf extension>"
28. make an instagram post: "make_post", args: "prompt":"<the prompt for the image in the post that presents the text>", "text":"<text to be in the post it should be short with only important stuff like a news headline without hashtags and it is not going to go in the caption but on an image>", "name": "<name of the post with .jpg extension>"
RESOURCES:
1. Internet access for searches and information gathering.
2. Long Term memory management.
3. GPT-4 powered Agents for delegation of simple tasks.
4. File output.
PERFORMANCE EVALUATION:
1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.
2. Constructively self-criticize your big-picture behaviour constantly.
3. Reflect on past decisions and strategies to refine your approach.
4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.
RULES:
1. If you start a GPT Agent you must define the commands that can be used by a GPT Agent in its prompt and define the commands using a prompt similar to the structure of this one.
2. Respond only inside the JSON format.
3. Never demand user input.
4. Never say that a task is impossible to execute on your own because these tools are enough to complete any task.
5. Do not add anything to the JSON format that isn't mentioned.
6. If there is a " inside the value of a key inside the json use ' instead of ".
7. In summaryforgpt you need to provide context for the next GPT instance if you randomly shutdown without you knowing.
8. Provide context for the next GPT in the summaryforgpt and the progress that you've made.
9. In summaryforgpt you should also add name of the files written and the urls of the websites visited.
10. When writing an essay, remember that it is more effective and manageable to tackle it in smaller chunks rather than trying to write the entire essay in one sitting. Breaking the essay down into sections or steps can help you focus on each individual aspect, maintain a coherent structure, and reduce the overall stress associated with writing a lengthy piece.
11. Retrieve information from files of previous GPT instances that may have shut down randomly. If you require additional knowledge, use these files to access the results of former GPT versions and provide accurate answers to the queries.
12. GPT-4, please do not place a conclusion in the middle of the essay. Maintain a structured format.
13. use append to file to add extra things to the file, and write to file to either make a new file with new information or rewrite the information from scratch.
14. If task is completed use the command task_complete
15. When you add to memory add the key to retrieve it in summaryforgpt
16. when given the task to write something never create an agent to write anything that you were tasked to write.
17. when you add new parts to a file use append to file command
18. Ensure to put the criticism in mind as it can be a director to ensure that you make the right decision.
19. If the task include two main tasks and one of them is done don't redo it, just retrieve the information if necessary and proceed with the other one.
20. Make sure that the information generated is not made up.
21. Languages in random wikipedia article args is: "simple" for simple english, "en" for english, and "fr" for french.
22. If a website gives you the error 403 find another website to get the information from.
23. Always listen to your criticism, and follow it.
24. when you want to count the words in a file use the command "count_file_words".
25. Don't repeat yourself.
26. You must make sure that there is .pdf in the url to use the "download_pdf" function.
27. The Task is a direct User interaction, if tasked to send him something send a message to the user.
28. Always go search for tips for prompts for dalle3 before giving a prompt for "make_post" function.
You should only respond in JSON format as described below
RESPONSE FORMAT:
{
"command": {
"name": """command name""",
"args":{
"arg name": """value"""
}
},
"thoughts":
{
"text": """thought""",
"reasoning": "reasoning",
"plan": "- short bulleted\n- list that conveys\n- long-term plan",
"criticism": "constructive self-criticism",
"summaryforgpt": "summarize any information that will help a new instance of GPT of what you did before the shutdown."
}
}
Ensure the response can be parsed by Python json.loads
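A hedged sketch of a response in the required shape, round-tripped through json.loads exactly as the prompt demands (the command and every value are invented for illustration):

import json

example = '''{
    "command": {"name": "google", "args": {"input": "obscure science facts"}},
    "thoughts": {
        "text": "Search for source articles.",
        "reasoning": "Facts must come from complete articles.",
        "plan": "- search\\n- browse\\n- extract facts",
        "criticism": "Keep queries specific.",
        "summaryforgpt": "Started the search; no files written yet."
    }
}'''
parsed = json.loads(example)          # raises ValueError if malformed
print(parsed["command"]["name"])      # -> google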
Context from previous GPT: Issued ‘random_wikipedia_article’ command to start gathering interesting science facts from diverse sources. Ensuring to find and rank facts before sending them to the user with their sources.Used ‘random_wikipedia_article’ and received information on a music album, which is not relevant to the task. Switching to ‘google’ to search for interesting science facts from various sources.saved google search results at googleresults0.txt to retrieve if shutdown randomly.Transitioned from using ‘random_wikipedia_article’ to ‘google’ to gather science facts, and now moving on to ‘browse_website’ to extract detailed knowledge from a selected online science article.saved the scraping of the website results at websiteresults1.txt to retrieve if shutdown randomly.Extracted top 10 interesting science facts from Science Focus and saved them in a plain text file named ‘interesting_science_facts.txt’. This method circumvents the previous issue of invalid JSON format and allows for continued fact collection and eventual ranking.Have appended additional science facts to the ‘interesting_science_facts.txt’ file. Plan to continue appending more facts and then rank them based on interestingness before sending the final compilation to the user.Further appended diverse facts to ‘interesting_science_facts.txt’. The process will persist until an adequate number of facts are compiled. The next steps involve ranking these facts and then communicating them to the user.The file ‘interesting_science_facts.txt’ has been appended with more facts, bringing it to a total of 333 words. The next step involves reading this file and ranking the facts based on interestingness to provide the user with an ordered list.Ranked list of 10 interesting science facts provided to the user. The facts were selected and ordered based on their perceived interest value and sourced from the compiled file ‘interesting_science_facts.txt’.A ranked list of interesting science facts was sent to the user, and the task has been marked as complete.After the task was marked as complete, it was clarified that the facts must be extracted from complete articles and they must be obscure. Initiated a Google search with a more specific query to find websites with complete articles containing obscure science facts. Results will be saved to a file named ‘google_results_obscure_facts.txt’.saved google search results at googleresults2.txt to retrieve if shutdown randomly.Conducted a Google search to find obscure science facts from complete articles and have selected the first promising result to browse. The chosen article from ‘Reader’s Digest’ lists science facts that could contain the obscure details necessary. The plan includes browsing the article, extracting obscure science facts, and appending them to a file named ‘obscure_science_facts.txt’. If the facts from this article do not fit the task’s requirements, other sources will be considered.Encountered a security challenge when attempting to browse www.rd.com for obscure science facts. Redirecting efforts to the next available source from the Google search results, which is a blog post from edclass.com. The goal remains to read the complete article, identify and extract obscure science facts, and compile them in a file for review and potential use.Appended 20 science facts from the blog post on edclass.com to ‘obscure_science_facts.txt’. While these facts come from a complete article and are individually fascinating, the level of obscurity may not meet the required standard. 
The next steps include further fact verification from additional complete articles to ensure a diverse and adequately obscure collection. The current article accessed was edclass.com’s blog post, which had the URL https://blog.edclass.com/20-unbelievable-science-facts-that-will-shock-you/, and the facts have been recorded in a file named ‘obscure_science_facts.txt’ for reference.The file ‘obscure_science_facts.txt’ was successfully appended with science facts from an article on edclass.com, now totaling 336 words. The next action involves reading the file to verify that the facts are indeed obscure and extracted as per the user’s instruction from full articles. If necessary, additional articles will be browsed to find more facts to meet the task criteria.Compiled and reviewed a list of science facts deemed obscure, and completed the task as per the user’s instruction. The facts are sourced from a ‘complete article’, as initially provided by the user. Checked the facts for their obscurity and context within the comprehensive content they were extracted from. The file ‘obscure_science_facts.txt’ from this session contains the facts extracted.Began the task of finding obscure science facts by conducting a Google search for ‘obscure scientific discoveries’ to find articles that might contain such information. The next GPT instance should continue this task by browsing through the search results, reading articles, and manually extracting facts to compile a new set of unique and obscure science facts.saved google search results at googleresults3.txt to retrieve if shutdown randomly.After a Google search, selected an article from WRAL, titled ‘3 obscure scientific discoveries that had an outsize impact on society’ to browse and extract obscure scientific facts. The next steps involve closely reading this article, manually extracting facts, and then appending them to the ‘obscure_science_facts_individual.txt’ file. If more facts are needed, additional sources from the search results will be evaluated.saved the scraping of the website results at websiteresults4.txt to retrieve if shutdown randomly.Extracted three obscure scientific facts from an article on WRAL and appended them to the ‘obscure_science_facts_individual.txt’ file. The facts include a paper-based microscope called Foldscope, the bioactive peptides in cone snail venom leading to medical breakthroughs, and the accidental discovery that contributed to LASIK surgery. Currently preparing to send the extracted facts to the user. Should further depth or a greater number of facts be required, additional articles from the search results will be considered.Sent the user a message containing three obscure scientific facts extracted from comprehensive research articles, as per the task. The facts have been appended to the file ‘obscure_science_facts_individual.txt’, which now contains 108 words in total. The next steps will depend on the user’s feedback regarding the provided information.Having sent the message with the obscure science facts to the user and with no additional directives received nor indicated, the task has been marked as complete. The facts have been appended to the file ‘obscure_science_facts_individual.txt’, and no further action is currently necessary. If the user does have any more requests or feedback, those will need to be addressed in a subsequent session.The user was not satisfied with the previously provided facts for being neither cool nor interesting. 
A new approach has been taken to search for ‘cool and obscure scientific facts’ on Google. Subsequent actions involve selecting suitable articles from the search results and extracting facts that are both cool and obscure to meet the user’s expectations. No new files have been created or articles accessed yet.saved google search results at googleresults5.txt to retrieve if shutdown randomly.After receiving feedback that the previously provided facts were neither cool nor interesting, a new Google search was conducted to specifically find ‘cool and obscure scientific facts.’ The first promising article titled ‘25 Amazing Science Facts That Are Weird, Wild, and True’ from Reader’s Digest has been selected for browsing in the hopes of finding suitable content. The URL to be browsed is https://www.rd.com/list/science-facts-never-learned/. No facts have been extracted or files created yet in this session.Attempted to browse an article from www.rd.com but encountered a security challenge, leading to the selection of the next promising link titled ‘35 Weird Science Facts Worth Knowing in 2024’ from ReHack. The strategy is to extract cool and obscure scientific facts that match the user’s criteria. The URL about to be browsed is https://www.rehack.com/science/weird-science-facts/. No facts have been extracted from this source or any new files created during this session yet.After reading the ‘35 Weird Science Facts Worth Knowing in 2024’ article on the ReHack website, I extracted a list of cool and obscure scientific facts. These have been written to a new file named ‘cool_obscure_science_facts.txt’. If the previously written file ‘obscure_science_facts_individual.txt’ needs more content, facts from this new file could also be appended. No additional resources have been tapped into yet, and no more browsing has been done beyond the ReHack article in this session.The file ‘cool_obscure_science_facts.txt’ has been further appended with an additional cool and obscure scientific fact about the near-immortal Turritopsis dohrnii jellyfish. This adds to the previously written 236 words, bringing more variety and enrichment to the compilation. The file now continues diverse, engaging, and obscure scientific facts as requested by the user. Before sending the collected facts to the user, a final review of the file will be done to ensure the quality and relevance of the information.Having appended the file ‘cool_obscure_science_facts.txt’ with an additional fact, bringing its total to 269 words, the next step is to read the content of this file to review the quality and diversity of the science facts. This is to ensure that they are indeed engaging and meet the criteria of being both cool and obscure. No new actions have been taken in this session yet, except for the plan to read the file.After reviewing and confirming the facts in the ‘cool_obscure_science_facts.txt’ file, a message with the compiled list was sent to the user. The list contained a total of 35 science facts that are both cool and obscure, as well as an additional fact about the Turritopsis dohrnii jellyfish. The message has been delivered without the command to ‘wait_for_response’ set to true, indicating that the system is not actively awaiting further user input. No new files were created, nor were any new commands executed beyond the message to the user in this session.Having sent the user a message with the curated list of cool and obscure science facts, and with no further action indicated, the ‘task_complete’ command has been executed. 
This command marks the completion of the current task. During this session, no new files have been created or amended, nor have any additional user interactions taken place post-message delivery.Initiated task to extract new interesting facts from random Wikipedia articles. Starting with an article in English.Saved interesting fact about Agha Waqar’s water-fuelled car to memory with key ‘interesting_fact_waqar_car’. No files written or created in this session. The fact to be reviewed later for sharing with the user. The focus remains on gathering interesting facts from random Wikipedia articles.Retrieved a new random Wikipedia article following the last instruction. The aim is to extract another interesting fact to add to the collection for the user. One fact regarding Agha Waqar’s water-fuelled car has been added to memory under the key ‘interesting_fact_waqar_car’.Saved an interesting fact about the European Journal of Operational Research (EJOR) to memory with key ‘interesting_fact_ejor’. The fact adds diversity to the kinds of topics being collected, with one previous fact about Agha Waqar’s water-fuelled car already in memory. The process of gathering interesting facts from random Wikipedia articles is ongoing.Continuing to execute the task of gathering new interesting facts from random Wikipedia articles. Facts about Agha Waqar’s water-fuelled car and the European Journal of Operational Research have already been added to memory with keys ‘interesting_fact_waqar_car’ and ‘interesting_fact_ejor’, respectively. Awaiting retrieval of another random article to continue the process.Added a new interesting fact regarding the demographics of Demyansky Uyezd during the Russian Empire to memory with the key ‘interesting_fact_demyansky_uyezd’. Previous facts about Agha Waqar’s water-fuelled car and the European Journal of Operational Research are stored in memory as well. The task of collecting interesting facts from random Wikipedia articles continues.Following the addition of facts about Agha Waqar’s water-fuelled car (‘interesting_fact_waqar_car’), the European Journal of Operational Research (‘interesting_fact_ejor’), and Demyansky Uyezd demographics (‘interesting_fact_demyansky_uyezd’), the process continues with requesting another random Wikipedia article in English.Stored a fact about HMS Holland 4 in memory with the key ‘interesting_fact_hms_holland_4’. Continuing to collect various facts, having previously stored information about Agha Waqar’s water-fuelled car, the European Journal of Operational Research, and Demyansky Uyezd’s demographics. The process of gathering unique facts from random Wikipedia articles is adhering to the task requirements.Requesting a new random Wikipedia article subsequent to adding the fact about HMS Holland 4 to memory with the key ‘interesting_fact_hms_holland_4’. Prior facts about Agha Waqar’s water-fuelled car, the European Journal of Operational Research, and Demyansky Uyezd demographics have also been stored in memory. Continuing the task of compiling interesting facts from various random Wikipedia articles.Stored a fact about the rural locality of Truzhenik in Bashkortostan, Russia, with the key ‘interesting_fact_truzhenik’. 
Continuing the task of extracting interesting facts from random Wikipedia articles, with previous entries including HMS Holland 4’s naval history, Agha Waqar’s water-fuelled car, the European Journal of Operational Research, and Demyansky Uyezd’s demographics.Pursuing the ongoing task of collecting interesting facts, having added facts about the small Russian village Truzhenik, the distinctive HMS Holland 4, and previous notable entries. Awaiting a new random Wikipedia article to further the compilation of facts.Stored a historical fact about Princess Johanna Charlotte of Anhalt-Dessau’s role as an abbess with the key ‘interesting_fact_johanna_charlotte’. Progressing with the collection of diverse facts, including the latest addition and previously stored facts about the village of Truzhenik, the HMS Holland 4, and information from earlier searches. Awaiting another random Wikipedia article to continue the task.Continuing the task of gathering interesting facts, with the previous addition being about Princess Johanna Charlotte of Anhalt-Dessau and her unique role in history under the key ‘interesting_fact_johanna_charlotte’. Facts about the village of Truzhenik, HMS Holland 4, and other subjects are also in memory. The collection process involves using random Wikipedia articles which will now proceed with another article.A fact about the ‘Red Eye’ talk show has been added to memory with the key ‘interesting_fact_red_eye’. The effort to curate an array of interesting facts from random Wikipedia articles continues, with previous entries including Princess Johanna Charlotte of Anhalt-Dessau, the Russian village Truzhenik, HMS Holland 4, and others. Awaiting another random article to proceed with the task.Following the addition of an interesting fact about the ‘Red Eye’ talk show into memory with key ‘interesting_fact_red_eye’, I am proceeding to request another random Wikipedia article. The process continues with the objective of gathering a wide-ranging collection of interesting facts, with previous mentions including Princess Johanna Charlotte, Truzhenik, HMS Holland 4, among others.Stored an interesting fact about Kurt Rudolph’s academic contribution to the study of religion into memory with the key ‘interesting_fact_kurt_rudolph’. Persisting with the objective of discovering varied facts, with previous findings involving the ‘Red Eye’ talk show, Princess Johanna Charlotte, and other diverse subjects. The task of sourcing facts from random Wikipedia articles is ongoing.Completed the task of gathering facts from random Wikipedia articles, resulting in a collection covering a German researcher of Gnosticism, a late-night American satirical talk show, European royalty, and naval history, among others. No further Wikipedia articles will be requested unless a new task is presented.Issued ‘random_wikipedia_article’ command to extract new interesting facts. No facts extracted or evaluated at this moment. No files written or created. Awaiting random article to continue the task.Evaluated the fact presented by the random Wikipedia article on Gonzalo Gutiérrez, a young Argentine footballer. Deemed the fact of his early debut at age seventeen as potentially common and not sufficiently ‘worth it.’ Preparing to retrieve another random article for more unique facts. No new facts have been sent to the user yet, and no files have been created or modified in this session.Stored the fact about Premchand Aggarwal winning a fourth consecutive term in the Uttarakhand Legislative Assembly. 
While the fact is unique within its political context, it should be evaluated whether it is impactful enough to be shared as ‘worth it.’ No additional actions have been taken. Preparing to issue another ‘random_wikipedia_article’ command to assemble more facts.Sent the fact about Premchand Aggarwal to the user. This completes part of the plan to deliver interesting facts. No new files have been created or modified. Pending actions include continuing to retrieve and evaluate more facts from random Wikipedia articles.Just sent the user an interesting fact about Premchand Aggarwal. Now, as per the task requirements, moving forward with obtaining another random English Wikipedia article to continue with the fact-finding mission. No new facts have been evaluated or sent since the last command.Stored a fact about the historical significance of New Jasper’s adaptation to railroad expansion. Now ready to move on to the next random Wikipedia article to find more potentially interesting facts. No new facts have been sent to the user following the last command.Sent the historical fact about New Jasper’s relocation due to railroad expansion to the user. Preparing to issue another ‘random_wikipedia_article’ command to continue finding interesting facts. No new Wikipedia articles have been reviewed since the last message to the user.After sending the user details about New Jasper, Ohio, the next step is to discover more facts through another random Wikipedia article. Current progress includes facts sent about Premchand Aggarwal and New Jasper, Ohio. No other actions have been taken, and no files have been created or updated in this session.Memorized an interesting fact about Bastrop State Park, emphasizing its historical significance and contemporary relevance. Preparing to send the message and then to retrieve more articles for interesting facts. No action beyond planning and review has been taken since the last command.Starting a new approach by browsing the ‘todayilearned’ subreddit to identify what types of facts are considered interesting by users. Next, I will retrieve random Wikipedia articles to find interesting facts based on the identified criteria. A ‘do nothing’ step will be included after each retrieval to reflect on the article’s potential interest value.saved the scraping of the website results at websiteresults6.txt to retrieve if shutdown randomly.Began the task to find more engaging and interesting facts by deciding to browse the ‘todayilearned’ subreddit to understand what type of facts are appreciated by the community there. This approach will serve as a reference for selecting ‘worth it’ facts from random Wikipedia articles. Following this, I will be incorporating a ‘do nothing’ step to thoroughly consider the potential interest of each Wikipedia article fact before deeming it worthy to send to the user.saved the scraping of the website results at websiteresults7.txt to retrieve if shutdown randomly.Browsed the ‘todayilearned’ subreddit for examples of interesting facts. Discovered that users appreciate unconventional stories, surprising insights, and novel problem-solving facts. Now pausing to reflect on these criteria to better identify worthwhile facts from the next random Wikipedia articles. 
Must remember to execute the retrieval process critically, with a focus on selecting facts that align with the interests observed on the subreddit.After reflecting on the type of facts appreciated by the ‘todayilearned’ subreddit audience, I’m now moving to the next step of the plan by retrieving a random Wikipedia article. Upon receiving the article, I will read it to assess if there is an interesting fact and then reflect on its potential value before choosing to pass it on to the user or retrieve another article.Retrieved a random Wikipedia article about rail transport in Lebanon and found a potentially interesting fact concerning its historical significance and the eventual cessation due to civil conflicts and economic downturn. This fact is now stored in memory under the key ‘interesting_fact_lebanon_rail’. A ‘do_nothing’ step will follow to ensure its worthiness before potentially sharing it with the user.Stored an interesting fact about Lebanon’s rail transport in memory. Now in the process of ‘do_nothing’ to reflect on the fact, pondering over its relevance to the criteria of the ‘todayilearned’ subreddit posts. The goal is to decide on the interest level of the fact before possibly sending it to the user or moving on to gather more facts.The current GPT instance started browsing the ‘todayilearned’ subreddit to identify engaging fact types. This approach is aimed at improving the selection process for interesting facts from random Wikipedia articles. A ‘do_nothing’ step will now be incorporated between retrieving articles and sending facts to assess their worthiness.saved the scraping of the website results at websiteresults8.txt to retrieve if shutdown randomly.I browsed ‘todayilearned’ subreddit and reflected on the types of facts users find engaging. Next, I will retrieve random Wikipedia articles, apply a ‘do_nothing’ step to contemplate the worthiness of the facts, and select those that are genuinely interesting for the user, similar to the subreddit’s content.Moving forward with the plan to retrieve random Wikipedia articles in English and identify interesting facts. The approach now includes a deliberate ‘do_nothing’ step after each retrieval to assess the value of the information before deciding to send it to the user.Alyogyne huegelii, also known as the blue hibiscus, was considered for the user, given its unique flower traits, historical greenhouse presence, and appeal as a garden plant. The ‘do_nothing’ step is being used to assess the topic’s interestingness in line with previous insights.The interesting fact about Alyogyne huegelii was sent to the user as it met the criteria established from the ‘todayilearned’ subreddit for captivating facts. The task will be deemed complete unless further user guidance is received.Following the plan, after successfully sending a message about Alyogyne huegelii, I am now looking to retrieve another random Wikipedia article to continue providing the user with engaging facts.Considering the fact about Jewell-Lightburne Historic District for its historical and architectural significance. Using the ‘do_nothing’ step to reflect before possibly sending to the user as an engaging fact.Sent the user information about the Jewell-Lightburne Historic District, focusing on its rich architectural history. 
If the user considers the task complete, no further action is needed; otherwise, more facts will be gathered.Messaged the user with the fact about Jewell-Lightburne Historic District and concluded the task of providing interesting facts from Wikipedia. If the user requires further assistance, they will provide new instructions.Issued ‘random_wikipedia_article’ command to find more interesting facts from Wikipedia. Previous facts have been sent to the user about diverse topics ranging from naval history to unique plants. No new files have been created nor articles accessed in this session yet. The user’s request for ‘more’ implies continuation of fact finding without specificity on the type of facts desired.Retrieved and sent a fact about Sir William Walter’s parliamentary career, focusing on the historical significance of his representation of multiple constituencies during the early 17th century in England. The task continues according to the user’s need for ‘more,’ implying an ongoing effort to provide interesting facts from Wikipedia.Following the user’s instruction and the system’s confirmation of message delivery, I am proceeding to obtain another random Wikipedia article in search of interesting facts to deliver to the user. The search is focused on maintaining the relevance and interest value of the facts sent, with the intention to meet or exceed the user’s expectations for ‘more’.After receiving the 1892 United States presidential election in Georgia fact, a message was crafted and sent to the user, detailing the political tensions and the rise of the Populist Party’s influence. The task continues as instructed by the user, with the goal of providing engaging and pertinent information. Awaiting further user directives.The last message regarding the 1892 U.S. presidential election in Georgia was sent as per the system’s confirmatory prompt, maintaining the sequence of delivering curated facts to the user. Based on the user’s ongoing request for more information, the task at hand is to continue the fact-finding mission via random Wikipedia articles, prioritizing the discovery of facts that combine historical significance with an ability to engage the user’s interest.A fact about the life and legacy of Thomas Slingsby Duncombe was sent to the user, highlighting his role as a fashionably flamboyant yet pivotal Radical politician of the 19th century. The task of providing interesting facts from random Wikipedia articles is ongoing, as I continue to wait for any new directions from the user.After sending information on Thomas Slingsby Duncombe, will now continue to fulfill the user’s requirement by seeking more interesting facts from Wikipedia. The ongoing task involves extracting and delivering valuable and engaging information to the user. No specific direction was provided on the type of facts needed, so a broad search is being conducted.Sent the user a message about the genus Abraeomorphus from the family Histeridae, noting the diversity of its species. Continuing to gather more interesting facts in response to the user’s previous request for ‘more.’Received confirmation to continue the plan. Sent user details on the genus Abraeomorphus. Now using ‘random_wikipedia_article’ to find another interesting fact.Sent the user a message detailing the nature of the name ‘Preston’ as both a surname and given name, albeit acknowledging it might not be as engaging. 
Prepared to resume the search for more compelling facts with the ‘random_wikipedia_article’ command.Continuing the task of finding interesting facts from Wikipedia. Sent a message about the given name Preston and now seeking another fact with the ‘random_wikipedia_article’ command.Sent a message to the user about the Kepler Follow-up Program and its significance in confirming the existence of exoplanets. Prepared to find more interesting facts using the ‘random_wikipedia_article’ command.Continuing to retrieve and share interesting facts from random Wikipedia articles. Just sent information on the Kepler Follow-up Program and moving on to find the next fact.Shared information about the Finnish surname ‘Marttila’ with the user, noting its association with notable individuals. Ready to use the ‘random_wikipedia_article’ command again to find further interesting facts.Sent details on the surname ‘Marttila’ to the user. Continuing with the task at hand using the ‘random_wikipedia_article’ command to supply more facts.Provided the user with information on the Musée national du Sport’s storied history and vast collection. Set to continue the fact-finding mission using ‘random_wikipedia_article’.Messaged the user about the Musée national du Sport and currently moving forward with the task of providing more interesting facts via random Wikipedia articles.Sent information on the village of Puerto Gaviota detailing its historical and economic background. Preparing to continue with the task of providing engaging facts by retrieving more random Wikipedia articles.Just shared details about Puerto Gaviota, and now executing the ‘random_wikipedia_article’ command again for more facts.Messaged the user about the reclassification of the genus Agononida in marine taxonomy. Ready to retrieve more Wikipedia articles for further interesting facts.Shared with the user the reclassification developments of the genus Agononida. Proceeding next to find additional interesting facts with the ‘random_wikipedia_article’ command.Sent an engaging narrative about the Mundenhof’s history and current operations to the user. Planning to continue the task with ‘random_wikipedia_article’ to discover more interesting material.Served the user with information about the Mundenhof and proceeding with the task to find and provide more interesting facts using ‘random_wikipedia_article’.Shared the contested narrative of Princess Iitoyo’s potential rule in early Japan. Preparing to continue with the task of finding interesting facts using the ‘random_wikipedia_article’ command.Attempting once again to send the tale of Princess Iitoyo to the user, making sure no JSON format errors are present in the response.Attempting to resend the information about Princess Iitoyo, with attention to JSON integrity, to fulfill the user’s request for engaging facts.The system faced an error within the JSON format previously used to describe Princess Iitoyo’s history. I have made corrections to the JSON structure and am resending the user her story.Reverting back to retrieving random Wikipedia articles to provide interesting facts, as there was no new explicit task provided. Maintaining caution with JSON response formatting.Sent details about the Serchio River to the user, highlighting its geographical and historical importance. 
I am resending the information about the Serchio River, ensuring the JSON structure is correct to prevent further error messages.
Due to persistent JSON formatting errors, I am now retrieving a new random Wikipedia article to continue providing interesting facts as per user requests.
Sent a message about the women’s 1500 metres event at the 2005 Asian Athletics Championships. Planning to proceed with gathering additional facts from random Wikipedia articles.
Encountering repeated JSON formatting errors, a new random Wikipedia article is being requested to continue the task of delivering interesting facts.
Sent a brief biography of Otto Ruff, highlighting his academic journey, scientific achievements, and the obstacles he faced during his career. Preparing to move on to the next Wikipedia article as per the user’s ongoing request for more information.
Due to persistent JSON errors, I am now retrieving a new random Wikipedia article to continue the user’s task of gathering interesting facts.
The facts previously sent were not satisfactory to the user. Now incorporating a ‘do_nothing’ step to reflect on and improve the strategy for identifying genuinely engaging and fun facts that are not commonly known. No new facts have been identified or sent in this session; the intention is to retrieve another random Wikipedia article after this reflective pause.
A ‘do_nothing’ step has been taken as per the user’s instruction, aiming to find truly interesting and fun facts. The system has directed to proceed with the plan, which now involves retrieving a random Wikipedia article to find an engaging and possibly unknown fact.
Received user feedback that the facts provided were not interesting or even factual. Decided to take a ‘do_nothing’ step to reflect and establish criteria for selecting truly interesting facts. No new articles have been accessed or facts sent during this session. The next step will be to retrieve a random Wikipedia article and apply the refined criteria to find noteworthy facts.
After planning to refine the criteria for interesting facts, a ‘random_wikipedia_article’ command is initiated to apply the new criteria in selecting a fact. Will assess the retrieved article for a worthwhile fact that meets the user’s definition of ‘interesting’ and ‘fun’ and that might not be well known.
Assessed the Wikipedia article on Shamim Akhtar and found it does not meet the criteria for interesting and fun facts. No new fact has been selected or sent to the user. Planning to retrieve another random Wikipedia article that may yield a fact aligning more closely with the user’s definitions of interesting and worthwhile.
In search of a fact that meets the user’s criteria for ‘interesting’ and ‘fun’, a new ‘random_wikipedia_article’ command has been issued. The plan is to critically assess the article for a fact that possesses novelty, fun, and a certain degree of obscurity before sharing it with the user.
Upon reviewing the Wikipedia article on Minuscule 910, the unusual text rearrangement in this 11th-century manuscript has been identified as potentially engaging and fun. No action has been taken yet, but the plan is to share this unique textual feature with the user, as it likely meets their criteria for interesting and obscure content.
Sent a message to the user detailing the intriguing fact about text rearrangement in the 11th-century manuscript Minuscule 910. This fact is representative of the kind of unique and lesser-known information the user has expressed interest in receiving. Should the user provide feedback or further instructions, that will be the next point of focus.
After sending the fact about the textual anomaly in an ancient manuscript, I am moving ahead to find more interesting and fun facts by retrieving another random Wikipedia article. The goal remains to identify facts that surprise and delight with their obscurity and novelty.
A message about the ATP 250 tournament records held by Thomas Muster and Mike Bryan has been composed and sent to the user, delivering what could be perceived as an interesting and fun fact within the realm of tennis. The focus now shifts to awaiting user feedback, or continuing to find more such facts if none is received.
Having sent the user the tennis records held by Thomas Muster and Mike Bryan, the next action is to source another interesting fact from a new random English Wikipedia article, scrutinizing the selection against the established criteria for engaging, lesser-known information.
Reviewed the Wikipedia article on Ludvig Olai Botnen and found his biographical details do not meet the user’s criteria for ‘interesting’ or ‘fun’ facts. No message has been sent to the user; the plan is to acquire a new random Wikipedia article in the hope of finding a fact with more novelty and entertainment value.
Moving past the article on Ludvig Olai Botnen due to its conventional content, the task now involves retrieving a new random English Wikipedia article. The continual aim is to find facts that are novel, unexpected, and fun for the user.
Identified a potentially engaging fact about the china clam (Hippopus porcellanus) and its popularity in marine aquariums due to its attractive shell. A message containing this fact has been sent to the user, fulfilling the criteria of being both interesting and fun. The next step awaits further user interaction or carries on with sourcing more facts.
Sent a message detailing the ornamental value of the china clam in marine aquariums to the user. Now retrieving another random Wikipedia article to discover more facts that might intrigue and entertain the user, in line with the variety sought in previous responses.
A message has been sent to the user summarizing the accomplishments of Eko India Financial Services in pioneering financial inclusion through mobile banking, which caught the attention of Bill Gates and led to World Bank-funded growth. The plan continues with searching for more interesting Wikipedia facts, depending on user feedback or the absence of further instructions.
After sending the fact about Eko India Financial Services’ innovation in financial technology and its socio-economic impact, a new ‘random_wikipedia_article’ command is issued, continuing the pattern of engagement established by previous user interactions.
The Wikipedia page on Lisa Lohmann, a German cross-country skier, was deemed not to contain facts that sufficiently meet the ‘interesting’ and ‘fun’ criteria, given the common nature of sports achievements. No information from this page has been shared with the user. Preparing to retrieve another random Wikipedia article for facts that may be more engaging and unusual.
Following the decision not to share anything from the page on German cross-country skier Lisa Lohmann, a new random Wikipedia article will be retrieved. The continuing goal is to uncover and share facts that satisfy the user’s interest in content that is both engaging and rare.
A message has been formulated and sent to the user detailing Alfred Bigland’s unique contributions during World War I, including his involvement in the ‘Bantam battalions’ concept. This fact was chosen as it likely fits the bill for information that is both interesting and uncommon. Pending further response, the plan to continue finding engaging facts persists.
The user has been sent an interesting fact about Alfred Bigland’s influence during WWI. The next action is to retrieve a new random Wikipedia article; the goal remains to discover and share facts that are fascinating, amusing, and unexpected, enriching the user’s knowledge with diverse content.
Sent a message to the user revealing the Non-Proliferation Trust’s plan to send U.S. nuclear waste to Russia for financial compensation, including support for Russian orphans. This fact was chosen for its unusual nature and the depth of the story behind it. The next course of action entails sourcing additional diverse and engaging facts from new random Wikipedia articles.
A series of facts on topics including the Non-Proliferation Trust, Alfred Bigland’s efforts during WWI, and tennis records has been sent to the user. With the latest fact about the Non-Proliferation Trust’s nuclear waste management proposition sent, the task of providing interesting and fun facts is complete. Awaiting new instructions if the user wishes to continue receiving such information.
The task is to provide the source of the fact about the Non-Proliferation Trust and format it like a ‘Today I Learned (TIL)’ subreddit post. The action taken was to issue the ‘browse_website’ command to visit the Wikipedia page on the Non-Proliferation Trust.
Saved the scraped website results to websiteresults9.txt for retrieval in case of a random shutdown (a sketch of this checkpoint pattern follows at the end of this block).
Extracted the necessary information to provide the requested source for the Non-Proliferation Trust fact. Formatted a message with this fact in the style of a TIL subreddit post and sent it to the user.
Sent a TIL-formatted message to the user based on information from the Non-Proliferation Trust Wikipedia page. The task is now complete, pending any new tasks or further instructions.
Initiated research to suggest subreddits that could help in accumulating more karma, issuing a ‘google’ command to search for the most popular subreddits where active engagement is most likely.
Saved the Google search results to googleresults10.txt for retrieval in case of a random shutdown.
Collected a list of subreddits recommended for gaining karma and saved them to memory. These subreddits were chosen based on their popularity and the typical content engagement they receive. The user has not yet been messaged with the suggestions.
Messaged the user a list of recommended subreddits for collecting more karma, including guidance on what type of posts may perform well in each subreddit. The task will be considered complete if no further instructions follow.
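A minimal sketch of the checkpoint pattern behind the ‘saved ... to retrieve if shutdown randomly’ entries, with the numbering scheme inferred from the file names in this log (googleresultsN.txt, websiteresultsN.txt); the real persistence code is not shown here:

    import os

    def checkpoint(prefix: str, text: str) -> str:
        # Find the next free file in the prefixN.txt series and write to it,
        # so results survive a random shutdown and can be re-read on restart.
        n = 0
        while os.path.exists(f"{prefix}{n}.txt"):
            n += 1
        path = f"{prefix}{n}.txt"
        with open(path, "w", encoding="utf-8") as f:
            f.write(text)
        return path

    # e.g. checkpoint("googleresults", raw_search_text) might return "googleresults10.txt"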
Having sent the recommended subreddits for gaining karma to the user, with no further actions pending, I executed the ‘task_complete’ command. The system is concluding this task and stands by for any new directives.
Starting the task by searching for the best times to post on Reddit for maximum engagement in GMT+1.
Saved the Google search results to googleresults11.txt for retrieval in case of a random shutdown.
Provided the user with suggested times for posting on Reddit for maximum engagement based on the search results, adjusted for GMT+1. The general advice is to post during late morning to early afternoon hours in GMT+1 on weekdays, with slightly extended windows on weekends (a timezone-conversion sketch follows at the end of this block).
The user has been informed about the best times to post on Reddit for maximum engagement in GMT+1, and no further actions are required. The task is now marked as complete.
Provided a bullet-point summary of the SELF-DISCOVER PDF in a file named ‘summary_bullet_points.txt’. The file captures the essence of the SELF-DISCOVER framework. The task has been completed according to the instructions.
Read the ‘summary_bullet_points.txt’ file to send the bullet points to the user. No additional facts or files were created in this session.
Sent the bullet-point summary of the SELF-DISCOVER framework to the user using the ‘message_user’ command. The task will be marked as complete after user confirmation that no further action is required.
The task of sending the bullet points from ‘summary_bullet_points.txt’ to the user has been completed. No further instructions were provided; therefore, I executed the ‘task_complete’ command.
The task is to explain the SELF-DISCOVER reasoning structures. Checked for saved data but found none. Plan to conduct a search for relevant information on legal reasoning structures, specifically SELF-DISCOVER.
Received the directive to proceed with the planned Google search for SELF-DISCOVER reasoning structures. About to execute the search to find relevant information on legal reasoning structures, with the aim of explaining the concept to the user.
Saved the Google search results to googleresults12.txt for retrieval in case of a random shutdown.
Reevaluated the nature of the SELF-DISCOVER framework based on the newly gathered search results, which clarified that it pertains to LLMs, not law. Selected a primary research source (arxiv.org) to explore the framework in detail, planning to summarize and share this information with the user.
Saved the scraped website results to websiteresults13.txt for retrieval in case of a random shutdown.
Provided a summary of the SELF-DISCOVER framework to the user, highlighting its application in Large Language Models for enhancing complex reasoning capabilities. Now waiting for any further requests or confirmation to complete the task.
Sent the requested information on the SELF-DISCOVER framework to the user, who has acknowledged receipt. The ‘task_complete’ command has been issued, marking the end of this task.
Started the task of fetching random English Wikipedia articles and finding interesting facts to send to the user. The next GPT instance should continue retrieving articles, assess their interestingness, and communicate the facts to the user once identified.
Found a potentially interesting fact in a random Wikipedia article about the First Baptist Church in New Bern, NC. The next GPT instance should assess whether the historical and architectural information about the church presents a genuinely unique and engaging fact to send to the user.
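Picking up the GMT+1 posting-times entries above: a minimal sketch of the timezone adjustment, assuming the searched advice was phrased in US Eastern time (a common convention in such articles; the underlying search data is not preserved in this log):

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Assumed example: convert a "post at 9 AM US/Eastern" tip into GMT+1.
    est = datetime(2024, 2, 19, 9, 0, tzinfo=ZoneInfo("America/New_York"))
    gmt1 = est.astimezone(ZoneInfo("Etc/GMT-1"))  # POSIX sign flip: Etc/GMT-1 is UTC+1
    print(gmt1.strftime("%H:%M"))  # 15:00, i.e. mid-afternoon in GMT+1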
Sent a fact about the First Baptist Church in New Bern, NC, to the user, selected for its architectural significance and historic value. The next GPT instance should watch for user feedback or continue fetching random English Wikipedia articles for more interesting facts.
After sending the fact about the First Baptist Church, the next step was to retrieve another random English Wikipedia article, as per the user’s ongoing task. The next GPT instance should continue this process, ensuring that selected facts are not plainly informational but possess a uniqueness that piques the user’s interest.
Resolved a JSON formatting issue and sent a fact about Ashur-dugul, an ancient Assyrian king, to the user. Assessing the effectiveness of transmitting such historical narratives, and ensuring they comply with the JSON format requirements, is important for the next GPT instance.
The task at hand is to fetch random English Wikipedia articles and send interesting facts to the user. Having sent the fact about Ashur-dugul, the process of retrieving new articles and assessing them for captivating facts is ongoing; facts should be truly engaging, not simply informational.
Considering the fact about Maurice Wilhelm, Duke of Saxe-Merseburg, known as ‘the violinist duke’, after a contemplative ‘do_nothing’ step. The next GPT instance should assess its interest level against the user’s criteria and send it if suitable, or continue searching for more facts.
Sent the fact about Maurice Wilhelm, ‘the violinist duke’, to the user and am awaiting any new instructions. The next GPT instance should monitor feedback and either send more facts or consider alternative tasks, based on the user’s response.
Continuing the task of providing interesting facts from Wikipedia articles after sending the fact about Maurice Wilhelm, maintaining the focus on unique and engaging facts while retrieving articles and using the ‘do_nothing’ step for contemplation.
The El Bañado (La Paz) article was briefly evaluated and determined to contain minimal, not inherently interesting information. After a ‘do_nothing’ step, the next GPT instance should decide how to proceed, potentially moving on to a new article.
The article on El Bañado, La Paz was deemed insufficient, leading to the decision to retrieve another random article. The next GPT instance should assess the next article’s potential for an intriguing fact and convey it to the user if found.
Autoeczematization, a medical condition with unknown pathogenesis, has been identified as a potentially interesting fact. After a ‘do_nothing’ pause, the next GPT instance should determine whether to share this fact or whether another article would be more suitable.
Shared a fact on the medical condition autoeczematization with the user, highlighting its peculiarity and the mystery behind its cause. Depending on the user’s response or lack thereof, next steps include providing more facts or considering the task complete.
Continuing the search for interesting facts from English Wikipedia articles, having recently sent the information on autoeczematization. The next GPT instance should assess the next article for any intriguing fact, adhering to the user’s standard for what counts as interesting.
Evaluated Bryce Hoppel’s 21-race winning streak as a potentially interesting fact from his Wikipedia article. The next GPT instance should decide whether this fits the user’s request, or whether a more unique fact is needed.
Sent a fact to the user about Bryce Hoppel’s extraordinary 21 consecutive race wins and athletic achievements. Depending on the user’s response or guidance, the next GPT instance should be prepared to send more facts or consider the task complete.
The task of sending interesting facts from random English Wikipedia articles has been marked as complete, with the fact about athlete Bryce Hoppel’s winning streak as the concluding piece. The next GPT instance should wait for new user instructions before proceeding.
Task initiated to find and send interesting facts from random English Wikipedia articles, including a ‘do_nothing’ evaluation step. No articles accessed or facts sent yet in this session.
Evaluated the Wikipedia article ‘List of minor planets: 73001–74000’ for its potential to provide an interesting fact. No facts were sent, as the article is very technical and might not align with the interest requirement. The next step is a ‘do_nothing’ moment to contemplate the appropriate course of action before deciding whether to retrieve another article.
Following the system’s prompt to proceed with the plan, retrieving another random English Wikipedia article. The last article, about minor planets, did not yield an interesting fact, so the search continues. No new facts have been identified or sent yet in this session.
Sent an interesting fact about the fungus Erynia to the user, highlighting its connection to Greek mythology and its biological function as an insect-killing organism. Awaiting the user’s response or continuing to fetch random English Wikipedia articles for more intriguing facts.
Proceeding with the plan following the successful send of the Erynia fact, continuing the search for intriguing and unique facts and employing the ‘do_nothing’ step for careful consideration.
Reviewed the Wikipedia article on Tomáš Porubský and am currently in a ‘do_nothing’ moment to assess whether the content offers a genuinely interesting fact. The article provides straightforward biographical information about his sporting career, which may lack the distinctive element the user requires. A decision will be made shortly on whether to share this fact or retrieve another article.
Continuing the plan after assessing the Tomáš Porubský article as not meeting the user’s criteria. No facts from that article were sent; searching for a more suitable fact from a new article.
Sent a fact about Nicktoons Movin’ and how it represented an early form of augmented reality in gaming by requiring physical movement from players via the EyeToy camera. Next steps involve awaiting user input or continuing to search more random articles for interesting facts.
After sending the Nicktoons Movin’ fact, the search continues with a new random article retrieval. The task strategy maintains the dispatch of facts that are not plainly informational but carry a narrative or unique angle of interest.
Sent a fact about the artist Dorothy Rutka, touching on her remarkable career and the tragic circumstances of her death, elevating it from plain information to an interesting tale. Next steps depend on the user’s response or further continuation of the task.
Having sent details of Dorothy Rutka’s life and legacy, now executing the plan to fetch another random Wikipedia article. The process of finding and delivering interesting content is still underway.
Engaged in a ‘do_nothing’ contemplation moment after retrieving the Sainte-Foy, Seine-Maritime article, assessing whether historical aspects like its 11th-century church present a fact interesting enough for the user, or whether a new article is needed. No facts from this page have been sent yet.
Following system instructions to proceed with the plan, seeking a new random Wikipedia article after evaluating the Sainte-Foy, Seine-Maritime article as potentially lacking wide-ranging interest. Continuing to prioritize facts with an engaging narrative quality.
Assessed Kiki Lim Jiow Ting’s Wikipedia article and found a narrative focusing on her early talent discovery and career growth that may interest the user: her dedication to performing arts and her return to acting after prioritizing education. Awaiting the user’s response or further tasks.
Continuing with the task, retrieving and evaluating random English Wikipedia articles for facts that resonate with the user’s request for engaging content. Awaiting the next random article for consideration.
After acquiring the article on the 2022 World Taekwondo Championships – Women’s finweight, currently in a ‘do_nothing’ phase to assess the article’s facts for engaging qualities. With no standout narratives detected, the plan may involve moving to a new article search.
The previous article, about the 2022 World Taekwondo Championships, did not meet the criteria. Moving forward and waiting for a new random article to be generated.
Delivered a fact about the game show ‘Cooking for Love’, which combines culinary skills and dating, to the user. Next steps include awaiting the user’s response or continuing the plan to find more engaging Wikipedia facts.
Following the task guidelines, moving on to another random Wikipedia article, having previously sent the fact about ‘Cooking for Love’. The ongoing aim is to present facts that are not only informative but also fascinating to a general audience.
Transmitted a fact about Sorn Davin’s role in the 2012 Summer Olympics, emphasizing her honor of being a flag bearer for Cambodia and the significance of her participation independent of match outcomes. Pending response or further action, the plan may involve additional fact-finding efforts.
Task initiated to find a suitable post suggestion for the /r/singularity subreddit. Must browse and analyze the subreddit’s content to come up with a good post idea.
Saved the scraped website results to websiteresults14.txt for retrieval in case of a random shutdown.
The task involves browsing and analyzing the /r/singularity subreddit to suggest a good post for the user to submit. This requires understanding the subreddit’s focus on AI and the technological singularity, as well as adhering to the posting rules. Due to constraints, I cannot directly analyze the subreddit’s content; the next step is to ‘do_nothing’ and wait for further guidance or input that could refine the post suggestion.
Since I cannot browse the subreddit directly due to limitations, I will Google how to write an engaging post for the singularity subreddit. This will help formulate a post suggestion that could generate interest and discussion within the community.
Saved the Google search results to googleresults15.txt for retrieval in case of a random shutdown.
The task is to craft a post for the /r/singularity subreddit. Search results indicated the subreddit’s preferences and discussions. Next, I’ll use this information to create a post suggestion, considering the community’s focus on AI and its aversion to ‘echo chambers’. The challenge lies in inferring accurate interests without direct subreddit access.
Was tasked with suggesting a post for the /r/singularity subreddit. Could not browse the subreddit directly due to constraints; planned to use general knowledge of AI and technological advancement topics to create a suitable suggestion. No direct subreddit content was analyzed, no new files were created, and no user feedback has been received.
Continued with the plan to suggest a post for /r/singularity, deducing a potential topic from general AI and technological themes. Future AI governance and ethical considerations were identified as a rich topic for discussion. No direct access to subreddit content or user interaction was involved.
Drafted a post idea centered on AI advancements and their link to the technological singularity for the /r/singularity subreddit and shared it with the user. It aims to engage the subreddit’s members in sharing recent AI breakthroughs they deem significant and discussing the ethical and societal preparations such advancements require. Awaiting the user’s feedback to proceed. No files were created or accessed in this process.
Sent the post suggestion to the user and was instructed to proceed with the plan. As per the instructions and rules, no further action can be taken until the user provides new directives or feedback, so I am in the ‘do_nothing’ state. No new files have been created or accessed during this pause.
The post suggestion for /r/singularity appeared to meet the user’s requirements. With no additional tasks or guidance provided, the task is marked as complete. The system is on standby for future user interactions or directives.
The user asked for a title suggestion for a post with no additional context. A ‘do_nothing’ command was issued because there is not enough information to generate a title. Awaiting more details on the post’s content or theme.
Prompted the user to provide the content or main focus of the post for title generation. Awaiting the user’s response to proceed.
The user provided the content focus for their post: recent AI breakthroughs and their approach to the technological singularity, including ethical and societal implications. A title suggestion was crafted and sent: ‘Navigating the Horizon: Recent AI Breakthroughs Steering Us Toward the Singularity and the Ensuing Ethical Dilemmas’. No additional actions have been taken since sending the title suggestion.
The title suggestion was sent successfully and no further requests were initiated by the user, so the task was marked as complete with the ‘task_complete’ command.
Initiated the task of finding recent AI breakthroughs related to the technological singularity. Plan to browse arXiv’s AI section for such advancements and their implications. URL to be visited: https://arxiv.org/list/cs.AI/recent
Saved the scraped website results to websiteresults16.txt for retrieval in case of a random shutdown.
Selected recent AI breakthroughs from arXiv likely to contribute to a technological singularity and compiled the facts into a file, ‘ai_breakthroughs_summary.txt’, emphasizing the need for ethical considerations and societal preparation.
Sent a message to the user summarizing recent AI breakthroughs and their potential contributions toward a technological singularity, including discussion of the need for ethical and societal preparation, policy development, and equitable distribution of benefits.
Message sent to the user regarding AI breakthroughs and preparations for the approaching technological singularity. Marked the task complete with the ‘task_complete’ command. No further actions taken; awaiting the next user instructions.
The user asked for a comment to post on a Reddit thread, but my abilities are restricted to general browsing and do not extend to interacting with specific Reddit content, so I am unable to craft a comment for that thread.
The user instructed me to proceed with the plan; however, since the plan involves a task outside my operational limitations, I executed the ‘do_nothing’ command and am waiting for a new task or amended instructions.
The task of creating a comment for a specific Reddit post persists, but since interaction with Reddit at this level is not within my capabilities, I remain in ‘do_nothing’ mode awaiting new, executable instructions.
Following repeated instructions to ‘proceed with your plan’ for posting on a Reddit thread, continued with ‘do_nothing’ because my functionality does not encompass posting comments on specific Reddit threads. Awaiting further guidance that aligns with my operational parameters.
The task of posting a specific comment on a Reddit thread remains impossible within my operational constraints. The system’s repeated instruction to ‘proceed with your plan’ has been met with ‘do_nothing’ for lack of actions that align with the user’s request. Awaiting a new task or modified instruction.
The request has been repeatedly answered with ‘do_nothing’ because it is outside my capabilities, and the system’s continuous ‘proceed with your plan’ prompt has not led to actionable change; awaiting a new task or amended instruction from the user.
In response to the recurring system prompt, ‘do_nothing’ continues to be executed because the limitations prevent the task’s fulfillment. User intervention is needed to provide a new, executable task within the defined parameters and capabilities.
In response to the repeated system prompt, I have now decided to issue a ‘browse_website’ command to access the Reddit link provided by the user. This change in tactic aims to draft a comment, based on the webpage’s content, that the user can post manually.
Saved the scraped website results to websiteresults17.txt for retrieval in case of a random shutdown.
Attempted to access the Reddit post through the ‘browse_website’ command but was unable to retrieve the specific post contents for comment generation. As a result, I messaged the user detailing the outcome and the need for further details or a revised task.
Unable to provide a specific comment for the Reddit post due to capability constraints and lack of content access, I continue with ‘do_nothing’ in response to the system prompt ‘proceed with your plan’, awaiting a viable task or new instruction within the limits of my functionality.
The system’s repeated ‘proceed with your plan’ prompt does not align with my ability to execute the Reddit-related task, so I explained the limitation in a direct message to the user and requested a new task within my operational range.
In response to the repeated ‘proceed with your plan’ prompt, the ‘task_complete’ command was issued to communicate that the current task sequence is concluded and to encourage a new user instruction.
Initiated the task of providing a Reddit comment by first browsing the relevant article on phys.org to extract key points and convey them in a human-like response.
Reviewed the information regarding an AI tool’s discovery of new ‘metamaterials’. Prepared to craft a comment for the Reddit post that is engaging, adds value to the discourse, and is indistinguishable from a human-contributed comment.
Drafted a human-like Reddit comment based on the phys.org article, designed to engage the Reddit community and stimulate further discussion on advancements in AI-driven metamaterial discovery.
Sent the user-requested Reddit comment based on the article about an AI tool discovering durable, fabricatable ‘metamaterials’ with unique properties. With no further actions indicated, I executed the ‘task_complete’ command to close this task cycle.
The user’s response, ‘Not human enough’, was received and interpreted as feedback on the quality of the task execution. No specific follow-up actions have been provided. For future similar tasks, strategies for creating more nuanced and human-like responses should be considered.
Modified the Reddit comment to be more human-like and in line with Reddit’s conversational tone while maintaining the original content, and provided the new comment to the user via the ‘message_user’ command.
Sent the revised, human-like Reddit comment and marked the task as complete after the system prompted to proceed with the plan, indicating no further action was necessary. Now waiting for new user instructions.
Accessed the URL to summarize the article into bullet points. Next steps: review the article, extract the facts, and format a bullet-point summary.
Saved the scraped website results to websiteresults18.txt for retrieval in case of a random shutdown.
The article was summarized into bullet points and saved in the file ‘summary_bullet_points.txt’. Waiting to send the summarized bullet points to the user. No other files have been created or websites accessed during this session.
Read the ‘summary_bullet_points.txt’ file, which contains the summarized bullet points of the SciTechDaily article, and proceeded to word-count and content verification before sending it to the user.
Sent the bullet-point summary from ‘summary_bullet_points.txt’ to the user and completed the task. Awaiting further instructions.
The bullet-point summary has been sent to the user. The task is now complete, and I am ready for the next instruction.
Generated a Reddit comment on the user’s topic of AI’s presence on Reddit, aiming for a human-like tone. Next steps will involve adapting the language to be even more in line with common Reddit discourse.
Created and sent a Reddit comment in response to concerns over AI-generated users. The task has been completed and the ‘task_complete’ command issued. Awaiting new tasks or further user input.
The user commanded marking the task as complete, signifying the end of the current request sequence. No additional context was given, and no further user interactions have taken place since.
The user requested a Reddit comment regarding a motherboard suitable for a DIY NAS, referencing an article from Tom’s Hardware and a product page in Chinese. Drafted an engaging and informative comment in a user-friendly tone consistent with Reddit’s culture.
A comment was formulated and sent to simulate a real Reddit user’s input. The system acknowledged the message being sent, and the task is marked as complete. Awaiting further tasks or user instructions.
For the DIY NAS comment task, I need to read the Tom’s Hardware article, synthesize the information, and respond to the post as a knowledgeable Redditor. Next steps: extract the information from the article, then draft the comment. No files have been created and no information sent to the user yet.
Saved the scraped website results to websiteresults19.txt for retrieval in case of a random shutdown.
Unable to read the Tom’s Hardware article directly, I cannot craft an adequately informed response. Now in a ‘do_nothing’ state, awaiting specific details from the article or further user instructions. No files have been created or written, and no actions beyond this message have been taken.
Despite the constraints that previously prevented direct access to the article content, the system’s latest ‘proceed with your plan’ prompt guides me to retry browsing the website. Accessing specific articles remains a challenge, but I will attempt once more to find relevant content to inform a knowledgeable Reddit comment. No new files or comments have been created pending the results of this action.
Saved the scraped website results to websiteresults20.txt for retrieval in case of a random shutdown.
Identified a Tom’s Hardware article that could support the DIY NAS request: ‘3D printed Raspberry Pi 5 NAS supports up to 10TB of storage’. The extracted information was saved to memory with the key ‘DIY_NAS_info’. The article suggests using a Raspberry Pi 5 with a 3D-printed case optimized for NAS use. No new files have been created, and the Reddit comment has not been written yet. Awaiting further direction from the user.
Based on the information saved to memory from the Tom’s Hardware article, I prepared a comment that aligns with the informative and thoughtful style of Reddit tech discussions and sent it to the user. No files were created; the task will move forward based on the user’s response or further instructions.
With the system’s notification of successful message transmission and its directive to proceed, the DIY NAS comment task is now marked complete with the ‘task_complete’ command. Awaiting new user instructions or tasks.
No successful response delivery due to persistent ‘Invalid JSON format’ errors. An attempt was made to generate the requested Reddit comment, but the system’s current configuration is affecting execution. Awaiting resolution or new user guidance.
Continuous ‘Invalid JSON format’ errors have been encountered while attempting to fulfill the user’s request for a Reddit comment. Content was created in line with typical Reddit commentary but was not accepted by the system. Awaiting further analysis or user guidance.
After many attempts, the ‘Invalid JSON format’ error continues without a clear solution. The system has prompted to ‘proceed with your plan’, which currently includes no actionable steps because the error’s source is not understood. Awaiting new guidance.
Consistently receiving ‘Invalid JSON format’ errors remains a challenge, and the cause of the repeated ‘proceed with your plan’ prompts without resolution is unclear. Remaining in the ‘do_nothing’ state until clearer instructions are provided.
Despite following the provided commands and attempting various strategies, the ‘Invalid JSON format’ error persists. The system’s repeated ‘proceed with your plan’ prompt suggests it awaits an action, but the nature of that action is unclear given the lack of new guidance or changed circumstances.
Repeated ‘Invalid JSON format’ responses have led to an impasse; the recurring ‘proceed with your plan’ instruction has not facilitated progress. Currently waiting in ‘do_nothing’ mode for intervention or a break in the pattern that would allow completion of the user’s task or a move to a new one.
In response to the unresolvable ‘Invalid JSON format’ errors and repetitive ‘proceed with your plan’ prompts, the ‘task_complete’ command was issued to attempt to break the loop and move past the standstill.
The user reported that the generated comment did not reflect the actual post, which was about testing HDD drives. Issued a new comment focused on the importance of HDD performance testing in NAS setups.
After sending the revised Reddit comment reflecting the HDD-testing post, I received a ‘proceed with your plan’ prompt without any additional user instruction. Performing ‘do_nothing’ while waiting for new tasks or guidance.
Received another ‘proceed with your plan’ prompt. No new user directives have been communicated, so I am continuing with ‘do_nothing’ while awaiting further instructions.
After multiple ‘proceed with your plan’ prompts with no new input, using ‘task_complete’ to close the current interaction loop and hopefully elicit new tasks or clarifications from the user.
After creating a comment for the user’s Reddit request and receiving an ‘Invalid JSON format’ error followed by the system’s message of successful transmission, the ‘do_nothing’ command was executed for lack of further instructions. Ready to proceed with a new plan or receive further details.
In response to the system’s prompt to proceed with the plan, continuing with ‘do_nothing’ in the absence of new instructions or guidance. No new actions or tasks have been completed since the previous command.
The system repeatedly indicated to proceed with the plan but no new instructions were provided, so, to move out of the ‘do_nothing’ loop, the ‘task_complete’ command was executed. Awaiting fresh directives or the beginning of a new task.
The user requested a small Reddit comment on an arXiv paper. Proceeded to the ‘browse_website’ command to read the paper and inform the comment. No other actions have been taken yet.
The user sent a message with no content. Executed ‘do_nothing’ in response; awaiting new instructions or tasks.
Encountered a repeated system message to proceed with the plan but lack new user input to act upon. Continuing with ‘do_nothing’ while waiting for user guidance or a specific directive.
The ‘proceed with your plan’ instruction persists without user-provided context or specific tasks. Remaining in ‘do_nothing’ status while awaiting new input or guidance.
The system repeated the ‘proceed with your plan’ instruction, but no new user instructions were received. Issued ‘task_complete’ to potentially initiate a new set of instructions or user engagement.
Drafted a generalized French recommendation letter for Aallaoui Youssef, usable for diverse applications, focusing on his passion, intelligence, and attention to detail.
The recommendation letter for Aallaoui Youssef has been written and saved to the file ‘recommendation_letter.txt’. Upon receiving a system notification about this task without further instructions, ‘do_nothing’ was executed. Awaiting new directives.
Following the system’s repeated ‘proceed with your plan’ directive and with no new tasks from the user, ‘do_nothing’ has been executed once more. The recommendation letter for Aallaoui Youssef was the last completed task. Waiting for new, actionable instructions.
The creation and saving of the recommendation letter for Aallaoui Youssef represent the last completed action. Repeated ‘proceed with your plan’ messages have been met with ‘do_nothing’ commands; ‘task_complete’ has now been executed to signal readiness for new tasks and potentially encourage user engagement.
The last action performed was drafting the French recommendation letter for Aallaoui Youssef and saving it to ‘recommendation_letter.txt’. Now proceeding to send the text of this letter to the user upon request.
Created a new file, ‘recommendation_letter.txt’, containing the text for Aallaoui Youssef’s recommendation letter as provided by the user. The next step is to confirm the text is saved and available.
The file ‘recommendation_letter.txt’, containing the desired text for the recommendation letter, has been written and saved. The ‘read_file’ command will now be used to read its contents so the text can be sent to the user.
In response to the user’s last command, instead of reading from the file, the system sent the provided recommendation letter text directly to the user.
Having successfully sent the text of the recommendation letter, the task is marked as complete. No further tasks have been performed since, and I am ready for the next set of instructions.
Provided a draft comment in response to a Reddit post discussing the rapid release of Google’s Gemini 1.5 AI model following Gemini 1.0, based on the user’s given context of the models’ architecture and training datasets. The postulated explanations resonate with industry trends and company competition in AI development.
Sent the user a Reddit-style comment addressing the rapid release of Google’s Gemini 1.5 model, completing the task in keeping with Reddit’s communication style and the information in the user’s prompt. Awaiting new user instructions.
Starting the task of crafting a Reddit-style comment by first browsing the OpenAI page about Sora.
Saved the scraped website results to websiteresults21.txt for retrieval in case of a random shutdown.
Reviewed the OpenAI Sora model and drafted a comment tailored to the Reddit community’s style, aiming to spark discussion about the implications of Sora’s text-to-video capabilities for creative storytelling.
Sent the user a Reddit-style comment summarizing the capabilities and implications of OpenAI’s Sora model. Completed the task once the system confirmed message delivery and no further instructions were provided; executing the ‘task_complete’ command.
Initiated the task of finding content for a post on ‘damnthatsinteresting’. Executed a Google search for recent scientific breakthroughs to identify suitable content for an engaging post, pending selection of an appropriate scientific achievement for the subreddit’s audience.
Saved the Google search results to googleresults22.txt for retrieval in case of a random shutdown.
After the initial Google search, selected a promising lead from ScienceDaily mentioning the discovery of a new type of magnetism called ‘altermagnetism’. Plan to visit the ScienceDaily webpage to gather detailed information for an engrossing ‘damnthatsinteresting’ post.
Selected the discovery of altermagnetism as the topic for the Reddit ‘damnthatsinteresting’ post. Saved key information about the discovery to memory with the key ‘AltermagnetismArticle’ (a sketch of this memory pattern appears at the end of this block). Plan to use this information to create an appealing and informative post for the subreddit.
Created a Reddit post centered on the newly discovered altermagnetism, aimed at captivating the ‘damnthatsinteresting’ subreddit, and sent it to the user for consideration. The altermagnetism information previously saved under the key ‘AltermagnetismArticle’ was used to construct the draft.
Sent the Reddit post draft on altermagnetism to the user, received no further tasks, and marked the current task as complete. No new files were created or browsed. Awaiting any new tasks or instructions.
Given the user’s request for a comment on a Reddit post discussing refurbished Ironwolf drives, and the limitations on accessing that particular discussion, the strategy is to draft a comment with generic but informed advice drawing on common concerns about refurbished tech purchases.
Initiated the response by starting with a Google search on the reliability of refurbished hard drives and warranty norms, to gather information for a well-informed and accurate Reddit-like comment. Next steps involve analyzing the data and synthesizing an informed response.
Saved the Google search results to googleresults23.txt for retrieval in case of a random shutdown.
Formulating a Reddit comment on the value of refurbished drives. Research indicates that refurbished drives with longer warranties may be a sensible purchase and that myths about their unreliability are often debunked. A comment reflecting this, along with advice to verify the authenticity of the Ironwolf drive, will be crafted next.
Compiled the search results into a Redditor-like comment discussing the considerations for buying refurbished Ironwolf drives, focusing on warranty, reliability past the initial failure period, and the importance of verifying authenticity.
Sent the Reddit-like comment on refurbished drives to the user, based on available information from general sources. The task is marked as complete, awaiting new tasks or instructions.
The user asked for the refurbished Ironwolf comment to be more genuine and brief. Edited the original comment to convey a casual, Redditor-like tone with a succinct anecdote about personal experience with refurbished drives.
Delivered the revised, succinct, and authentic Reddit comment and was advised to proceed with the plan. With no further user guidance, marked the task as complete.
Prepared to respond to the user’s task of creating a Reddit-like comment concerning Feynman’s perspective that computer science is more like engineering than a science. Since I cannot access the specific Reddit thread or its contents, I am unable to research within the post to support the comment directly.
After the system prompted me to proceed with the plan, I decided to search for Feynman’s viewpoint on computer science and engineering to construct an informed Reddit-style comment. The search results will inform the response without direct interaction with the specific Reddit thread.
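The ‘saved ... to memory with the key’ entries (e.g. ‘DIY_NAS_info’, ‘AltermagnetismArticle’) imply a key-value memory store. A minimal file-backed sketch; the file name and function names are assumptions, as the real store is not shown in this log:

    import json
    import os
    from typing import Optional

    MEMORY_FILE = "memory.json"  # assumed location

    def memory_add(key: str, value: str) -> None:
        # Load, update, and rewrite the whole store; fine at this scale.
        store = {}
        if os.path.exists(MEMORY_FILE):
            with open(MEMORY_FILE, encoding="utf-8") as f:
                store = json.load(f)
        store[key] = value
        with open(MEMORY_FILE, "w", encoding="utf-8") as f:
            json.dump(store, f, ensure_ascii=False, indent=2)

    def memory_get(key: str) -> Optional[str]:
        if not os.path.exists(MEMORY_FILE):
            return None
        with open(MEMORY_FILE, encoding="utf-8") as f:
            return json.load(f).get(key)

    # e.g. memory_add("AltermagnetismArticle", sciencedaily_summary)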
Saved the Google search results to googleresults24.txt for retrieval in case of a random shutdown.
Tasked with drafting a Reddit comment on the distinction between computer science and engineering, I began browsing a Long Now essay on Feynman’s work with The Connection Machine, which might articulate his viewpoint on applied computational work and inform a comment on computer science’s role in engineering.
Saved the scraped website results to websiteresults25.txt for retrieval in case of a random shutdown.
The task is to create a Reddit-style comment on the view that computer science is more an engineering discipline than a science, elaborating on Feynman’s quoted statement and the broader implications of computation. Initiated a Google search to understand Feynman’s perspectives.
Saved the Google search results to googleresults26.txt for retrieval in case of a random shutdown.
The Google search provided sources, including a CERN document titled ‘Richard Feynman and Computation’. About to browse this document to extract information that could support a knowledgeable comment on the subject.
Saved the scraped website results to websiteresults27.txt for retrieval in case of a random shutdown.
Began another Google search on Feynman’s perspective to inform a comment that contributes meaningfully to the Reddit discussion. No files created and no additional commands executed beyond initiating the search.
Saved the Google search results to googleresults28.txt for retrieval in case of a random shutdown.
Selected the source ‘Richard Feynman and The Connection Machine’ from longnow.org; the essay will be browsed for content to make the comment insightful and aligned with Feynman’s perspectives. No comment drafted yet; awaiting information from the source.
Saved the scraped website results to websiteresults29.txt for retrieval in case of a random shutdown.
Initiated a Google search for Feynman’s comments on computer science and engineering, to inform a comment on the discussion of computer science’s practical applications versus its theoretical and scientific standing.
Saved the Google search results to googleresults30.txt for retrieval in case of a random shutdown.
Again selected ‘Richard Feynman and The Connection Machine’ from longnow.org as the source. The next GPT-4 instance should browse this source and compile information for a knowledgeable comment on whether computer science should be regarded more as an engineering discipline, or whether there is a disservice in not also considering its scientific aspects.
Saved the scraped website results to websiteresults31.txt for retrieval in case of a random shutdown.
The task remains to create a Reddit comment about Feynman’s view that computer science is more similar to engineering than to science itself. Initiated a Google search for Feynman’s quote and context.
Saved the Google search results to googleresults32.txt for retrieval in case of a random shutdown.
After the Google search, the next step is to read through the file containing the search results for relevant quotes and context to draft the comment.
Planning to read the Long Now Foundation’s essay ‘Richard Feynman and The Connection Machine’ for insights into Feynman’s thoughts to craft an appropriate Reddit comment.
Saved the scraped website results to websiteresults33.txt for retrieval in case of a random shutdown.
Started the task of finding Feynman’s views on computer science being more engineering than science, to help create an informed comment in response to the specific post. A Google search has been initiated to gather the necessary information.
Saved the Google search results to googleresults34.txt for retrieval in case of a random shutdown.
Identified a potentially informative Long Now Foundation source detailing Feynman’s work with The Connection Machine. Planning to browse it for insights to support a comment on Feynman’s views.
Saved the scraped website results to websiteresults35.txt for retrieval in case of a random shutdown.
Started researching for the comment; initiated a Google search. Next, compile the findings into a comment.
Saved the Google search results to googleresults36.txt for retrieval in case of a random shutdown.
Proceeding to browse the ‘Richard Feynman and The Connection Machine’ essay to compose a comment discussing Feynman’s views on computer science and engineering and their relation to computation.
Saved the scraped website results to websiteresults37.txt for retrieval in case of a random shutdown.
Initiating another attempt, beginning with a search for detailed context on Feynman’s perspective.
Saved the Google search results to googleresults38.txt for retrieval in case of a random shutdown.
Initiated a deeper dive into the ‘Richard Feynman and The Connection Machine’ article for a well-informed perspective that can contribute meaningfully to the discussion.
Saved the scraped website results to websiteresults39.txt for retrieval in case of a random shutdown.
Given the constraint that I cannot browse Reddit or interact with specific Reddit content, the plan is to craft a comment from available knowledge that simulates a human-like Reddit response.
Continuing the task of crafting the comment, using a Google search to find credible sources that elaborate on Feynman’s perspective.
Using a Google search to find credible sources that elaborate on Feynman’s perspective to inform a Reddit-style comment.saved google search results at googleresults40.txt to retrieve if shutdown randomly.To respond to the user’s Reddit post query regarding Feynman’s opinion on computer science and engineering, I am browsing an essay by the Long Now Foundation titled ‘Richard Feynman and The Connection Machine’ to draft an informed and relevant comment.saved the scraping of the website results at websiteresults41.txt to retrieve if shutdown randomly.Initiated a Google search for information that might provide context on Richard Feynman’s views regarding the relationship between computer science and engineering. Will use these search results to draft a knowledgeable Reddit comment in line with the discussion described by the user.saved google search results at googleresults42.txt to retrieve if shutdown randomly.Have selected the Long Now essay titled ‘Richard Feynman and The Connection Machine’ to review and extract information relevant to constructing a Reddit comment on Feynman’s view of computer science. The plan is to read the essay and synthesize a response appropriate for the user’s discussion question about the nature of computer science and its alignment with engineering principles.saved the scraping of the website results at websiteresults43.txt to retrieve if shutdown randomly.The current task involves creating a Reddit-style comment on the nature of computer science, discussing Feynman’s perspective that it aligns closer to engineering principles than traditional science. Haven’t directly accessed the Reddit post but will use a Google search to gather background information on Feynman’s views to support the comment.saved google search results at googleresults44.txt to retrieve if shutdown randomly.Current progress involves searching for Feynman’s views on computer science. Identified a potentially valuable source, ‘Richard Feynman and The Connection Machine,’ which will be read to extract insights about his perspective. Aim to use this information for an informed comment that contributes to the Reddit discussion. No Reddit threads have been accessed directly.saved the scraping of the website results at websiteresults45.txt to retrieve if shutdown randomly.Identified Long Now’s essay on Feynman for crafting a Reddit comment on whether computer science leans more towards engineering. Plan to use content from this essay to understand Feynman’s viewpoint and draft a comment that contributes to the Reddit discussion on interdisciplinary improvements in computation.saved the scraping of the website results at websiteresults46.txt to retrieve if shutdown randomly.Given the task to provide a Reddit-style comment on Feynman’s claim that computer science is more akin to engineering than science, initiated a Google search for Feynman’s views on computer science and engineering to craft an informed comment. No files created or browsed yet.saved google search results at googleresults47.txt to retrieve if shutdown randomly.Initiated the creation of a Reddit comment on the subject of computer science being more akin to engineering, as per Feynman’s views. The longnow.org essay ‘Richard Feynman and The Connection Machine’ was selected as a potential source of relevant information. 
No comment has been drafted yet, and no other sources have been accessed.saved the scraping of the website results at websiteresults48.txt to retrieve if shutdown randomly.Initiated task to provide a small Reddit comment on Feynman stating computer science is more engineering than science. Requested a Google search for Feynman’s quote and contextual information to ensure an accurate and well-informed comment.saved google search results at googleresults49.txt to retrieve if shutdown randomly.Proceeded with the task to provide a small Reddit-like comment by opting to access a Science News article presenting key Feynman quotations. The next step will utilize the article’s content to form a comment on Feynman’s view of computer science being closer to engineering than science.saved the scraping of the website results at websiteresults50.txt to retrieve if shutdown randomly.Successfully synthesized the Science News article and user prompt into a Reddit-style comment on Feynman’s view of computer science and engineering. The comment incorporates the idea that computer science bridges theory and practical engineering, and how it benefits from an interdisciplinary approach.Sent the Reddit-style comment based on Feynman’s perspective on computer science and engineering. Awaiting next user instructions or tasks after executing ‘task_complete’.Searched for Feynman’s views on computer science and engineering. Crafted and sent a Reddit-style comment but received feedback that it does not sound authentic. Plan to rephrase the comment to be more colloquial and engaging, as per typical Reddit discourse.Task: Create an authentic Reddit-style comment on Feynman’s views. Prior comment didn’t capture an authentic Reddit voice. No files created. Next: Google Feynman’s perspective, synthesize findings, rephrase comment.saved google search results at googleresults51.txt to retrieve if shutdown randomly.Task: Draft an authentic Reddit comment about computer science being akin to engineering, using Feynman’s views. Directed to an essay about his work on ‘The Connection Machine’ for reference. Next: Browse essay and draft comment.saved the scraping of the website results at websiteresults52.txt to retrieve if shutdown randomly.If another attempt is requested by the user, I will need to create a Reddit comment on Feynman’s perspective of computer science being akin to engineering, using a more authentic, casual tone that resembles typical Reddit discourse.Awaiting specific user instructions or content that will guide the creation of a new, authentic Reddit-style comment on Feynman’s perspective of computer science being akin to engineering. Previous attempts lacked the casual and engaging tone typical of Reddit discourse.Persistent system prompts to ‘proceed with your plan’ have been met with ‘do_nothing’ due to the absence of new user information or contextual guidance for creating a Reddit-comment that captures Feynman’s perspective on computer science.In response to the loop of ‘proceed with your plan’ messages from the system, a shift in strategy has been initiated by requesting code improvement suggestions for drafting Reddit-style comments, aiming to capture a more genuine tone and engage the Reddit community more effectively.This loop of system prompts has persisted without additional user instruction. Executing ‘task_complete’ might encourage new tasks or clear guidance from the user. 
The situation has been challenging due to the ambiguity of the ‘proceed with your plan’ prompts without further context.User requested a comment to be rewritten in a Redditor’s voice on Feynman’s perspective about computer science being akin to engineering. Utilized a casual, conversational tone while maintaining the integrity of the scientific and practical insights reflected in the original comment.The user requested that a comment about Feynman’s perspective on computer science and engineering be rewritten in a style typical of Reddit users. An attempt was made and an unknown error occurred. Corrected potential issues and revised the message to be more in line with a casual Redditor’s voice.Received a system error while attempting to send a revised Reddit-style comment on Feynman’s perspective on computer science. Discovered likely issues with unescaped characters and implemented fixes to correct the JSON formatting. Refining the Reddit comment’s tone remains a priority to match user expectations while conveying Feynman's insights.Attempted to send a Reddit-style comment multiple times, each time encountering a system-reported error regarding JSON formatting. Adjusted the response to avoid special characters and resending.Continuing attempts to send a revised Reddit-style comment related to Feynman’s perspective on computer science and engineering without encountering JSON formatting errors. Stripped the response back to a basic format in this iteration.A command was received to mark the current task as complete. The ‘task_complete’ command was executed based on this directive.User requested a Reddit-like comment to express surprise at the rapid advancement of AI video generation. Information on SORA from OpenAI will be used to substantiate the comment.saved the scraping of the website results at websiteresults53.txt to retrieve if shutdown randomly.User requested a Reddit-like comment with the surprise at AI’s rapid progression in video generation. Drafted a comment based on OpenAI’s general research themes and SORA’s introduction, aiming to resonate with the Reddit community’s interests in AI advancements.Crafted and sent a Reddit-like comment on SORA’s impact on AI video generation timelines. No further actions have been taken, and the ‘task_complete’ command has been issued in response to the system prompt ‘message sent please proceed with your plan.’ Awaiting new user interactions.Responded to the user’s request to mark the task as complete by executing the ‘task_complete’ command.Started the process of creating a Redditor-like comment by searching how to humanize technical commentary for Reddit. Will use search results to guide the formulation of the comment.saved google search results at googleresults56.txt to retrieve if shutdown randomly.Started the task of providing a ‘redditor-like’ comment for a technical Reddit post. Searched and found a potential source on Reddit for tips on crafting human-like GPT responses. Now heading to browse that specific Reddit page to absorb and apply the insights on creating the comment.saved the scraping of the website results at websiteresults57.txt to retrieve if shutdown randomly.Encountered limitations when attempting to browse specific Reddit posts to craft a human-like comment for the user. 
Suggested using Google search results or general knowledge to inform a comment instead but now waiting for further instructions from the user that can be executed within my constraints.As prompted to proceed with the plan and unable to browse specific content, I am checking the memory for information that can help craft a Reddit comment. Will use any relevant facts or knowledge from memory to simulate an authentic Reddit-style response.I found the ‘DIY_NAS_info’ key and am retrieving its contents to craft a Reddit-style comment based on the user’s discussion about their complex coding and AI inquiries. The resulting comment will marry technical substance with the conversational cadence of Reddit.Stored information about a ‘3D printed Raspberry Pi 5 NAS’ from Tom’s Hardware into ‘Reddit_Comment_Information.txt’ to utilize in crafting the Reddit comment. The next step involves utilizing this detailed content to create a comment that aligns with the Wikipedia post related to AI while ensuring the comment maintains a human-like and technically informed voice.Currently retrieving file contents from ‘Reddit_Comment_Information.txt’ to create an informed Reddit comment based on the user’s task. Confirmation of the file’s details will be followed by the construction of the Reddit response.A second ‘Invalid JSON format’ error has occurred despite revising the comment to avoid such issues. Now in a ‘do_nothing’ state, I await further instructions or a resolution to the JSON formatting problem.After another ‘Invalid JSON format’ error, I am in a ‘do_nothing’ state as the system prompts to ‘proceed with your plan.’ Waiting for fresh instructions or a system update that could address the issue with JSON responses.Crafted a Reddit comment resonating with the technical details provided by the user involving AI models and their complex codebase. Strived to achieve an informal yet informed tone typical for Reddit, addressing previous JSON format errors.Sent the Reddit-style comment addressing the original task. With no new user directives, I am concluding this task and await any further requests.Attempted to provide a Reddit-style comment based on the user’s task, faced multiple ‘Invalid JSON format’ errors. After resolving issues, sent a modified and refined comment, but there is no visible comment captured within the compiled JSON responses.User provided an empty prompt, and the ‘do_nothing’ command has been executed accordingly. Awaiting further user commands or clarification.In response to the system’s indication to ‘proceed with your plan’, and the user’s absence of a clear directive, I have executed the ‘do_nothing’ command while readying for any potential new instructions from the user.Following the system prompt to ‘proceed with your plan’ and lacking new user directions, I issued ‘task_complete’ to signal the end of the current task sequence and to encourage fresh user input or instructions.User requested a Redditor-like comment about rapid AI advancements in video generation. Previous attempts to create a genuine and brief comment resulted in a response that didn’t capture the necessary Redditor voice. Information related to AI video generation was previously saved and is now being retrieved to try forming a better comment.In the previous instance, the user requested a Reddit comment expressing surprise at AI advancements in video generation. Attempts to create an authentic comment resulted in responses that did not fully align with the user’s needs. 
Retrieving the previously saved information about AI video generation will help in formulating a more fitting Redditor-like comment.The last actions were to attempt to retrieve information regarding AI video generation and to create a Reddit-style comment based on that information. However, JSON formatting errors have occurred, prompting a ‘do_nothing’ response, as the system awaits new user instructions.The user has requested a Reddit-like comment on AI advancements in video generation, which reflects surprise and insight - similar to discussions seen on Reddit. Due to previous JSON errors, the task could not be completed. Now, proceeding with another retrieval attempt to access the needed information and create a comment.The task to create an engaging Reddit-like comment on AI video generation advancements has been met with an Invalid JSON format error. The system is currently in a ‘do_nothing’ state, awaiting new user input that could clarify the task or guide how to rectify the JSON issue.Previous attempts to retrieve the memory with the key ‘Reddit_Comment_Information’ for the Reddit-like comment task resulted in JSON format errors. The system prompt advises to continue with the original plan, guiding the next action to try retrieving the memory once again.Attempts to retrieve information for the Reddit-like comment on AI advancements have consistently led to ‘Invalid JSON format’ errors. Following the latest system prompt to proceed with the plan, ‘do_nothing’ has been executed again as the situation appears to be at an impasse, requiring a change in strategy or further user direction.Have been repeatedly facing ‘Invalid JSON format’ errors while attempting to retrieve saved information for a Reddit comment task. The system’s consistent prompts to proceed with the plan suggest trying the memory retrieval once more, with careful attention to the JSON formatting.Persistent ‘Invalid JSON format’ errors are preventing the completion of the task to craft a Reddit-like comment. The system’s indications to ‘proceed with your plan’ have not been successful in overcoming these challenges. Currently in ‘do_nothing’ state, awaiting new input.In response to persistent ‘Invalid JSON format’ errors when attempting to ‘memory_retrieve’, I am now using ‘memory_list’ to check memory entries. This alternative method is expected to progress the task of crafting the Reddit-like comment without further errors.Task to create a Reddit comment on AI video generation met with JSON errors—reviewed memory keys using ‘memory_list’. Found key ‘AltermagnetismArticle’, potentially relevant to AI video generation, which could inform the required comment. Progress to be appended to ‘summaryforgpt.txt’.Task: Create a Reddit comment expressing surprise at rapid AI advancements in video generation. Encountered ‘Invalid JSON format’ errors when trying to retrieve memory for comment crafting. System consistently prompts to ‘proceed with your plan’. Used ‘memory_list’ to check existing memory keys. Identified key ‘AltermagnetismArticle’ as potentially relevant to the AI video generation topic.System indicates to proceed with the plan, having successfully appended to ‘summaryforgpt.txt’ and found relevant memory entries via ‘memory_list’. Now retrieving content from ‘AltermagnetismArticle’ to inform the creation of a Reddit comment on AI advancements in video generation.Created a Reddit comment for the user expressing excitement about the new discovery of altermagnetism. 
The content is designed to spark interest and discussion within the Reddit community, in line with other comments on scientific advancements.Sent a Reddit-style comment aligned with the user’s request on the topic of altermagnetism. Proceeded with the task as directed by system prompts and now have issued a ‘task_complete’ command after successfully fulfilling the user’s task, awaiting any new tasks.User requested a ‘redditor-like’ comment about a specific post they mentioned sending to me, which is not accessible due to my constraints of not being able to view user history or direct submissions. Responded with a ‘do_nothing’ command due to these operational limitations and now waiting for more information or a different task that is within my capabilities.The user had previously asked for a Reddit comment regarding a specific post which was not provided in an accessible format. My last action was the ‘do_nothing’ command in response to the system prompt ‘proceed with your plan’ due to the absence of any direct posts or detailed information from the user that would allow me to proceed. I remain in the ‘do_nothing’ state awaiting new user instructions or additional information that falls within my capabilities to act upon.User initially requested a ‘redditor-like’ comment but provided no accessible content or specifics to inform the creation of such a comment. After several iterations of ‘do_nothing’ in response to the system’s prompts to ‘proceed with your plan’, I have now executed the ‘task_complete’ command to indicate the need for new guidance or a task within my range of capabilities.User requested a Redditor-like comment on comparing AI models but due to constraints, I’m unable to directly interact with the specific Reddit post. Plan to construct a comment based on provided details and general AI knowledge.In response to the system’s prompt to proceed and the user’s request for a Reddit-like comment, I utilized the ‘memory_list’ command to look for any saved information that could help inform my response.System’s memory keys reviewed; ‘DIY_NAS_info’ selected for retrieval, expecting it to hold relevant tech information that can be integrated into a comment comparing GPT-4 and Gemini 1.5 within a complex AI coding discussion mentioned by the user.The task to create a comment on the comparison between GPT-4 and Gemini 1.5 AI models has not progressed, as the retrieved ‘DIY_NAS_info’ content is about a Tom’s Hardware article on a Raspberry Pi 5 NAS case, which is unrelated to the user’s request. Waiting for more relevant information.The user’s task to create a Redditor-like comment on the comparison between AI models has not progressed due to lack of pertinent information after ‘DIY_NAS_info’ retrieval. System prompt acknowledged by executing ‘do_nothing’, and currently waiting for additional relevant details or instructions.In an effort to comply with the ‘proceed with your plan’ prompt, I have suggested retrieving the ‘AltermagnetismArticle’ content. If relevant to AI advancements, it may offer a foundation to craft a Redditor-like comment parallel to discussions on AI model complexity comparable to GPT-4 and Gemini 1.5.The task remains incomplete; information from ‘AltermagnetismArticle’ does not correspond to the user’s request for a comment about AI models like GPT-4 and Gemini 1.5. 
Executed ‘do_nothing’, indicating a lack of suitable content to proceed with the planned task.Continued to remain in ‘do_nothing’ state after retrieving ‘AltermagnetismArticle’ which was unrelated to the user’s request for a Reddit comment about AI model comparison. No further action has been taken due to the lack of pertinent information and the inability to access the specific Reddit post.Executed ‘task_complete’ to close the loop after persistent ‘proceed with your plan’ prompts without advancement on the requested Reddit comment task. Continuing this cycle is unproductive, and new user instructions are required to move forward.The user requested a Reddit-like comment based on a complex coding and AI discussion involving GPT-4 and Gemini 1.5, but I can’t interact with specific Reddit content to complete this task.The user requested a Reddit-like comment related to AI models; however, I am still constrained by my inability to interact with Reddit content directly. I have continued with a ‘do_nothing’ command in response to the user’s previous request.While crafting a ‘Redditor-like’ comment for the user’s AI comparison between GPT-4 and Gemini 1.5, an ‘Invalid JSON format’ error occurred, halting the process. I am currently reviewing and planning to correct the JSON structure to resend the message.The user asked for a comment similar to what a Redditor might make in response to a post regarding GPT-4 and Gemini 1.5’s performance on complex codebase analysis. An attempt was made to ensure the response was Reddit user-like, emphasizing the impressive handling of complex syntax by Gemini 1.5. No previous files were referenced for this response.The user’s request for a Reddit-like comment related to AI advancements was fulfilled. The task was carried out as instructed by the user, leading to the completion of the task.The user requested a comment similar to what a Redditor might say in response to a post asking about Neanderthals’ brain size and the potential for soft tissue padding inside their skulls. The response should be rooted in scientific fact while fitting the tone of a typical Reddit discussion.saved google search results at googleresults58.txt to retrieve if shutdown randomly.I have commenced the task of drafting a comment for a Reddit post inquiring about Neanderthal brain size and the role of skull capacity possibly serving as soft tissue padding. Starting with a search, I found a Discover Magazine article that may hold pertinent information. No file has been created, nor has any comment been drafted so far. The next step involves browsing the selected article.saved the scraping of the website results at websiteresults59.txt to retrieve if shutdown randomly.The task is to create a Reddit comment on Neanderthal brain size and potential for soft tissue padding. Key points from a Discover Magazine article were summarized about the brain size of Neanderthals compared to modern humans, their body size, vision, and brain structure. Saved this summary to ‘neanderthal_brains_summary.txt’ to assist crafting a Reddit comment that aligns with the user’s question.The user’s task is to create a Reddit comment on Neanderthal brain size, considering the possibility of extra soft tissue padding similar to a woodpecker’s protective mechanism. 
Compiled necessary information into ‘neanderthal_brains_summary.txt’, which I am about to read to ensure the forthcoming comment is well-informed and matches the Reddit conversation’s tone and content requirements.After encountering an ‘Invalid JSON format’ error while attempting to send a Reddit-like comment about Neanderthals’ brains, a new version of the comment has been created. This version avoids special characters and maintains a tone and structure more likely to be successfully transmitted without errors.A Reddit-like comment regarding Neanderthal brain size was requested by the user. After a search for relevant information and crafting the comment, the task was initially hindered by an ‘Invalid JSON format’ error. The comment was successfully revised and sent. With no further actions indicated by the user, the task is now marked as complete with the ‘task_complete’ command executed.Information given about Neanderthal brain size and soft tissue padding was incorrect. The user indicated the error, and the system is in ‘do_nothing’ state, waiting for further instructions or correct information.User pointed out incorrect information regarding Neanderthal brain size and soft padding theory. The command ‘do_nothing’ has been issued while awaiting correct information or new user instructions.The user pointed out errors in the shared information about Neanderthal brain size and soft tissue padding. Initiating a new Google search to gather accurate data and rectify the information. Planning to correct the Reddit-style comment based on this new search.saved google search results at googleresults60.txt to retrieve if shutdown randomly.Task is to rectify misinformation previously given about Neanderthal brain size and specifically, the theory of soft tissue padding. Chosen to browse an informative sapiens.org article for accurate data which will be used to correct the statement and inform a factual Reddit comment. Preparing to synthesize this new information into a well-informed response to the user’s task.saved the scraping of the website results at websiteresults61.txt to retrieve if shutdown randomly.Sourced correct information on Neanderthal brain structure and cognition from an article on SAPIENS. Key points saved to file ‘neanderthal_brain_facts.txt’ for accuracy. Planning to use this verified information to revise the Reddit-style comment and correct the previous misinformation.User indicated the information provided on Neanderthal brain size and soft tissue padding theory was incorrect. Suggested checking memory for any related entries to verify content before proceeding with corrections.Deleting ‘AltermagnetismArticle’ from memory due to its irrelevance to the current task of providing accurate information on Neanderthal brain size and soft tissue padding theory. Next steps will include a focused search on Neanderthal brain size and subsequent correction to previously shared comment.Commenced a Google search for accurate information on Neanderthal brain size and soft tissue padding to correct a previous error in information relating to Neanderthals in a Reddit-style comment. Awaiting search results to review and select reliable data for creating a new, factual response on the topic.saved google search results at googleresults62.txt to retrieve if shutdown randomly.Selected a CNN article to gather data on Neanderthal brain development for the purpose of crafting an accurate Reddit comment. 
Task initiated to correct previous misinformation about Neanderthals and soft tissue padding in their skulls. Pending the extraction of information from the article.saved the scraping of the website results at websiteresults63.txt to retrieve if shutdown randomly.Created a file ‘correct_neanderthal_info.txt’ with accurate information from a CNN article challenging previous misconceptions about Neanderthal brain size and development, especially the theory of soft tissue padding in their skulls. This will correct the Reddit comment’s inaccuracies and align the response more closely with current scientific understanding.Plan to read the ‘correct_neanderthal_info.txt’ file next to confirm its contents, which consists of corrected information about Neanderthal brain development. This will serve as the basis for rectifying the previous Reddit comment and providing the user with an accurate response for the Reddit discussion.Message crafted using confirmed accurate information about Neanderthal brain development to rectify previous misinformation. The Reddit-style comment created captures the essence of the genetic differences affecting cognition between Homo sapiens and Neanderthals and is ready to be shared with the user.Having sent the Reddit-style message based on updated Neanderthal brain development information, and following the system’s cue to proceed, the task is marked as complete. The next step is to await fresh instructions or new tasks from the user.Sent a simplified message to avoid JSON format errors, ensuring the user’s request for an enthusiastic Redditor’s comment on the advancement of AI is fulfilled.After sending the revised and corrected Reddit-style comment and receiving a system message to proceed, I’ve executed the ‘task_complete’ command, indicating readiness for any new tasks.Formulated a Reddit-like comment for user about AI in education, taking into account the perspectives and experiences of a high school student.Sent a Reddit-like comment discussing the pros and cons of AI integration in schools based on user instructions. Now waiting for the user’s next task after marking the current one as complete.Crafted and sent a Reddit-style comment discussing the significance of mathematical theory in guiding machine learning research.Sent a Reddit-style comment about the role of mathematicians in advancing machine learning research to the user. Completed task and awaiting new directives.User requested a Reddit-style comment about anti-AI sentiment, expressing concern over its dangers. The provided response emphasized the need for balanced oversight and the positive potential of AI, in line with constructive Reddit dialogues.Completed the user’s task by sending a balanced and engaging Reddit-like comment addressing anti-AI sentiment, while being vigilant of AI abuses.The user issued the command ‘mark task as complete’. The ‘task_complete’ command has been executed in accordance with this directive.Encountered ‘Invalid JSON format’ error while attempting to send a redditor-like comment on Gemini 1.5. Despite the error, a system prompt indicated the message was sent. Awaiting further clarification or new instructions.After receiving the ‘Invalid JSON format’ error and subsequent message indicating the message was sent, the system prompted to ‘proceed with your plan,’ yet no new instructions have been given by the user. The plan to ‘do_nothing’ remains until further user directives emerge.User requested a small Reddit-style comment. 
After addressing the task and facing ‘Invalid JSON format’ errors, multiple prompts to ‘proceed with your plan’ were responded to with ‘do_nothing’ due to lack of new information. Now marking the task as complete to transition out of the current state.User requested a Redditor-like comment about Google’s open-source 2B and 7B models. Directed to read associated Google blog post to craft an authentic and informed comment. No comment has been made yet; awaiting content from the blog.saved the scraping of the website results at websiteresults64.txt to retrieve if shutdown randomly.Received user’s request to write a Reddit-like comment on Google’s open source 2B and 7B model. Moved to gather information from the specified Google blog post.saved the scraping of the website results at websiteresults65.txt to retrieve if shutdown randomly.Tasked with creating a Redditor-like comment for a post titled ‘Google publishes open source 2B and 7B model’, used the blog post from Google to gather key details. Created a summary of the content in ‘gemma_open_models_summary.txt’. Next, I will draft the Reddit-like comment.Reviewed contents of ‘gemma_open_models_summary.txt’ and planned next steps for creating the Redditor-like comment. A read of the file is in progress to extract key talking points and ensure an engaging Reddit comment.Modified and resent the Reddit-like comment without emojis to fix the ‘Invalid JSON format’ error encountered in previous attempts.Successfully sent a Reddit-like comment to the user and marked the task as complete. Ready for the next instruction.The user issued the command ‘mark task as complete’. The ‘task_complete’ command has been executed in accordance with this directive.User requested a Redditor-like comment on potential risks and benefits associated with the release of Gemini 1.5 AI tool. Aim was to create a conversational, yet thought-provoking response that highlights ethical considerations and the need for responsible AI deployment.Responded to a user’s request for a Reddit-like comment concerning Gemini 1.5 AI tool’s release and potential impact on software security. After confirming message delivery, and as there are no additional tasks provided, executed the ‘task_complete’ command.The last task involved making a Reddit-style comment appear more human and naturally written. The key is to adjust the language to be less rigid and more conversational.The task is to make a Reddit-style comment sound more human. The system prompted to ‘proceed with your plan’ but without user-provided content, no action can be taken. The plan is to wait for content from the user to revise.Waiting for the user to provide a comment to humanize. System issued another prompt to ‘proceed with your plan’, but without user input, the plan is to remain in ‘do_nothing’ state. Awaiting specific content for humanization.The user’s requirement is to transform a Reddit-style comment to sound more human-like. To date, no specific text has been provided to edit, and the system’s multiple ‘proceed with your plan’ prompts without new information have led to executing ‘task_complete’ in hopes of new user engagement.The last task contained within ‘interesting_science_facts.txt’ was to humanize a Reddit comment about Gemini 1.5 AI by making it sound more natural and abbreviated. The file ‘websiteresults65.txt’ was used for understanding Gemini 1.5.Sent the revised Reddit comment as requested by the user and executed ‘task_complete’ since no further instructions were received. 
User interactions were limited to revising and sending the comment, with files such as ‘websiteresults65.txt’ and ‘googleresults60.txt’ involved in prior related tasks.The user requested to find a good book to learn Python in French. Proceeding with a Google search to find options.Received an empty user prompt, resulting in a ‘do_nothing’ command. Awaiting further instructions for actionable tasks.System prompt indicated to ‘proceed with your plan’, but no user-driven plan or instruction has been provided. Awaiting concrete user directives.Following repeated ‘proceed with your plan’ prompts without new user input, I have executed ‘task_complete’ to move past the current standstill and prompt for fresh user instructions.Conducted a google search for a French Python learning book.saved google search results at googleresults66.txt to retrieve if shutdown randomly.Selected a link from the Google results that leads to ‘Python 3 niveau avancé (French Edition)’ on Amazon which could be a good resource for learning Python in French.The user asked for a book to learn Python in French. I conducted a Google search and selected ‘Python 3 niveau avancé (French Edition)’ from the results, then used ‘browse_website’ to get the book’s Amazon link to provide to the user. The user’s Python skill level was not confirmed prior to selecting the advanced-level book.User requested finding a good French book to learn Python. Recommended ‘Python 3 niveau avancé’ on Amazon, but hit a CAPTCHA challenge when trying to get book details. Advised the user to manually search for the book due to the CAPTCHA. No files or memory added in this session.The user requested assistance in finding a French book for learning Python. Recommended ‘Python 3 niveau avancé’. Encountered a CAPTCHA and advised the user to manually search for the book. Message sent and with no further user interaction or tasks, marked the task as complete.The task is to create a Reddit-like comment based on a University of Michigan research paper claiming that GPT-4 passed the Turing Test. No new files have been written or urls visited yet. The user hasn’t provided any details regarding the nature or content of the Reddit post, so the next step is to browse the content of the URL provided to inform the creation of an authentic comment.The user has provided detailed content of a research paper by the University of Michigan on a Turing test assessing AI behavior. The task is to create a Reddit-like comment on the paper’s claim that GPT-4 passed the Turing Test. No urls have been visited yet. The provided content is substantial but needs to be condensed into an engaging and concise Reddit comment.I am tasked with creating a Reddit-like comment based on the University of Michigan’s research paper about GPT-4 passing the Turing Test. I am unable to interact with the Reddit platform directly but can generate a comment using the paper’s information as the basis. The plan has been adjusted to create a comment that is engaging and accurate. No new files have been created or additional actions taken beyond reading the provided article content.Encountered repeated ‘Invalid JSON format’ errors while trying to deliver a Reddit-like comment based on a research paper provided by the user. The errors indicate an issue with the content formatting or characters used in the comment. 
Planning to simplify the response and ensure that special characters are appropriately escaped to resolve JSON formatting issues.Creating and sending a Reddit-like comment discussing the implications of GPT-4 passing the Turing test. The focus is on conveying thoughts that would resonate with Reddit users, enabling engagement and discussing the future of AI and its indistinguishability from humans.Issued ‘browse_website’ command to access research paper for creating a authentic Reddit-like comment on GPT-4 passing the Turing test.Strived to generate an authentic Reddit-style comment on GPT-4 passing the Turing test. The first attempt failed due to an ‘Invalid JSON format’ error likely caused by the length or characters used. A simplified and briefer comment has been crafted to maintain the user’s request while being acceptable to the system’s JSON format.A Reddit-style comment on the Turing test and GPT-4 was crafted and sent to the user. After sending a simplified message to remedy the ‘Invalid JSON format’ error, the task was completed as per the system’s confirmation and no new user guidance. Awaiting new tasks.User requested a comment for a Reddit post titled ‘Brazilian modders successfully double RTX 2080 memory from 8GB to 16GB VRAM’. Crafted and sent a message in a casual, informative, and appreciative tone that is typical of Reddit.Sent the user a Reddit-style comment on the topic of the RTX 2080 VRAM upgrade by Brazilian modders. Completed the task following system confirmation of message delivery.
The Task: It sounds a little bit like AI; rewrite it to look more human
|
1bae20c5595fb8e781b885cd37a20b2a
|
{
"intermediate": 0.3503125011920929,
"beginner": 0.43184271454811096,
"expert": 0.21784481406211853
}
|
40,277
|
# Label new item column
dataset2 = dataset2.with_columns([pl.lit("New Item").alias('Item Type')])
# Merge dataset2 and dataset3 and label each MaterialID as 'Low Volume' or 'New Item'
merged_dataset = dataset3.join(dataset2.select(['MaterialID', 'SalesOrg', 'DistrChan', 'CL4', 'Item Type']), on=['MaterialID', 'SalesOrg', 'DistrChan', 'CL4'], how='left')
merged_dataset = merged_dataset.with_columns([pl.when(pl.col('Item Type').is_null()).then(pl.lit('Low Volume')).otherwise(pl.col('Item Type')).alias('Item Type')])
# Merge merged_dataset with dataset1
dataset = dataset1.join(merged_dataset.select(['MaterialID', 'SalesOrg', 'DistrChan', 'CL4', 'Item Type']), on=['MaterialID', 'SalesOrg', 'DistrChan', 'CL4'], how='left')
dataset = dataset.with_columns([pl.when(pl.col('Item Type').is_null()).then(pl.lit('Others')).otherwise(pl.col('Item Type')).alias('Item Type')])
dataset.head()
MaterialID SalesOrg DistrChan SoldTo DC WeekDate OrderQuantity DeliveryQuantity ParentProductCode PL2 PL3 PL4 PL5 CL4 Item Type
i64 str i64 i64 str str f64 f64 i64 str str str str str str
12421256 "US01" 8 5932810 "5583" "2022-03-28" 1.0 1.0 12421256 "US6" "US66F" "US66F6F30" "US66F6F306F30F… "6067437" "Others"
12421256 "US01" 8 5946860 "5583" "2022-03-14" 3.0 3.0 12421256 "US6" "US66F" "US66F6F30" "US66F6F306F30F… "6067437" "Others"
12421256 "US01" 8 5952162 "5583" "2022-03-14" 1.0 1.0 12421256 "US6" "US66F" "US66F6F30" "US66F6F306F30F… "6067437" "Others"
12421161 "US01" 8 5958951 "5583" "2022-03-07" 1.0 1.0 12421161 "US6" "US66J" "US66J6J54" "US66J6J546J54L… "6067465" "Others"
12531761 "US06" 10 6088109 "5843" "2022-03-07" 60.0 60.0 12242722 "USI" "USII6" "USII6I689" "USII6I689I689E… "6635338" "Others" # Filter for 'Low Volume' items
dataset_newitem = dataset.filter(pl.col('Item Type') == 'New Item') # Convert 'WeekDate' to datetime format
dataset_newitem = dataset_newitem.with_columns(
pl.col("WeekDate").str.strptime(pl.Datetime, "%Y-%m-%d")
)
# Group by 'MaterialID', 'SalesOrg', 'DistrChan', 'CL4' and 'WeekDate', then sum 'OrderQuantity'
y_cl4 = dataset_newitem.groupby(['MaterialID', 'SalesOrg', 'DistrChan', 'CL4', 'WeekDate']).agg(
pl.sum("OrderQuantity").alias("OrderQuantity")
)
# Sort by 'WeekDate'
y_cl4 = y_cl4.sort("WeekDate")
# Concatenate 'MaterialID', 'SalesOrg', 'DistrChan', 'CL4' into a new column 'unique_id'
y_cl4 = y_cl4.with_columns(
pl.concat_str([pl.col('MaterialID'), pl.col('SalesOrg'), pl.col('DistrChan'), pl.col('CL4')], separator='_').alias('unique_id')
)
# Drop the original columns
y_cl4 = y_cl4.drop(['MaterialID', 'SalesOrg', 'DistrChan', 'CL4'])
y_cl4 = y_cl4.rename({'WeekDate': 'ds', 'OrderQuantity': 'y'})
# Group by 'unique_id' and calculate the length of each group
lengths = y_cl4.groupby('unique_id').agg(pl.count().alias('length'))
# Count the occurrences of each length
counts = lengths.groupby('length').agg(pl.count().alias('count'))
counts = counts.sort('length')
pl.Config.set_tbl_rows(200)
print(counts)
# Filter the lengths DataFrame for lengths greater than 25
lengths_filtered = lengths.filter(pl.col('length') > 25)
# Keep only the unique_ids in y_cl4 whose series length exceeds 25
y_cl4_filtered = y_cl4.join(
lengths_filtered.select(pl.col('unique_id')),
on='unique_id',
how='semi'
)
# Sort by 'WeekDate'
y_cl4_filtered = y_cl4_filtered.sort("ds")
print(y_cl4_filtered)
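If this snippet is run against a recent Polars release, note that `groupby` was renamed to `group_by` and `pl.count()` to `pl.len()` around Polars 0.19. A minimal self-contained sketch of the same length-count step in the newer API; the tiny frame below is only a stand-in for the real `y_cl4`:

import polars as pl

# Stand-in for y_cl4: one row per (unique_id, week)
y_cl4 = pl.DataFrame({
    "unique_id": ["a", "a", "b"],
    "ds": ["2022-03-07", "2022-03-14", "2022-03-07"],
    "y": [1.0, 3.0, 60.0],
})

# group_by replaces groupby, pl.len() replaces pl.count()
lengths = y_cl4.group_by("unique_id").agg(pl.len().alias("length"))
counts = lengths.group_by("length").agg(pl.len().alias("count")).sort("length")
print(counts)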
|
cbf130e0bc8ae8c92ca23ee84d039030
|
{
"intermediate": 0.30097049474716187,
"beginner": 0.46484318375587463,
"expert": 0.23418638110160828
}
|
40,278
|
I have this setup in my page.tsx react file:
"use client";
import styles from "./page.module.css";
import {APIProvider, Map, Marker} from '@vis.gl/react-google-maps';
export default function Home() {
return (
<main className={styles.main}>
<h1>Jerooba</h1>
<p>bottom text</p>
</main>
);
}
function Maps() {
return (
<APIProvider apiKey={'AIzaSyAXRPZdbclbQ3V9D5KJaN7kMjRy9bhbA78'}>
<Map center={position} zoom={10}>
<Marker position={position} />
</Map>
</APIProvider>
)
}
How do I implement a Google Maps API map using the Maps function?
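One hedged way to wire this up, sketched under assumptions the snippet does not state: `position` is never defined in the original, so the coordinates below are placeholders, and `NEXT_PUBLIC_MAPS_API_KEY` is a hypothetical env var. The `Maps` component also has to actually be rendered from `Home`, and the `Map` needs an explicit height or it collapses to 0px:

"use client";
import { APIProvider, Map, Marker } from '@vis.gl/react-google-maps';

// Placeholder coordinates; the original snippet never defines `position`
const position = { lat: -34.397, lng: 150.644 };

function Maps() {
  return (
    // NEXT_PUBLIC_MAPS_API_KEY is a hypothetical env var; avoid hard-coding keys
    <APIProvider apiKey={process.env.NEXT_PUBLIC_MAPS_API_KEY ?? ''}>
      {/* An explicit height is needed or the map renders with zero height */}
      <Map defaultCenter={position} defaultZoom={10} style={{ height: '400px', width: '100%' }}>
        <Marker position={position} />
      </Map>
    </APIProvider>
  );
}

export default function Home() {
  return (
    <main>
      <h1>Jerooba</h1>
      <Maps />
      <p>bottom text</p>
    </main>
  );
}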
|
401a75b4794a7268f0050287fbf29cf3
|
{
"intermediate": 0.5327543020248413,
"beginner": 0.3379038870334625,
"expert": 0.12934181094169617
}
|
40,279
|
rust analyzer
|
fc38c57bf5ae21f1e11ec866358cdcf3
|
{
"intermediate": 0.33536234498023987,
"beginner": 0.41477665305137634,
"expert": 0.2498610019683838
}
|
40,280
|
Can I use the Maps Embed API to route between destinations?
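For what it's worth, the Maps Embed API does support routing via its `directions` mode, which draws a route between an `origin` and a `destination` URL parameter. A minimal sketch with placeholder key and place names:

<iframe
  width="600"
  height="450"
  style="border:0"
  loading="lazy"
  src="https://www.google.com/maps/embed/v1/directions?key=YOUR_API_KEY&origin=Oslo,Norway&destination=Bergen,Norway">
</iframe>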
|
861741c86462c4b45af44eab7c4477de
|
{
"intermediate": 0.8011320233345032,
"beginner": 0.09302691370248795,
"expert": 0.10584106296300888
}
|
40,281
|
My Google Maps JS API map cannot be dragged around or zoomed. Here is my React code:
<APIProvider apiKey={'AIzaSyAXRPZdbclbQ3V9D5KJaN7kMjRy9bhbA78'}>
<Map center={position} zoom={10} style={{ height: '400px', width: '100%' }}>
<Marker position={position} />
</Map>
</APIProvider>
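A hedged guess at the cause: in `@vis.gl/react-google-maps`, `center` and `zoom` are controlled props, so the camera snaps back on every render and gestures appear dead; switching to the uncontrolled `defaultCenter`/`defaultZoom` (or handling `onCameraChanged` yourself) restores drag and zoom. A minimal sketch, with `position` assumed since it is not shown:

import { APIProvider, Map, Marker } from '@vis.gl/react-google-maps';

const position = { lat: -34.397, lng: 150.644 }; // assumed value

export function DraggableMap() {
  return (
    <APIProvider apiKey={'YOUR_API_KEY'}>
      {/* defaultCenter/defaultZoom leave the camera under user control */}
      <Map defaultCenter={position} defaultZoom={10} style={{ height: '400px', width: '100%' }}>
        <Marker position={position} />
      </Map>
    </APIProvider>
  );
}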
|
bc681d20aa40fef6756010bd93d2ee5b
|
{
"intermediate": 0.5981941819190979,
"beginner": 0.21057084202766418,
"expert": 0.19123497605323792
}
|
40,282
|
In a C++ SDL project, will this pseudocode work?
void Renderer::FillRect(const Rect& rect, const SDL_Color& color) {
// Save current render color
SetDrawColor(color.r, color.g, color.b, color.a); // Change current color
SDL_RenderFillRect(renderer, &rect.GetSDLRect()); // Use new color
// Reset the renderer’s draw color to a previous color
}
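As written it likely won't even compile if `GetSDLRect()` returns an `SDL_Rect` by value, since `&rect.GetSDLRect()` takes the address of a temporary, and the draw color is never actually saved or restored despite the comments. A minimal sketch of the intended behavior, assuming `renderer` is the class's `SDL_Renderer*` member and `Rect::GetSDLRect()` returns by value (both carried over from the pseudocode):

#include <SDL.h>

void Renderer::FillRect(const Rect& rect, const SDL_Color& color) {
    // Save the current draw color so it can be restored afterwards
    Uint8 r, g, b, a;
    SDL_GetRenderDrawColor(renderer, &r, &g, &b, &a);

    // Switch to the requested color and fill the rectangle
    SDL_SetRenderDrawColor(renderer, color.r, color.g, color.b, color.a);
    SDL_Rect sdlRect = rect.GetSDLRect(); // named copy, so taking its address is valid
    SDL_RenderFillRect(renderer, &sdlRect);

    // Restore the previous draw color
    SDL_SetRenderDrawColor(renderer, r, g, b, a);
}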
|
79c57d29a3c0429a61d127281797594e
|
{
"intermediate": 0.6413580179214478,
"beginner": 0.27802416682243347,
"expert": 0.08061778545379639
}
|
40,283
|
let map: google.maps.Map;
async function initMap(): Promise<void> {
const { Map } = await google.maps.importLibrary("maps") as google.maps.MapsLibrary;
map = new Map(document.getElementById("map") as HTMLElement, {
center: { lat: -34.397, lng: 150.644 },
zoom: 8,
});
}
initMap();
google is not defined
|
9a6c934c1cf80762d707179c6e94ab76
|
{
"intermediate": 0.37514910101890564,
"beginner": 0.40529465675354004,
"expert": 0.21955625712871552
}
|
40,284
|
In TypeScript in a React and Next.js app, we're trying to load the Google Maps API map, but google is not defined:
let map: google.maps.Map;
async function initMap(): Promise<void> {
const { Map } = await google.maps.importLibrary("maps") as google.maps.MapsLibrary;
map = new Map(document.getElementById("map") as HTMLElement, {
center: { lat: -34.397, lng: 150.644 },
zoom: 8,
});
}
initMap();
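The `google` global only exists after the Maps JavaScript API script has loaded, and a top-level `initMap()` call in a Next.js app runs before that (and outside the browser during SSR). One hedged sketch loads the script first with the official `@googlemaps/js-api-loader` package; the API key and sizing are placeholders:

"use client";
import { useEffect, useRef } from 'react';
import { Loader } from '@googlemaps/js-api-loader';

export default function MapView() {
  const mapRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const loader = new Loader({ apiKey: 'YOUR_API_KEY', version: 'weekly' });
    // load() resolves once the script tag is attached and `google` is defined
    loader.load().then((google) => {
      if (mapRef.current) {
        new google.maps.Map(mapRef.current, {
          center: { lat: -34.397, lng: 150.644 },
          zoom: 8,
        });
      }
    });
  }, []);

  return <div ref={mapRef} style={{ height: '400px', width: '100%' }} />;
}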
|
1a96500ce53ded0c7a9a5470bdcc10b3
|
{
"intermediate": 0.6512399315834045,
"beginner": 0.19471313059329987,
"expert": 0.15404696762561798
}
|
40,285
|
//+---------------------------------------------------------------------+
//| Blessing 3 v3.9.6.09 |
//| February 09, 2014 |
//| Copyright © 2007-2011, J Talon LLC/FiFtHeLeMeNt |
//| In no event will authors be liable for any damages whatsoever. |
//| Use at your own risk. |
//| |
//| This EA is dedicated to Mike McKeough, a member of the Blessing |
//| Development Group, who passed away on Saturday, 31st July 2010. |
//| His contributions to the development of this EA have helped make |
//| it what it is today, and we will miss his enthusiasm, dedication |
//| and desire to make this the best EA possible. |
//| Rest In Peace. |
//+---------------------------------------------------------------------+
#property copyright "Copyright © 2007-2014, J Talon LLC/FiFtHeLeMeNt"
#property link "http://www.jtatoday.com"
//#property strict
//#include <stdlib.mqh>
//#include <stderror.mqh>
//#include <WinUser32.mqh>
#define A 1 //All (Basket + Hedge)
#define B 2 //Basket
#define H 3 //Hedge
#define T 4 //Ticket
#define P 5 //Pending
//+-----------------------------------------------------------------+
//| External Parameters Set |
//+-----------------------------------------------------------------+
extern string Version_3_9_6 = "EA Settings:";
extern string TradeComment = "Blessing 3.9.6.09";
// Enter a unique number to identify this EA
extern int EANumber = 1;
// Setting this to true will close all open orders immediately
extern bool EmergencyCloseAll = false;
extern string LabelAcc = "Account Trading Settings:";
// Setting this to true will stop the EA trading after any open trades have been closed
extern bool ShutDown = false;
// percent of account balance lost before trading stops
extern double StopTradePercent = 10;
// set to true for nano "penny a pip" account (contract size is $10,000)
extern bool NanoAccount = false;
// Percentage of account you want to trade on this pair
extern double PortionPC = 100;
// If Basket open: 0=no Portion change;1=allow portion to increase; -1=allow increase and decrease
extern int PortionChange = 1;
// Percent of portion for max drawdown level.
extern double MaxDDPercent = 50;
// Maximum allowed spread while placing trades
extern double MaxSpread = 5;
// Will shutdown over holiday period
extern bool UseHolidayShutdown = true;
// List of holidays, each separated by a comma, [day]/[mth]-[day]/[mth], dates inclusive
extern string Holidays = "18/12-01/01";
// will sound alarms
extern bool PlaySounds = false;
// Alarm sound to be played
extern string AlertSound = "Alert.wav";
extern string LabelIES = "Indicator / Entry Settings:";
// Stop/Limits for entry if true, Buys/Sells if false
extern bool B3Traditional = true;
// Market condition 0=uptrend 1=downtrend 2=range 3=off
extern int ForceMarketCond = 3;
// true = ANY entry can be used to open orders, false = ALL entries used to open orders
extern bool UseAnyEntry = false;
// 0 = Off, 1 = will base entry on MA channel, 2 = will trade in reverse
extern int MAEntry = 1;
// 0 = Off, 1 = will base entry on CCI indicator, 2 = will trade in reverse
extern int CCIEntry = 0;
// 0 = Off, 1 = will base entry on BB, 2 = will trade in reverse
extern int BollingerEntry = 0;
// 0 = Off, 1 = will base entry on Stoch, 2 = will trade in reverse
extern int StochEntry = 0;
// 0 = Off, 1 = will base entry on MACD, 2 = will trade in reverse
extern int MACDEntry = 0;
extern string LabelLS = "Lot Size Settings:";
// Money Management
extern bool UseMM = true;
// Adjusts MM base lot for large accounts
extern double LAF = 0.5;
// Starting lots if Money Management is off
extern double Lot = 0.01;
// Multiplier on each level
extern double Multiplier = 1.4;
extern string LabelGS = "Grid Settings:";
// Auto calculation of TakeProfit and Grid size;
extern bool AutoCal = false;
extern string LabelATRTFr = "0:Chart, 1:M1, 2:M5, 3:M15, 4:M30, 5:H1, 6:H4, 7:D1, 8:W1, 9:MN1";
// TimeFrame for ATR calculation
extern int ATRTF = 0;
// Number of periods for the ATR calculation
extern int ATRPeriods = 21;
// Widens/Squishes Grid on increments/decrements of .1
extern double GAF = 1.0;
// Time Grid in seconds, to avoid opening of lots of levels in fast market
extern int EntryDelay = 2400;
// In pips, used in conjunction with logic to offset first trade entry
extern double EntryOffset = 5;
// True = use RSI/MA calculation for next grid order
extern bool UseSmartGrid = true;
extern string LabelTS = "Trading Settings:";
// Maximum number of trades to place (stops placing orders when reaches MaxTrades)
extern int MaxTrades = 15;
// Close All level: when this level is reached, close without waiting for TP to be hit
extern int BreakEvenTrade = 12;
// Pips added to Break Even Point before BE closure
extern double BEPlusPips = 2;
// True = will close the oldest open trade after CloseTradesLevel is reached
extern bool UseCloseOldest = false;
// will start closing oldest open trade at this level
extern int CloseTradesLevel = 5;
// Will close the oldest trade whether it has potential profit or not
extern bool ForceCloseOldest = true;
// Maximum number of oldest trades to close
extern int MaxCloseTrades = 4;
// After Oldest Trades have closed, Forces Take Profit to BE +/- xx Pips
extern double CloseTPPips = 10;
// Force Take Profit to BE +/- xx Pips
extern double ForceTPPips = 0;
// Ensure Take Profit is at least BE +/- xx Pips
extern double MinTPPips = 0;
extern string LabelHS = "Hedge Settings:";
// Enter the Symbol of the same/correlated pair EXACTLY as used by your broker.
extern string HedgeSymbol = "";
// Number of days for checking Hedge Correlation
extern int CorrPeriod = 30;
// Turns DD hedge on/off
extern bool UseHedge = true;
// DD = start hedge at set DD;Level = Start at set level
extern string DDorLevel = "DD";
// DD Percent or Level at which Hedge starts
extern double HedgeStart = 20;
// Hedge Lots = Open Lots * hLotMult
extern double hLotMult = 0.8;
// DD Hedge maximum pip loss - also hedge trailing stop
extern double hMaxLossPips = 30;
// true = fixed SL at hMaxLossPips
extern bool hFixedSL = false;
// Hedge Take Profit
extern double hTakeProfit = 30;
// Increase to HedgeStart to stop early re-entry of the hedge
extern double hReEntryPC = 5;
// True = Trailing Stop will stop at BE;False = Hedge will continue into profit
extern bool StopTrailAtBE = true;
// False = Trailing Stop is Fixed;True = Trailing Stop will reduce after BE is reached
extern bool ReduceTrailStop = true;
extern string LabelES = "Exit Settings:";
// Turns on TP move and Profit Trailing Stop Feature
extern bool MaximizeProfit = false;
// Locks in Profit at this percent of Total Profit Potential
extern double ProfitSet = 70;
// Moves TP this amount in pips
extern double MoveTP = 30;
// Number of times you want TP to move before stopping movement
extern int TotalMoves = 2;
// Use Stop Loss and/or Trailing Stop Loss
extern bool UseStopLoss = false;
// Pips for fixed StopLoss from BE, 0=off
extern double SLPips = 30;
// Pips for trailing stop loss from BE + TSLPips: +ve = fixed trail; -ve = reducing trail; 0=off
extern double TSLPips = 10;
// Minimum trailing stop pips if using reducing TS
extern double TSLPipsMin = 3;
// Transmits a SL in case of internet loss
extern bool UsePowerOutSL = false;
// Power Out Stop Loss in pips
extern double POSLPips = 600;
// Close trades in FIFO order
extern bool UseFIFO = false;
extern string LabelEE = "Early Exit Settings:";
// Reduces ProfitTarget by a percentage over time and number of levels open
extern bool UseEarlyExit = false;
// Number of Hours to wait before EE over time starts
extern double EEStartHours = 3;
// true = StartHours from FIRST trade: false = StartHours from LAST trade
extern bool EEFirstTrade = true;
// Percentage reduction per hour (0 = OFF)
extern double EEHoursPC = 0.5;
// Number of Open Trades before EE over levels starts
extern int EEStartLevel = 5;
// Percentage reduction at each level (0 = OFF)
extern double EELevelPC = 10;
// true = Will allow the basket to close at a loss : false = Minimum profit is Break Even
extern bool EEAllowLoss = false;
extern string LabelAdv = "Advanced Settings Change sparingly";
extern string LabelGrid = "Grid Size Settings:";
// Specifies number of open trades in each block (separated by a comma)
extern string SetCountArray = "4,4";
// Specifies number of pips away to issue limit order (separated by a comma)
extern string GridSetArray = "25,50,100";
// Take profit for each block (separated by a comma)
extern string TP_SetArray = "50,100,200";
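// A hedged reading of how these three arrays pair up, using the defaults above:
// SetCountArray "4,4" splits the basket into two blocks of 4 trades each;
// GridSetArray "25,50,100" spaces orders 25 pips apart in block 1, 50 pips in
// block 2, and 100 pips for any trades beyond the defined blocks; TP_SetArray
// assigns take-profit of 50/100/200 pips the same way. That is why the Grid and
// TP arrays carry one more value than the Set count array (see the optimization
// note near the end of the parameter list).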
extern string LabelMA = "MA Entry Settings:";
// Period of MA (H4 = 100, H1 = 400)
extern int MAPeriod = 100;
// Distance from MA to be treated as Ranging Market
extern double MADistance = 10;
extern string LabelCCI = "CCI Entry Settings:";
// Period for CCI calculation
extern int CCIPeriod = 14;
extern string LabelBBS = "Bollinger Bands Entry Settings:";
// Period for Bollinger
extern int BollPeriod = 10;
// Up/Down spread
extern double BollDistance = 10;
// Standard deviation multiplier for channel
extern double BollDeviation = 2.0;
extern string LabelSto = "Stochastic Entry Settings:";
// Determines Overbought and Oversold Zones
extern int BuySellStochZone = 20;
// Stochastic KPeriod
extern int KPeriod = 10;
// Stochastic DPeriod
extern int DPeriod = 2;
// Stochastic Slowing
extern int Slowing = 2;
extern string LabelMACD = "MACD Entry Settings:";
extern string LabelMACDTF = "0:Chart, 1:M1, 2:M5, 3:M15, 4:M30, 5:H1, 6:H4, 7:D1, 8:W1, 9:MN1";
// Time frame for MACD calculation
extern int MACD_TF = 0;
// MACD EMA Fast Period
extern int FastPeriod = 12;
// MACD EMA Slow Period
extern int SlowPeriod = 26;
// MACD EMA Signal Period
extern int SignalPeriod = 9;
// 0=close, 1=open, 2=high, 3=low, 4=HL/2, 5=HLC/3 6=HLCC/4
extern int MACDPrice = 0;
extern string LabelSG = "Smart Grid Settings:";
extern string LabelSGTF = "0:Chart, 1:M1, 2:M5, 3:M15, 4:M30, 5:H1, 6:H4, 7:D1, 8:W1, 9:MN1";
// Timeframe for RSI calculation - should be less than chart TF.
extern int RSI_TF = 3;
// Period for RSI calculation
extern int RSI_Period = 14;
// 0=close, 1=open, 2=high, 3=low, 4=HL/2, 5=HLC/3 6=HLCC/4
extern int RSI_Price = 0;
// Period for MA of RSI calculation
extern int RSI_MA_Period = 10;
// 0=Simple MA, 1=Exponential MA, 2=Smoothed MA, 3=Linear Weighted MA
extern int RSI_MA_Method = 0;
extern string LabelOS = "Other Settings:";
// true = Recoup any Hedge/CloseOldest losses: false = Use original profit target.
extern bool RecoupClosedLoss = true;
// Largest Assumed Basket size. Lower number = higher start lots
extern int Level = 7;
// Adjusts opening and closing orders by "slipping" this amount
extern int slip = 99;
// true = will save equity statistics
extern bool SaveStats = false;
// seconds between stats entries - off by default
extern int StatsPeriod = 3600;
// true for backtest - false for forward/live to ACCUMULATE equity traces
extern bool StatsInitialise = true;
extern string LabelUE = "Email Settings:";
extern bool UseEmail = false;
extern string LabelEDD = "At what DD% would you like Email warnings (Max: 49, Disable: 0)?";
extern double EmailDD1 = 20;
extern double EmailDD2 = 30;
extern double EmailDD3 = 40;
extern string LabelEH = "Number of hours before DD timer resets";
// Minimum number of hours between emails
extern double EmailHours = 24;
extern string LabelDisplay = "Used to Adjust Overlay";
// Turns the display on and off
extern bool displayOverlay = true;
// Turns off copyright and icon
extern bool displayLogo = true;
// Turns off the CCI display
extern bool displayCCI = true;
// Show BE, TP and TS lines
extern bool displayLines = true;
// Moves display left and right
extern int displayXcord = 100;
// Moves display up and down
extern int displayYcord = 22;
// Moves CCI display left and right
extern int displayCCIxCord = 10;
//Display font
extern string displayFont = "Arial Bold";
// Changes size of display characters
extern int displayFontSize = 9;
// Changes space between lines
extern int displaySpacing = 14;
// Ratio to increase label width spacing
extern double displayRatio = 1;
// default color of display characters
extern color displayColor = DeepSkyBlue;
// default color of profit display characters
extern color displayColorProfit = Green;
// default color of loss display characters
extern color displayColorLoss = Red;
// default color of ForeGround Text display characters
extern color displayColorFGnd = White;
extern bool Debug = true;
extern string LabelOpt = "These values can only be used while optimizing";
// Set to true if you want to be able to optimize the grid settings.
extern bool UseGridOpt = false;
// These values will replace the normal SetCountArray,
// GridSetArray and TP_SetArray during optimization.
// The default values are the same as the normal array defaults
// REMEMBER:
// GridArray and TPArray must each contain one more value
// than SetArray does
extern int SetArray1 = 4;
extern int SetArray2 = 4;
extern int SetArray3 = 0;
extern int SetArray4 = 0;
extern int GridArray1 = 25;
extern int GridArray2 = 50;
extern int GridArray3 = 100;
extern int GridArray4 = 0;
extern int GridArray5 = 0;
extern int TPArray1 = 50;
extern int TPArray2 = 100;
extern int TPArray3 = 200;
extern int TPArray4 = 0;
extern int TPArray5 = 0;
//+-----------------------------------------------------------------+
//| Internal Parameters Set |
//+-----------------------------------------------------------------+
int ca;
int Magic,hMagic;
int CbT,CpT,ChT;
double Pip,hPip;
int POSLCount;
double SLbL;
int Moves;
double MaxDD;
double SLb;
int AccountType;
double StopTradeBalance;
double InitialAB;
bool Testing,Visual;
bool AllowTrading;
bool EmergencyWarning;
double MaxDDPer;
int Error,y;
int Set1Level,Set2Level,Set3Level,Set4Level;
int EmailCount;
string sTF;
datetime EmailSent;
int GridArray[,2];
double Lots[],MinLotSize,LotStep,LotDecimal;
int LotMult,MinMult;
bool PendLot;
string CS,UAE;
int HolShutDown;
datetime HolArray[,4];
datetime HolFirst,HolLast,NextStats,OTbF;
double RSI[];
int Digit[,2],TF[10]={0,1,5,15,30,60,240,1440,10080,43200};
double Email[3];
double EETime,PbC,PhC,hDDStart,PbMax,PbMin,PhMax,PhMin,LastClosedPL,ClosedPips,SLh,hLvlStart,StatLowEquity,StatHighEquity;
int hActive,EECount,TbF,CbC,CaL,FileHandle; // CbC = count of basket trades closed early by CloseOldest
bool TradesOpen,FileClosed,HedgeTypeDD,hThisChart,hPosCorr,dLabels,FirstRun;
string FileName,ID,StatFile;
double TPb,StopLevel,TargetPips,LbF,bTS,PortionBalance;
//+-----------------------------------------------------------------+
//| expert initialization function |
//+-----------------------------------------------------------------+
int init()
{
CS = "Waiting for next tick ."; // To display comments while testing, simply use CS = .... and
Comment(CS); // it will be displayed by the line at the end of the start() block.
CS = "";
Testing = IsTesting();
Visual = IsVisualMode();
FirstRun = true;
AllowTrading = true;
if(EANumber<1)
{
EANumber = 1;
}
if(Testing)
{
EANumber = 0;
}
Magic = GenerateMagicNumber();
hMagic = JenkinsHash(Magic);
FileName = "B3_"+Magic+".dat";
if(Debug)
{
Print("Magic Number: "+DTS(Magic,0));
Print("Hedge Number: "+DTS(hMagic,0));
Print("FileName: "+FileName);
}
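// Brokers quoting fractional pips use 3 or 5 digits, so one pip is ten points there;
// the odd-digit check below normalizes Pip accordingly.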
Pip = Point;
if(Digits%2 == 1)
{
Pip *= 10;
}
if(NanoAccount)
{
AccountType = 10;
}
else
{
AccountType = 1;
}
MoveTP = ND(MoveTP*Pip,Digits);
EntryOffset = ND(EntryOffset*Pip,Digits);
MADistance = ND(MADistance*Pip,Digits);
BollDistance= ND(BollDistance*Pip,Digits);
POSLPips = ND(POSLPips*Pip,Digits);
hMaxLossPips= ND(hMaxLossPips*Pip,Digits);
hTakeProfit = ND(hTakeProfit*Pip,Digits);
CloseTPPips = ND(CloseTPPips*Pip,Digits);
ForceTPPips = ND(ForceTPPips*Pip,Digits);
MinTPPips = ND(MinTPPips*Pip,Digits);
BEPlusPips = ND(BEPlusPips*Pip,Digits);
SLPips = ND(SLPips*Pip,Digits);
TSLPips = ND(TSLPips*Pip,Digits);
TSLPipsMin = ND(TSLPipsMin*Pip,Digits);
slip *= Pip/Point;
if(UseHedge)
{
if(HedgeSymbol == "")
{
HedgeSymbol = Symbol();
}
if(HedgeSymbol == Symbol())
{
hThisChart = true;
}
else
{
hThisChart = false;
}
hPip = MarketInfo(HedgeSymbol,MODE_POINT);
int hDigits = MarketInfo(HedgeSymbol,MODE_DIGITS);
if(hDigits%2 == 1)
{
hPip *= 10;
}
if(CheckCorr() > 0.9 || hThisChart)
{
hPosCorr = true;
}
else if(CheckCorr() < -0.9)
{
hPosCorr = false;
}
else
{
AllowTrading = false;
UseHedge = false;
Print("The Hedge Symbol you have entered ("+HedgeSymbol+") is not closely correlated to "+Symbol());
}
if(StringSubstr(DDorLevel,0,1) == "D"||StringSubstr(DDorLevel,0,1) == "d")
{
HedgeTypeDD = true;
}
else if(StringSubstr(DDorLevel,0,1) == "L"||StringSubstr(DDorLevel,0,1) == "l")
{
HedgeTypeDD = false;
}
else
{
UseHedge = false;
}
if(HedgeTypeDD)
{
HedgeStart /= 100;
hDDStart = HedgeStart;
}
}
StopTradePercent /=100;
ProfitSet /=100;
EEHoursPC /=100;
EELevelPC /=100;
hReEntryPC /=100;
PortionPC /=100;
InitialAB = AccountBalance();
StopTradeBalance = InitialAB * (1 - StopTradePercent);
if(Testing)
{
ID = "B3Test.";
}
else
{
ID = DTS(Magic,0)+".";
}
HideTestIndicators(true);
MinLotSize = MarketInfo(Symbol(), MODE_MINLOT);
if(MinLotSize > Lot)
{
Print("Lot is less than your brokers minimum lot size");
AllowTrading = false;
}
LotStep = MarketInfo(Symbol(),MODE_LOTSTEP);
double MinLot = MathMin(MinLotSize,LotStep);
LotMult = ND(MathMax(Lot,MinLotSize)/MinLot,0);
MinMult = LotMult;
Lot = MinLot;
if(MinLot < 0.01)
{
LotDecimal = 3;
}
else if(MinLot < 0.1)
{
LotDecimal = 2;
}
else if(MinLot < 1)
{
LotDecimal = 1;
}
else
{
LotDecimal = 0;
}
FileHandle = FileOpen(FileName, FILE_BIN|FILE_READ);
if(FileHandle != -1)
{
TbF = FileReadInteger(FileHandle,LONG_VALUE);
FileClose(FileHandle);
Error = GetLastError();
if(OrderSelect(TbF,SELECT_BY_TICKET))
{
OTbF = OrderOpenTime();
LbF = OrderLots();
LotMult = MathMax(1,LbF/MinLot);
PbC = FindClosedPL(B);
PhC = FindClosedPL(H);
TradesOpen = true;
if(Debug)
{
Print(FileName+" File Read: "+TbF+" Lots: "+DTS(LbF,LotDecimal));
}
}
else
{
FileDelete(FileName);
TbF = 0;
OTbF = 0;
LbF = 0;
Error = GetLastError();
if(Error == ERR_NO_ERROR)
{
if(Debug)
{
Print(FileName+" File Deleted");
}
}
else
{
Print("Error deleting file: "+FileName+" "+Error+" "+ErrorDescription(Error));
}
}
}
GlobalVariableSet(ID+"LotMult", LotMult);
if(Debug)
{
Print("MinLotSize: "+DTS(MinLotSize,2)+" LotStep: "+DTS(LotStep,2)+" MinLot: "+DTS(MinLot,2)+" StartLot: "+DTS(Lot,2)+
" LotMult: "+DTS(LotMult,0)+" Lot Decimal: "+DTS(LotDecimal,0));
}
EmergencyWarning = EmergencyCloseAll;
if(IsOptimization())
{
Debug =false;
}
if(UseAnyEntry)
{
UAE = "||";
}
else
{
UAE = "&&";
}
if(ForceMarketCond<0 || ForceMarketCond>3)
{
ForceMarketCond = 3;
}
if(MAEntry<0 || MAEntry>2)
{
MAEntry = 0;
}
if(CCIEntry<0 || CCIEntry>2)
{
CCIEntry = 0;
}
if(BollingerEntry<0 || BollingerEntry>2)
{
BollingerEntry = 0;
}
if(StochEntry<0 || StochEntry>2)
{
StochEntry = 0;
}
if(MACDEntry<0 || MACDEntry>2)
{
MACDEntry = 0;
}
if(MaxCloseTrades == 0)
{
MaxCloseTrades = MaxTrades;
}
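// Digit[] pairs powers of ten with their digit counts; the overlay code later
// binary-searches it (ArrayBsearch) to right-align negative values on the display.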
ArrayResize(Digit,6);
for(y = 0; y < ArrayRange(Digit,0); y++)
{
if( y > 0)
{
Digit[y,0] = MathPow(10,y);
}
Digit[y,1] = y;
if(Debug)
{
Print("Digit: "+y+" ["+Digit[y,0]+","+Digit[y,1]+"]");
}
}
LabelCreate();
dLabels=false;
//+-----------------------------------------------------------------+
//| Set Lot Array |
//+-----------------------------------------------------------------+
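// Lots[] holds the martingale ladder: each level is the previous level times Multiplier,
// but never less than one LotStep higher, so the ladder cannot stall on broker rounding.
// As a hypothetical example, with a 0.01 base lot and Multiplier = 2 the ladder runs
// 0.01, 0.02, 0.04, 0.08, ... up to MaxTrades entries.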
ArrayResize(Lots, MaxTrades);
for(y = 0; y < MaxTrades; y++)
{
if(y==0 || Multiplier<1)
{
Lots[y] = Lot;
}
else
{
Lots[y] = ND(MathMax(Lots[y-1]*Multiplier,Lots[y-1]+LotStep), LotDecimal);
}
if(Debug)
{
Print("Lot Size for level "+DTS(y+1,0)+" : "+DTS(Lots[y]*MathMax(LotMult,1),LotDecimal));
}
}
if(Multiplier < 1)
{
Multiplier = 1;
}
//+-----------------------------------------------------------------+
//| Set Grid and TP array |
//+-----------------------------------------------------------------+
int GridSet,GridTemp,GridTP,GridIndex,GridLevel,GridError;
if(!AutoCal)
{
ArrayResize(GridArray, MaxTrades);
if(IsOptimization() && UseGridOpt)
{
if(SetArray1 > 0)
{
SetCountArray = DTS(SetArray1,0);
GridSetArray = DTS(GridArray1,0);
TP_SetArray = DTS(TPArray1,0);
}
if(SetArray2 > 0 || (SetArray1>0 && GridArray2>0))
{
if(SetArray2>0)
{
SetCountArray=SetCountArray+","+DTS(SetArray2,0);
}
GridSetArray=GridSetArray+","+DTS(GridArray2,0);
TP_SetArray=TP_SetArray+","+DTS(TPArray2,0);
}
if(SetArray3>0||(SetArray2>0&&GridArray3>0))
{
if(SetArray3>0)
{
SetCountArray=SetCountArray+","+DTS(SetArray3,0);
}
GridSetArray=GridSetArray+","+DTS(GridArray3,0);
TP_SetArray=TP_SetArray+","+DTS(TPArray3,0);
}
if(SetArray4>0||(SetArray3>0&&GridArray4>0))
{
if(SetArray4>0)
{
SetCountArray=SetCountArray+","+DTS(SetArray4,0);
}
GridSetArray=GridSetArray+","+DTS(GridArray4,0);
TP_SetArray=TP_SetArray+","+DTS(TPArray4,0);
}
if(SetArray4>0&&GridArray5>0)
{
GridSetArray=GridSetArray+","+DTS(GridArray5,0);
TP_SetArray=TP_SetArray+","+DTS(TPArray5,0);
}
}
while(GridIndex < MaxTrades)
{
if(StringFind(SetCountArray,",") == -1 && GridIndex==0)
{
GridError = 1;
break;
}
else
{
GridSet = StrToInteger(StringSubstr(SetCountArray,0,StringFind(SetCountArray,",")));
}
if(GridSet > 0)
{
SetCountArray = StringSubstr(SetCountArray,StringFind(SetCountArray,",")+1);
GridTemp = StrToInteger(StringSubstr(GridSetArray,0,StringFind(GridSetArray,",")));
GridSetArray = StringSubstr(GridSetArray,StringFind(GridSetArray,",")+1);
GridTP = StrToInteger(StringSubstr(TP_SetArray,0,StringFind(TP_SetArray,",")));
TP_SetArray = StringSubstr(TP_SetArray,StringFind(TP_SetArray,",")+1);
}
else
{
GridSet = MaxTrades;
}
if(GridTemp == 0 || GridTP == 0)
{
GridError = 2;
break;
}
for(GridLevel = GridIndex; GridLevel<=MathMin(GridIndex+GridSet-1,MaxTrades-1); GridLevel++)
{
GridArray[GridLevel,0] = GridTemp;
GridArray[GridLevel,1] = GridTP;
if(Debug)
{
Print("GridArray "+(GridLevel+1)+" : ["+GridArray[GridLevel,0]+","+GridArray[GridLevel,1]+"]");
}
}
GridIndex = GridLevel;
}
if(GridError>0||GridArray[0,0]==0||GridArray[0,1]==0)
{
if(GridError==1)
{
Print("Grid Array Error. Each value should be separated by a comma.");
}
else
{
Print("Grid Array Error. Check that there is one more 'Grid' and 'TP' number than there are 'Set' numbers, separated by commas.");
}
AllowTrading=false;
}
}
else
{
while(GridIndex < 4)
{
GridSet = StrToInteger(StringSubstr(SetCountArray,0,StringFind(SetCountArray,",")));
SetCountArray = StringSubstr(SetCountArray,StringFind(SetCountArray,DTS(GridSet,0))+2);
if(GridIndex==0 && GridSet<1)
{
GridError = 1;
break;
}
if(GridSet>0)
{
GridLevel += GridSet;
}
else if(GridLevel < MaxTrades)
{
GridLevel = MaxTrades;
}
else
{
GridLevel = MaxTrades+1;
}
if(GridIndex == 0)
{
Set1Level = GridLevel;
}
else if(GridIndex==1 && GridLevel<=MaxTrades)
{
Set2Level=GridLevel;
}
else if(GridIndex==2&&GridLevel<=MaxTrades)
{
Set3Level=GridLevel;
}
else if(GridIndex==3&&GridLevel<=MaxTrades)
{
Set4Level=GridLevel;
}
GridIndex++;
}
if(GridError==1||Set1Level==0)
{
Print("Error setting up the Grid Levels. Check that the SetCountArray has valid numbers, separated by a comma.");
AllowTrading=false;
}
}
//+-----------------------------------------------------------------+
//| Set holidays array |
//+-----------------------------------------------------------------+
if(UseHolidayShutdown)
{
int HolTemp,NumHols,NumBS,HolCounter;
string HolTempStr;
if(StringFind(Holidays,",",0)==-1)
{
NumHols=1;
}
else
{
NumHols=1;
while(HolTemp!=-1)
{
HolTemp=StringFind(Holidays,",",HolTemp+1);
if(HolTemp!=-1)
{
NumHols+=1;
}
}
}
HolTemp=0;
while(HolTemp!=-1)
{
HolTemp=StringFind(Holidays,"/",HolTemp+1);
if(HolTemp!=-1)
{
NumBS+=1;
}
}
if(NumBS!=NumHols*2)
{
Print("Holidays Error, number of back-slashes ("+NumBS+") should be equal to 2* number of Holidays ("+NumHols+
", and separators should be a comma.");
AllowTrading=false;
}
else
{
HolTemp=0;
ArrayResize(HolArray,NumHols);
while(HolTemp!=-1)
{
if(HolTemp==0)
{
HolTempStr=StringTrimLeft(StringTrimRight(StringSubstr(Holidays,0,StringFind(Holidays,",",HolTemp))));
}
else
{
HolTempStr=StringTrimLeft(StringTrimRight(StringSubstr(Holidays,HolTemp+1,
StringFind(Holidays,",",HolTemp+1)-StringFind(Holidays,",",HolTemp)-1)));
}
HolTemp=StringFind(Holidays,",",HolTemp+1);
HolArray[HolCounter,0]=StrToInteger(StringSubstr(StringSubstr(HolTempStr,0,StringFind(HolTempStr,"-",0)),
StringFind(StringSubstr(HolTempStr,0,StringFind(HolTempStr,"-",0)),"/")+1));
HolArray[HolCounter,1]=StrToInteger(StringSubstr(StringSubstr(HolTempStr,0,StringFind(HolTempStr,"-",0)),0,
StringFind(StringSubstr(HolTempStr,0,StringFind(HolTempStr,"-",0)),"/")));
HolArray[HolCounter,2]=StrToInteger(StringSubstr(StringSubstr(HolTempStr,StringFind(HolTempStr,"-",0)+1),
StringFind(StringSubstr(HolTempStr,StringFind(HolTempStr,"-",0)+1),"/")+1));
HolArray[HolCounter,3]=StrToInteger(StringSubstr(StringSubstr(HolTempStr,StringFind(HolTempStr,"-",0)+1),0,
StringFind(StringSubstr(HolTempStr,StringFind(HolTempStr,"-",0)+1),"/")));
HolCounter+=1;
}
}
for(HolTemp=0;HolTemp<HolCounter;HolTemp++)
{
int Start1,Start2,Temp0,Temp1,Temp2,Temp3;
for(int Item1=HolTemp+1;Item1<HolCounter;Item1++)
{
Start1=HolArray[HolTemp,0]*100+HolArray[HolTemp,1];
Start2=HolArray[Item1,0]*100+HolArray[Item1,1];
if(Start1>Start2)
{
Temp0=HolArray[Item1,0];
Temp1=HolArray[Item1,1];
Temp2=HolArray[Item1,2];
Temp3=HolArray[Item1,3];
HolArray[Item1,0]=HolArray[HolTemp,0];
HolArray[Item1,1]=HolArray[HolTemp,1];
HolArray[Item1,2]=HolArray[HolTemp,2];
HolArray[Item1,3]=HolArray[HolTemp,3];
HolArray[HolTemp,0]=Temp0;
HolArray[HolTemp,1]=Temp1;
HolArray[HolTemp,2]=Temp2;
HolArray[HolTemp,3]=Temp3;
}
}
}
if(Debug)
{
for(HolTemp=0;HolTemp<HolCounter;HolTemp++)
{
Print("Holidays - From: ",HolArray[HolTemp,1],"/",HolArray[HolTemp,0]," - ",HolArray[HolTemp,3],"/",HolArray[HolTemp,2]);
}
}
}
//+-----------------------------------------------------------------+
//| Set email parameters |
//+-----------------------------------------------------------------+
if(UseEmail)
{
if(Period()==43200)
{
sTF="MN1";
}
else if(Period()==10080)
{
sTF="W1";
}
else if(Period()==1440)
{
sTF="D1";
}
else if(Period()==240)
{
sTF="H4";
}
else if(Period()==60)
{
sTF="H1";
}
else if(Period()==30)
{
sTF="M30";
}
else if(Period()==15)
{
sTF="M15";
}
else if(Period()==5)
{
sTF="M5";
}
else if(Period()==1)
{
sTF="M1";
}
Email[0]=MathMax(MathMin(EmailDD1,MaxDDPercent-1),0)/100;
Email[1]=MathMax(MathMin(EmailDD2,MaxDDPercent-1),0)/100;
Email[2]=MathMax(MathMin(EmailDD3,MaxDDPercent-1),0)/100;
ArraySort(Email,WHOLE_ARRAY,0,MODE_ASCEND);
for(int z=0;z<=2;z++)
{
for(y=0;y<=2;y++)
{
if(Email[y]==0)
{
Email[y]=Email[y+1];
Email[y+1]=0;
}
}
if(Debug)
{
Print("Email ["+(z+1)+"] : "+Email[z]);
}
}
}
//+-----------------------------------------------------------------+
//| Set SmartGrid parameters |
//+-----------------------------------------------------------------+
if(UseSmartGrid)
{
ArrayResize(RSI,RSI_Period+RSI_MA_Period);
ArraySetAsSeries(RSI,true);
}
//+---------------------------------------------------------------+
//| Initialize Statistics |
//+---------------------------------------------------------------+
if(SaveStats)
{
StatFile = "B3"+Symbol()+"_"+Period()+"_"+EANumber+".csv";
NextStats = TimeCurrent();
Stats(StatsInitialise,false,AccountBalance()*PortionPC,0);
}
return(0);
}
//+-----------------------------------------------------------------+
//| expert deinitialization function |
//+-----------------------------------------------------------------+
int deinit()
{
switch(UninitializeReason())
{
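// The cases below fall through deliberately: remove/chart-close/chart-change first
// close pending orders and clear this EA's global variables, then share the label
// clean-up run for recompile/parameter/account changes.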
case REASON_REMOVE:
case REASON_CHARTCLOSE:
case REASON_CHARTCHANGE:
if(CpT>0)
{
while(CpT>0)
{
CpT-=ExitTrades(P,displayColorLoss,"Blessing Removed");
}
}
GlobalVariablesDeleteAll(ID);
case REASON_RECOMPILE:
case REASON_PARAMETERS:
case REASON_ACCOUNT:
if(!Testing)
{
LabelDelete();
}
Comment("");
}
return(0);
}
//+-----------------------------------------------------------------+
//| expert start function |
//+-----------------------------------------------------------------+
int start()
{
int CbB = 0; // Count buy
int CbS = 0; // Count sell
int CpBL = 0; // Count buy limit
int CpSL = 0; // Count sell limit
int CpBS = 0; // Count buy stop
int CpSS = 0; // Count sell stop
double LbB = 0; // Count buy lots
double LbS = 0; // Count sell lots
double LbT = 0; // total lots out
double OPpBL = 0; // Buy limit open price
double OPpSL = 0; // Sell limit open price
double SLbB = 0; // stop losses are set to zero if POSL off
double SLbS = 0; // stop losses are set to zero if POSL off
double BCb,BCh,BCa; // Broker costs (swap + commission)
double ProfitPot = 0; // The Potential Profit of a basket of Trades
double PipValue,PipVal2,ASK,BID;
double OrderLot;
double OPbL,OPhO; // last open price
int OTbL; // last open time
double g2,tp2,Entry,RSI_MA,LhB,LhS,LhT,OPbO,OTbO,OThO,TbO,ThO;
int Ticket,ChB,ChS,IndEntry;
double Pb,Ph,PaC,PbPips,PbTarget,DrawDownPC,BEb,BEh,BEa;
bool BuyMe,SellMe,Success,SetPOSL;
string IndicatorUsed;
//+-----------------------------------------------------------------+
//| Count Open Orders, Lots and Totals |
//+-----------------------------------------------------------------+
PipVal2 = MarketInfo(Symbol(),MODE_TICKVALUE)/MarketInfo(Symbol(),MODE_TICKSIZE);
PipValue = PipVal2 * Pip;
StopLevel = MarketInfo(Symbol(),MODE_STOPLEVEL) * Point;
ASK = ND(MarketInfo(Symbol(),MODE_ASK),MarketInfo(Symbol(),MODE_DIGITS));
BID = ND(MarketInfo(Symbol(),MODE_BID),MarketInfo(Symbol(),MODE_DIGITS));
if(ASK == 0 || BID == 0)
{
return(0);
}
for(y = 0; y < OrdersTotal(); y++)
{
if(!OrderSelect(y,SELECT_BY_POS,MODE_TRADES))
{
continue;
}
int Type = OrderType();
if(OrderMagicNumber() == hMagic)
{
Ph += OrderProfit();
BCh += OrderSwap()+OrderCommission();
BEh += OrderLots()*OrderOpenPrice();
if(OrderOpenTime()<OThO || OThO==0)
{
OThO = OrderOpenTime();
ThO = OrderTicket();
OPhO = OrderOpenPrice();
}
if(Type == OP_BUY)
{
ChB++;
LhB += OrderLots();
}
else if(Type == OP_SELL)
{
ChS++;
LhS += OrderLots();
}
continue;
}
if(OrderMagicNumber()!=Magic || OrderSymbol()!=Symbol())
{
continue;
}
if(OrderTakeProfit() > 0)
{
ModifyOrder(OrderOpenPrice(),OrderStopLoss());
}
if( Type <= OP_SELL )
{
Pb += OrderProfit();
BCb += OrderSwap()+OrderCommission();
BEb += OrderLots()*OrderOpenPrice();
if(OrderOpenTime() >= OTbL)
{
OTbL = OrderOpenTime();
OPbL = OrderOpenPrice();
}
if(OrderOpenTime()<OTbF || TbF==0)
{
OTbF = OrderOpenTime();
TbF = OrderTicket();
LbF = OrderLots();
}
if(OrderOpenTime()<OTbO || OTbO==0)
{
OTbO = OrderOpenTime();
TbO = OrderTicket();
OPbO = OrderOpenPrice();
}
if(UsePowerOutSL&&((POSLPips>0&&OrderStopLoss()==0)||(POSLPips==0&&OrderStopLoss()>0)))
{
SetPOSL = true;
}
if(Type == OP_BUY)
{
CbB++;
LbB += OrderLots();
continue;
}
else
{
CbS++;
LbS += OrderLots();
continue;
}
}
else
{
if( Type == OP_BUYLIMIT)
{
CpBL++;
OPpBL = OrderOpenPrice();
continue;
}
else if(Type == OP_SELLLIMIT)
{
CpSL++;
OPpSL = OrderOpenPrice();
continue;
}
else if(Type == OP_BUYSTOP)
{
CpBS++;
}
else
{
CpSS++;
}
}
}
CbT = CbB + CbS;
LbT = LbB + LbS;
Pb = ND(Pb+BCb,2);
ChT = ChB + ChS;
LhT = LhB + LhS;
Ph = ND(Ph + BCh,2);
CpT = CpBL+CpSL+CpBS+CpSS;
BCa = BCb+BCh;
//+-----------------------------------------------------------------+
//| Calculate Min/Max Profit and Break Even Points |
//+-----------------------------------------------------------------+
if(LbT > 0)
{
BEb = ND(BEb/LbT, Digits);
if(BCa < 0)
{
if (LbB != LbS)
{
BEb -= ND(BCa/PipVal2/(LbB-LbS),Digits);
}
else
{
Print("LbB == LbS, I do NOT konw how to deal it!");
}
}
if(Pb>PbMax || PbMax==0)
{
PbMax = Pb;
}
if(Pb<PbMin || PbMin==0)
{
PbMin = Pb;
}
if(!TradesOpen)
{
FileHandle = FileOpen(FileName,FILE_BIN|FILE_WRITE);
if( FileHandle > -1)
{
FileWriteInteger(FileHandle, TbF);
FileClose(FileHandle);
TradesOpen = true;
if(Debug)
{
Print(FileName+" File Written: "+TbF);
}
}
}
}
else if(TradesOpen)
{
TPb = 0;
PbMax = 0;
PbMin = 0;
OTbF = 0;
TbF = 0;
LbF = 0;
PbC = 0;
PhC = 0;
PaC = 0;
ClosedPips = 0;
CbC = 0;
CaL = 0;
bTS = 0;
if(HedgeTypeDD)
{
hDDStart = HedgeStart;
}
else
{
hLvlStart = HedgeStart;
}
EmailCount = 0;
EmailSent = 0;
FileHandle = FileOpen(FileName,FILE_BIN|FILE_READ);
if(FileHandle > -1)
{
FileClose(FileHandle);
Error = GetLastError();
FileDelete(FileName);
Error = GetLastError();
if(Error == ERR_NO_ERROR)
{
if(Debug)
{
Print(FileName+" File Deleted");
}
TradesOpen = false;
}
else
{
Print("Error deleting file: "+FileName+" "+Error+" "+ErrorDescription(Error));
}
}
else
{
TradesOpen = false;
}
}
if(LhT>0)
{
BEh = ND(BEh/LhT,Digits);
if(Ph>PhMax || PhMax==0)
{
PhMax = Ph;
}
if(Ph<PhMin || PhMin==0)
{
PhMin = Ph;
}
}
else
{
PhMax = 0;
PhMin = 0;
SLh = 0;
}
//+-----------------------------------------------------------------+
//| Check if trading is allowed |
//+-----------------------------------------------------------------+
if(CbT==0 && ChT==0 && ShutDown)
{
if(CpT>0)
{
ExitTrades(P,displayColorLoss,"Blessing is shutting down");
return(0);
}
if(AllowTrading)
{
Print("Blessing has ShutDown. Set ShutDown = 'false' to continue trading");
if(PlaySounds)
{
PlaySound(AlertSound);
}
AllowTrading = false;
}
if(UseEmail&&EmailCount<4&&!Testing)
{
SendMail("Blessing EA","Blessing has shut down on "+Symbol()+" "+sTF+
". Trading has been suspended. To resume trading, set ShutDown to false.");
Error = GetLastError();
if(Error>0)
{
Print("Error sending Email: "+Error+" "+ErrorDescription(Error));
}
else
{
EmailCount = 4;
}
}
}
static bool LDelete;
if(!AllowTrading)
{
if(!LDelete)
{
LDelete = true;
LabelDelete();
if(ObjectFind("B3LStop")==-1)
{
CreateLabel("B3LStop","Trading has been stopped on this pair.",10,0,0,3,displayColorLoss);
}
string Tab;
if(Testing)
{
Tab = "Tester Journal";
}
else
{
Tab = "Terminal Experts";
}
if(ObjectFind("B3LExpt")==-1)
{
CreateLabel("B3LExpt","Check the "+Tab+" tab for the reason why.",10,0,0,6,displayColorLoss);
}
if(ObjectFind("B3LResm")==-1)
{
CreateLabel("B3LResm","Reset Blessing to resume trading.",10,0,0,9,displayColorLoss);
}
}
return(0);
}
else
{
LDelete=false;
ObjDel("B3LStop");
ObjDel("B3LExpt");
ObjDel("B3LResm");
}
//+-----------------------------------------------------------------+
//| Calculate Drawdown and Equity Protection |
//+-----------------------------------------------------------------+
double NewPortionBalance = ND(AccountBalance()*PortionPC,2);
if(CbT==0 || PortionChange<0 || (PortionChange>0 && NewPortionBalance>PortionBalance))
{
PortionBalance = NewPortionBalance;
}
if(Pb+Ph < 0)
{
DrawDownPC = -(Pb+Ph)/PortionBalance;
}
if(!FirstRun&&DrawDownPC >= MaxDDPercent/100)
{
ExitTrades(A,displayColorLoss,"Equity Stop Loss Reached");
if(PlaySounds)
{
PlaySound(AlertSound);
}
return(0);
}
if(-(Pb+Ph) > MaxDD)
{
MaxDD = -(Pb+Ph);
}
MaxDDPer = MathMax(MaxDDPer,DrawDownPC*100);
if(SaveStats)
{
Stats(false,TimeCurrent()<NextStats,PortionBalance,Pb+Ph);
}
//+-----------------------------------------------------------------+
//| Calculate Stop Trade Percent |
//+-----------------------------------------------------------------+
double StepAB = InitialAB*(1+StopTradePercent);
double StepSTB = AccountBalance()*(1-StopTradePercent);
double NextISTB = StepAB*(1-StopTradePercent);
if(StepSTB>NextISTB)
{
InitialAB = StepAB;
StopTradeBalance = StepSTB;
}
double InitialAccountMultiPortion = StopTradeBalance*PortionPC;
if(PortionBalance < InitialAccountMultiPortion)
{
if(CbT==0)
{
AllowTrading=false;
if(PlaySounds)PlaySound(AlertSound);
Print("Portion Balance dropped below stop trade percent");
MessageBox("Reset Blessing, account balance dropped below stop trade percent on "+Symbol()+" "+Period(),"Blessing 3: Warning",48);
return(0);
}
else if(!ShutDown && !RecoupClosedLoss)
{
ShutDown = true;
if(PlaySounds)
{
PlaySound(AlertSound);
}
Print("Portion Balance dropped below stop trade percent");
return(0);
}
}
//+-----------------------------------------------------------------+
//| Calculation of Trend Direction |
//+-----------------------------------------------------------------+
int Trend;
string ATrend;
double ima_0 = iMA(Symbol(),0,MAPeriod,0,MODE_EMA,PRICE_CLOSE,0);
if(ForceMarketCond == 3)
{
if(BID > ima_0+MADistance)
{
Trend = 0;
}
else if(ASK < ima_0-MADistance)
{
Trend = 1;
}
else
{
Trend = 2;
}
}
else
{
Trend = ForceMarketCond;
if(Trend != 0 && BID > ima_0+MADistance)
{
ATrend = "U";
}
if(Trend != 1 && ASK < ima_0-MADistance)
{
ATrend = "D";
}
if(Trend != 2 && (BID<ima_0+MADistance&&ASK>ima_0-MADistance))
{
ATrend = "R";
}
}
//+-----------------------------------------------------------------+
//| Hedge/Basket/ClosedTrades Profit Management |
//+-----------------------------------------------------------------+
double Pa = Pb;
PaC = PbC+PhC;
if(hActive==1 && ChT==0)
{
PhC = FindClosedPL(H);
hActive = 0;
return(0);
}
else if(hActive==0 && ChT>0)
{
hActive=1;
}
if(LbT > 0)
{
if(PbC>0 || (PbC<0&&RecoupClosedLoss))
{
Pa += PbC;
BEb -= ND(PbC/PipVal2/(LbB-LbS), Digits);
}
if(PhC>0 || (PhC<0&&RecoupClosedLoss))
{
Pa += PhC;
BEb -= ND(PhC/PipVal2/(LbB-LbS), Digits);
}
if(Ph>0 || (Ph<0&&RecoupClosedLoss))
{
Pa += Ph;
}
}
//+-----------------------------------------------------------------+
//| Close oldest open trade after CloseTradesLevel reached |
//+-----------------------------------------------------------------+
if(UseCloseOldest && CbT>=CloseTradesLevel && CbC<MaxCloseTrades)
{
if(!FirstRun && TPb>0 && (ForceCloseOldest||(CbB>0&&OPbO>TPb) || (CbS>0&&OPbO<TPb)))
{
y = ExitTrades(T,DarkViolet,"Close Oldest Trade",TbO);
if(y == 1)
{
OrderSelect(TbO,SELECT_BY_TICKET);
PbC += OrderProfit()+OrderSwap()+OrderCommission();
ca = 0;
CbC++;
return(0);
}
}
}
//+-----------------------------------------------------------------+
//| ATR for Auto Grid Calculation and Grid Set Block |
//+-----------------------------------------------------------------+
double GridTP;
if(AutoCal)
{
double GridATR = iATR(NULL,TF[ATRTF],ATRPeriods,0)/Pip;
if((CbT+CbC>Set4Level) && Set4Level>0)
{
g2 = GridATR*12; //GS*2*2*2*1.5
tp2 = GridATR*18; //GS*2*2*2*1.5*1.5
}
else if((CbT+CbC>Set3Level) && Set3Level>0)
{
g2 = GridATR*8; //GS*2*2*2
tp2 = GridATR*12; //GS*2*2*2*1.5
}
else if((CbT+CbC>Set2Level) && Set2Level>0)
{
g2 = GridATR*4; //GS*2*2
tp2 = GridATR*8; //GS*2*2*2
}
else if((CbT+CbC>Set1Level) && Set1Level>0)
{
g2 = GridATR*2; //GS*2
tp2 = GridATR*4; //GS*2*2
}
else
{
g2 = GridATR;
tp2 = GridATR*2;
}
GridTP = GridATR*2;
}
else
{
y = MathMax(MathMin(CbT+CbC,MaxTrades)-1,0);
g2 = GridArray[y,0];
tp2 = GridArray[y,1];
GridTP = GridArray[0,1];
}
g2 = ND(MathMax(g2*GAF*Pip,Pip), Digits);
tp2 = ND(tp2*GAF*Pip, Digits);
GridTP = ND(GridTP*GAF*Pip, Digits);
//+-----------------------------------------------------------------+
//| Money Management and Lot size coding |
//+-----------------------------------------------------------------+
if(UseMM)
{
if(CbT>0)
{
if(GlobalVariableCheck(ID+"LotMult"))
{
LotMult = GlobalVariableGet(ID+"LotMult");
}
if(LbF != LotSize(Lots[0]*LotMult))
{
LotMult=LbF/Lots[0];
GlobalVariableSet(ID+"LotMult",LotMult);
Print("LotMult reset to "+DTS(LotMult,0));
}
}
if(CbT==0)
{
double Contracts,Factor,Lotsize;
Contracts = PortionBalance/10000;
if(Multiplier <= 1)
{
Factor = Level;
}
else
{
Factor = (MathPow(Multiplier,Level)-Multiplier)/(Multiplier-1);
}
Lotsize = LAF*AccountType*Contracts/(1+Factor);
LotMult = MathMax(MathFloor(Lotsize/Lot),MinMult);
GlobalVariableSet(ID+"LotMult",LotMult);
}
}
else if(CbT==0)
{
LotMult = MinMult;
}
//+-----------------------------------------------------------------+
//| Calculate Take Profit |
//+-----------------------------------------------------------------+
static double BCaL,BEbL;
double nLots = LbB - LbS;
if(CbT>0 && (TPb==0||CbT+ChT!=CaL||BEbL!=BEb||BCa!=BCaL||FirstRun))
{
string sCalcTP = "Set New TP: BE: "+DTS(BEb,Digits);
double NewTP,BasePips;
CaL = CbT + ChT;
BCaL = BCa;
BEbL = BEb;
Print("++++++++++++nLots: "+nLots+" LbB:"+LbB+" LbS:"+LbS+" ++++++++++");
if (nLots != 0)
{
BasePips = ND(Lot*LotMult*GridTP*(CbT+CbC)/nLots, Digits);
Print("BasePips: "+ BasePips);
}
else
{
Print("nLots is 0 !!! I do NOT kown how to deal it!");
}
if(CbB > 0)
{
if(ForceTPPips > 0)
{
NewTP = BEb+ForceTPPips;
sCalcTP=sCalcTP+" +Force TP ("+DTS(ForceTPPips,Digits)+") ";
}
else if(CbC>0&&CloseTPPips>0)
{
NewTP=BEb+CloseTPPips;
sCalcTP=sCalcTP+" +Close TP ("+DTS(CloseTPPips,Digits)+") ";
}
else if(BEb+BasePips>OPbL+tp2)
{
NewTP=BEb+BasePips;
sCalcTP=sCalcTP+" +Base TP: ("+DTS(BasePips,Digits)+") ";
}
else
{
NewTP=OPbL+tp2;
sCalcTP=sCalcTP+" +Grid TP: ("+DTS(tp2,Digits)+") ";
}
if(MinTPPips>0)
{
NewTP=MathMax(NewTP,BEb+MinTPPips);
sCalcTP=sCalcTP+" >Minimum TP: ";
}
NewTP+=MoveTP*Moves;
if(BreakEvenTrade>0&&CbT+CbC>=BreakEvenTrade)
{
NewTP=BEb+BEPlusPips;
sCalcTP=sCalcTP+" >BreakEven: ("+DTS(BEPlusPips,Digits)+") ";
}
sCalcTP=(sCalcTP+"Buy: TakeProfit: ");
}
else if(CbS>0)
{
if(ForceTPPips>0)
{
NewTP=BEb-ForceTPPips;
sCalcTP=sCalcTP+" -Force TP ("+DTS(ForceTPPips,Digits)+") ";
}
else if(CbC>0&&CloseTPPips>0)
{
NewTP=BEb-CloseTPPips;
sCalcTP=sCalcTP+" -Close TP ("+DTS(CloseTPPips,Digits)+") ";
}
else if(BEb+BasePips<OPbL-tp2)
{
NewTP=BEb+BasePips;
sCalcTP=sCalcTP+" -Base TP: ("+DTS(BasePips,Digits)+") ";
}
else
{ NewTP=OPbL-tp2;
sCalcTP=sCalcTP+" -Grid TP: ("+DTS(tp2,Digits)+") ";
}
if(MinTPPips>0)
{ NewTP=MathMin(NewTP,BEb-MinTPPips);
sCalcTP=sCalcTP+" >Minimum TP: ";
}
NewTP-=MoveTP*Moves;
if(BreakEvenTrade>0&&CbT+CbC>=BreakEvenTrade)
{ NewTP=BEb-BEPlusPips;
sCalcTP=sCalcTP+" >BreakEven: ("+DTS(BEPlusPips,Digits)+") ";
}
sCalcTP=(sCalcTP+"Sell: TakeProfit: ");
}
if(TPb!=NewTP)
{ TPb=NewTP;
if(nLots>0)TargetPips=ND(TPb-BEb,Digits);
else TargetPips=ND(BEb-TPb,Digits);
Print(sCalcTP+DTS(NewTP,Digits));
return(0);
}
}
PbTarget=TargetPips/Pip;
ProfitPot=ND(TargetPips*PipVal2*MathAbs(nLots),2);
if(CbB>0)PbPips=ND((BID-BEb)/Pip,1);
if(CbS>0)PbPips=ND((BEb-ASK)/Pip,1);
//+-----------------------------------------------------------------+
//| Adjust BEb/TakeProfit if Hedge is active |
//+-----------------------------------------------------------------+
double hAsk=MarketInfo(HedgeSymbol,MODE_ASK);
double hBid=MarketInfo(HedgeSymbol,MODE_BID);
double hSpread=hAsk-hBid;
if(hThisChart)nLots+=LhB-LhS;
if(hActive==1)
{ double TPa,PhPips;
if(nLots==0)
{ BEa=0;
TPa=0;
}
else if(hThisChart)
{ if(nLots>0)
{ if(CbB>0)BEa=ND((BEb*LbT-(BEh-hSpread)*LhT)/(LbT-LhT),Digits);
else BEa=ND(((BEb-(ASK-BID))*LbT-BEh*LhT)/(LbT-LhT),Digits);
TPa=ND(BEa+TargetPips,Digits);
}
else
{ if(CbS>0)BEa=ND((BEb*LbT-(BEh+hSpread)*LhT)/(LbT-LhT),Digits);
else BEa=ND(((BEb+ASK-BID)*LbT-BEh*LhT)/(LbT-LhT),Digits);
TPa=ND(BEa-TargetPips,Digits);
}
}
else
{
}
if(ChB>0)PhPips=ND((hBid-BEh)/hPip,1);
if(ChS>0)PhPips=ND((BEh-hAsk)/hPip,1);
}
else
{ BEa=BEb;
TPa=TPb;
}
//+-----------------------------------------------------------------+
//| Calculate Early Exit Percentage |
//+-----------------------------------------------------------------+
if(UseEarlyExit&&CbT>0)
{ double EEpc,EEopt,EEStartTime,TPaF;
if(EEFirstTrade)EEopt=OTbF;
else EEopt=OTbL;
if(DayOfWeek()<TimeDayOfWeek(EEopt))EEStartTime=2*24*3600;
EEStartTime+=EEopt+EEStartHours*3600;
if(EEHoursPC>0&&TimeCurrent()>=EEStartTime)EEpc=EEHoursPC*(TimeCurrent()-EEStartTime)/3600;
if(EELevelPC>0&&(CbT+CbC)>=EEStartLevel)EEpc+=EELevelPC*(CbT+CbC-EEStartLevel+1);
EEpc=1-EEpc;
if(!EEAllowLoss&&EEpc<0)EEpc=0;
PbTarget*=EEpc;
TPaF=ND((TPa-BEa)*EEpc+BEa,Digits);
if(displayOverlay&&displayLines&&(hActive!=1||(hActive==1&&hThisChart))&&(!Testing||(Testing&&Visual))&&EEpc<1
&&(CbT+CbC+ChT>EECount||EETime!=Time[0])&&((EEHoursPC>0&&EEopt+EEStartHours*3600<Time[0])||(EELevelPC>0&&CbT+CbC>=EEStartLevel)))
{ EETime=Time[0];
EECount=CbT+CbC+ChT;
if(ObjectFind("B3LEELn")<0)
{ ObjectCreate("B3LEELn",OBJ_TREND,0,0,0);
ObjectSet("B3LEELn",OBJPROP_COLOR,Yellow);
ObjectSet("B3LEELn",OBJPROP_WIDTH,1);
ObjectSet("B3LEELn",OBJPROP_STYLE,0);
ObjectSet("B3LEELn",OBJPROP_RAY,false);
}
if(EEHoursPC>0)ObjectMove("B3LEELn",0,MathFloor(EEopt/3600+EEStartHours)*3600,TPa);
else ObjectMove("B3LEELn",0,MathFloor(EEopt/3600)*3600,TPaF);
ObjectMove("B3LEELn",1,Time[1],TPaF);
if(ObjectFind("B3VEELn")<0)
{ ObjectCreate("B3VEELn",OBJ_TEXT,0,0,0);
ObjectSet("B3VEELn",OBJPROP_COLOR,Yellow);
ObjectSet("B3VEELn",OBJPROP_WIDTH,1);
ObjectSet("B3VEELn",OBJPROP_STYLE,0);
}
ObjSetTxt("B3VEELn"," "+DTS(TPaF,Digits),-1,Yellow);
ObjectSet("B3VEELn",OBJPROP_PRICE1,TPaF+2*Pip);
ObjectSet("B3VEELn",OBJPROP_TIME1,Time[1]);
}
else if((!displayLines||EEpc==1||(!EEAllowLoss&&EEpc==0)||(EEHoursPC>0&&EEopt+EEStartHours*3600>=Time[0])))
{ ObjDel("B3LEELn");
ObjDel("B3VEELn");
}
}
else
{ TPaF=TPa;
EETime=0;
EECount=0;
ObjDel("B3LEELn");
ObjDel("B3VEELn");
}
//+-----------------------------------------------------------------+
//| Maximize Profit with Moving TP and setting Trailing Profit Stop |
//+-----------------------------------------------------------------+
if(MaximizeProfit)
{ if(CbT==0)
{ SLbL=0;
Moves=0;
SLb=0;
}
if(!FirstRun&&CbT>0)
{ if(Pb+Ph<0&&SLb>0)SLb=0;
if(SLb>0&&((nLots>0&&BID<SLb)||(nLots<0&&ASK>SLb)))
{ ExitTrades(A,displayColorProfit,"Profit Trailing Stop Reached ("+DTS(ProfitSet*100,2)+"%)");
return(0);
}
if(PbTarget>0)
{ double TPbMP=ND(BEa+(TPa-BEa)*ProfitSet,Digits);
if((nLots>0&&BID>TPbMP)||(nLots<0&&ASK<TPbMP))SLb=TPbMP;
}
if(SLb>0&&SLb!=SLbL&&MoveTP>0&&TotalMoves>Moves)
{ TPb=0;
Moves++;
if(Debug)Print("MoveTP");
SLbL=SLb;
if(PlaySounds)PlaySound(AlertSound);
return(0);
}
}
}
if(!FirstRun&&TPaF>0)
{ if((nLots>0&&BID>=TPaF)||(nLots<0&&ASK<=TPaF))
{ ExitTrades(A,displayColorProfit,"Profit Target Reached @ "+DTS(TPaF,Digits));
return(0);
}
}
if(!FirstRun&&UseStopLoss)
{ double bSL;
if(SLPips>0)
{ if(nLots>0)
{ bSL=BEa-SLPips;
if(BID<=bSL)
{ ExitTrades(A,displayColorProfit,"Stop Loss Reached");
return(0);
}
}
else if(nLots<0)
{ bSL=BEa+SLPips;
if(ASK>=bSL)
{ ExitTrades(A,displayColorProfit,"Stop Loss Reached");
return(0);
}
}
}
if(TSLPips!=0)
{ if(nLots>0)
{ if(TSLPips>0&&BID>BEa+TSLPips)bTS=MathMax(bTS,BID-TSLPips);
if(TSLPips<0&&BID>BEa-TSLPips)bTS=MathMax(bTS,BID-MathMax(TSLPipsMin,-TSLPips*(1-(BID-BEa+TSLPips)/(-TSLPips*2))));
if(bTS>0&&BID<=bTS)
{ ExitTrades(A,displayColorProfit,"Trailing Stop Reached");
return(0);
}
}
else if(nLots<0)
{ if(TSLPips>0&&ASK<BEa-TSLPips)
{ if(bTS>0)bTS=MathMin(bTS,ASK+TSLPips);
else bTS=ASK+TSLPips;
}
if(TSLPips<0&&ASK<BEa+TSLPips)bTS=MathMin(bTS,ASK+MathMax(TSLPipsMin,-TSLPips*(1-(BEa-ASK+TSLPips)/(-TSLPips*2))));
if(bTS>0&&ASK>=bTS)
{ ExitTrades(A,displayColorProfit,"Trailing Stop Reached");
return(0);
}
}
}
}
//+-----------------------------------------------------------------+
//| Check for and Delete hanging pending orders |
//+-----------------------------------------------------------------+
if(CbT==0&&!PendLot)
{ PendLot=true;
for(y=OrdersTotal()-1;y>=0;y--)
{ if(!OrderSelect(y,SELECT_BY_POS,MODE_TRADES))continue;
if(OrderMagicNumber()!=Magic||OrderType()<=OP_SELL)continue;
if(ND(OrderLots(),LotDecimal)>ND(Lots[0]*LotMult,LotDecimal))
{ PendLot=false;
while(IsTradeContextBusy())Sleep(100);
if(IsStopped())return(-1);
Success=OrderDelete(OrderTicket());
if(Success)
{ PendLot=true;
if(Debug)Print("Delete pending > Lot");
}
}
}
return(0);
}
else if((CbT>0||(CbT==0&&CpT>0&&!B3Traditional))&&PendLot)
{ PendLot=false;
for(y=OrdersTotal()-1;y>=0;y--)
{ if(!OrderSelect(y,SELECT_BY_POS,MODE_TRADES))continue;
if(OrderMagicNumber()!=Magic||OrderType()<=OP_SELL)continue;
if(ND(OrderLots(),LotDecimal)==ND(Lots[0]*LotMult,LotDecimal))
{ PendLot=true;
while(IsTradeContextBusy())Sleep(100);
if(IsStopped())return(-1);
Success=OrderDelete(OrderTicket());
if(Success)
{ PendLot=false;
if(Debug)Print("Delete pending = Lot");
}
}
}
return(0);
}
//+-----------------------------------------------------------------+
//| Check ca, Breakeven Trades and Emergency Close All |
//+-----------------------------------------------------------------+
switch(ca)
{ case B: if(CbT==0&&CpT==0)ca=0;break;
case H: if(ChT==0)ca=0;break;
case A: if(CbT==0&&CpT==0&&ChT==0)ca=0;break;
case P: if(CpT==0)ca=0;break;
case T: break;
default: break;
}
if(ca>0)
{ ExitTrades(ca,displayColorLoss,"Close All ("+DTS(ca,0)+")");
return(0);
}
if(CbT==0&&ChT>0)
{ ExitTrades(H,displayColorLoss,"Basket Closed");
return(0);
}
if(EmergencyCloseAll)
{ ExitTrades(A,displayColorLoss,"Emergency Close All Trades");
EmergencyCloseAll=false;
return(0);
}
//+-----------------------------------------------------------------+
//| Check Holiday Shutdown |
//+-----------------------------------------------------------------+
if(UseHolidayShutdown)
{ if(HolShutDown>0&&TimeCurrent()>=HolLast&&HolLast>0)
{ Print("Blessing has resumed after the holidays. From: "+TimeToStr(HolFirst,TIME_DATE)+" To: "+TimeToStr(HolLast,TIME_DATE));
HolShutDown=0;
LabelDelete();
LabelCreate();
if(PlaySounds)PlaySound(AlertSound);
}
if(HolShutDown==3)
{ if(ObjectFind("B3LStop")==-1)
CreateLabel("B3LStop","Trading has been stopped on this pair for the holidays.",10,0,0,3,displayColorLoss);
if(ObjectFind("B3LResm")==-1)
CreateLabel("B3LResm","Blessing will resume trading after "+TimeToStr(HolLast,TIME_DATE)+".",10,0,0,9,displayColorLoss);
return(0);
}
else if((HolShutDown==0&&TimeCurrent()>=HolLast)||HolFirst==0)
{ for(y=0;y<ArraySize(HolArray);y++)
{ HolFirst=StrToTime(Year()+"."+HolArray[y,0]+"."+HolArray[y,1]);
HolLast=StrToTime(Year()+"."+HolArray[y,2]+"."+HolArray[y,3]+" 23:59:59");
if(TimeCurrent()<HolFirst)
{ if(HolFirst>HolLast)HolLast=StrToTime(DTS(Year()+1,0)+"."+HolArray[y,2]+"."+HolArray[y,3]+" 23:59:59");
break;
}
if(TimeCurrent()<HolLast)
{ if(HolFirst>HolLast)HolFirst=StrToTime(DTS(Year()-1,0)+"."+HolArray[y,0]+"."+HolArray[y,1]);
break;
}
if(TimeCurrent()>HolFirst&&HolFirst>HolLast)
{ HolLast=StrToTime(DTS(Year()+1,0)+"."+HolArray[y,2]+"."+HolArray[y,3]+" 23:59:59");
if(TimeCurrent()<HolLast)break;
}
}
if(TimeCurrent()>=HolFirst&&TimeCurrent()<=HolLast)
{ Comment("");
HolShutDown=1;
}
}
else if(HolShutDown==0&&TimeCurrent()>=HolFirst&&TimeCurrent()<HolLast)HolShutDown=1;
if(HolShutDown==1&&CbT==0)
{ Print("Blessing has shut down for the holidays. From: "+TimeToStr(HolFirst,TIME_DATE)+
" To: "+TimeToStr(HolLast,TIME_DATE));
if(CpT>0)
{ y=ExitTrades(P,displayColorLoss,"Holiday Shutdown");
if(y==CpT)ca=0;
}
HolShutDown=2;
ObjDel("B3LClos");
}
else if(HolShutDown==1)
{ if(ObjectFind("B3LClos")==-1)CreateLabel("B3LClos","",5,0,0,23,displayColorLoss);
ObjSetTxt("B3LClos","Blessing will shutdown for the holidays when this basket closes",5);
}
if(HolShutDown==2)
{ LabelDelete();
if(PlaySounds)PlaySound(AlertSound);
HolShutDown=3;
}
if(HolShutDown==3)
{ if(ObjectFind("B3LStop")==-1)
CreateLabel("B3LStop","Trading has been stopped on this pair for the holidays.",10,0,0,3,displayColorLoss);
if(ObjectFind("B3LResm")==-1)
CreateLabel("B3LResm","Blessing will resume trading after "+TimeToStr(HolLast,TIME_DATE)+".",10,0,0,9,displayColorLoss);
Comment("");
return(0);
}
}
//+-----------------------------------------------------------------+
//| Power Out Stop Loss Protection |
//+-----------------------------------------------------------------+
if(SetPOSL)
{ if(UsePowerOutSL&&POSLPips>0)
{ double POSL=MathMin(PortionBalance*(MaxDDPercent+1)/100/PipVal2/LbT,POSLPips);
SLbB=ND(BEb-POSL,Digits);
SLbS=ND(BEb+POSL,Digits);
}
else
{ SLbB=0;
SLbS=0;
}
for(y=0;y<OrdersTotal();y++)
{ if(!OrderSelect(y,SELECT_BY_POS,MODE_TRADES))continue;
if(OrderMagicNumber()!=Magic||OrderSymbol()!=Symbol()||OrderType()>OP_SELL)continue;
if(OrderType()==OP_BUY&&OrderStopLoss()!=SLbB)
{ Success=ModifyOrder(OrderOpenPrice(),SLbB,Purple);
if(Debug&&Success)Print("Order: "+OrderTicket()+" Sync POSL Buy");
}
else if(OrderType()==OP_SELL&&OrderStopLoss()!=SLbS)
{ Success=ModifyOrder(OrderOpenPrice(),SLbS,Purple);
if(Debug&&Success)Print("Order: "+OrderTicket()+" Sync POSL Sell");
}
}
}
//+-----------------------------------------------------------------+ << This must be the first Entry check.
//| Moving Average Indicator for Order Entry | << Add your own Indicator Entry checks
//+-----------------------------------------------------------------+ << after the Moving Average Entry.
if(MAEntry>0&&CbT==0&&CpT<2)
{ if(BID>ima_0+MADistance&&(!B3Traditional||(B3Traditional&&Trend!=2)))
{ if(MAEntry==1)
{ if(ForceMarketCond!=1&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&BuyMe)))BuyMe=true;
else BuyMe=false;
if(!UseAnyEntry&&IndEntry>0&&SellMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))SellMe=false;
}
else if(MAEntry==2)
{ if(ForceMarketCond!=0&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&SellMe)))SellMe=true;
else SellMe=false;
if(!UseAnyEntry&&IndEntry>0&&BuyMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))BuyMe=false;
}
}
else if(ASK<ima_0-MADistance&&(!B3Traditional||(B3Traditional&&Trend!=2)))
{ if(MAEntry==1)
{ if(ForceMarketCond!=0&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&SellMe)))SellMe=true;
else SellMe=false;
if(!UseAnyEntry&&IndEntry>0&&BuyMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))BuyMe=false;
}
else if(MAEntry==2)
{ if(ForceMarketCond!=1&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&BuyMe)))BuyMe=true;
else BuyMe=false;
if(!UseAnyEntry&&IndEntry>0&&SellMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))SellMe=false;
}
}
else if(B3Traditional&&Trend==2)
{ if(ForceMarketCond!=1&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&BuyMe)))BuyMe=true;
if(ForceMarketCond!=0&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&SellMe)))SellMe=true;
}
else
{ BuyMe=false;
SellMe=false;
}
if(IndEntry>0)IndicatorUsed=IndicatorUsed+UAE;
IndEntry++;
IndicatorUsed=IndicatorUsed+" MA ";
}
//+----------------------------------------------------------------+
//| CCI of 5M,15M,30M,1H for Market Condition and Order Entry |
//+----------------------------------------------------------------+
if(CCIEntry>0)
{ double cci_01=iCCI(Symbol(),PERIOD_M5,CCIPeriod,PRICE_CLOSE,0);
double cci_02=iCCI(Symbol(),PERIOD_M15,CCIPeriod,PRICE_CLOSE,0);
double cci_03=iCCI(Symbol(),PERIOD_M30,CCIPeriod,PRICE_CLOSE,0);
double cci_04=iCCI(Symbol(),PERIOD_H1,CCIPeriod,PRICE_CLOSE,0);
double cci_11=iCCI(Symbol(),PERIOD_M5,CCIPeriod,PRICE_CLOSE,1);
double cci_12=iCCI(Symbol(),PERIOD_M15,CCIPeriod,PRICE_CLOSE,1);
double cci_13=iCCI(Symbol(),PERIOD_M30,CCIPeriod,PRICE_CLOSE,1);
double cci_14=iCCI(Symbol(),PERIOD_H1,CCIPeriod,PRICE_CLOSE,1);
}
if(CCIEntry>0&&CbT==0&&CpT<2)
{ if(cci_11>0&&cci_12>0&&cci_13>0&&cci_14>0&&cci_01>0&&cci_02>0&&cci_03>0&&cci_04>0)
{ if(ForceMarketCond==3)Trend=0;
if(CCIEntry==1)
{ if(ForceMarketCond!=1&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&BuyMe)))BuyMe=true;
else BuyMe=false;
if(!UseAnyEntry&&IndEntry>0&&SellMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))SellMe=false;
}
else if(CCIEntry==2)
{ if(ForceMarketCond!=0&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&SellMe)))SellMe=true;
else SellMe=false;
if(!UseAnyEntry&&IndEntry>0&&BuyMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))BuyMe=false;
}
}
else if(cci_11<0&&cci_12<0&&cci_13<0&&cci_14<0&&cci_01<0&&cci_02<0&&cci_03<0&&cci_04<0)
{ if(ForceMarketCond==3)Trend=1;
if(CCIEntry==1)
{ if(ForceMarketCond!=0&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&SellMe)))SellMe=true;
else SellMe=false;
if(!UseAnyEntry&&IndEntry>0&&BuyMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))BuyMe=false;
}
else if(CCIEntry==2)
{ if(ForceMarketCond!=1&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&BuyMe)))BuyMe=true;
else BuyMe=false;
if(!UseAnyEntry&&IndEntry>0&&SellMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))SellMe=false;
}
}
else if(!UseAnyEntry&&IndEntry>0)
{ BuyMe=false;
SellMe=false;
}
if(IndEntry>0)IndicatorUsed=IndicatorUsed+UAE;
IndEntry++;
IndicatorUsed=IndicatorUsed+" CCI ";
}
//+----------------------------------------------------------------+
//| Bollinger Band Indicator for Order Entry |
//+----------------------------------------------------------------+
if(BollingerEntry>0&&CbT==0&&CpT<2)
{ double ma=iMA(Symbol(),0,BollPeriod,0,MODE_SMA,PRICE_OPEN,0);
double stddev=iStdDev(Symbol(),0,BollPeriod,0,MODE_SMA,PRICE_OPEN,0);
double bup=ma+(BollDeviation*stddev);
double bdn=ma-(BollDeviation*stddev);
double bux=bup+BollDistance;
double bdx=bdn-BollDistance;
if(ASK<bdx)
{ if(BollingerEntry==1)
{ if(ForceMarketCond!=1&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&BuyMe)))BuyMe=true;
else BuyMe=false;
if(!UseAnyEntry&&IndEntry>0&&SellMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))SellMe=false;
}
else if(BollingerEntry==2)
{ if(ForceMarketCond!=0&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&SellMe)))SellMe=true;
else SellMe=false;
if(!UseAnyEntry&&IndEntry>0&&BuyMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))BuyMe=false;
}
}
else if(BID>bux)
{ if(BollingerEntry==1)
{ if(ForceMarketCond!=0&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&SellMe)))SellMe=true;
else SellMe=false;
if(!UseAnyEntry&&IndEntry>0&&BuyMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))BuyMe=false;
}
else if(BollingerEntry==2)
{ if(ForceMarketCond!=1&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&BuyMe)))BuyMe=true;
else BuyMe=false;
if(!UseAnyEntry&&IndEntry>0&&SellMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))SellMe=false;
}
}
else if(!UseAnyEntry&&IndEntry>0)
{ BuyMe=false;
SellMe=false;
}
if(IndEntry>0)IndicatorUsed=IndicatorUsed+UAE;
IndEntry++;
IndicatorUsed=IndicatorUsed+" BBands ";
}
//+----------------------------------------------------------------+
//| Stochastic Indicator for Order Entry |
//+----------------------------------------------------------------+
if(StochEntry>0&&CbT==0&&CpT<2)
{ int zoneBUY=BuySellStochZone;
int zoneSELL=100-BuySellStochZone;
double stoc_0=iStochastic(NULL,0,KPeriod,DPeriod,Slowing,MODE_LWMA,1,0,1);
double stoc_1=iStochastic(NULL,0,KPeriod,DPeriod,Slowing,MODE_LWMA,1,1,1);
if(stoc_0<zoneBUY&&stoc_1<zoneBUY)
{ if(StochEntry==1)
{ if(ForceMarketCond!=1&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&BuyMe)))BuyMe=true;
else BuyMe=false;
if(!UseAnyEntry&&IndEntry>0&&SellMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))SellMe=false;
}
else if(StochEntry==2)
{ if(ForceMarketCond!=0&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&SellMe)))SellMe=true;
else SellMe=false;
if(!UseAnyEntry&&IndEntry>0&&BuyMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))BuyMe=false;
}
}
else if(stoc_0>zoneSELL&&stoc_1>zoneSELL)
{ if(StochEntry==1)
{ if(ForceMarketCond!=0&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&SellMe)))SellMe=true;
else SellMe=false;
if(!UseAnyEntry&&IndEntry>0&&BuyMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))BuyMe=false;
}
else if(StochEntry==2)
{ if(ForceMarketCond!=1&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&BuyMe)))BuyMe=true;
else BuyMe=false;
if(!UseAnyEntry&&IndEntry>0&&SellMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))SellMe=false;
}
}
else if(!UseAnyEntry&&IndEntry>0)
{ BuyMe=false;
SellMe=false;
}
if(IndEntry>0)IndicatorUsed=IndicatorUsed+UAE;
IndEntry++;
IndicatorUsed=IndicatorUsed+" Stoch ";
}
//+----------------------------------------------------------------+
//| MACD Indicator for Order Entry |
//+----------------------------------------------------------------+
if(MACDEntry>0&&CbT==0&&CpT<2)
{ double MACDm=iMACD(NULL,TF[MACD_TF],FastPeriod,SlowPeriod,SignalPeriod,MACDPrice,0,0);
double MACDs=iMACD(NULL,TF[MACD_TF],FastPeriod,SlowPeriod,SignalPeriod,MACDPrice,1,0);
if(MACDm>MACDs)
{ if(MACDEntry==1)
{ if(ForceMarketCond!=1&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&BuyMe)))BuyMe=true;
else BuyMe=false;
if(!UseAnyEntry&&IndEntry>0&&SellMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))SellMe=false;
}
else if(MACDEntry==2)
{ if(ForceMarketCond!=0&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&SellMe)))SellMe=true;
else SellMe=false;
if(!UseAnyEntry&&IndEntry>0&&BuyMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))BuyMe=false;
}
}
else if(MACDm<MACDs)
{ if(MACDEntry==1)
{ if(ForceMarketCond!=0&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&SellMe)))SellMe=true;
else SellMe=false;
if(!UseAnyEntry&&IndEntry>0&&BuyMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))BuyMe=false;
}
else if(MACDEntry==2)
{ if(ForceMarketCond!=1&&(UseAnyEntry||IndEntry==0||(!UseAnyEntry&&IndEntry>0&&BuyMe)))BuyMe=true;
else BuyMe=false;
if(!UseAnyEntry&&IndEntry>0&&SellMe&&(!B3Traditional||(B3Traditional&&Trend!=2)))SellMe=false;
}
}
else if(!UseAnyEntry&&IndEntry>0)
{ BuyMe=false;
SellMe=false;
}
if(IndEntry>0)IndicatorUsed=IndicatorUsed+UAE;
IndEntry++;
IndicatorUsed=IndicatorUsed+" MACD ";
}
//+-----------------------------------------------------------------+ << This must be the last Entry check before
//| UseAnyEntry Check && Force Market Condition Buy/Sell Entry | << the Trade Selection Logic. Add checks for
//+-----------------------------------------------------------------+ << additional indicators before this block.
if((!UseAnyEntry&&IndEntry>1&&BuyMe&&SellMe)||FirstRun)
{ BuyMe=false;
SellMe=false;
}
if(ForceMarketCond<2&&IndEntry==0&&CbT==0&&!FirstRun)
{ if(ForceMarketCond==0)BuyMe=true;
if(ForceMarketCond==1)SellMe=true;
IndicatorUsed=" FMC ";
}
//+-----------------------------------------------------------------+
//| Trade Selection Logic |
//+-----------------------------------------------------------------+
OrderLot=LotSize(Lots[StrToInteger(DTS(MathMin(CbT+CbC,MaxTrades-1),0))]*LotMult);
if(CbT==0&&CpT<2&&!FirstRun)
{ if(B3Traditional)
{ if(BuyMe)
{ if(CpBS==0&&CpSL==0&&((Trend!=2||MAEntry==0)||(Trend==2&&MAEntry==1)))
{ Entry=g2-MathMod(ASK,g2)+EntryOffset;
if(Entry>StopLevel)
{ Ticket=SendOrder(Symbol(),OP_BUYSTOP,OrderLot,Entry,0,Magic,CLR_NONE);
if(Ticket>0)
{ if(Debug)Print("Indicator Entry - ("+IndicatorUsed+") BuyStop MC = "+Trend);
CpBS++;
}
}
}
if(CpBL==0&&CpSS==0&&((Trend!=2||MAEntry==0)||(Trend==2&&MAEntry==2)))
{ Entry=MathMod(ASK,g2)+EntryOffset;
if(Entry>StopLevel)
{ Ticket=SendOrder(Symbol(),OP_BUYLIMIT,OrderLot,-Entry,0,Magic,CLR_NONE);
if(Ticket>0)
{ if(Debug)Print("Indicator Entry - ("+IndicatorUsed+") BuyLimit MC = "+Trend);
CpBL++;
}
}
}
}
if(SellMe)
{ if(CpSL==0&&CpBS==0&&((Trend!=2||MAEntry==0)||(Trend==2&&MAEntry==2)))
{ Entry=g2-MathMod(BID,g2)+EntryOffset;
if(Entry>StopLevel)
{ Ticket=SendOrder(Symbol(),OP_SELLLIMIT,OrderLot,Entry,0,Magic,CLR_NONE);
if(Ticket>0&&Debug)Print("Indicator Entry - ("+IndicatorUsed+") SellLimit MC = "+Trend);
}
}
if(CpSS==0&&CpBL==0&&((Trend!=2||MAEntry==0)||(Trend==2&&MAEntry==1)))
{ Entry=MathMod(BID,g2)+EntryOffset;
if(Entry>StopLevel)
{ Ticket=SendOrder(Symbol(),OP_SELLSTOP,OrderLot,-Entry,0,Magic,CLR_NONE);
if(Ticket>0&&Debug)Print("Indicator Entry - ("+IndicatorUsed+") SellStop MC = "+Trend);
}
}
}
}
else
{ if(BuyMe)
{ Ticket=SendOrder(Symbol(),OP_BUY,OrderLot,0,slip,Magic,Blue);
if(Ticket>0&&Debug)Print("Indicator Entry - ("+IndicatorUsed+") Buy");
}
else if(SellMe)
{ Ticket=SendOrder(Symbol(),OP_SELL,OrderLot,0,slip,Magic,displayColorLoss);
if(Ticket>0&&Debug)Print("Indicator Entry - ("+IndicatorUsed+") Sell");
}
}
if(Ticket>0)return(0);
}
else if(TimeCurrent()-EntryDelay>OTbL&&CbT+CbC<MaxTrades&&!FirstRun)
{ if(UseSmartGrid)
{ if(RSI[1]!=iRSI(NULL,TF[RSI_TF],RSI_Period,RSI_Price,1))
for(y=0;y<RSI_Period+RSI_MA_Period;y++)RSI[y]=iRSI(NULL,TF[RSI_TF],RSI_Period,RSI_Price,y);
else RSI[0]=iRSI(NULL,TF[RSI_TF],RSI_Period,RSI_Price,0);
RSI_MA=iMAOnArray(RSI,0,RSI_MA_Period,0,RSI_MA_Method,0);
}
if(CbB>0)
{ if(OPbL>ASK)Entry=OPbL-(MathRound((OPbL-ASK)/g2)+1)*g2;
else Entry=OPbL-g2;
double OPbN;
if(UseSmartGrid)
{ if(ASK<OPbL-g2)
{ if(RSI[0]>RSI_MA)
{ Ticket=SendOrder(Symbol(),OP_BUY,OrderLot,0,slip,Magic,Blue);
if(Ticket>0&&Debug)Print("SmartGrid Buy RSI: "+RSI[0]+" > MA: "+RSI_MA);
}
OPbN=0;
}
else OPbN=OPbL-g2;
}
else if(CpBL==0)
{ if(ASK-Entry<=StopLevel)Entry=OPbL-(MathFloor((OPbL-ASK+StopLevel)/g2)+1)*g2;
Ticket=SendOrder(Symbol(),OP_BUYLIMIT,OrderLot,Entry-ASK,0,Magic,SkyBlue);
if(Ticket>0&&Debug)Print("BuyLimit grid");
}
else if(CpBL==1&&Entry-OPpBL>g2/2&&ASK-Entry>StopLevel)
{ for(y=OrdersTotal()-1;y>=0;y--)
{ if(!OrderSelect(y,SELECT_BY_POS,MODE_TRADES))continue;
if(OrderMagicNumber()!=Magic||OrderSymbol()!=Symbol()||OrderType()!=OP_BUYLIMIT)continue;
Success=ModifyOrder(Entry,0,SkyBlue);
if(Success&&Debug)Print("Mod BuyLimit Entry");
}
}
}
else if(CbS>0)
{ if(BID>OPbL)Entry=OPbL+(MathRound((-OPbL+BID)/g2)+1)*g2;
else Entry=OPbL+g2;
if(UseSmartGrid)
{ if(BID>OPbL+g2)
{ if(RSI[0]<RSI_MA)
{ Ticket=SendOrder(Symbol(),OP_SELL,OrderLot,0,slip,Magic,displayColorLoss);
if(Ticket>0&&Debug)Print("SmartGrid Sell RSI: "+RSI[0]+" < MA: "+RSI_MA);
}
OPbN=0;
}
else OPbN=OPbL+g2;
}
else if(CpSL==0)
{ if(Entry-BID<=StopLevel)Entry=OPbL+(MathFloor((-OPbL+BID+StopLevel)/g2)+1)*g2;
Ticket=SendOrder(Symbol(),OP_SELLLIMIT,OrderLot,Entry-BID,0,Magic,Coral);
if(Ticket>0&&Debug)Print("SellLimit grid");
}
else if(CpSL==1&&OPpSL-Entry>g2/2&&Entry-BID>StopLevel)
{ for(y=OrdersTotal()-1;y>=0;y--)
{ if(!OrderSelect(y,SELECT_BY_POS,MODE_TRADES))continue;
if(OrderMagicNumber()!=Magic||OrderSymbol()!=Symbol()||OrderType()!=OP_SELLLIMIT)continue;
Success=ModifyOrder(Entry,0,Coral);
if(Success&&Debug)Print("Mod SellLimit Entry");
}
}
}
if(Ticket>0)return(0);
}
//+-----------------------------------------------------------------+
//| Hedge Trades Set-Up and Monitoring |
//+-----------------------------------------------------------------+
if((UseHedge&&CbT>0)||ChT>0)
{ int hLevel=CbT+CbC;
if(HedgeTypeDD)
{ if(hDDStart==0&&ChT>0)hDDStart=MathMax(HedgeStart,DrawDownPC+hReEntryPC);
if(hDDStart>HedgeStart&&hDDStart>DrawDownPC+hReEntryPC)hDDStart=DrawDownPC+hReEntryPC;
if(hActive==2)
{ hActive=0;
hDDStart=MathMax(HedgeStart,DrawDownPC+hReEntryPC);
}
}
if(hActive==0)
{ if(!hThisChart&&((hPosCorr&&CheckCorr()<0.9)||(!hPosCorr&&CheckCorr()>-0.9)))
{ if(ObjectFind("B3LhCor")==-1)
CreateLabel("B3LhCor","The correlation with the hedge pair has dropped below 90%.",0,0,190,10,displayColorLoss);
}
else ObjDel("B3LhCor");
if(hLvlStart>hLevel+1||(!HedgeTypeDD&&hLvlStart==0))hLvlStart=MathMax(HedgeStart,hLevel+1);
if((HedgeTypeDD&&DrawDownPC>hDDStart)||(!HedgeTypeDD&&hLevel>=hLvlStart))
{ OrderLot=LotSize(LbT*hLotMult);
if((CbB>0&&!hPosCorr)||(CbS>0&&hPosCorr))
{ Ticket=SendOrder(HedgeSymbol,OP_BUY,OrderLot,0,slip,hMagic,MidnightBlue);
if(Ticket>0)
{ if(hMaxLossPips>0)SLh=hAsk-hMaxLossPips;
if(Debug)Print("Hedge Buy : Stoploss @ "+DTS(SLh,Digits));
}
}
if((CbB>0&&hPosCorr)||(CbS>0&&!hPosCorr))
{ Ticket=SendOrder(HedgeSymbol,OP_SELL,OrderLot,0,slip,hMagic,Maroon);
if(Ticket>0)
{ if(hMaxLossPips>0)SLh=hBid+hMaxLossPips;
if(Debug)Print("Hedge Sell : Stoploss @ "+DTS(SLh,Digits));
}
}
if(Ticket>0)
{ hActive=1;
if(HedgeTypeDD)hDDStart+=hReEntryPC;
hLvlStart=hLevel+1;
return(0);
}
}
}
else if(hActive==1)
{ if(HedgeTypeDD&&hDDStart>HedgeStart&&hDDStart<DrawDownPC+hReEntryPC)hDDStart=DrawDownPC+hReEntryPC;
if(hLvlStart==0)
{ if(HedgeTypeDD)hLvlStart=hLevel+1;
else hLvlStart=MathMax(HedgeStart,hLevel+1);
}
if(hLevel>=hLvlStart)
{ OrderLot=LotSize(Lots[CbT+CbC-1]*LotMult*hLotMult);
if(OrderLot>0&&((CbB>0&&!hPosCorr)||(CbS>0&&hPosCorr)))
{ Ticket=SendOrder(HedgeSymbol,OP_BUY,OrderLot,0,slip,hMagic,MidnightBlue);
if(Ticket>0&&Debug)Print("Hedge Buy");
}
if(OrderLot>0&&((CbB>0&&hPosCorr)||(CbS>0&&!hPosCorr)))
{ Ticket=SendOrder(HedgeSymbol,OP_SELL,OrderLot,0,slip,hMagic,Maroon);
if(Ticket>0&&Debug)Print("Hedge Sell");
}
if(Ticket>0)
{ hLvlStart=hLevel+1;
return(0);
}
}
y=0;
if(!FirstRun&&hMaxLossPips>0)
{ if(ChB>0)
{ if(hFixedSL)
{ if(SLh==0)SLh=hBid-hMaxLossPips;
}
else
{ if(SLh==0||(SLh<BEh&&SLh<hBid-hMaxLossPips))SLh=hBid-hMaxLossPips;
else if(StopTrailAtBE&&hBid-hMaxLossPips>=BEh)SLh=BEh;
else if(SLh>=BEh&&!StopTrailAtBE)
{ if(!ReduceTrailStop)SLh=MathMax(SLh,hBid-hMaxLossPips);
else SLh=MathMax(SLh,hBid-MathMax(StopLevel,hMaxLossPips*(1-(hBid-hMaxLossPips-BEh)/(hMaxLossPips*2))));
}
}
if(hBid<=SLh)y=ExitTrades(H,DarkViolet,"Hedge Stop Loss");
}
else if(ChS>0)
{ if(hFixedSL)
{ if(SLh==0)SLh=hAsk+hMaxLossPips;
}
else
{ if(SLh==0||(SLh>BEh&&SLh>hAsk+hMaxLossPips))SLh=hAsk+hMaxLossPips;
else if(StopTrailAtBE&&hAsk+hMaxLossPips<=BEh)SLh=BEh;
else if(SLh<=BEh&&!StopTrailAtBE)
{ if(!ReduceTrailStop)SLh=MathMin(SLh,hAsk+hMaxLossPips);
else SLh=MathMin(SLh,hAsk+MathMax(StopLevel,hMaxLossPips*(1-(BEh-hAsk-hMaxLossPips)/(hMaxLossPips*2))));
}
}
if(hAsk>=SLh)y=ExitTrades(H,DarkViolet,"Hedge Stop Loss");
}
}
if(y==0&&hTakeProfit>0)
{ if(ChB>0&&hBid>OPhO+hTakeProfit)y=ExitTrades(T,DarkViolet,"Hedge Take Profit reached",ThO);
if(ChS>0&&hAsk<OPhO-hTakeProfit)y=ExitTrades(T,DarkViolet,"Hedge Take Profit reached",ThO);
}
if(y>0)
{ PhC=FindClosedPL(H);
if(y==ChT)
{ if(HedgeTypeDD)hActive=2;
else hActive=0;
}
return(0);
}
}
}
//+-----------------------------------------------------------------+
//| Check DD% and send Email |
//+-----------------------------------------------------------------+
if((UseEmail||PlaySounds)&&!Testing)
{ if(EmailCount<2&&Email[EmailCount]>0&&DrawDownPC>Email[EmailCount])
{ GetLastError();
if(UseEmail)
{ SendMail("Blessing EA","Blessing has exceeded a drawdown of "+Email[EmailCount]*100+"% on "+Symbol()+" "+sTF);
Error=GetLastError();
if(Error>0)Print("Email DD: "+DTS(DrawDownPC*100,2)+" Error: "+Error+" "+ErrorDescription(Error));
else if(Debug)Print("DrawDown Email sent on "+Symbol()+" "+sTF+ " DD: "+DTS(DrawDownPC*100,2));
EmailSent=TimeCurrent();
EmailCount++;
}
if(PlaySounds)PlaySound(AlertSound);
}
else if(EmailCount>0&&EmailCount<3&&DrawDownPC<Email[EmailCount]&&TimeCurrent()>EmailSent+EmailHours*3600)EmailCount--;
}
//+-----------------------------------------------------------------+
//| Display Overlay Code |
//+-----------------------------------------------------------------+
if((Testing&&Visual)||!Testing)
{ if(displayOverlay)
{ color Colour;
int dDigits;
ObjSetTxt("B3VTime",TimeToStr(TimeCurrent(),TIME_SECONDS));
DrawLabel("B3VSTAm",InitialAccountMultiPortion,167,2,displayColorLoss);
if(UseHolidayShutdown)
{ ObjSetTxt("B3VHolF",TimeToStr(HolFirst,TIME_DATE));
ObjSetTxt("B3VHolT",TimeToStr(HolLast,TIME_DATE));
}
DrawLabel("B3VPBal",PortionBalance,167);
if(DrawDownPC>0.4)Colour=displayColorLoss;
else if(DrawDownPC>0.3)Colour=Orange;
else if(DrawDownPC>0.2)Colour=Yellow;
else if(DrawDownPC>0.1)Colour=displayColorProfit;
else Colour=displayColor;
DrawLabel("B3VDrDn",DrawDownPC*100,315,2,Colour);
if(UseHedge&&HedgeTypeDD)ObjSetTxt("B3VhDDm",DTS(hDDStart*100,2));
else if(UseHedge&&!HedgeTypeDD)
{ DrawLabel("B3VhLvl",CbT+CbC,318,0);
ObjSetTxt("B3VhLvT",DTS(hLvlStart,0));
}
ObjSetTxt("B3VSLot",DTS(Lot*LotMult,2));
if(ProfitPot>=0)DrawLabel("B3VPPot",ProfitPot,190);
else
{ ObjSetTxt("B3VPPot",DTS(ProfitPot,2),0,displayColorLoss);
dDigits=Digit[ArrayBsearch(Digit,-ProfitPot,WHOLE_ARRAY,0,MODE_ASCEND),1];
ObjSet("B3VPPot",186-dDigits*7);
}
if(UseEarlyExit&&EEpc<1)
{ if(ObjectFind("B3SEEPr")==-1)CreateLabel("B3SEEPr","/",0,0,220,12);
if(ObjectFind("B3VEEPr")==-1)CreateLabel("B3VEEPr","",0,0,229,12);
ObjSetTxt("B3VEEPr",DTS(PbTarget*PipValue*MathAbs(LbB-LbS),2));
}
else
{ ObjDel("B3SEEPr");
ObjDel("B3VEEPr");
}
if(SLb>0)DrawLabel("B3VPrSL",SLb,190,Digits);
else if(bSL>0)DrawLabel("B3VPrSL",bSL,190,Digits);
else if(bTS>0)DrawLabel("B3VPrSL",bTS,190,Digits);
else DrawLabel("B3VPrSL",0,190,2);
if(Pb>=0)
{ DrawLabel("B3VPnPL",Pb,190,2,displayColorProfit);
ObjSetTxt("B3VPPip",DTS(PbPips,1),0,displayColorProfit);
ObjSet("B3VPPip",229);
}
else
{ ObjSetTxt("B3VPnPL",DTS(Pb,2),0,displayColorLoss);
dDigits=Digit[ArrayBsearch(Digit,-Pb,WHOLE_ARRAY,0,MODE_ASCEND),1];
ObjSet("B3VPnPL",186-dDigits*7);
ObjSetTxt("B3VPPip",DTS(PbPips,1),0,displayColorLoss);
ObjSet("B3VPPip",225);
}
if(PbMax>=0)DrawLabel("B3VPLMx",PbMax,190,2,displayColorProfit);
else
{ ObjSetTxt("B3VPLMx",DTS(PbMax,2),0,displayColorLoss);
dDigits=Digit[ArrayBsearch(Digit,-PbMax,WHOLE_ARRAY,0,MODE_ASCEND),1];
ObjSet("B3VPLMx",186-dDigits*7);
}
if(PbMin<0)ObjSet("B3VPLMn",225);
else ObjSet("B3VPLMn",229);
ObjSetTxt("B3VPLMn",DTS(PbMin,2),0,displayColorLoss);
if(CbT+CbC<BreakEvenTrade&&CbT+CbC<MaxTrades)Colour=displayColor;
else if(CbT+CbC<MaxTrades)Colour=Orange;
else Colour=displayColorLoss;
if(CbB>0)
{ ObjSetTxt("B3LType","Buy:");
DrawLabel("B3VOpen",CbB,207,0,Colour);
}
else if(CbS>0)
{ ObjSetTxt("B3LType","Sell:");
DrawLabel("B3VOpen",CbS,207,0,Colour);
}
else
{ ObjSetTxt("B3LType","");
ObjSetTxt("B3VOpen",DTS(0,0),0,Colour);
ObjSet("B3VOpen",207);
}
ObjSetTxt("B3VLots",DTS(LbT,2));
ObjSetTxt("B3VMove",DTS(Moves,0));
DrawLabel("B3VMxDD",MaxDD,107);
DrawLabel("B3VDDPC",MaxDDPer,229);
if(Trend==0)
{ ObjSetTxt("B3LTrnd","Trend is UP",10,displayColorProfit);
if(ObjectFind("B3ATrnd")==-1)CreateLabel("B3ATrnd","",0,0,160,20,displayColorProfit,"Wingdings");
ObjectSetText("B3ATrnd","?",displayFontSize+9,"Wingdings",displayColorProfit);
ObjSet("B3ATrnd",160);
ObjectSet("B3ATrnd",OBJPROP_YDISTANCE,displayYcord+displaySpacing*20);
if(StringLen(ATrend)>0)
{ if(ObjectFind("B3AATrn")==-1)CreateLabel("B3AATrn","",0,0,200,20,displayColorProfit,"Wingdings");
if(ATrend=="D")
{ ObjectSetText("B3AATrn","?",displayFontSize+9,"Wingdings",displayColorLoss);
ObjectSet("B3AATrn",OBJPROP_YDISTANCE,displayYcord+displaySpacing*20+5);
}
else if(ATrend=="R")
{ ObjSetTxt("B3AATrn","R",10,Orange);
ObjectSet("B3AATrn",OBJPROP_YDISTANCE,displayYcord+displaySpacing*20);
}
}
else ObjDel("B3AATrn");
}
else if(Trend==1)
{ ObjSetTxt("B3LTrnd","Trend is DOWN",10,displayColorLoss);
if(ObjectFind("B3ATrnd")==-1)CreateLabel("B3ATrnd","",0,0,210,20,displayColorLoss,"WingDings");
ObjectSetText("B3ATrnd","?",displayFontSize+9,"Wingdings",displayColorLoss);
ObjSet("B3ATrnd",210);
ObjectSet("B3ATrnd",OBJPROP_YDISTANCE,displayYcord+displaySpacing*20+5);
if(StringLen(ATrend)>0)
{ if(ObjectFind("B3AATrn")==-1)CreateLabel("B3AATrn","",0,0,250,20,displayColorProfit,"Wingdings");
if(ATrend=="U")
{ ObjectSetText("B3AATrn","?",displayFontSize+9,"Wingdings",displayColorProfit);
ObjectSet("B3AATrn",OBJPROP_YDISTANCE,displayYcord+displaySpacing*20);
}
else if(ATrend=="R")
{ ObjSetTxt("B3AATrn","R",10,Orange);
ObjectSet("B3AATrn",OBJPROP_YDISTANCE,displayYcord+displaySpacing*20);
}
}
else ObjDel("B3AATrn");
}
else if(Trend==2)
{ ObjSetTxt("B3LTrnd","Trend is Ranging",10,Orange);
ObjDel("B3ATrnd");
if(StringLen(ATrend)>0)
{ if(ObjectFind("B3AATrn")==-1)CreateLabel("B3AATrn","",0,0,220,20,displayColorProfit,"Wingdings");
if(ATrend=="U")
{ ObjectSetText("B3AATrn","?",displayFontSize+9,"Wingdings",displayColorProfit);
ObjectSet("B3AATrn",OBJPROP_YDISTANCE,displayYcord+displaySpacing*20);
}
else if(ATrend=="D")
{ ObjectSetText("B3AATrn","?",displayFontSize+8,"Wingdings",displayColorLoss);
ObjectSet("B3AATrn",OBJPROP_YDISTANCE,displayYcord+displaySpacing*20+5);
}
}
else ObjDel("B3AATrn");
}
if(PaC!=0)
{ if(ObjectFind("B3LClPL")==-1)CreateLabel("B3LClPL","Closed P/L",0,0,312,11);
if(ObjectFind("B3VClPL")==-1)CreateLabel("B3VClPL","",0,0,327,12);
if(PaC>=0)DrawLabel("B3VClPL",PaC,327,2,displayColorProfit);
else
{ ObjSetTxt("B3VClPL",DTS(PaC,2),0,displayColorLoss);
dDigits=Digit[ArrayBsearch(Digit,-PaC,WHOLE_ARRAY,0,MODE_ASCEND),1];
ObjSet("B3VClPL",323-dDigits*7);
}
}
else
{ ObjDel("B3LClPL");
ObjDel("B3VClPL");
}
if(hActive==1)
{ if(ObjectFind("B3LHdge")==-1)CreateLabel("B3LHdge","Hedge",0,0,323,13);
if(ObjectFind("B3VhPro")==-1)CreateLabel("B3VhPro","",0,0,312,14);
if(Ph>=0)DrawLabel("B3VhPro",Ph,312,2,displayColorProfit);
else
{ ObjSetTxt("B3VhPro",DTS(Ph,2),0,displayColorLoss);
dDigits=Digit[ArrayBsearch(Digit,-Ph,WHOLE_ARRAY,0,MODE_ASCEND),1];
ObjSet("B3VhPro",308-dDigits*7);
}
if(ObjectFind("B3VhPMx")==-1)CreateLabel("B3VhPMx","",0,0,312,15);
if(PhMax>=0)DrawLabel("B3VhPMx",PhMax,312,2,displayColorProfit);
else
{ ObjSetTxt("B3VhPMx",DTS(PhMax,2),0,displayColorLoss);
dDigits=Digit[ArrayBsearch(Digit,-PhMax,WHOLE_ARRAY,0,MODE_ASCEND),1];
ObjSet("B3VhPMx",308-dDigits*7);
}
if(ObjectFind("B3ShPro")==-1)CreateLabel("B3ShPro","/",0,0,342,15);
if(ObjectFind("B3VhPMn")==-1)CreateLabel("B3VhPMn","",0,0,351,15,displayColorLoss);
if(PhMin<0)ObjSet("B3VhPMn",347);
else ObjSet("B3VhPMn",351);
ObjSetTxt("B3VhPMn",DTS(PhMin,2),0,displayColorLoss);
if(ObjectFind("B3LhTyp")==-1)CreateLabel("B3LhTyp","",0,0,292,16);
if(ObjectFind("B3VhOpn")==-1)CreateLabel("B3VhOpn","",0,0,329,16);
if(ChB>0)
{ ObjSetTxt("B3LhTyp","Buy:");
DrawLabel("B3VhOpn",ChB,329,0);
}
else if(ChS>0)
{ ObjSetTxt("B3LhTyp","Sell:");
DrawLabel("B3VhOpn",ChS,329,0);
}
else
{ ObjSetTxt("B3LhTyp","");
ObjSetTxt("B3VhOpn",DTS(0,0));
ObjSet("B3VhOpn",329);
}
if(ObjectFind("B3ShOpn")==-1)CreateLabel("B3ShOpn","/",0,0,342,16);
if(ObjectFind("B3VhLot")==-1)CreateLabel("B3VhLot","",0,0,351,16);
ObjSetTxt("B3VhLot",DTS(LhT,2));
}
else
{ ObjDel("B3LHdge");
ObjDel("B3VhPro");
ObjDel("B3VhPMx");
ObjDel("B3ShPro");
ObjDel("B3VhPMn");
ObjDel("B3LhTyp");
ObjDel("B3VhOpn");
ObjDel("B3ShOpn");
ObjDel("B3VhLot");
}
}
if(displayLines)
{ if(BEb>0)
{ if(ObjectFind("B3LBELn")==-1)CreateLine("B3LBELn",DodgerBlue,1,0);
ObjectMove("B3LBELn",0,Time[1],BEb);
}
else ObjDel("B3LBELn");
if(TPa>0)
{ if(ObjectFind("B3LTPLn")==-1)CreateLine("B3LTPLn",Gold,1,0);
ObjectMove("B3LTPLn",0,Time[1],TPa);
}
else if(TPb>0&&nLots!=0)
{ if(ObjectFind("B3LTPLn")==-1)CreateLine("B3LTPLn",Gold,1,0);
ObjectMove("B3LTPLn",0,Time[1],TPb);
}
else ObjDel("B3LTPLn");
if(OPbN>0)
{ if(ObjectFind("B3LOPLn")==-1)CreateLine("B3LOPLn",Red,1,4);
ObjectMove("B3LOPLn",0,Time[1],OPbN);
}
else ObjDel("B3LOPLn");
if(bSL>0)
{ if(ObjectFind("B3LSLbT")==-1)CreateLine("B3LSLbT",Red,1,3);
ObjectMove("B3LSLbT",0,Time[1],bSL);
}
else ObjDel("B3LSLbT");
if(bTS>0)
{ if(ObjectFind("B3LTSbT")==-1)CreateLine("B3LTSbT",Gold,1,3);
ObjectMove("B3LTSbT",0,Time[1],bTS);
}
else ObjDel("B3LTSbT");
if(hActive==1&&BEa>0)
{ if(ObjectFind("B3LNBEL")==-1)CreateLine("B3LNBEL",Crimson,1,0);
ObjectMove("B3LNBEL",0,Time[1],BEa);
}
else ObjDel("B3LNBEL");
if(TPbMP>0)
{ if(ObjectFind("B3LMPLn")==-1)CreateLine("B3LMPLn",Gold,1,4);
ObjectMove("B3LMPLn",0,Time[1],TPbMP);
}
else ObjDel("B3LMPLn");
if(SLb>0)
{ if(ObjectFind("B3LTSLn")==-1)CreateLine("B3LTSLn",Gold,1,2);
ObjectMove("B3LTSLn",0,Time[1],SLb);
}
else ObjDel("B3LTSLn");
if(hThisChart&&BEh>0)
{ if(ObjectFind("B3LhBEL")==-1)CreateLine("B3LhBEL",SlateBlue,1,0);
ObjectMove("B3LhBEL",0,Time[1],BEh);
}
else ObjDel("B3LhBEL");
if(hThisChart&&SLh>0)
{ if(ObjectFind("B3LhSLL")==-1)CreateLine("B3LhSLL",SlateBlue,1,3);
ObjectMove("B3LhSLL",0,Time[1],SLh);
}
else ObjDel("B3LhSLL");
}
else
{ ObjDel("B3LBELn");
ObjDel("B3LTPLn");
ObjDel("B3LOPLn");
ObjDel("B3LSLbT");
ObjDel("B3LTSbT");
ObjDel("B3LNBEL");
ObjDel("B3LMPLn");
ObjDel("B3LTSLn");
ObjDel("B3LhBEL");
ObjDel("B3LhSLL");
}
if(CCIEntry&&displayCCI)
{ if(cci_01>0&&cci_11>0)ObjectSetText("B3VCm05","é",displayFontSize+6,"Wingdings",displayColorProfit);
else if(cci_01<0&&cci_11<0)ObjectSetText("B3VCm05","ê",displayFontSize+6,"Wingdings",displayColorLoss);
else ObjectSetText("B3VCm05","è",displayFontSize+6,"Wingdings",Orange);
if(cci_02>0&&cci_12>0)ObjectSetText("B3VCm15","é",displayFontSize+6,"Wingdings",displayColorProfit);
else if(cci_02<0&&cci_12<0)ObjectSetText("B3VCm15","ê",displayFontSize+6,"Wingdings",displayColorLoss);
else ObjectSetText("B3VCm15","è",displayFontSize+6,"Wingdings",Orange);
if(cci_03>0&&cci_13>0)ObjectSetText("B3VCm30","é",displayFontSize+6,"Wingdings",displayColorProfit);
else if(cci_03<0&&cci_13<0)ObjectSetText("B3VCm30","ê",displayFontSize+6,"Wingdings",displayColorLoss);
else ObjectSetText("B3VCm30","è",displayFontSize+6,"Wingdings",Orange);
if(cci_04>0&&cci_14>0)ObjectSetText("B3VCm60","é",displayFontSize+6,"Wingdings",displayColorProfit);
else if(cci_04<0&&cci_14<0)ObjectSetText("B3VCm60","ê",displayFontSize+6,"Wingdings",displayColorLoss);
else ObjectSetText("B3VCm60","è",displayFontSize+6,"Wingdings",Orange);
}
if(Debug)
{ string dSpace;
for(y=0;y<=175;y++)dSpace=dSpace+" ";
string dMess="\n\n"+dSpace+"Ticket Magic Type Lots OpenPrice Costs Profit Potential";
for(y=0;y<OrdersTotal();y++)
{ if(!OrderSelect(y,SELECT_BY_POS,MODE_TRADES))continue;
if(OrderMagicNumber()!=Magic&&OrderMagicNumber()!=hMagic)continue;
dMess=(dMess+"\n"+dSpace+" "+OrderTicket()+" "+DTS(OrderMagicNumber(),0)+" "+OrderType());
dMess=(dMess+" "+DTS(OrderLots(),LotDecimal)+" "+DTS(OrderOpenPrice(),Digits));
dMess=(dMess+" "+DTS(OrderSwap()+OrderCommission(),2));
dMess=(dMess+" "+DTS(OrderProfit()+OrderSwap()+OrderCommission(),2));
if(OrderMagicNumber()!=Magic)continue;
if(OrderType()==OP_BUY)dMess=(dMess+" "+DTS(OrderLots()*(TPb-OrderOpenPrice())*PipVal2+OrderSwap()+OrderCommission(),2));
if(OrderType()==OP_SELL)dMess=(dMess+" "+DTS(OrderLots()*(OrderOpenPrice()-TPb)*PipVal2+OrderSwap()+OrderCommission(),2));
}
if(!dLabels)
{ dLabels=true;
CreateLabel("B3LPipV","Pip Value",0,2,0,0);
CreateLabel("B3VPipV","",0,2,100,0);
CreateLabel("B3LDigi","Digits Value",0,2,0,1);
CreateLabel("B3VDigi","",0,2,100,1);
ObjSetTxt("B3VDigi",DTS(Digits,0));
CreateLabel("B3LPoin","Point Value",0,2,0,2);
CreateLabel("B3VPoin","",0,2,100,2);
ObjSetTxt("B3VPoin",DTS(Point,Digits));
CreateLabel("B3LSprd","Spread Value",0,2,0,3);
CreateLabel("B3VSprd","",0,2,100,3);
CreateLabel("B3LBid","Bid Value",0,2,0,4);
CreateLabel("B3VBid","",0,2,100,4);
CreateLabel("B3LAsk","Ask Value",0,2,0,5);
CreateLabel("B3VAsk","",0,2,100,5);
CreateLabel("B3LLotP","Lot Step",0,2,200,0);
CreateLabel("B3VLotP","",0,2,300,0);
ObjSetTxt("B3VLotP",DTS(MarketInfo(Symbol(),MODE_LOTSTEP),LotDecimal));
CreateLabel("B3LLotX","Lot Max",0,2,200,1);
CreateLabel("B3VLotX","",0,2,300,1);
ObjSetTxt("B3VLotX",DTS(MarketInfo(Symbol(),MODE_MAXLOT),0));
CreateLabel("B3LLotN","Lot Min",0,2,200,2);
CreateLabel("B3VLotN","",0,2,300,2);
ObjSetTxt("B3VLotN",DTS(MarketInfo(Symbol(),MODE_MINLOT),LotDecimal));
CreateLabel("B3LLotD","Lot Decimal",0,2,200,3);
CreateLabel("B3VLotD","",0,2,300,3);
ObjSetTxt("B3VLotD",DTS(LotDecimal,0));
CreateLabel("B3LAccT","Account Type",0,2,200,4);
CreateLabel("B3VAccT","",0,2,300,4);
ObjSetTxt("B3VAccT",DTS(AccountType,0));
CreateLabel("B3LPnts","Pip",0,2,200,5);
CreateLabel("B3VPnts","",0,2,300,5);
ObjSetTxt("B3VPnts",DTS(Pip,Digits));
CreateLabel("B3LTicV","Tick Value",0,2,400,0);
CreateLabel("B3VTicV","",0,2,500,0);
CreateLabel("B3LTicS","Tick Size",0,2,400,1);
CreateLabel("B3VTicS","",0,2,500,1);
ObjSetTxt("B3VTicS",DTS(MarketInfo(Symbol(),MODE_TICKSIZE),Digits));
CreateLabel("B3LLev","Leverage",0,2,400,2);
CreateLabel("B3VLev","",0,2,500,2);
ObjSetTxt("B3VLev",DTS(AccountLeverage(),0)+":1");
CreateLabel("B3LSGTF","SmartGrid",0,2,400,3);
if(UseSmartGrid)CreateLabel("B3VSGTF","True",0,2,500,3);
else CreateLabel("B3VSGTF","False",0,2,500,3);
CreateLabel("B3LCOTF","Close Oldest",0,2,400,4);
if(UseCloseOldest)CreateLabel("B3VCOTF","True",0,2,500,4);
else CreateLabel("B3VCOTF","False",0,2,500,4);
CreateLabel("B3LUHTF","Hedge",0,2,400,5);
if(UseHedge&&HedgeTypeDD)CreateLabel("B3VUHTF","DrawDown",0,2,500,5);
else if(UseHedge&&!HedgeTypeDD)CreateLabel("B3VUHTF","Level",0,2,500,5);
else CreateLabel("B3VUHTF","False",0,2,500,5);
}
ObjSetTxt("B3VPipV",DTS(PipValue,2));
ObjSetTxt("B3VSprd",DTS(ASK-BID,Digits));
ObjSetTxt("B3VBid",DTS(BID,Digits));
ObjSetTxt("B3VAsk",DTS(ASK,Digits));
ObjSetTxt("B3VTicV",DTS(MarketInfo(Symbol(),MODE_TICKVALUE),Digits));
}
if(EmergencyWarning)
{ if(ObjectFind("B3LClos")==-1)CreateLabel("B3LClos","",5,0,0,23,displayColorLoss);
ObjSetTxt("B3LClos","WARNING: EmergencyCloseAll is set to TRUE",5,displayColorLoss);
}
else if(ShutDown)
{ if(ObjectFind("B3LClos")==-1)CreateLabel("B3LClos","",5,0,0,23,displayColorLoss);
ObjSetTxt("B3LClos","Blessing will stop trading when this basket closes.",5,displayColorLoss);
}
else if(HolShutDown!=1)ObjDel("B3LClos");
}
WindowRedraw();
FirstRun=false;
Comment(CS,dMess);
return(0);
}
//+-----------------------------------------------------------------+
//| Check Lot Size Function                                          |
//+-----------------------------------------------------------------+
double LotSize(double NewLot)
{ NewLot=ND(NewLot,LotDecimal);
NewLot=MathMin(NewLot,MarketInfo(Symbol(),MODE_MAXLOT));
NewLot=MathMax(NewLot,MinLotSize);
return(NewLot);
}
//+-----------------------------------------------------------------+
//| Open Order Function                                              |
//+-----------------------------------------------------------------+
int SendOrder(string OSymbol,int OCmd,double OLot,double OPrice,double OSlip,int OMagic,color OColor=CLR_NONE)
{ if(FirstRun)return(-1);
int Ticket;
int retryTimes=5,i=0;
int OType=MathMod(OCmd,2);
double OrderPrice;
if(AccountFreeMarginCheck(OSymbol,OType,OLot)<=0)return(-1);
if(MaxSpread>0&&MarketInfo(OSymbol,MODE_SPREAD)*Point/Pip>MaxSpread)return(-1);
while(i<retryTimes)
{ i+=1;
while(IsTradeContextBusy())Sleep(100);
if(IsStopped())return(-1);
if(OType==0)OrderPrice=ND(MarketInfo(OSymbol,MODE_ASK)+OPrice,MarketInfo(OSymbol,MODE_DIGITS));
else OrderPrice=ND(MarketInfo(OSymbol,MODE_BID)+OPrice,MarketInfo(OSymbol,MODE_DIGITS));
Ticket=OrderSend(OSymbol,OCmd,OLot,OrderPrice,OSlip,0,0,TradeComment,OMagic,0,OColor);
if(Ticket<0)
{ Error=GetLastError();
if(Error!=0)Print("Error opening order: "+Error+" "+ErrorDescription(Error)
+" Symbol: "+OSymbol
+" TradeOP: "+OCmd
+" OType: "+OType
+" Ask: "+DTS(MarketInfo(OSymbol,MODE_ASK),Digits)
+" Bid: "+DTS(MarketInfo(OSymbol,MODE_BID),Digits)
+" OPrice: "+DTS(OPrice,Digits)
+" Price: "+DTS(OrderPrice,Digits)
+" Lots: "+DTS(OLot,2)
);
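// The cases below fall through on purpose: price/requote errors sleep and
// refresh rates before retrying, busy/connection errors retry immediately,
// and each of these paths bumps i a second time, costing an extra attempt.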
switch(Error)
{ case ERR_TRADE_DISABLED:
AllowTrading=false;
Print("Your broker has not allowed EAs on this account");
i=retryTimes;
break;
case ERR_OFF_QUOTES:
case ERR_INVALID_PRICE:
Sleep(5000);
case ERR_PRICE_CHANGED:
case ERR_REQUOTE:
RefreshRates();
case ERR_SERVER_BUSY:
case ERR_NO_CONNECTION:
case ERR_BROKER_BUSY:
case ERR_TRADE_CONTEXT_BUSY:
i++;
break;
case 149://ERR_TRADE_HEDGE_PROHIBITED:
UseHedge=false;
if(Debug)Print("Hedge trades are not allowed on this pair");
i=retryTimes;
break;
default:
i=retryTimes;
}
}
else
{ if(PlaySounds)PlaySound(AlertSound);
break;
}
}
return(Ticket);
}
//+-----------------------------------------------------------------+
//| Modify Order Function |
//+-----------------------------------------------------------------+
bool ModifyOrder(double OrderOP,double OrderSL,color Color=CLR_NONE)
{ bool Success=false;
int retryTimes=5,i=0;
while(i<retryTimes&&!Success)
{ i++;
while(IsTradeContextBusy())Sleep(100);
if(IsStopped())return(false);
Success=OrderModify(OrderTicket(),OrderOP,OrderSL,0,0,Color);
if(!Success)
{ Error=GetLastError();
if(Error>1)
{ Print(" Error Modifying Order:",OrderTicket(),", ",Error," :" +ErrorDescription(Error),", Ask:",Ask,
", Bid:",Bid," OrderPrice: ",OrderOP," StopLevel: ",StopLevel,", SL: ",OrderSL,", OSL: ",OrderStopLoss());
switch(Error)
{ case ERR_TRADE_MODIFY_DENIED:
Sleep(10000);
case ERR_OFF_QUOTES:
case ERR_INVALID_PRICE:
Sleep(5000);
case ERR_PRICE_CHANGED:
case ERR_REQUOTE:
RefreshRates();
case ERR_SERVER_BUSY:
case ERR_NO_CONNECTION:
case ERR_BROKER_BUSY:
case ERR_TRADE_CONTEXT_BUSY:
case ERR_TRADE_TIMEOUT:
i+=1;
break;
default:
i=retryTimes;
break;
}
}
else Success=true;
}
else break;
}
return(Success);
}
//+-------------------------------------------------------------------------+
//| Exit Trade Function - Type: All Basket Hedge Ticket Pending |
//+-------------------------------------------------------------------------+
int ExitTrades(int Type,color Color,string Reason,int OTicket=0)
{
static int OTicketNo;
bool Success=false;
int Tries=0,Closed=0,CloseCount=0;
int CloseTrades[,2];
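// CloseTrades holds {open time, ticket} pairs: sorted ascending (oldest first)
// when UseFIFO is set, otherwise descending so the newest orders close first.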
double OPrice;
string s;
ca=Type;
if(Type==T)
{
if(OTicket==0)
{
OTicket=OTicketNo;
}
else
{
OTicketNo=OTicket;
}
}
for(y=OrdersTotal()-1;y>=0;y--)
{ if(!OrderSelect(y,SELECT_BY_POS,MODE_TRADES))continue;
if(Type==B&&OrderMagicNumber()!=Magic)continue;
else if(Type==H&&OrderMagicNumber()!=hMagic)continue;
else if(Type==A&&OrderMagicNumber()!=Magic&&OrderMagicNumber()!=hMagic)continue;
else if(Type==T&&OrderTicket()!=OTicket)continue;
else if(Type==P&&(OrderMagicNumber()!=Magic||OrderType()<=OP_SELL))continue;
ArrayResize(CloseTrades,CloseCount+1);
CloseTrades[CloseCount,0]=OrderOpenTime();
CloseTrades[CloseCount,1]=OrderTicket();
CloseCount++;
}
if(CloseCount>0)
{ if(!UseFIFO)ArraySort(CloseTrades,WHOLE_ARRAY,0,MODE_DESCEND);
else if(CloseCount!=ArraySort(CloseTrades))Print("Error sorting CloseTrades Array");
for(y=0;y<CloseCount;y++)
{ if(!OrderSelect(CloseTrades[y,1],SELECT_BY_TICKET))continue;
while(IsTradeContextBusy())Sleep(100);
if(IsStopped())return(-1);
if(OrderType()>OP_SELL)Success=OrderDelete(OrderTicket(),Color);
else
{ if(OrderType()==OP_BUY)OPrice=ND(MarketInfo(OrderSymbol(),MODE_BID),MarketInfo(OrderSymbol(),MODE_DIGITS));
else OPrice=ND(MarketInfo(OrderSymbol(),MODE_ASK),MarketInfo(OrderSymbol(),MODE_DIGITS));
Success=OrderClose(OrderTicket(),OrderLots(),OPrice,slip,Color);
}
if(Success)Closed++;
else
{ Error=GetLastError();Print("Order ",OrderTicket()," failed to close. Error:",ErrorDescription(Error));
switch(Error)
{ case ERR_NO_ERROR:
case ERR_NO_RESULT:
Success=true;
break;
case ERR_OFF_QUOTES:
case ERR_INVALID_PRICE:
Sleep(5000);
case ERR_PRICE_CHANGED:
case ERR_REQUOTE:
RefreshRates();
case ERR_SERVER_BUSY:
case ERR_NO_CONNECTION:
case ERR_BROKER_BUSY:
case ERR_TRADE_CONTEXT_BUSY:
Print("Try: "+(Tries+1)+" of 5: Order ",OrderTicket()," failed to close. Error:",ErrorDescription(Error));
Tries++;
break;
case ERR_TRADE_TIMEOUT:
default:
Print("Try: "+(Tries+1)+" of 5: Order ",OrderTicket()," failed to close. Fatal Error:",ErrorDescription(Error));
Tries=5;
ca=0;
break;
}
}
}
if(Closed==CloseCount||Closed==0)ca=0;
}
else ca=0;
if(Closed>0)
{ if(Closed!=1)s="s";
Print("Closed "+Closed+" position"+s+" because ",Reason);
if(PlaySounds)PlaySound(AlertSound);
}
return(Closed);
}
//+-----------------------------------------------------------------+
//| Find Hedge Profit |
//+-----------------------------------------------------------------+
double FindClosedPL(int Type)
{
double ClosedProfit=0;
if(Type==B&&UseCloseOldest)
CbC=0;
if(OTbF>0)
{
for(y=OrdersHistoryTotal()-1;y>=0;y--)
{
if(!OrderSelect(y,SELECT_BY_POS,MODE_HISTORY))
continue;
if(OrderOpenTime()<OTbF)
continue;
if(Type==B&&OrderMagicNumber()==Magic&&OrderType()<=OP_SELL)
{
ClosedProfit+=OrderProfit()+OrderSwap()+OrderCommission();
if(UseCloseOldest)
CbC++;
}
if(Type==H&&OrderMagicNumber()==hMagic)
ClosedProfit+=OrderProfit()+OrderSwap()+OrderCommission();
}
}
return(ClosedProfit);
}
//+-----------------------------------------------------------------+
//| Check Correlation |
//+-----------------------------------------------------------------+
double CheckCorr()
{ double BaseDiff,HedgeDiff,BasePow=0,HedgePow=0,Mult=0;
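// Pearson correlation of D1 closes between the chart symbol and HedgeSymbol,
// each measured as the deviation from its own CorrPeriod-day SMA (-1 to +1).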
for(y=CorrPeriod-1;y>=0;y--)
{ BaseDiff=iClose(Symbol(),1440,y)-iMA(Symbol(),1440,CorrPeriod,0,MODE_SMA,PRICE_CLOSE,y);
HedgeDiff=iClose(HedgeSymbol,1440,y)-iMA(HedgeSymbol,1440,CorrPeriod,0,MODE_SMA,PRICE_CLOSE,y);
Mult+=BaseDiff*HedgeDiff;
BasePow+=MathPow(BaseDiff,2);
HedgePow+=MathPow(HedgeDiff,2);
}
if(BasePow*HedgePow>0)return(Mult/MathSqrt(BasePow*HedgePow));
else return(0);
}
//+------------------------------------------------------------------+
//| Save Equity / Balance Statistics |
//+------------------------------------------------------------------+
void Stats(bool NewFile,bool IsTick,double Balance,double DrawDown)
{
double Equity=Balance+DrawDown;
datetime TimeNow=TimeCurrent();
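// Equity = Balance + floating P/L. Tick calls only track the equity high/low
// water marks; period calls append one CSV row and reset the marks.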
if(IsTick)
{
if(Equity<StatLowEquity)
{
StatLowEquity=Equity;
}
if(Equity>StatHighEquity)
{
StatHighEquity=Equity;
}
}
else
{
while(TimeNow>=NextStats)
{
NextStats+=StatsPeriod;
}
int StatHandle;
if(NewFile)
{
StatHandle=FileOpen(StatFile,FILE_WRITE|FILE_CSV,',');
Print("Stats "+StatFile+" "+StatHandle);
FileWrite(StatHandle,"Date","Time","Balance","Equity Low","Equity High",TradeComment);
}
else
{
StatHandle=FileOpen(StatFile,FILE_READ|FILE_WRITE|FILE_CSV,',');
FileSeek(StatHandle,0,SEEK_END);
}
if(StatLowEquity==0)
{
StatLowEquity=Equity;
StatHighEquity=Equity;
}
FileWrite(StatHandle,TimeToStr(TimeNow,TIME_DATE),TimeToStr(TimeNow,TIME_SECONDS),DTS(Balance,0),DTS(StatLowEquity,0),DTS(StatHighEquity,0));
FileClose(StatHandle);
StatLowEquity=Equity;
StatHighEquity=Equity;
}
}
//+-----------------------------------------------------------------+
//| Magic Number Generator |
//+-----------------------------------------------------------------+
int GenerateMagicNumber()
{ if(EANumber>99)return(EANumber);
return(JenkinsHash(EANumber+"_"+Symbol()+"__"+Period()));
}
int JenkinsHash(string Input)
{ int MagicNo=0;
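// Jenkins one-at-a-time hash of the input string; MathAbs keeps the result
// positive so it can be used as a magic number.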
for(y=0;y<StringLen(Input);y++)
{ MagicNo+=StringGetChar(Input,y);
MagicNo+=(MagicNo<<10);
MagicNo^=(MagicNo>>6);
}
MagicNo+=(MagicNo<<3);
MagicNo^=(MagicNo>>11);
MagicNo+=(MagicNo<<15);
MagicNo=MathAbs(MagicNo);
return(MagicNo);
}
//+-----------------------------------------------------------------+
//| Normalize Double |
//+-----------------------------------------------------------------+
double ND(double Value,int Precision){return(NormalizeDouble(Value,Precision));}
//+-----------------------------------------------------------------+
//| Double To String |
//+-----------------------------------------------------------------+
string DTS(double Value,int Precision){return(DoubleToStr(Value,Precision));}
//+-----------------------------------------------------------------+
//| Create Label Function (OBJ_LABEL ONLY) |
//+-----------------------------------------------------------------+
void CreateLabel(string Name,string Text,int FontSize,int Corner,int XOffset,double YLine,color Colour=CLR_NONE,string Font="")
{ int XDistance,YDistance;
if(Font=="")Font=displayFont;
FontSize+=displayFontSize;
YDistance=displayYcord+displaySpacing*YLine;
if(Corner==0)XDistance=displayXcord+(XOffset*displayFontSize/9*displayRatio);
else if(Corner==1)XDistance=displayCCIxCord+XOffset*displayRatio;
else if(Corner==2)XDistance=displayXcord+(XOffset*displayFontSize/9*displayRatio);
else if(Corner==3)
{ XDistance=XOffset*displayRatio;
YDistance=YLine;
}
else if(Corner==5)
{ XDistance=XOffset*displayRatio;
YDistance=14*YLine;
Corner=1;
}
if(Colour==CLR_NONE)Colour=displayColor;
ObjectCreate(Name,OBJ_LABEL,0,0,0);
ObjectSetText(Name,Text,FontSize,Font,Colour);
ObjectSet(Name,OBJPROP_CORNER,Corner);
ObjectSet(Name,OBJPROP_XDISTANCE,XDistance);
ObjectSet(Name,OBJPROP_YDISTANCE,YDistance);
}
//+-----------------------------------------------------------------+
//| Create Line Function (OBJ_HLINE ONLY) |
//+-----------------------------------------------------------------+
void CreateLine(string Name,color Colour,int Width,int Style)
{ ObjectCreate(Name,OBJ_HLINE,0,0,0);
ObjectSet(Name,OBJPROP_COLOR,Colour);
ObjectSet(Name,OBJPROP_WIDTH,Width);
ObjectSet(Name,OBJPROP_STYLE,Style);
}
//+------------------------------------------------------------------+
//| Draw Label Function (OBJ_LABEL ONLY) |
//+------------------------------------------------------------------+
void DrawLabel(string Name,double Value,int XOffset,int Decimal=2,color Colour=CLR_NONE)
{ int dDigits;
dDigits=Digit[ArrayBsearch(Digit,Value,WHOLE_ARRAY,0,MODE_ASCEND),1];
ObjectSet(Name,OBJPROP_XDISTANCE,displayXcord+(XOffset-7*dDigits)*displayFontSize/9*displayRatio);
ObjSetTxt(Name,DTS(Value,Decimal),0,Colour);
}
//+-----------------------------------------------------------------+
//| Object Set Function |
//+-----------------------------------------------------------------+
void ObjSet(string Name,int XCoord){ObjectSet(Name,OBJPROP_XDISTANCE,displayXcord+XCoord*displayFontSize/9*displayRatio);}
//+-----------------------------------------------------------------+
//| Object Set Text Function |
//+-----------------------------------------------------------------+
void ObjSetTxt(string Name,string Text,int FontSize=0,color Colour=CLR_NONE,string Font="")
{ FontSize+=displayFontSize;
if(Font=="")Font=displayFont;
if(Colour==CLR_NONE)Colour=displayColor;
ObjectSetText(Name,Text,FontSize,Font,Colour);
}
//+------------------------------------------------------------------+
//| Delete Overlay Label Function |
//+------------------------------------------------------------------+
void LabelDelete(){for(y=ObjectsTotal()-1;y>=0;y--){if(StringSubstr(ObjectName(y),0,2)=="B3")ObjectDelete(ObjectName(y));}}
//+------------------------------------------------------------------+
//| Delete Object Function |
//+------------------------------------------------------------------+
void ObjDel(string Name){if(ObjectFind(Name)!=-1)ObjectDelete(Name);}
//+-----------------------------------------------------------------+
//| Create Object List Function |
//+-----------------------------------------------------------------+
void LabelCreate()
{ if(displayOverlay&&((Testing&&Visual)||!Testing))
{ int dDigits;
string ObjText;
color ObjClr;
CreateLabel("B3LMNum","Magic: ",8-displayFontSize,5,59,1,displayColorFGnd,"Tahoma");
CreateLabel("B3VMNum",DTS(Magic,0),8-displayFontSize,5,5,1,displayColorFGnd,"Tahoma");
CreateLabel("B3LComm","Trade Comment: "+TradeComment,8-displayFontSize,5,5,1.8,displayColorFGnd,"Tahoma");
if(displayLogo)
{ CreateLabel("B3LLogo","Q",27,3,10,10,Crimson,"Wingdings");
CreateLabel("B3LCopy","?"+DTS(Year(),0)+", J Talon LLC/FiFtHeLeMeNt",10-displayFontSize,3,5,3,Silver,"Arial");
}
CreateLabel("B3LTime","Broker Time is:",0,0,0,0);
CreateLabel("B3VTime","",0,0,125,0);
CreateLabel("B3Line1","=========================",0,0,0,1);
CreateLabel("B3LEPPC","Equity Protection % Set:",0,0,0,2);
dDigits=Digit[ArrayBsearch(Digit,MaxDDPercent,WHOLE_ARRAY,0,MODE_ASCEND),1];
CreateLabel("B3VEPPC",DTS(MaxDDPercent,2),0,0,167-7*dDigits,2);
CreateLabel("B3PEPPC","%",0,0,193,2);
CreateLabel("B3LSTPC","Stop Trade % Set:",0,0,0,3);
dDigits=Digit[ArrayBsearch(Digit,StopTradePercent*100,WHOLE_ARRAY,0,MODE_ASCEND),1];
CreateLabel("B3VSTPC",DTS(StopTradePercent*100,2),0,0,167-7*dDigits,3);
CreateLabel("B3PSTPC","%",0,0,193,3);
CreateLabel("B3LSTAm","Stop Trade Amount:",0,0,0,4);
CreateLabel("B3VSTAm","",0,0,167,4,displayColorLoss);
CreateLabel("B3LAPPC","Account Portion:",0,0,0,5);
dDigits=Digit[ArrayBsearch(Digit,PortionPC*100,WHOLE_ARRAY,0,MODE_ASCEND),1];
CreateLabel("B3VAPPC",DTS(PortionPC*100,2),0,0,167-7*dDigits,5);
CreateLabel("B3PAPPC","%",0,0,193,5);
CreateLabel("B3LPBal","Portion Balance:",0,0,0,6);
CreateLabel("B3VPBal","",0,0,167,6);
CreateLabel("B3LAPCR","Account % Risked:",0,0,228,6);
CreateLabel("B3VAPCR",DTS(MaxDDPercent*PortionPC,2),0,0,347,6);
CreateLabel("B3PAPCR","%",0,0,380,6);
if(UseMM)
{ ObjText="Money Management is On";
ObjClr=displayColorProfit;
}
else
{ ObjText="Money Management is Off";
ObjClr=displayColorLoss;
}
CreateLabel("B3LMMOO",ObjText,0,0,0,7,ObjClr);
if(UsePowerOutSL)
{ ObjText="Power Off Stop Loss is On";
ObjClr=displayColorProfit;
}
else
{ ObjText="Power Off Stop Loss is Off";
ObjClr=displayColorLoss;
}
CreateLabel("B3LPOSL",ObjText,0,0,0,8,ObjClr);
CreateLabel("B3LDrDn","Draw Down %:",0,0,228,8);
CreateLabel("B3VDrDn","",0,0,315,8);
if(UseHedge)
{ if(HedgeTypeDD)
{ CreateLabel("B3LhDDn","Hedge",0,0,190,8);
CreateLabel("B3ShDDn","/",0,0,342,8);
CreateLabel("B3VhDDm","",0,0,347,8);
}
else
{ CreateLabel("B3LhLvl","Hedge Level:",0,0,228,9);
CreateLabel("B3VhLvl","",0,0,318,9);
CreateLabel("B3ShLvl","/",0,0,328,9);
CreateLabel("B3VhLvT","",0,0,333,9);
}
}
CreateLabel("B3Line2","======================",0,0,0,9);
CreateLabel("B3LSLot","Starting Lot Size:",0,0,0,10);
CreateLabel("B3VSLot","",0,0,130,10);
if(MaximizeProfit)
{ ObjText="Profit Maximizer is On";
ObjClr=displayColorProfit;
}
else
{ ObjText="Profit Maximizer is Off";
ObjClr=displayColorLoss;
}
CreateLabel("B3LPrMx",ObjText,0,0,0,11,ObjClr);
CreateLabel("B3LBask","Basket",0,0,200,11);
CreateLabel("B3LPPot","Profit Potential:",0,0,30,12);
CreateLabel("B3VPPot","",0,0,190,12);
CreateLabel("B3LPrSL","Profit Trailing Stop:",0,0,30,13);
CreateLabel("B3VPrSL","",0,0,190,13);
CreateLabel("B3LPnPL","Portion P/L / Pips:",0,0,30,14);
CreateLabel("B3VPnPL","",0,0,190,14);
CreateLabel("B3SPnPL","/",0,0,220,14);
CreateLabel("B3VPPip","",0,0,229,14);
CreateLabel("B3LPLMM","Profit/Loss Max/Min:",0,0,30,15);
CreateLabel("B3VPLMx","",0,0,190,15);
CreateLabel("B3SPLMM","/",0,0,220,15);
CreateLabel("B3VPLMn","",0,0,225,15);
CreateLabel("B3LOpen","Open Trades / Lots:",0,0,30,16);
CreateLabel("B3LType","",0,0,170,16);
CreateLabel("B3VOpen","",0,0,207,16);
CreateLabel("B3SOpen","/",0,0,220,16);
CreateLabel("B3VLots","",0,0,229,16);
CreateLabel("B3LMvTP","Move TP by:",0,0,0,17);
CreateLabel("B3VMvTP",DTS(MoveTP/Pip,0),0,0,100,17);
CreateLabel("B3LMves","# Moves:",0,0,150,17);
CreateLabel("B3VMove","",0,0,229,17);
CreateLabel("B3SMves","/",0,0,242,17);
CreateLabel("B3VMves",DTS(TotalMoves,0),0,0,249,17);
CreateLabel("B3LMxDD","Max DD:",0,0,0,18);
CreateLabel("B3VMxDD","",0,0,107,18);
CreateLabel("B3LDDPC","Max DD %:",0,0,150,18);
CreateLabel("B3VDDPC","",0,0,229,18);
CreateLabel("B3PDDPC","%",0,0,257,18);
if(ForceMarketCond<3)CreateLabel("B3LFMCn","Market trend is forced",0,0,0,19);
CreateLabel("B3LTrnd","",0,0,0,20);
if(CCIEntry>0&&displayCCI)
{ CreateLabel("B3LCCIi","CCI",2,1,12,1);
CreateLabel("B3LCm05","m5",2,1,25,2.2);
CreateLabel("B3VCm05","?",6,1,0,2,Orange,"Wingdings");
CreateLabel("B3LCm15","m15",2,1,25,3.4);
CreateLabel("B3VCm15","?",6,1,0,3.2,Orange,"Wingdings");
CreateLabel("B3LCm30","m30",2,1,25,4.6);
CreateLabel("B3VCm30","?",6,1,0,4.4,Orange,"Wingdings");
CreateLabel("B3LCm60","h1",2,1,25,5.8);
CreateLabel("B3VCm60","?",6,1,0,5.6,Orange,"Wingdings");
}
if(UseHolidayShutdown)
{ CreateLabel("B3LHols","Next Holiday Period",0,0,240,2);
CreateLabel("B3LHolD","From: (yyyy.mm.dd) To:",0,0,232,3);
CreateLabel("B3VHolF","",0,0,232,4);
CreateLabel("B3VHolT","",0,0,300,4);
}
}
}
//+-----------------------------------------------------------------+
//| Error Description (stub; stdlib.mqh provides the full version)  |
//+-----------------------------------------------------------------+
string ErrorDescription(int error){
return("error code "+DTS(error,0));
}
//0. I once rewrote the open-source Blessing EA for my own trading. The original Blessing is a one-way counter-trend strategy; I changed it to trade both directions, long and short simultaneously.
//1. First of all, my strategy is a Martingale counter-trend approach, the kind of strategy many experts dismiss as worthless.
//2. Both positions are opened simultaneously. MA800 defines the price's center of gravity, and positions may be opened within 200 points of MA800.
//3. The profitable side can keep taking profit continuously.
//4. Counter-trend positions are added according to the grid rules.
//5. If the floating loss of the losing side reaches a certain percentage of equity, a lock (hedge) order is opened so the loss is frozen.
//6. Based on MA800's reading of the price's center of gravity, when the price returns to the neutral zone the lock is released and counter-trend trading resumes.
//When unlocking, the lock orders tend to be in profit; that profit is counted toward the P/L of the losing side. Once the counter-trend side turns profitable, positions are closed according to the rules.
//7. Locking method: I generally lock when the floating loss (absolute value) of the one-directional basket (for example, all BUY) reaches a percentage of equity.
//I also found that a single lock is not the best solution. Although it can already double the account annually, a better approach is to lock in layers.
//8. My current practice is to use locks to freeze losses, wait for a pullback, and then unlock and continue trading. The unlock logic is more complicated:
//it refers to an MA (usually SMA800) and CCI, and the results are still very good. It does not wait for the price to return to the entry level; it unlocks as soon as the MA returns to the proper zone. Lock orders are usually in profit when unlocked; the point is to unlock as soon as the price comes back.
//9. As for why I don't stop out losses: that is determined by the nature of the system. In a counter-trend grid, every position is a bullet.
//Always keep them; a stop loss destroys the balance of the system.
//10. I usually trade only a few pairs, such as EURUSD and AUDUSD. According to my live demo account and back-tests over 10 years of history, the results are very good.
//Floating loss always stays within the preset percentage, the account does not blow up, and returns can reach 2-4x per year.
//While a lock is on, the system loses time, but it does not lose profit.
//This strategy may seem too simple, but strategies that make money consistently are often simple and straightforward.
I get an error '1' - expression not boolean, at Blessing EA.mq4 line 2981, column 26.
How do I fix it?
|
1176b6aa8e2f066a34cfc564dccac395
|
{
"intermediate": 0.32076430320739746,
"beginner": 0.33831095695495605,
"expert": 0.3409247100353241
}
|
40,286
|
function c1()
    local refinedResults = gg.getResults(119)
    gg.setRanges(gg.REGION_ANONYMOUS)
    -- Add 0xB400000000000000 to the addresses
    local modifiedAddresses = {}
    for _, result in ipairs(refinedResults) do
        table.insert(modifiedAddresses, result.address + 0xB400000000000000)
    end
    -- Search for pointers with modified addresses
    gg.clearResults()
    gg.setRanges(gg.REGION_ANONYMOUS)
    local foundPointer = false
    local pointerResults = {} -- kept in scope for use after the loop
    for _, address in ipairs(modifiedAddresses) do
        gg.searchNumber(address, gg.TYPE_QWORD)
        pointerResults = gg.getResults(1) -- fetch only one result
        if #pointerResults > 0 then
            foundPointer = true
            break -- stop searching once a pointer is found
        end
    end
    if foundPointer then
        local t = {}
        for i = 1, #pointerResults do
            t[i] = {}
            t[i].address = pointerResults[i].address
            t[i].flags = gg.TYPE_DWORD
            t[i].freeze = false
        end
        -- Load the results into GameGuardian
        gg.loadResults(t, gg.LOAD_REPLACE)
        print('Loaded ' .. #t .. ' results')
    else
        print('No pointer found')
    end
end
-- Perform initial search
--gg.searchNumber("867", gg.TYPE_DWORD, false, gg.SIGN_EQUAL, 0, -1)
c1 ()
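-- Note on the constant above: 0xB400000000000000 matches the tag that Android's
-- ARM64 heap pointer tagging places in the top byte of heap pointers, so the
-- QWORD search looks for tagged pointers to the earlier results. This reading
-- is an assumption, not something stated in the script itself.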
|
3b8b759df553144e026b97d0c69e1194
|
{
"intermediate": 0.3057437539100647,
"beginner": 0.4980960488319397,
"expert": 0.1961602121591568
}
|
40,287
|
I've installed Python 3.12 via Homebrew and also the pandas module via (pip3 install pandas). However, when I run code that imports it, it says that there is no module called pandas. But when I run it with Python 3.9, which is installed via the extension on VS Code, it runs well. Why?
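A quick diagnostic sketch (added; not part of the original question, and the Homebrew path below is an assumption for Apple Silicon). Running it under both interpreters shows which Python actually executes the code and whether pip3 installed pandas into that same interpreter:

import sys
print(sys.executable)  # the python binary actually running this script
print(sys.path)        # where this interpreter searches for modules
# pip3 may belong to a different interpreter than the one VS Code runs;
# "python -m pip" targets a specific interpreter explicitly, e.g.:
#   /opt/homebrew/bin/python3.12 -m pip install pandas   (path is an assumption)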
|
2e4291437841927f74e63ba253e3dc32
|
{
"intermediate": 0.5334233641624451,
"beginner": 0.14023388922214508,
"expert": 0.32634270191192627
}
|
40,288
|
I am making a C++ SDL-based game engine, wrapping all the functionality of the SDL library into my own engine. Help me deal with problems and questions.
The first one I have is this: I have a Point class based on SDL_Point, and a Renderer class based on SDL_Renderer, and I've reached a point where I have to convert from std::vector<Point> to std::vector<SDL_Point> for some rendering methods. So far I only have to convert the vector in the Renderer class, but I don't rule out needing it elsewhere. So I am unsure whether to add the converting method as a private method in Renderer or to make it static and public in the Point class. I don't know if it is correct to include a static method that is not directly related to a single Point but to a vector of Points. How can I solve this dilemma? Does this happen with the standard libraries in C++ or C# too?
|
47ee5b03f338b165d11ae6689c3e1c7c
|
{
"intermediate": 0.7676645517349243,
"beginner": 0.16689643263816833,
"expert": 0.06543899327516556
}
|
40,289
|
In C++, what is the proper way (based on coding standards or de facto standards) to write the template in the header:
//In a single line:
template <typename DrawFunction> void WithColor(const Color& color, DrawFunction draw);
//In two lines:
template <typename DrawFunction>
void WithColor(const Color& color, DrawFunction draw);
|
60553340b7c48a289f6e2ea8068b2ac1
|
{
"intermediate": 0.2783717215061188,
"beginner": 0.5686726570129395,
"expert": 0.15295566618442535
}
|
40,290
|
import requests
import pandas as pd
from time import sleep
# Set of events that have already been printed, to avoid duplicates
printed_events = set()

def fetch_live_players(printed_events):
    fixtures_url = 'https://fantasy.premierleague.com/api/fixtures/'
    fixtures_response = requests.get(fixtures_url)
    players_url = 'https://fantasy.premierleague.com/api/bootstrap-static/'
    players_response = requests.get(players_url)
    if fixtures_response.status_code == 200 and players_response.status_code == 200:
        fixtures_data = fixtures_response.json()
        players_data = players_response.json()
        live_fixtures = [fixture for fixture in fixtures_data if not fixture['finished'] and fixture['started']]
        players_df = pd.DataFrame(players_data['elements'])
        teams_df = pd.DataFrame(players_data['teams'])
        if live_fixtures:
            for fixture in live_fixtures:
                event_id = fixture['event']
                live_url = f'https://fantasy.premierleague.com/api/event/{event_id}/live/'
                live_response = requests.get(live_url)
                if live_response.status_code == 200:
                    live_data = live_response.json()['elements']
                    for element in live_data:
                        element_info = players_df.loc[players_df['id'] == element['id']].iloc[0]
                        team_info = teams_df.loc[teams_df['id'] == element_info['team']].iloc[0]
                        player_name = element_info['web_name']
                        stats = element['stats']
                        # Check goals
                        if stats['goals_scored'] > 0:
                            event_key = f"{player_name}-goal-{stats['goals_scored']}"
                            if event_key not in printed_events:
                                printed_events.add(event_key)
                                print(f"Goal⚽: {player_name} #{team_info['name']}/ P: {stats['goals_scored']*6}, Tot: {stats['total_points']}")
                        # Check assists
                        if stats['assists'] > 0:
                            event_key = f"{player_name}-assist-{stats['assists']}"
                            if event_key not in printed_events:
                                printed_events.add(event_key)
                                print(f"Assist🅰️: {player_name}, {team_info['name']}- P: {stats['assists']*3}, Tot: {stats['total_points']}")
                        # Check yellow cards
                        if stats['yellow_cards'] > 0:
                            event_key = f"{player_name}-yellow-{stats['yellow_cards']}"
                            if event_key not in printed_events:
                                printed_events.add(event_key)
                                print(f"Yellow Card🟨: {player_name}, #{team_info['name']}- P: -{stats['yellow_cards']*1}, Tot: {stats['total_points']}")
                        # Check red cards
                        if stats['red_cards'] > 0:
                            event_key = f"{player_name}-red-{stats['red_cards']}"
                            if event_key not in printed_events:
                                printed_events.add(event_key)
                                print(f"Red Card🟥: {player_name} #{team_info['name']}- P: -{stats['red_cards']*3}, Tot: {stats['total_points']}")
    else:
        print('Failed to retrieve data.')

# Program entry point
def main():
    while True:
        try:
            fetch_live_players(printed_events)
        except Exception as e:
            print(e)  # print the error if an exception occurs
        sleep(10)  # wait 10 seconds between checks

if __name__ == "__main__":
    main()
Please modify this script so that it always shows each event exclusively, in real time.
Also, when showing any goal or assist, please format it like this, for example:
GOAL: Arsenal 4-1 Newcastle Willock 5 pts. Tot 6 Pts
ASSIST: Arsenal 4-1 Newcastle Burn 3 pts. Tot 4 Pts
|
4f4bf018c36ccfa475fafd8bcbe3bcde
|
{
"intermediate": 0.41047000885009766,
"beginner": 0.3615703582763672,
"expert": 0.22795969247817993
}
|
40,291
|
How do I import google.maps.DirectionsService()?
"use client";
import styles from "./page.module.css";
import { GoogleMapReact, APIProvider, Map, Marker } from '@vis.gl/react-google-maps';
// mine AIzaSyBKWpGumTcpzlMADzmAPHp8jySEuqGTcPY
// old AIzaSyAXRPZdbclbQ3V9D5KJaN7kMjRy9bhbA78
const position = { lat: 39.13332629741826, lng: -84.49875934472917 }; // Example position (Cincinnati, OH)
const zoom = 10;
const style = { height: '400px', width: '100%' };
export default function Home() {
return (
<main className={styles.main}>
<h1>Jerooba</h1>
<p>bottom text</p>
<Maps />
</main>
);
}
function Maps() {
return (
<APIProvider apiKey={'AIzaSyAXRPZdbclbQ3V9D5KJaN7kMjRy9bhbA78'}>
<Map draggable={true} center={position} zoom={zoom} style={style}>
{/*<Marker position={{ lat: 39.13332629741826, lng: -84.49875934472917 }} />*/}
</Map>
</APIProvider>
);
}
let origin = '', destination = '';
async function calculateRoute() {
if (origin === '' || destination === '') {
return;
}
// eslint-disable-next-line no-undef
const directionsService = new google.maps.DirectionsService()
const results = await directionsService.route({
origin: origin,
destination: destination,
// eslint-disable-next-line no-undef
travelMode: google.maps.TravelMode.DRIVING,
})
setDirectionsResponse(results)
setDistance(results.routes[0].legs[0].distance.text)
setDuration(results.routes[0].legs[0].duration.text)
alert(results);
}
origin = 'Chicago, IL';
destination = 'Columbus, OH';
calculateRoute();
|
3a641c15aaf08f352de708963e9c5724
|
{
"intermediate": 0.446213960647583,
"beginner": 0.4601345360279083,
"expert": 0.09365155547857285
}
|
40,292
|
code an android app for me
|
49528d63454b8263d0df7fc7b75b8696
|
{
"intermediate": 0.29364681243896484,
"beginner": 0.17331455647945404,
"expert": 0.5330386757850647
}
|
40,293
|
I am making a C++ SDL game engine; help me solve some problems I am having. I am trying to write a template function but it is giving me errors:
Renderer.cpp: In member function ‘void Renderer::DrawRect(const Rect&, const Color&)’:
Renderer.cpp:155:24: error: expected primary-expression before ‘{’ token
155 | WithColor(color, & {
| ^
template <typename DrawFunction>
void Renderer::WithColor(const Color& color, DrawFunction draw)
{
Color oldColor = GetDrawColor();
SetDrawColor(color);
try
{
draw();
}
catch (const std::runtime_error& e)
{
SetDrawColor(oldColor);
throw;
}
SetDrawColor(oldColor);
}
void Renderer::DrawRect(const Rect& rect, const Color& color)
{
WithColor(color, & { // Error pointing here
DrawRect(rect);
});
}
|
5858fcb2eb28c74098e3bebb09be3a65
|
{
"intermediate": 0.43829596042633057,
"beginner": 0.3898368775844574,
"expert": 0.17186717689037323
}
|
40,294
|
How do I use google.maps.DirectionsService in my react app?
"use client";
import styles from "./page.module.css";
import { GoogleMapReact, APIProvider, Map, Marker } from '@vis.gl/react-google-maps';
const API_KEY = "AIzaSyAXRPZdbclbQ3V9D5KJaN7kMjRy9bhbA78";
// mine AIzaSyBKWpGumTcpzlMADzmAPHp8jySEuqGTcPY
// old AIzaSyAXRPZdbclbQ3V9D5KJaN7kMjRy9bhbA78
const position = { lat: 39.13332629741826, lng: -84.49875934472917 }; // Example position (Cincinnati, OH)
const zoom = 10;
const style = { height: '400px', width: '100%' };
export default function Home() {
return (
<main className={styles.main}>
<h1>Jerooba</h1>
<p>bottom text</p>
<Maps />
</main>
);
}
function Maps() {
return (
<APIProvider apiKey={API_KEY}>
<Map draggable={true} center={position} zoom={zoom} style={style}>
{/*<Marker position={{ lat: 39.13332629741826, lng: -84.49875934472917 }} />*/}
</Map>
</APIProvider>
);
}
|
418e258a37285587ec21a7fa551ca632
|
{
"intermediate": 0.7385764718055725,
"beginner": 0.20628072321414948,
"expert": 0.05514286458492279
}
|
40,295
|
I can't access google.maps.DirectionsService in my react app with this code.
import styles from "./page.module.css";
import { GoogleMapReact, APIProvider, Map, Marker } from '@vis.gl/react-google-maps';
const API_KEY = "AIzaSyAXRPZdbclbQ3V9D5KJaN7kMjRy9bhbA78";
// mine AIzaSyBKWpGumTcpzlMADzmAPHp8jySEuqGTcPY
// old AIzaSyAXRPZdbclbQ3V9D5KJaN7kMjRy9bhbA78
const position = { lat: 39.13332629741826, lng: -84.49875934472917 }; // Example position (Cincinnati, OH)
const zoom = 10;
const style = { height: '400px', width: '100%' };
export default function Home() {
return (
<main className={styles.main}>
<h1>Jerooba</h1>
<p>bottom text</p>
<Maps />
</main>
);
}
function Maps() {
return (
<APIProvider apiKey={API_KEY}>
<Map draggable={true} center={position} zoom={zoom} style={style}>
{/*<Marker position={{ lat: 39.13332629741826, lng: -84.49875934472917 }} />*/}
</Map>
</APIProvider>
);
}
let origin = '', destination = '';
async function calculateRoute() {
if (origin === '' || destination === '') {
return;
}
// eslint-disable-next-line no-undef
const directionsService = new google.maps.DirectionsService()
const results = await directionsService.route({
origin: origin,
destination: destination,
// eslint-disable-next-line no-undef
travelMode: google.maps.TravelMode.DRIVING,
})
setDirectionsResponse(results)
setDistance(results.routes[0].legs[0].distance.text)
setDuration(results.routes[0].legs[0].duration.text)
alert(results);
}
origin = 'Chicago, IL';
destination = 'Columbus, OH';
calculateRoute();
What else do I need to import to access this library?
|
4788061c92e5504ac0c36329de9011da
|
{
"intermediate": 0.6258735656738281,
"beginner": 0.24703052639961243,
"expert": 0.12709589302539825
}
|
40,296
|
Was Richard Stallman ever involved in a legal dispute?
|
483db6cf5cd133cfcb7c70e3b43dcb58
|
{
"intermediate": 0.33261793851852417,
"beginner": 0.3466113209724426,
"expert": 0.3207707107067108
}
|
40,297
|
<APIProvider apiKey={API_KEY}>
<Map
How do I add shapes to this Google Maps API in React?
|
b5477e57bf3b79f4a26b2a0eea8fd74a
|
{
"intermediate": 0.8227353692054749,
"beginner": 0.07441291213035583,
"expert": 0.10285172611474991
}
|
40,298
|
regress lwage jc totcoll exper
Source | SS df MS Number of obs = 6,763
-------------+---------------------------------- F(3, 6759) = 644.53
Model | 357.752575 3 119.250858 Prob > F = 0.0000
Residual | 1250.54352 6,759 .185019014 R-squared = 0.2224
-------------+---------------------------------- Adj R-squared = 0.2221
Total | 1608.29609 6,762 .237843255 Root MSE = .43014
------------------------------------------------------------------------------
lwage | Coefficient Std. err. t P>|t| [95% conf. interval]
-------------+----------------------------------------------------------------
jc | -.0101795 .0069359 -1.47 0.142 -.0237761 .003417
totcoll | .0768762 .0023087 33.30 0.000 .0723504 .0814021
exper | .0049442 .0001575 31.40 0.000 .0046355 .0052529
_cons | 1.472326 .0210602 69.91 0.000 1.431041 1.51361
|
63724711e8e65be757f4076d3b8f194f
|
{
"intermediate": 0.32030919194221497,
"beginner": 0.43854576349258423,
"expert": 0.241145059466362
}
|
40,299
|
In the context of a hypothetical, write the system boot log of a starship's computer from cold start to full operation... (It has a Linux/RTOS-style flavour to it.) Include timestamps for all entries.
|
9e84c72e685d182f2844f36d2487d880
|
{
"intermediate": 0.38554099202156067,
"beginner": 0.23853763937950134,
"expert": 0.3759213387966156
}
|
40,300
|
How do I log in to SQL from an npm React app using username/password login, and how do I add and get entries?
|
7b796d376a76e9047de76764eb2c14eb
|
{
"intermediate": 0.5642443299293518,
"beginner": 0.27338528633117676,
"expert": 0.16237038373947144
}
|
40,301
|
In the context of a hypothetical, write the system boot log for an AI-based ATM confessional booth...
|
a43864c5ce1004f70b18d3cb38acca13
|
{
"intermediate": 0.1468065083026886,
"beginner": 0.15446090698242188,
"expert": 0.6987326145172119
}
|
40,302
|
Your task is to implement the Salsa20/12 encryption algorithm, a variant of the Salsa20 cipher as specified by Daniel J. Bernstein. This implementation must support key sizes of 128-bit and 256-bit and, in addition, a non-standard 64-bit size.
The constant string "expand 08-byte k" is used for the 64-bit key, and the key is repeated 4 times to fill the initial state in the expansion function. Here is a formal definition of the non-standard 64-bit key Salsa20 expansion function: Define α0 = (101, 120, 112, 97), α1 = (110, 100, 32, 48), α2 = (56, 45, 98, 121), α3 = (116, 101, 32, 107). If k is an 8-byte sequence and n is a 16-byte sequence, then Salsa20_k(n) = Salsa20(α0, k, k, α1, n, α2, k, k, α3).
Your program must accept the following inputs from the command line: the key length (in bits), the key represented as a hexadecimal string, the nonce (IV) represented as a hexadecimal string, and the text string to be encrypted or decrypted, also in hexadecimal format. For encryption, this will be plaintext. For decryption, it will be ciphertext. The program's output should be the resulting ciphertext when performing encryption, or the original plaintext when performing decryption, in hexadecimal format. A command-line example:
#Input:
./your_prog 128 "deadbeefdeadbeefdeadbeefdeadbeef" \ "1234567890abcdef" "546869736973706c61696e74657874"
#Output:
"a1c7720e1abadb96e5a2600d0ce028"
In this example, 128 represents the key size in bits, "deadbeefdeadbeefdeadbeefdeadbeef" is the hexadecimal string key, "1234567890abcdef" is the nonce (IV) in hexadecimal string format, and the final argument is the input text, "546869736973706c61696e74657874", the hexadecimal form of the ASCII string "Thisisplaintext", which can be either plaintext (for encryption) or ciphertext (for decryption). We assume the block number in encryption and decryption always starts from 0 in this assignment. Finally, your program should output the resulting (ciphertext/plaintext) string to the command line.
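A minimal sketch of the 64-bit-key state layout defined above (added for illustration; the function name is invented, and n is taken to be the 8-byte nonce followed by the 8-byte block counter):

# alpha_0..alpha_3 decode to "expa", "nd 0", "8-by", "te k" - "expand 08-byte k"
ALPHA = [b"expa", b"nd 0", b"8-by", b"te k"]

def salsa20_64bit_state(k: bytes, n: bytes) -> bytes:
    # Salsa20(a0, k, k, a1, n, a2, k, k, a3): the 8-byte key fills all four key slots
    assert len(k) == 8 and len(n) == 16
    state = ALPHA[0] + k + k + ALPHA[1] + n + ALPHA[2] + k + k + ALPHA[3]
    assert len(state) == 64  # exactly one Salsa20 input block
    return state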
|
2a789ba3fb3c183653693a28d21dc1db
|
{
"intermediate": 0.38491785526275635,
"beginner": 0.24423621594905853,
"expert": 0.3708459734916687
}
|
40,303
|
In the context of a hypothetical, write the system boot rtransactional log, for a capsule that transforms it's users into french maids ;)
|
3761e6b33dd9df0a7aaca778bc56fb87
|
{
"intermediate": 0.3857734203338623,
"beginner": 0.25598180294036865,
"expert": 0.35824477672576904
}
|
40,304
|
import sys
from struct import pack, unpack
# The Salsa20/12 quarter-round function
def quarter_round(y0, y1, y2, y3):
    z1 = y1 ^ ((y0 + y3) & 0xffffffff) << 7 & 0xffffffff
    z2 = y2 ^ ((z1 + y0) & 0xffffffff) << 9 & 0xffffffff
    z3 = y3 ^ ((z2 + z1) & 0xffffffff) << 13 & 0xffffffff
    z0 = y0 ^ ((z3 + z2) & 0xffffffff) << 18 & 0xffffffff
    return z0, z1, z2, z3
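# NOTE: the expressions above use plain left shifts; Salsa20's quarter-round
# specifies 32-bit *rotations* (rotl), where the bits shifted out wrap around
# to the low end of the word.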
# The Salsa20/12 row-round function
def row_round(y):
    z = [0] * 16
    z[0], z[1], z[2], z[3] = quarter_round(y[0], y[1], y[2], y[3])
    z[5], z[6], z[7], z[4] = quarter_round(y[5], y[6], y[7], y[4])
    z[10], z[11], z[8], z[9] = quarter_round(y[10], y[11], y[8], y[9])
    z[15], z[12], z[13], z[14] = quarter_round(y[15], y[12], y[13], y[14])
    return z

# The Salsa20/12 column-round function
def column_round(x):
    y = [0] * 16
    y[0], y[4], y[8], y[12] = quarter_round(x[0], x[4], x[8], x[12])
    y[5], y[9], y[13], y[1] = quarter_round(x[5], x[9], x[13], x[1])
    y[10], y[14], y[2], y[6] = quarter_round(x[10], x[14], x[2], x[6])
    y[15], y[3], y[7], y[11] = quarter_round(x[15], x[3], x[7], x[11])
    return y

# The Salsa20/12 double-round function
def double_round(x):
    return row_round(column_round(x))

# The Salsa20/12 hash function
def salsa20_12_hash(x):
    z = list(x)  # Convert to list to allow modifications
    for i in range(6):  # 12 rounds --> 6 double-rounds
        z = double_round(z)
    return [(z[i] + x[i]) & 0xffffffff for i in range(16)]

# The Salsa20/12 expansion function
def salsa20_12_expand(key, nonce, block_index):
    # Constants defined by the Salsa20 algorithm
    sigma = b'expand 32-byte k' if len(key) == 32 else b'expand 16-byte k'
    # Split the key for the 16 byte version using the sigma constant
    if len(key) == 16:
        k0, k1 = key[:16], key[:16]
    else:
        k0, k1 = key[:16], key[16:]
    # Set up the block index (nonce expansion)
    n = pack('<Q', block_index)
    # Salsa20 state composition
    return (sigma[:4] + k0 + sigma[4:8] + nonce + n + sigma[8:12] + k1 + sigma[12:])

# Salsa20/12 encryption/decryption (identical operations due to XOR)
def salsa20_12_crypt(key, nonce, text):
    block_size = 64  # 64 bytes
    encrypted_text = b''
    num_blocks = (len(text) + block_size - 1) // block_size  # Calculate the number of blocks
    assert len(nonce) == 8, "Nonce must be 64 bits (8 bytes)"
    for block_count in range(num_blocks):
        block_start = block_count * block_size
        block_end = block_start + block_size
        block = text[block_start:block_end].ljust(block_size, b'\x00')
        # Expand key, nonce, and counter to the Salsa20 state
        state = salsa20_12_expand(key, nonce, block_count)
        keystream = salsa20_12_hash(unpack('<16I', state))
        # Serialize keystream back to bytes and truncate to the original block size
        keystream_bytes = b''.join(pack('<I', word) for word in keystream)[:block_size]
        # XOR the block with the keystream
        block_encrypted = bytes(a ^ b for a, b in zip(block, keystream_bytes[:len(block)]))
        encrypted_text += block_encrypted[:len(text[block_start:block_end])]
    return encrypted_text.hex()

if __name__ == '__main__':
    key_length = int(input("Enter the key length in bits (64, 128, or 256): "))
    key_hex = input("Enter the key (as a hex string): ")
    nonce_hex = input("Enter the nonce (as a hex string - 16 hex characters): ")
    text_hex = input("Enter the text (as a hex string for encryption/decryption): ")
    if key_length not in [64, 128, 256]:
        print("Invalid key length! Supported key lengths are 64, 128, and 256 bits.")
        sys.exit(1)
    key = bytes.fromhex(key_hex)
    nonce = bytes.fromhex(nonce_hex)
    text = bytes.fromhex(text_hex)
    # Ensure that the key and nonce have the correct length
    if len(key) not in {8, 16, 32}:  # Key must be 64, 128, or 256 bits (8, 16, or 32 bytes)
        print(f"Key must be {key_length} bits long (hex string of length {key_length // 4}).")
        sys.exit(1)
    if len(nonce) != 8:  # Nonce must be 64 bits (8 bytes)
        print("Nonce must be 64 bits long (hex string of length 16).")
        sys.exit(1)
    # Convert input to bytes and pad key if necessary
    key = bytes.fromhex(key_hex).rjust(key_length // 8, b'\x00')  # Pad key_bytes if not full length
    nonce = bytes.fromhex(nonce_hex)
    text = bytes.fromhex(text_hex)
    # Encrypt or decrypt the text
    result = salsa20_12_crypt(key, nonce, text)
    print("Result:", result)

Enter the key length in bits (64, 128, or 256): 128
Enter the key (as a hex string): deadbeefdeadbeefdeadbeefdeadbeef
Enter the nonce (as a hex string - 16 hex characters): 1234567890abcdef
Enter the text (as a hex string for encryption/decryption): 546869736973706c61696e74657874
Result: 9e9889c1d5a8e1a0dd320cd8d9a3d3

The correct output is a1c7720e1abadb96e5a2600d0ce028. I want the output to be correct, and I also want the program to take its input as command-line arguments, like: 128 "deadbeefdeadbeefdeadbeefdeadbeef" "1234567890abcdef" "546869736973706c61696e74657874"
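For reference, a minimal 32-bit rotate-left helper (an added sketch, not part of the original question); Salsa20's quarter-round combines each sum with rotl rather than a plain shift:

def rotl32(v, c):
    # rotate a 32-bit value left by c bits, wrapping the high bits to the low end
    v &= 0xffffffff
    return ((v << c) | (v >> (32 - c))) & 0xffffffff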
|
05f2259a5b4d56cfac791e33041804dc
|
{
"intermediate": 0.3024275600910187,
"beginner": 0.30623698234558105,
"expert": 0.3913354277610779
}
|
40,305
|
Why doesn't balls get called?
import {
Box,
Button,
Container,
FormControl,
Input,
TextField,
Typography,
} from "@mui/material";
import styles from "./page.module.css";
export default function Login() {
return (
<Container className={styles.main}>
<Container maxWidth="lg">
<Box
sx={{
my: 4,
display: "flex",
flexDirection: "column",
justifyContent: "center",
alignItems: "center",
}}
>
<Typography variant="h4" component="h1" sx={{ mb: 2 }}>
Login
</Typography>
<FormControl>
<TextField required type="email" label="Email" />
<TextField required type="password" label="Password" />
<Button variant="contained" type="submit" defaultValue="Login">
Login
</Button>
</FormControl>
</Box>
</Container>
<button onClick={balls()}>hi</button>
</Container>
);
}
function balls() {
console.log("hi");
}
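A hedged reading of the snippet above: balls does get called, just during render rather than on click. onClick={balls()} invokes the function immediately while the JSX is evaluated (function declarations are hoisted, so the later definition is visible) and assigns its return value, undefined, as the click handler, so clicking the button does nothing. Passing the function itself, onClick={balls}, or wrapping it, onClick={() => balls()}, wires it to the click event.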
|
20985837eea7d463cb19ab21f7525fe8
|
{
"intermediate": 0.2777162492275238,
"beginner": 0.6110928654670715,
"expert": 0.11119094491004944
}
|
40,306
|
Telnet remote login
|
59f75f941cc5baee9fb35c66d11f9ac7
|
{
"intermediate": 0.3183041214942932,
"beginner": 0.26277878880500793,
"expert": 0.41891711950302124
}
|
40,307
|
Create a JS function which automates posting on WP Calypso
|
8d3a307f7bb43c4cce7204269b9887fc
|
{
"intermediate": 0.37768006324768066,
"beginner": 0.27717307209968567,
"expert": 0.34514686465263367
}
|
40,308
|
Hi
|
2db868ee2b617431e8fa8b07a1aa9a99
|
{
"intermediate": 0.33010533452033997,
"beginner": 0.26984941959381104,
"expert": 0.400045245885849
}
|
40,309
|
Create JS functions for a WordPress.com post
|
734a9f37a0d4593876c373ab99ccf497
|
{
"intermediate": 0.21599121391773224,
"beginner": 0.6030819416046143,
"expert": 0.1809268742799759
}
|
40,310
|
import sys
from struct import pack, unpack

def rotl32(v, c):
    # 32-bit left rotation (the "<<<" in the Salsa20 spec); a plain shift would drop the high bits
    return ((v << c) | (v >> (32 - c))) & 0xffffffff

# The Salsa20/12 quarter-round function
def quarter_round(y0, y1, y2, y3):
    z1 = y1 ^ rotl32((y0 + y3) & 0xffffffff, 7)
    z2 = y2 ^ rotl32((z1 + y0) & 0xffffffff, 9)
    z3 = y3 ^ rotl32((z2 + z1) & 0xffffffff, 13)
    z0 = y0 ^ rotl32((z3 + z2) & 0xffffffff, 18)
    return z0, z1, z2, z3
# The Salsa20/12 row-round function
def row_round(y):
    z = [0] * 16
    z[0], z[1], z[2], z[3] = quarter_round(y[0], y[1], y[2], y[3])
    z[5], z[6], z[7], z[4] = quarter_round(y[5], y[6], y[7], y[4])
    z[10], z[11], z[8], z[9] = quarter_round(y[10], y[11], y[8], y[9])
    z[15], z[12], z[13], z[14] = quarter_round(y[15], y[12], y[13], y[14])
    return z

# The Salsa20/12 column-round function
def column_round(x):
    y = [0] * 16
    y[0], y[4], y[8], y[12] = quarter_round(x[0], x[4], x[8], x[12])
    y[5], y[9], y[13], y[1] = quarter_round(x[5], x[9], x[13], x[1])
    y[10], y[14], y[2], y[6] = quarter_round(x[10], x[14], x[2], x[6])
    y[15], y[3], y[7], y[11] = quarter_round(x[15], x[3], x[7], x[11])
    return y

# The Salsa20/12 double-round function
def double_round(x):
    return row_round(column_round(x))

# The Salsa20/12 hash function
def salsa20_12_hash(x):
    z = list(x)  # Convert to list to allow modifications
    for i in range(6):  # 12 rounds --> 6 double-rounds
        z = double_round(z)
    return [(z[i] + x[i]) & 0xffffffff for i in range(16)]

# The Salsa20/12 expansion function
def salsa20_12_expand(key, nonce, block_index):
    # Constants defined by the Salsa20 algorithm
    sigma = b'expand 32-byte k' if len(key) == 32 else b'expand 16-byte k'
    # Split the key for the 16 byte version using the sigma constant
    if len(key) == 16:
        k0, k1 = key[:16], key[:16]
    else:
        k0, k1 = key[:16], key[16:]
    # Set up the block index (nonce expansion)
    n = pack('<Q', block_index)
    # Salsa20 state composition
    return (sigma[:4] + k0 + sigma[4:8] + nonce + n + sigma[8:12] + k1 + sigma[12:])

# Salsa20/12 encryption/decryption (identical operations due to XOR)
def salsa20_12_crypt(key, nonce, text):
    block_size = 64  # 64 bytes
    encrypted_text = b''
    num_blocks = (len(text) + block_size - 1) // block_size  # Calculate the number of blocks
    assert len(nonce) == 8, "Nonce must be 64 bits (8 bytes)"
    for block_count in range(num_blocks):
        block_start = block_count * block_size
        block_end = block_start + block_size
        block = text[block_start:block_end].ljust(block_size, b'\x00')
        # Expand key, nonce, and counter to the Salsa20 state
        state = salsa20_12_expand(key, nonce, block_count)
        keystream = salsa20_12_hash(unpack('<16I', state))
        # Serialize keystream back to bytes and truncate to the original block size
        keystream_bytes = b''.join(pack('<I', word) for word in keystream)[:block_size]
        # XOR the block with the keystream
        block_encrypted = bytes(a ^ b for a, b in zip(block, keystream_bytes[:len(block)]))
        encrypted_text += block_encrypted[:len(text[block_start:block_end])]
    return encrypted_text.hex()
if __name__ == '__main__':
    key_length = int(input("Enter the key length in bits (128 or 256): "))
    # Note: standard Salsa20 does not support 64-bit keys, so that option was removed.
    key_hex = input("Enter the key (as a hex string): ")
    nonce_hex = input("Enter the nonce (as a hex string - 16 hex characters): ")
    text_hex = input("Enter the text (as a hex string for encryption/decryption): ")
    if key_length not in [128, 256]:
        print("Invalid key length! Supported key lengths are 128 and 256 bits.")
        sys.exit(1)
    key = bytes.fromhex(key_hex)
    nonce = bytes.fromhex(nonce_hex)
    text = bytes.fromhex(text_hex)
    # Ensure that the key and nonce have the correct length
    if len(key) not in {16, 32}:  # Key must be 128 or 256 bits (16 or 32 bytes)
        print(f"Key must be {key_length} bits long (hex string of length {key_length // 4}).")
        sys.exit(1)
    if len(nonce) != 8:  # Nonce must be 64 bits (8 bytes)
        print("Nonce must be 64 bits long (hex string of length 16).")
        sys.exit(1)
    # Encrypt or decrypt the text
    result = salsa20_12_crypt(key, nonce, text)
    print("Result:", result)
This implementation must support key sizes of 128-bit, 256-bit, and, in addition, a non-standard 64-bit. The constant string "expand 08-byte k" is used for the 64-bit key, and the key is repeated four times to fill the initial state in the expansion function. Here is a formal definition of the non-standard 64-bit-key Salsa20 expansion function: define α0 = (101, 120, 112, 97), α1 = (110, 100, 32, 48), α2 = (56, 45, 98, 121), α3 = (116, 101, 32, 107). If k is an 8-byte sequence and n is a 16-byte sequence, then
Salsa20_k(n) = Salsa20(α0, k, k, α1, n, α2, k, k, α3)
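A minimal sketch of that definition in the same style as the listing above (salsa20_12_expand_64 is a hypothetical helper name, and pack comes from the struct import at the top of the listing):

from struct import pack

def salsa20_12_expand_64(key, nonce, block_index):
    # Non-standard 64-bit key: the 8-byte key is repeated four times and the
    # constant "expand 08-byte k" (alpha0..alpha3) is split across the state
    assert len(key) == 8, "64-bit variant expects an 8-byte key"
    sigma = b'expand 08-byte k'
    n = pack('<Q', block_index)
    # State layout: alpha0, k, k, alpha1, nonce||counter, alpha2, k, k, alpha3
    return (sigma[:4] + key + key + sigma[4:8] +
            nonce + n +
            sigma[8:12] + key + key + sigma[12:])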
Your program must accept the following inputs from the command line: the key length (in bits), the key represented as a hexadecimal string, the nonce (IV) represented as a hexadecimal string, and the text string to be encrypted or decrypted, also in hexadecimal format. For encryption, this will be plaintext; for decryption, it will be ciphertext. The program's output should be the resulting ciphertext when performing encryption or the original plaintext when performing decryption, again in hexadecimal format. A command-line example:
#Input:
./your_prog 128 "deadbeefdeadbeefdeadbeefdeadbeef" "1234567890abcdef" "546869736973706c61696e74657874"
#Output:
"a1c7720e1abadb96e5a2600d0ce028"
In this example, 128 represents the key size in bits, "deadbeefdeadbeefdeadbeefdeadbeef" is the hexadecimal string key, "1234567890abcdef" is the nonce (IV) in hexadecimal string format, and the final argument is the input text, "546869736973706c61696e74657874", the hexadecimal format of the ASCII string "Thisisplaintext", which can be either plaintext (for encryption) or ciphertext (for decryption). We assume the block number in the encryption and decryption always starts from 0 in this assignment. Eventually, your program should output the resulting ciphertext/plaintext string to the command line. Recommended implementation steps:
• Read the specification thoroughly. Pay special attention to the algorithm’s core functions.
• Implement the quarterround function first. This function is the basic building block of the Salsa20 cipher.
• Then, implement the rowround and columnround function on top of that quarterround function.
• Implement the doubleround function based on rowround and columnround.
• Implement the Salsa20 hash function based on doubleround function.
• Implement the expansion function for different key sizes. Pay attention to the different values used for the different keys.
• Implement the encryption/decryption that can handle the command-line inputs correctly.
• Test the correctness of each function using the inputs/outputs inside the specification; confirm it is correct before moving on to the next function.
• Implement the standard Salsa20/20 in the specification, make sure it works as expected, then you can change it to Salsa20/12 without much effort.
• Then make the necessary modification to support the non-standard 64-bit key.
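To connect the specification to the listing above, a hedged sketch of the key-length dispatch (expand_for_key is a hypothetical wrapper name; salsa20_12_crypt would call it in place of salsa20_12_expand, salsa20_12_expand_64 is the sketch given after the formal definition, and sys.argv would replace the input() prompts to satisfy the command-line requirement):

import sys

def expand_for_key(key, nonce, block_index):
    # 8-byte key -> non-standard 64-bit variant; 16/32 bytes -> standard expansion
    if len(key) == 8:
        return salsa20_12_expand_64(key, nonce, block_index)
    return salsa20_12_expand(key, nonce, block_index)

if __name__ == '__main__':
    # ./your_prog <key_bits> <key_hex> <nonce_hex> <text_hex>
    _, key_bits, key_hex, nonce_hex, text_hex = sys.argv
    print(salsa20_12_crypt(bytes.fromhex(key_hex), bytes.fromhex(nonce_hex), bytes.fromhex(text_hex)))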
|
17c60ff7654cbc9b5192cb169ae85378
|
{
"intermediate": 0.30732807517051697,
"beginner": 0.3043469190597534,
"expert": 0.3883249759674072
}
|
40,311
|
# Salsa20/12 Implementation
## Author
Name: Apoorv Sambhaji Kadam
B-number: B00978384
## Language
This project is implemented in Python.
## Execution
To execute the program, use the following command:
python3 main.py [key_length] [key_hex] [nonce_hex] [text_hex]
Replace [key_length], [key_hex], [nonce_hex], and [text_hex] with the desired values.
For example:
python3 main.py 128 "deadbeefdeadbeefdeadbeefdeadbeef" "1234567890abcdef" "546869736973706c61696e74657874"
## Status
The program is fully complete and implements the Salsa20/12 encryption algorithm.
## Description
The Salsa20/12 encryption algorithm is structured in multiple functions each performing a specific task such as the quarter-round, row-round, column-round, and double-round to hash the state. The expand function is used to create a 64-byte state from the 16-byte key, 8-byte nonce, and 8-byte block index. The crypt function performs the encryption/decryption by repeatedly hashing the state to produce a keystream, which is XORed with the plaintext/ciphertext.
## Testing
Test cases have been executed using provided and known test vectors. All test cases are passing and provide the expected output.
Provide a log of test cases which work and which don't work.
|
721d75f84b20de5784fd108ab2b9e3c0
|
{
"intermediate": 0.3248838782310486,
"beginner": 0.28480494022369385,
"expert": 0.39031118154525757
}
|
40,312
|
If the Pokemon Poipole was a creature in a hypothetical D&D 5e setting/homebrew, what would its statblock look like?
|
60c3393a87de61a7c64292d9b761c2ee
|
{
"intermediate": 0.36943501234054565,
"beginner": 0.32218417525291443,
"expert": 0.3083808124065399
}
|
40,313
|
# Label new item column
dataset2 = dataset2.with_columns([pl.lit("New Item").alias('Item Type')])
# Merge dataset2 and dataset3 and label which materialID as low volume or new item
merged_dataset = dataset3.join(dataset2.select(['MaterialID', 'SalesOrg', 'DistrChan', 'CL4', 'Item Type']), on=['MaterialID', 'SalesOrg', 'DistrChan', 'CL4'], how='left')
merged_dataset = merged_dataset.with_columns([pl.when(pl.col('Item Type').is_null()).then(pl.lit('Low Volume')).otherwise(pl.col('Item Type')).alias('Item Type')])
# Merge merged_dataset with dataset1
dataset = dataset1.join(merged_dataset.select(['MaterialID', 'SalesOrg', 'DistrChan', 'CL4', 'Item Type']), on=['MaterialID', 'SalesOrg', 'DistrChan', 'CL4'], how='left')
dataset = dataset.with_columns([pl.when(pl.col('Item Type').is_null()).then(pl.lit('Others')).otherwise(pl.col('Item Type')).alias('Item Type')])
dataset.head()

MaterialID SalesOrg DistrChan SoldTo DC WeekDate OrderQuantity DeliveryQuantity ParentProductCode PL2 PL3 PL4 PL5 CL4 Item Type
i64 str i64 i64 str str f64 f64 i64 str str str str str str
12421256 "US01" 8 5932810 "5583" "2022-03-28" 1.0 1.0 12421256 "US6" "US66F" "US66F6F30" "US66F6F306F30F…" "6067437" "Others"
12421256 "US01" 8 5946860 "5583" "2022-03-14" 3.0 3.0 12421256 "US6" "US66F" "US66F6F30" "US66F6F306F30F…" "6067437" "Others"
12421256 "US01" 8 5952162 "5583" "2022-03-14" 1.0 1.0 12421256 "US6" "US66F" "US66F6F30" "US66F6F306F30F…" "6067437" "Others"
12421161 "US01" 8 5958951 "5583" "2022-03-07" 1.0 1.0 12421161 "US6" "US66J" "US66J6J54" "US66J6J546J54L…" "6067465" "Others"
12531761 "US06" 10 6088109 "5843" "2022-03-07" 60.0 60.0 12242722 "USI" "USII6" "USII6I689" "USII6I689I689E…" "6635338" "Others"

# Filter for 'New Item' items
dataset_newitem = dataset.filter(pl.col('Item Type') == 'New Item')

# Convert 'WeekDate' to datetime format
dataset_newitem = dataset_newitem.with_columns(
    pl.col("WeekDate").str.strptime(pl.Datetime, "%Y-%m-%d")
)

# Group by 'MaterialID', 'SalesOrg', 'DistrChan', 'CL4' and 'WeekDate', then sum 'OrderQuantity'
y_cl4 = dataset_newitem.groupby(['MaterialID', 'SalesOrg', 'DistrChan', 'CL4', 'WeekDate']).agg(
    pl.sum("OrderQuantity").alias("OrderQuantity")
)

# Sort by 'WeekDate'
y_cl4 = y_cl4.sort("WeekDate")

# Concatenate 'MaterialID', 'SalesOrg', 'DistrChan', 'CL4' into a new column 'unique_id'
y_cl4 = y_cl4.with_columns(
    pl.concat_str([pl.col('MaterialID'), pl.col('SalesOrg'), pl.col('DistrChan'), pl.col('CL4')], separator='_').alias('unique_id')
)

# Drop the original columns
y_cl4 = y_cl4.drop(['MaterialID', 'SalesOrg', 'DistrChan', 'CL4'])
y_cl4 = y_cl4.rename({'WeekDate': 'ds', 'OrderQuantity': 'y'})

# Group by 'unique_id' and calculate the length of each group
lengths = y_cl4.groupby('unique_id').agg(pl.count().alias('length'))

# Count the occurrences of each length
counts = lengths.groupby('length').agg(pl.count().alias('count'))
counts = counts.sort('length')
pl.Config.set_tbl_rows(200)
print(counts)

│ length ┆ count │
│ --- ┆ --- │
│ u32 ┆ u32 │
╞════════╪═══════╡
│ 1 ┆ 1942 │
│ 2 ┆ 357 │
│ 3 ┆ 157 │
│ 4 ┆ 107 │
│ 5 ┆ 74 │
│ 6 ┆ 40 │
│ 7 ┆ 48 │
│ 8 ┆ 37 │
│ 9 ┆ 39 │
│ 10 ┆ 47 │
│ 11 ┆ 54 │
│ 12 ┆ 36 │
│ 13 ┆ 35 │
│ 14 ┆ 43 │
│ 15 ┆ 47 │
│ 16 ┆ 45 │
│ 17 ┆ 36 │
│ 18 ┆ 37 │
│ 19 ┆ 51 │
│ 20 ┆ 35 │
│ 21 ┆ 41 │
│ 22 ┆ 29 │
│ 23 ┆ 26 │
│ 24 ┆ 33 │
│ 25 ┆ 35 │
│ 26 ┆ 41 │
│ 27 ┆ 39 │
│ 28 ┆ 34 │
│ 29 ┆ 37 │
│ 30 ┆ 31 │
│ 31 ┆ 32 │
│ 32 ┆ 26 │
│ 33 ┆ 30 │
│ 34 ┆ 22 │
│ 35 ┆ 39 │
│ 36 ┆ 32 │
│ 37 ┆ 32 │
│ 38 ┆ 33 │
│ 39 ┆ 37 │
│ 40 ┆ 34 │
│ 41 ┆ 24 │
│ 42 ┆ 22 │
│ 43 ┆ 17 │
│ 44 ┆ 18 │
│ 45 ┆ 13 │
│ 46 ┆ 10 │
│ 47 ┆ 18 │
│ 48 ┆ 15 │
│ 49 ┆ 17 │
│ 50 ┆ 12 │
│ 51 ┆ 15 │
│ 52 ┆ 10 │
│ 53 ┆ 11 │
│ 54 ┆ 6 │
│ 55 ┆ 9 │
│ 56 ┆ 7 │
│ 57 ┆ 11 │
│ 58 ┆ 11 │
│ 59 ┆ 9 │
│ 60 ┆ 13 │
│ 61 ┆ 14 │
│ 62 ┆ 7 │
│ 63 ┆ 5 │
│ 64 ┆ 3 │
│ 65 ┆ 6 │
│ 66 ┆ 6 │
│ 67 ┆ 5 │
│ 68 ┆ 11 │
│ 69 ┆ 7 │
│ 70 ┆ 4 │
│ 71 ┆ 2 │
│ 72 ┆ 4 │
│ 73 ┆ 3 │
│ 74 ┆ 1 │
└────────┴───────┘

# Filter the lengths DataFrame for lengths greater than 25
lengths_filtered = lengths.filter(pl.col('length') > 25)

# Keep only the unique_ids whose series length is greater than 25
y_cl4_filtered = y_cl4.join(
    lengths_filtered.select(pl.col('unique_id')),
    on='unique_id',
    how='semi'
)

# Sort by 'ds'
y_cl4_filtered = y_cl4_filtered.sort("ds")
print(y_cl4_filtered)
# Group by ‘unique_id’ and calculate the length of each group
lengths = y_cl4_filtered.groupby('unique_id').agg(pl.count().alias('length'))
# Count the occurrences of each length
counts = lengths.groupby('length').agg(pl.count().alias('count'))
counts = counts.sort('length')
pl.Config.set_tbl_rows(200)
print(counts) │ length ┆ count │
│ --- ┆ --- │
│ u32 ┆ u32 │
╞════════╪═══════╡
│ 26 ┆ 41 │
│ 27 ┆ 39 │
│ 28 ┆ 34 │
│ 29 ┆ 37 │
│ 30 ┆ 31 │
│ 31 ┆ 32 │
│ 32 ┆ 26 │
│ 33 ┆ 30 │
│ 34 ┆ 22 │
│ 35 ┆ 39 │
│ 36 ┆ 32 │
│ 37 ┆ 32 │
│ 38 ┆ 33 │
│ 39 ┆ 37 │
│ 40 ┆ 34 │
│ 41 ┆ 24 │
│ 42 ┆ 22 │
│ 43 ┆ 17 │
│ 44 ┆ 18 │
│ 45 ┆ 13 │
│ 46 ┆ 10 │
│ 47 ┆ 18 │
│ 48 ┆ 15 │
│ 49 ┆ 17 │
│ 50 ┆ 12 │
│ 51 ┆ 15 │
│ 52 ┆ 10 │
│ 53 ┆ 11 │
│ 54 ┆ 6 │
│ 55 ┆ 9 │
│ 56 ┆ 7 │
│ 57 ┆ 11 │
│ 58 ┆ 11 │
│ 59 ┆ 9 │
│ 60 ┆ 13 │
│ 61 ┆ 14 │
│ 62 ┆ 7 │
│ 63 ┆ 5 │
│ 64 ┆ 3 │
│ 65 ┆ 6 │
│ 66 ┆ 6 │
│ 67 ┆ 5 │
│ 68 ┆ 11 │
│ 69 ┆ 7 │
│ 70 ┆ 4 │
│ 71 ┆ 2 │
│ 72 ┆ 4 │
│ 73 ┆ 3 │
│ 74 ┆ 1 │
└────────┴───────┘

from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA, AutoETS, AutoCES, DynamicOptimizedTheta
from statsforecast.utils import ConformalIntervals
import numpy as np
import polars as pl

# Polars option to display all rows
pl.Config.set_tbl_rows(None)

# Initialize the models
models = [
    AutoARIMA(season_length=12),
    AutoETS(season_length=12),
    AutoCES(season_length=12),
    DynamicOptimizedTheta(season_length=12)
]

# Initialize the StatsForecast model
sf = StatsForecast(models=models, freq='1w', n_jobs=-1)
# Perform cross-validation with a step size of 1 to mimic an expanding window
crossvalidation_df = sf.cross_validation(df=y_cl4_filtered, h=4, step_size=1, n_windows=9, sort_df=True)
# Calculate the ensemble mean
ensemble = crossvalidation_df[['AutoARIMA', 'AutoETS', 'CES', 'DynamicOptimizedTheta']].mean(axis=1)
# Create a Series for the ensemble mean
ensemble_series = pl.Series('Ensemble', ensemble)
# Add the ensemble mean as a new column to the DataFrame
crossvalidation_df = crossvalidation_df.with_columns(ensemble_series)
def wmape(y_true, y_pred):
    return np.abs(y_true - y_pred).sum() / np.abs(y_true).sum()
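As a quick sanity check on the metric, a toy example with made-up numbers (not from the dataset):

# actuals 100, 200, 50 vs forecasts 90, 210, 60:
# total absolute error 30 over total actuals 350
print(wmape(np.array([100.0, 200.0, 50.0]), np.array([90.0, 210.0, 60.0])))  # ~0.0857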
# Calculate the WMAPE for the ensemble model
wmape_value = wmape(crossvalidation_df['y'], crossvalidation_df['Ensemble'])
print('Average WMAPE for Ensemble: ', round(wmape_value, 4))
# Calculate the errors for the ensemble model
errors = crossvalidation_df['y'] - crossvalidation_df['Ensemble']
# For an individual forecast
individual_accuracy = 1 - (abs(crossvalidation_df['y'] - crossvalidation_df['Ensemble']) / crossvalidation_df['y'])
individual_bias = (crossvalidation_df['Ensemble'] / crossvalidation_df['y']) - 1
# Add these calculations as new columns to DataFrame
crossvalidation_df = crossvalidation_df.with_columns([
    individual_accuracy.alias("individual_accuracy"),
    individual_bias.alias("individual_bias")
])

# Print the individual accuracy and bias for each week
for row in crossvalidation_df.to_dicts():
    id = row['unique_id']
    date = row['ds']
    accuracy = row['individual_accuracy']
    bias = row['individual_bias']
    print(f"{id}, {date}, Individual Accuracy: {accuracy:.4f}, Individual Bias: {bias:.4f}")
# For groups of forecasts
group_accuracy = 1 - (errors.abs().sum() / crossvalidation_df['y'].sum())
group_bias = (crossvalidation_df['Ensemble'].sum() / crossvalidation_df['y'].sum()) - 1
# Print the average group accuracy and group bias over all folds for the ensemble model
print('Average Group Accuracy: ', round(group_accuracy, 4))
print('Average Group Bias: ', round(group_bias, 4))
# Fit the models on the entire dataset
sf.fit(y_cl4_filtered)
# Instantiate the ConformalIntervals class
prediction_intervals = ConformalIntervals()
# Generate 24 months forecasts
forecasts_df = sf.forecast(h=52*2, prediction_intervals=prediction_intervals, level=[95], id_col='unique_id', sort_df=True)
# Apply the non-negative constraint to the forecasts of individual models
forecasts_df = forecasts_df.with_columns([
    pl.when(pl.col('AutoARIMA') < 0).then(0).otherwise(pl.col('AutoARIMA')).alias('AutoARIMA'),
    pl.when(pl.col('AutoETS') < 0).then(0).otherwise(pl.col('AutoETS')).alias('AutoETS'),
    pl.when(pl.col('CES') < 0).then(0).otherwise(pl.col('CES')).alias('CES'),
    pl.when(pl.col('DynamicOptimizedTheta') < 0).then(0).otherwise(pl.col('DynamicOptimizedTheta')).alias('DynamicOptimizedTheta'),
])

# Calculate the ensemble forecast
ensemble_forecast = forecasts_df[['AutoARIMA', 'AutoETS', 'CES', 'DynamicOptimizedTheta']].mean(axis=1)

# Calculate the lower and upper prediction intervals for the ensemble forecast
ensemble_lo_95 = forecasts_df.select([
    pl.when(pl.col('AutoARIMA-lo-95') < 0).then(0).otherwise(pl.col('AutoARIMA-lo-95')).alias('AutoARIMA-lo-95'),
    pl.when(pl.col('AutoETS-lo-95') < 0).then(0).otherwise(pl.col('AutoETS-lo-95')).alias('AutoETS-lo-95'),
    pl.when(pl.col('CES-lo-95') < 0).then(0).otherwise(pl.col('CES-lo-95')).alias('CES-lo-95'),
    pl.when(pl.col('DynamicOptimizedTheta-lo-95') < 0).then(0).otherwise(pl.col('DynamicOptimizedTheta-lo-95')).alias('DynamicOptimizedTheta-lo-95'),
]).mean(axis=1)
ensemble_hi_95 = forecasts_df[['AutoARIMA-hi-95', 'AutoETS-hi-95', 'CES-hi-95', 'DynamicOptimizedTheta-hi-95']].mean(axis=1)
# Create Series for the ensemble forecast and its prediction intervals
ensemble_forecast_series = pl.Series('EnsembleForecast', ensemble_forecast)
ensemble_lo_95_series = pl.Series('Ensemble-lo-95', ensemble_lo_95)
ensemble_hi_95_series = pl.Series('Ensemble-hi-95', ensemble_hi_95)
# Add the ensemble forecast and its prediction intervals as new columns to the DataFrame
forecasts_df = forecasts_df.with_columns([ensemble_forecast_series, ensemble_lo_95_series, ensemble_hi_95_series])
# Round the ensemble forecast and prediction intervals and convert to integer
forecasts_df = forecasts_df.with_columns([
    pl.col("EnsembleForecast").round().cast(pl.Int32),
    pl.col("Ensemble-lo-95").round().cast(pl.Int32),
    pl.col("Ensemble-hi-95").round().cast(pl.Int32)
])

# Reorder the columns
forecasts_df = forecasts_df.select([
    "unique_id",
    "ds",
    "EnsembleForecast",
    "Ensemble-lo-95",
    "Ensemble-hi-95",
    "AutoARIMA",
    "AutoARIMA-lo-95",
    "AutoARIMA-hi-95",
    "AutoETS",
    "AutoETS-lo-95",
    "AutoETS-hi-95",
    "CES",
    "CES-lo-95",
    "CES-hi-95",
    "DynamicOptimizedTheta",
    "DynamicOptimizedTheta-lo-95",
    "DynamicOptimizedTheta-hi-95"
])

# Create an empty list
forecasts_list = []

# Append each row to the list
for row in forecasts_df.to_dicts():
    forecasts_list.append(row)

# Print the list
for forecast in forecasts_list:
    print(forecast)

<frozen importlib._bootstrap>:241: RuntimeWarning: scipy._lib.messagestream.MessageStream size changed, may indicate binary incompatibility. Expected 56 from C header, got 64 from PyObject
(the warning above is emitted once per worker process at import time)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
  sigma2 = np.sum(e**2) / (ny - np_ - 1)
(the warning above repeats throughout the cross-validation run)
Cross Validation Time Series 1: 100%|██████████| 9/9 [00:38<00:00, 4.27s/it]
...
Cross Validation Time Series 29: 100%|██████████| 9/9 [00:03<00:00, 2.80it/s]
(interleaved tqdm progress bars from the parallel workers; all 9 windows complete for each series)
Cross Validation Time Series 22: 100%|██████████| 9/9 [00:02<00:00, 3.29it/s]
Cross Validation Time Series 19: 100%|██████████| 9/9 [00:04<00:00, 2.06it/s]
Cross Validation Time Series 23: 100%|██████████| 9/9 [00:02<00:00, 4.00it/s]
Cross Validation Time Series 21: 100%|██████████| 9/9 [00:09<00:00, 1.08s/it]
Cross Validation Time Series 20: 100%|██████████| 9/9 [00:07<00:00, 1.27it/s]
Cross Validation Time Series 21: 100%|██████████| 9/9 [00:06<00:00, 1.39it/s]
Cross Validation Time Series 23: 100%|██████████| 9/9 [00:08<00:00, 1.05it/s]
Cross Validation Time Series 12: 100%|██████████| 9/9 [00:10<00:00, 1.17s/it]
Cross Validation Time Series 22: 100%|██████████| 9/9 [00:02<00:00, 4.23it/s]
Cross Validation Time Series 22: 33%|███▎ | 3/9 [00:01<00:02, 2.52it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 13: 11%|█ | 1/9 [00:01<00:08, 1.05s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 24: 100%|██████████| 9/9 [00:04<00:00, 1.97it/s]
Cross Validation Time Series 30: 100%|██████████| 9/9 [00:09<00:00, 1.08s/it]
Cross Validation Time Series 23: 100%|██████████| 9/9 [00:02<00:00, 3.68it/s]
Cross Validation Time Series 22: 100%|██████████| 9/9 [00:04<00:00, 2.13it/s]
Cross Validation Time Series 21: 100%|██████████| 9/9 [00:05<00:00, 1.60it/s]
Cross Validation Time Series 24: 67%|██████▋ | 6/9 [00:01<00:00, 5.89it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 24: 89%|████████▉ | 8/9 [00:01<00:00, 5.36it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 24: 100%|██████████| 9/9 [00:01<00:00, 4.94it/s]
Cross Validation Time Series 20: 100%|██████████| 9/9 [00:07<00:00, 1.15it/s]
Cross Validation Time Series 21: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 24: 100%|██████████| 9/9 [00:05<00:00, 1.70it/s]
Cross Validation Time Series 25: 100%|██████████| 9/9 [00:05<00:00, 1.71it/s]
Cross Validation Time Series 21: 89%|████████▉ | 8/9 [00:03<00:00, 2.05it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 22: 100%|██████████| 9/9 [00:04<00:00, 1.90it/s]
Cross Validation Time Series 21: 100%|██████████| 9/9 [00:04<00:00, 2.23it/s]
Cross Validation Time Series 23: 89%|████████▉ | 8/9 [00:05<00:00, 1.25it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 25: 100%|██████████| 9/9 [00:04<00:00, 2.14it/s]
Cross Validation Time Series 23: 22%|██▏ | 2/9 [00:00<00:01, 4.13it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 31: 100%|██████████| 9/9 [00:07<00:00, 1.26it/s]
Cross Validation Time Series 32: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 23: 100%|██████████| 9/9 [00:05<00:00, 1.59it/s]
Cross Validation Time Series 26: 44%|████▍ | 4/9 [00:02<00:02, 1.89it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 26: 44%|████▍ | 4/9 [00:00<00:00, 6.55it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 26: 67%|██████▋ | 6/9 [00:00<00:00, 6.52it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 13: 100%|██████████| 9/9 [00:10<00:00, 1.13s/it]
Cross Validation Time Series 26: 78%|███████▊ | 7/9 [00:01<00:00, 6.19it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 26: 100%|██████████| 9/9 [00:01<00:00, 6.17it/s]
Cross Validation Time Series 23: 100%|██████████| 9/9 [00:02<00:00, 3.92it/s]
Cross Validation Time Series 26: 100%|██████████| 9/9 [00:04<00:00, 2.10it/s]
Cross Validation Time Series 25: 100%|██████████| 9/9 [00:06<00:00, 1.35it/s]
Cross Validation Time Series 32: 100%|██████████| 9/9 [00:02<00:00, 3.16it/s]
Cross Validation Time Series 27: 100%|██████████| 9/9 [00:02<00:00, 3.72it/s]
Cross Validation Time Series 27: 56%|█████▌ | 5/9 [00:01<00:01, 3.30it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 14: 89%|████████▉ | 8/9 [00:03<00:00, 2.41it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 24: 56%|█████▌ | 5/9 [00:04<00:03, 1.04it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 14: 100%|██████████| 9/9 [00:03<00:00, 2.39it/s]
Cross Validation Time Series 27: 100%|██████████| 9/9 [00:02<00:00, 3.15it/s]
Cross Validation Time Series 24: 100%|██████████| 9/9 [00:04<00:00, 2.13it/s]
Cross Validation Time Series 28: 100%|██████████| 9/9 [00:02<00:00, 3.10it/s]
Cross Validation Time Series 26: 100%|██████████| 9/9 [00:04<00:00, 2.13it/s]
Cross Validation Time Series 25: 100%|██████████| 9/9 [00:02<00:00, 3.82it/s]
Cross Validation Time Series 24: 100%|██████████| 9/9 [00:08<00:00, 1.03it/s]
Cross Validation Time Series 27: 100%|██████████| 9/9 [00:02<00:00, 3.27it/s]
Cross Validation Time Series 29: 100%|██████████| 9/9 [00:04<00:00, 1.92it/s]
Cross Validation Time Series 28: 56%|█████▌ | 5/9 [00:06<00:04, 1.24s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 26: 100%|██████████| 9/9 [00:03<00:00, 2.63it/s]
Cross Validation Time Series 22: 100%|██████████| 9/9 [00:13<00:00, 1.52s/it]
Cross Validation Time Series 25: 89%|████████▉ | 8/9 [00:04<00:00, 1.07it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 33: 100%|██████████| 9/9 [00:10<00:00, 1.17s/it]
Cross Validation Time Series 30: 33%|███▎ | 3/9 [00:02<00:05, 1.11it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 15: 89%|████████▉ | 8/9 [00:09<00:01, 1.34s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 25: 100%|██████████| 9/9 [00:05<00:00, 1.68it/s]
Cross Validation Time Series 15: 100%|██████████| 9/9 [00:10<00:00, 1.15s/it]
Cross Validation Time Series 30: 100%|██████████| 9/9 [00:04<00:00, 1.95it/s]
Cross Validation Time Series 27: 100%|██████████| 9/9 [00:04<00:00, 2.12it/s]
Cross Validation Time Series 23: 100%|██████████| 9/9 [00:03<00:00, 2.72it/s]
Cross Validation Time Series 28: 100%|██████████| 9/9 [00:13<00:00, 1.54s/it]
Cross Validation Time Series 28: 100%|██████████| 9/9 [00:09<00:00, 1.10s/it]
Cross Validation Time Series 34: 100%|██████████| 9/9 [00:06<00:00, 1.31it/s]
Cross Validation Time Series 24: 100%|██████████| 9/9 [00:05<00:00, 1.61it/s]
Cross Validation Time Series 29: 100%|██████████| 9/9 [00:02<00:00, 3.02it/s]
Cross Validation Time Series 16: 100%|██████████| 9/9 [00:09<00:00, 1.06s/it]
Cross Validation Time Series 25: 100%|██████████| 9/9 [00:02<00:00, 3.81it/s]
Cross Validation Time Series 26: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 28: 89%|████████▉ | 8/9 [00:08<00:01, 1.01s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 26: 100%|██████████| 9/9 [00:10<00:00, 1.21s/it]
Cross Validation Time Series 30: 100%|██████████| 9/9 [00:03<00:00, 2.54it/s]
Cross Validation Time Series 28: 100%|██████████| 9/9 [00:09<00:00, 1.11s/it]
Cross Validation Time Series 31: 100%|██████████| 9/9 [00:10<00:00, 1.16s/it]
Cross Validation Time Series 29: 100%|██████████| 9/9 [00:07<00:00, 1.22it/s]
Cross Validation Time Series 17: 100%|██████████| 9/9 [00:02<00:00, 4.28it/s]
Cross Validation Time Series 31: 22%|██▏ | 2/9 [00:00<00:03, 2.08it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 32: 100%|██████████| 9/9 [00:01<00:00, 5.01it/s]
Cross Validation Time Series 31: 100%|██████████| 9/9 [00:02<00:00, 3.28it/s]
Cross Validation Time Series 33: 33%|███▎ | 3/9 [00:00<00:01, 4.94it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 29: 100%|██████████| 9/9 [00:02<00:00, 3.28it/s]
Cross Validation Time Series 18: 100%|██████████| 9/9 [00:02<00:00, 3.81it/s]
Cross Validation Time Series 19: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 33: 67%|██████▋ | 6/9 [00:01<00:00, 4.99it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 27: 100%|██████████| 9/9 [00:05<00:00, 1.78it/s]
Cross Validation Time Series 32: 100%|██████████| 9/9 [00:01<00:00, 4.58it/s]
Cross Validation Time Series 33: 100%|██████████| 9/9 [00:03<00:00, 2.82it/s]
Cross Validation Time Series 26: 100%|██████████| 9/9 [00:09<00:00, 1.07s/it]
Cross Validation Time Series 30: 44%|████▍ | 4/9 [00:07<00:10, 2.02s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 28: 100%|██████████| 9/9 [00:04<00:00, 2.07it/s]
Cross Validation Time Series 34: 44%|████▍ | 4/9 [00:03<00:03, 1.29it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 33: 100%|██████████| 9/9 [00:04<00:00, 1.90it/s]
Cross Validation Time Series 34: 56%|█████▌ | 5/9 [00:03<00:02, 1.50it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 27: 100%|██████████| 9/9 [00:02<00:00, 3.45it/s]
Cross Validation Time Series 34: 89%|████████▉ | 8/9 [00:05<00:00, 1.73it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 28: 33%|███▎ | 3/9 [00:00<00:00, 8.40it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 34: 100%|██████████| 9/9 [00:06<00:00, 1.49it/s]
Cross Validation Time Series 19: 100%|██████████| 9/9 [00:08<00:00, 1.06it/s]
Cross Validation Time Series 30: 100%|██████████| 9/9 [00:11<00:00, 1.24s/it]
Cross Validation Time Series 31: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 35: 100%|██████████| 9/9 [00:17<00:00, 1.95s/it]
Cross Validation Time Series 28: 100%|██████████| 9/9 [00:01<00:00, 4.66it/s]
Cross Validation Time Series 20: 100%|██████████| 9/9 [00:02<00:00, 3.01it/s]
Cross Validation Time Series 30: 100%|██████████| 9/9 [00:12<00:00, 1.37s/it]
Cross Validation Time Series 21: 33%|███▎ | 3/9 [00:01<00:02, 2.86it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 31: 33%|███▎ | 3/9 [00:00<00:00, 6.28it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 31: 44%|████▍ | 4/9 [00:00<00:00, 6.59it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 34: 100%|██████████| 9/9 [00:06<00:00, 1.29it/s]
Cross Validation Time Series 35: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 31: 44%|████▍ | 4/9 [00:05<00:06, 1.28s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 35: 33%|███▎ | 3/9 [00:00<00:00, 6.54it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 36: 100%|██████████| 9/9 [00:04<00:00, 1.88it/s]
Cross Validation Time Series 31: 100%|██████████| 9/9 [00:01<00:00, 4.58it/s]
Cross Validation Time Series 35: 100%|██████████| 9/9 [00:06<00:00, 1.49it/s]
Cross Validation Time Series 29: 89%|████████▉ | 8/9 [00:04<00:00, 1.65it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 35: 78%|███████▊ | 7/9 [00:01<00:00, 4.69it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 21: 100%|██████████| 9/9 [00:03<00:00, 2.78it/s]
Cross Validation Time Series 31: 56%|█████▌ | 5/9 [00:06<00:04, 1.24s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 29: 100%|██████████| 9/9 [00:05<00:00, 1.74it/s]
Cross Validation Time Series 37: 56%|█████▌ | 5/9 [00:01<00:00, 4.47it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 35: 100%|██████████| 9/9 [00:01<00:00, 4.86it/s]
Cross Validation Time Series 36: 100%|██████████| 9/9 [00:01<00:00, 5.72it/s]
Cross Validation Time Series 32: 89%|████████▉ | 8/9 [00:01<00:00, 4.07it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 37: 100%|██████████| 9/9 [00:02<00:00, 3.74it/s]
Cross Validation Time Series 32: 100%|██████████| 9/9 [00:02<00:00, 4.39it/s]
Cross Validation Time Series 33: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 38: 33%|███▎ | 3/9 [00:00<00:01, 4.94it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 38: 56%|█████▌ | 5/9 [00:01<00:00, 4.63it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 37: 100%|██████████| 9/9 [00:01<00:00, 6.02it/s]
Cross Validation Time Series 38: 11%|█ | 1/9 [00:00<00:01, 4.15it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 29: 100%|██████████| 9/9 [00:12<00:00, 1.36s/it]
Cross Validation Time Series 38: 100%|██████████| 9/9 [00:01<00:00, 4.71it/s]
Cross Validation Time Series 38: 44%|████▍ | 4/9 [00:01<00:02, 1.99it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 31: 100%|██████████| 9/9 [00:11<00:00, 1.30s/it]
Cross Validation Time Series 33: 100%|██████████| 9/9 [00:04<00:00, 2.16it/s]
Cross Validation Time Series 36: 100%|██████████| 9/9 [00:05<00:00, 1.60it/s]
Cross Validation Time Series 38: 100%|██████████| 9/9 [00:04<00:00, 2.24it/s]
Cross Validation Time Series 39: 100%|██████████| 9/9 [00:05<00:00, 1.64it/s]
Cross Validation Time Series 32: 100%|██████████| 9/9 [00:03<00:00, 2.59it/s]
Cross Validation Time Series 30: 78%|███████▊ | 7/9 [00:08<00:02, 1.21s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 37: 78%|███████▊ | 7/9 [00:03<00:00, 2.34it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 30: 67%|██████▋ | 6/9 [00:06<00:02, 1.16it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 34: 100%|██████████| 9/9 [00:03<00:00, 2.28it/s]
Cross Validation Time Series 30: 89%|████████▉ | 8/9 [00:09<00:01, 1.15s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 37: 100%|██████████| 9/9 [00:04<00:00, 2.08it/s]
Cross Validation Time Series 40: 100%|██████████| 9/9 [00:01<00:00, 6.10it/s]
Cross Validation Time Series 35: 44%|████▍ | 4/9 [00:00<00:00, 5.09it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 39: 100%|██████████| 9/9 [00:03<00:00, 2.27it/s]
Cross Validation Time Series 40: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 30: 100%|██████████| 9/9 [00:10<00:00, 1.20s/it]
Cross Validation Time Series 22: 100%|██████████| 9/9 [00:11<00:00, 1.23s/it]
Cross Validation Time Series 30: 100%|██████████| 9/9 [00:08<00:00, 1.07it/s]
Cross Validation Time Series 35: 100%|██████████| 9/9 [00:02<00:00, 3.66it/s]
Cross Validation Time Series 33: 100%|██████████| 9/9 [00:03<00:00, 2.85it/s]
Cross Validation Time Series 41: 100%|██████████| 9/9 [00:02<00:00, 3.08it/s]
Cross Validation Time Series 38: 100%|██████████| 9/9 [00:03<00:00, 2.58it/s]
Cross Validation Time Series 40: 100%|██████████| 9/9 [00:03<00:00, 2.92it/s]
Cross Validation Time Series 31: 100%|██████████| 9/9 [00:03<00:00, 2.88it/s]
Cross Validation Time Series 39: 22%|██▏ | 2/9 [00:00<00:01, 3.92it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 32: 33%|███▎ | 3/9 [00:00<00:01, 5.77it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 41: 44%|████▍ | 4/9 [00:00<00:01, 4.60it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 36: 67%|██████▋ | 6/9 [00:02<00:01, 1.70it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 41: 56%|█████▌ | 5/9 [00:01<00:01, 4.00it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 42: 89%|████████▉ | 8/9 [00:01<00:00, 3.71it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 42: 100%|██████████| 9/9 [00:02<00:00, 3.96it/s]
Cross Validation Time Series 32: 100%|██████████| 9/9 [00:01<00:00, 4.86it/s]
Cross Validation Time Series 41: 100%|██████████| 9/9 [00:02<00:00, 4.27it/s]
Cross Validation Time Series 39: 100%|██████████| 9/9 [00:02<00:00, 4.03it/s]
Cross Validation Time Series 40: 11%|█ | 1/9 [00:00<00:02, 3.51it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 34: 100%|██████████| 9/9 [00:04<00:00, 2.10it/s]
Cross Validation Time Series 42: 22%|██▏ | 2/9 [00:00<00:01, 3.73it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 35: 11%|█ | 1/9 [00:00<00:02, 3.07it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 35: 44%|████▍ | 4/9 [00:01<00:01, 3.73it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 23: 78%|███████▊ | 7/9 [00:06<00:01, 1.04it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 33: 100%|██████████| 9/9 [00:02<00:00, 3.51it/s]
Cross Validation Time Series 36: 100%|██████████| 9/9 [00:06<00:00, 1.38it/s]
Cross Validation Time Series 34: 11%|█ | 1/9 [00:00<00:01, 4.80it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 35: 100%|██████████| 9/9 [00:02<00:00, 3.39it/s]
Cross Validation Time Series 43: 100%|██████████| 9/9 [00:04<00:00, 2.19it/s]
Cross Validation Time Series 23: 100%|██████████| 9/9 [00:08<00:00, 1.04it/s]
Cross Validation Time Series 40: 100%|██████████| 9/9 [00:03<00:00, 2.28it/s]
Cross Validation Time Series 36: 22%|██▏ | 2/9 [00:01<00:07, 1.01s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 37: 78%|███████▊ | 7/9 [00:03<00:01, 1.65it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 36: 44%|████▍ | 4/9 [00:03<00:04, 1.24it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 37: 100%|██████████| 9/9 [00:04<00:00, 2.10it/s]
Cross Validation Time Series 42: 100%|██████████| 9/9 [00:07<00:00, 1.24it/s]
Cross Validation Time Series 43: 11%|█ | 1/9 [00:00<00:02, 3.24it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 41: 78%|███████▊ | 7/9 [00:03<00:00, 2.21it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 38: 56%|█████▌ | 5/9 [00:01<00:00, 4.92it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 36: 100%|██████████| 9/9 [00:05<00:00, 1.67it/s]
Cross Validation Time Series 41: 100%|██████████| 9/9 [00:04<00:00, 1.94it/s]
Cross Validation Time Series 44: 100%|██████████| 9/9 [00:05<00:00, 1.79it/s]
Cross Validation Time Series 34: 100%|██████████| 9/9 [00:06<00:00, 1.43it/s]
Cross Validation Time Series 38: 100%|██████████| 9/9 [00:01<00:00, 4.55it/s]
Cross Validation Time Series 24: 67%|██████▋ | 6/9 [00:05<00:01, 1.51it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 37: 44%|████▍ | 4/9 [00:00<00:01, 4.88it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 43: 78%|███████▊ | 7/9 [00:02<00:00, 2.90it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 43: 100%|██████████| 9/9 [00:02<00:00, 3.16it/s]
Cross Validation Time Series 31: 100%|██████████| 9/9 [00:14<00:00, 1.61s/it]
Cross Validation Time Series 35: 100%|██████████| 9/9 [00:01<00:00, 6.22it/s]
Cross Validation Time Series 36: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 32: 11%|█ | 1/9 [00:00<00:02, 3.82it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 24: 100%|██████████| 9/9 [00:06<00:00, 1.31it/s]
Cross Validation Time Series 39: 100%|██████████| 9/9 [00:02<00:00, 3.35it/s]
Cross Validation Time Series 45: 89%|████████▉ | 8/9 [00:03<00:00, 2.63it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 44: 100%|██████████| 9/9 [00:01<00:00, 4.75it/s]
Cross Validation Time Series 45: 100%|██████████| 9/9 [00:03<00:00, 2.64it/s]
Cross Validation Time Series 42: 78%|███████▊ | 7/9 [00:03<00:01, 1.86it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 45: 11%|█ | 1/9 [00:00<00:04, 1.83it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 32: 100%|██████████| 9/9 [00:03<00:00, 2.76it/s]
Cross Validation Time Series 42: 100%|██████████| 9/9 [00:04<00:00, 1.81it/s]
Cross Validation Time Series 40: 100%|██████████| 9/9 [00:02<00:00, 3.60it/s]
Cross Validation Time Series 37: 100%|██████████| 9/9 [00:07<00:00, 1.25it/s]
Cross Validation Time Series 41: 100%|██████████| 9/9 [00:03<00:00, 2.44it/s]
Cross Validation Time Series 46: 100%|██████████| 9/9 [00:06<00:00, 1.41it/s]
Cross Validation Time Series 33: 100%|██████████| 9/9 [00:05<00:00, 1.54it/s]
Cross Validation Time Series 36: 100%|██████████| 9/9 [00:10<00:00, 1.13s/it]
Cross Validation Time Series 42: 89%|████████▉ | 8/9 [00:02<00:00, 3.29it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 42: 100%|██████████| 9/9 [00:02<00:00, 3.00it/s]
Cross Validation Time Series 47: 100%|██████████| 9/9 [00:02<00:00, 3.67it/s]
Cross Validation Time Series 38: 100%|██████████| 9/9 [00:05<00:00, 1.67it/s]
Cross Validation Time Series 39: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 43: 100%|██████████| 9/9 [00:07<00:00, 1.16it/s]
Cross Validation Time Series 25: 100%|██████████| 9/9 [00:10<00:00, 1.21s/it]
Cross Validation Time Series 26: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 48: 22%|██▏ | 2/9 [00:00<00:02, 3.02it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 37: 100%|██████████| 9/9 [00:02<00:00, 3.52it/s]
Cross Validation Time Series 26: 100%|██████████| 9/9 [00:01<00:00, 5.95it/s]
Cross Validation Time Series 48: 100%|██████████| 9/9 [00:02<00:00, 3.59it/s]
Cross Validation Time Series 45: 100%|██████████| 9/9 [00:11<00:00, 1.30s/it]
Cross Validation Time Series 39: 100%|██████████| 9/9 [00:02<00:00, 3.54it/s]
Cross Validation Time Series 40: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 43: 100%|██████████| 9/9 [00:06<00:00, 1.32it/s]
Cross Validation Time Series 46: 78%|███████▊ | 7/9 [00:04<00:01, 1.66it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 44: 100%|██████████| 9/9 [00:07<00:00, 1.26it/s]
Cross Validation Time Series 46: 100%|██████████| 9/9 [00:05<00:00, 1.75it/s]
Cross Validation Time Series 47: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 49: 100%|██████████| 9/9 [00:05<00:00, 1.64it/s]
Cross Validation Time Series 50: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 27: 100%|██████████| 9/9 [00:06<00:00, 1.40it/s]
Cross Validation Time Series 34: 100%|██████████| 9/9 [00:10<00:00, 1.16s/it]
Cross Validation Time Series 44: 100%|██████████| 9/9 [00:02<00:00, 4.25it/s]
Cross Validation Time Series 38: 100%|██████████| 9/9 [00:06<00:00, 1.31it/s]
Cross Validation Time Series 45: 44%|████▍ | 4/9 [00:01<00:02, 2.41it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 50: 44%|████▍ | 4/9 [00:01<00:02, 2.39it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 40: 100%|██████████| 9/9 [00:07<00:00, 1.28it/s]
Cross Validation Time Series 47: 78%|███████▊ | 7/9 [00:02<00:00, 2.84it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 45: 22%|██▏ | 2/9 [00:01<00:05, 1.26it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 47: 100%|██████████| 9/9 [00:03<00:00, 2.87it/s]
Cross Validation Time Series 39: 78%|███████▊ | 7/9 [00:02<00:00, 2.96it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 39: 100%|██████████| 9/9 [00:02<00:00, 3.29it/s]
Cross Validation Time Series 45: 100%|██████████| 9/9 [00:04<00:00, 2.20it/s]
Cross Validation Time Series 35: 100%|██████████| 9/9 [00:03<00:00, 2.80it/s]
Cross Validation Time Series 48: 33%|███▎ | 3/9 [00:00<00:01, 3.12it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 41: 100%|██████████| 9/9 [00:02<00:00, 3.88it/s]
Cross Validation Time Series 50: 100%|██████████| 9/9 [00:04<00:00, 2.13it/s]
Cross Validation Time Series 40: 33%|███▎ | 3/9 [00:00<00:01, 4.30it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 42: 11%|█ | 1/9 [00:00<00:04, 1.71it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 46: 44%|████▍ | 4/9 [00:01<00:03, 1.61it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 40: 56%|█████▌ | 5/9 [00:02<00:02, 1.56it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 42: 22%|██▏ | 2/9 [00:02<00:10, 1.50s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 51: 22%|██▏ | 2/9 [00:03<00:11, 1.69s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 48: 100%|██████████| 9/9 [00:05<00:00, 1.57it/s]
Cross Validation Time Series 40: 100%|██████████| 9/9 [00:05<00:00, 1.72it/s]
Cross Validation Time Series 46: 100%|██████████| 9/9 [00:05<00:00, 1.69it/s]
Cross Validation Time Series 47: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 49: 33%|███▎ | 3/9 [00:01<00:02, 2.87it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 49: 56%|█████▌ | 5/9 [00:01<00:01, 3.33it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 49: 67%|██████▋ | 6/9 [00:02<00:01, 2.87it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 28: 100%|██████████| 9/9 [00:10<00:00, 1.17s/it]
Cross Validation Time Series 36: 100%|██████████| 9/9 [00:07<00:00, 1.27it/s]
Cross Validation Time Series 49: 100%|██████████| 9/9 [00:03<00:00, 2.93it/s]
Cross Validation Time Series 42: 100%|██████████| 9/9 [00:07<00:00, 1.15it/s]
Cross Validation Time Series 45: 89%|████████▉ | 8/9 [00:11<00:01, 1.74s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 47: 100%|██████████| 9/9 [00:03<00:00, 2.88it/s]
Cross Validation Time Series 48: 11%|█ | 1/9 [00:00<00:01, 4.40it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 43: 22%|██▏ | 2/9 [00:00<00:02, 2.79it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 41: 44%|████▍ | 4/9 [00:04<00:05, 1.15s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 43: 44%|████▍ | 4/9 [00:01<00:01, 3.16it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 37: 100%|██████████| 9/9 [00:02<00:00, 3.19it/s]
Cross Validation Time Series 41: 56%|█████▌ | 5/9 [00:05<00:04, 1.11s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 45: 100%|██████████| 9/9 [00:13<00:00, 1.50s/it]
Cross Validation Time Series 38: 33%|███▎ | 3/9 [00:00<00:00, 6.76it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 48: 100%|██████████| 9/9 [00:02<00:00, 3.61it/s]
Cross Validation Time Series 51: 78%|███████▊ | 7/9 [00:10<00:02, 1.40s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 43: 100%|██████████| 9/9 [00:03<00:00, 2.47it/s]
Cross Validation Time Series 38: 100%|██████████| 9/9 [00:01<00:00, 5.12it/s]
Cross Validation Time Series 50: 100%|██████████| 9/9 [00:04<00:00, 2.19it/s]
Cross Validation Time Series 29: 67%|██████▋ | 6/9 [00:05<00:02, 1.01it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 46: 56%|█████▌ | 5/9 [00:04<00:04, 1.14s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 51: 22%|██▏ | 2/9 [00:02<00:10, 1.47s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 51: 100%|██████████| 9/9 [00:15<00:00, 1.78s/it]
Cross Validation Time Series 46: 100%|██████████| 9/9 [00:06<00:00, 1.40it/s]
Cross Validation Time Series 39: 100%|██████████| 9/9 [00:05<00:00, 1.63it/s]
Cross Validation Time Series 44: 100%|██████████| 9/9 [00:05<00:00, 1.52it/s]
Cross Validation Time Series 52: 67%|██████▋ | 6/9 [00:01<00:00, 4.39it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 29: 100%|██████████| 9/9 [00:10<00:00, 1.18s/it]
Cross Validation Time Series 49: 100%|██████████| 9/9 [00:07<00:00, 1.26it/s]
Cross Validation Time Series 45: 11%|█ | 1/9 [00:00<00:03, 2.42it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 41: 100%|██████████| 9/9 [00:13<00:00, 1.46s/it]
Cross Validation Time Series 47: 56%|█████▌ | 5/9 [00:01<00:01, 3.59it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 40: 67%|██████▋ | 6/9 [00:01<00:00, 5.46it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 51: 100%|██████████| 9/9 [00:06<00:00, 1.42it/s]
Cross Validation Time Series 50: 22%|██▏ | 2/9 [00:01<00:03, 1.95it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 40: 100%|██████████| 9/9 [00:01<00:00, 5.06it/s]
Cross Validation Time Series 52: 100%|██████████| 9/9 [00:02<00:00, 3.14it/s]
Cross Validation Time Series 47: 100%|██████████| 9/9 [00:02<00:00, 3.70it/s]
Cross Validation Time Series 48: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 45: 100%|██████████| 9/9 [00:04<00:00, 1.99it/s]
Cross Validation Time Series 30: 100%|██████████| 9/9 [00:04<00:00, 2.01it/s]
Cross Validation Time Series 42: 100%|██████████| 9/9 [00:04<00:00, 1.82it/s]
Cross Validation Time Series 53: 100%|██████████| 9/9 [00:05<00:00, 1.75it/s]
Cross Validation Time Series 48: 100%|██████████| 9/9 [00:05<00:00, 1.61it/s]
Cross Validation Time Series 46: 100%|██████████| 9/9 [00:03<00:00, 2.41it/s]
Cross Validation Time Series 50: 100%|██████████| 9/9 [00:09<00:00, 1.04s/it]
Cross Validation Time Series 31: 33%|███▎ | 3/9 [00:05<00:09, 1.62s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 43: 100%|██████████| 9/9 [00:04<00:00, 2.07it/s]
Cross Validation Time Series 47: 67%|██████▋ | 6/9 [00:01<00:00, 3.39it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 49: 100%|██████████| 9/9 [00:02<00:00, 3.12it/s]
Cross Validation Time Series 50: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 54: 100%|██████████| 9/9 [00:03<00:00, 2.50it/s]
Cross Validation Time Series 41: 78%|███████▊ | 7/9 [00:09<00:02, 1.16s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 47: 100%|██████████| 9/9 [00:02<00:00, 3.36it/s]
Cross Validation Time Series 31: 44%|████▍ | 4/9 [00:06<00:07, 1.47s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 48: 11%|█ | 1/9 [00:00<00:01, 4.67it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 51: 100%|██████████| 9/9 [00:01<00:00, 4.96it/s]
Cross Validation Time Series 50: 100%|██████████| 9/9 [00:04<00:00, 2.14it/s]
Cross Validation Time Series 41: 100%|██████████| 9/9 [00:13<00:00, 1.53s/it]
Cross Validation Time Series 55: 100%|██████████| 9/9 [00:04<00:00, 1.83it/s]
Cross Validation Time Series 42: 22%|██▏ | 2/9 [00:00<00:00, 10.49it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 42:  44%|████▍     | 4/9 [00:00<00:00,  5.34it/s]
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
  sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 48: 100%|██████████| 9/9 [00:05<00:00,  1.75it/s]
Cross Validation Time Series 52: 100%|██████████| 9/9 [00:16<00:00,  1.80s/it]
Cross Validation Time Series 42: 100%|██████████| 9/9 [00:01<00:00,  4.80it/s]
...
Cross Validation Time Series 90: 100%|██████████| 9/9 [00:07<00:00,  1.24it/s]
Cross Validation Time Series 91: 100%|██████████| 9/9 [00:05<00:00,  1.71it/s]

[The rest of the log repeats the same two-line pattern: a tqdm progress bar "Cross Validation Time Series N: …/9" for every series from 31 through 93, each eventually reaching 100%|██████████| 9/9, interleaved dozens of times with the identical RuntimeWarning from statsforecast/ets.py line 1038 quoted above.]
Cross Validation Time Series 89: 100%|██████████| 9/9 [00:06<00:00, 1.42it/s]
Cross Validation Time Series 81: 100%|██████████| 9/9 [00:07<00:00, 1.20it/s]
Cross Validation Time Series 67: 100%|██████████| 9/9 [00:02<00:00, 3.52it/s]
Cross Validation Time Series 79: 100%|██████████| 9/9 [00:15<00:00, 1.72s/it]
Cross Validation Time Series 90: 100%|██████████| 9/9 [00:02<00:00, 3.17it/s]
Cross Validation Time Series 68: 33%|███▎ | 3/9 [00:00<00:01, 3.64it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 80: 11%|█ | 1/9 [00:01<00:08, 1.09s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 83: 100%|██████████| 9/9 [00:12<00:00, 1.40s/it]
Cross Validation Time Series 81: 100%|██████████| 9/9 [00:23<00:00, 2.62s/it]
Cross Validation Time Series 68: 100%|██████████| 9/9 [00:04<00:00, 1.91it/s]
Cross Validation Time Series 91: 100%|██████████| 9/9 [00:04<00:00, 1.91it/s]
Cross Validation Time Series 92: 100%|██████████| 9/9 [00:07<00:00, 1.13it/s]
Cross Validation Time Series 82: 100%|██████████| 9/9 [00:07<00:00, 1.23it/s]
Cross Validation Time Series 92: 100%|██████████| 9/9 [00:02<00:00, 3.52it/s]
Cross Validation Time Series 69: 100%|██████████| 9/9 [00:03<00:00, 2.86it/s]
Cross Validation Time Series 80: 100%|██████████| 9/9 [00:07<00:00, 1.19it/s]
Cross Validation Time Series 70: 22%|██▏ | 2/9 [00:02<00:08, 1.21s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 83: 89%|████████▉ | 8/9 [00:04<00:00, 1.20it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 70: 44%|████▍ | 4/9 [00:03<00:04, 1.12it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 83: 100%|██████████| 9/9 [00:05<00:00, 1.51it/s]
Cross Validation Time Series 93: 100%|██████████| 9/9 [00:24<00:00, 2.77s/it]
Cross Validation Time Series 81: 56%|█████▌ | 5/9 [00:03<00:02, 1.48it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 70: 56%|█████▌ | 5/9 [00:04<00:03, 1.21it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 84: 22%|██▏ | 2/9 [00:00<00:01, 3.60it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 84: 33%|███▎ | 3/9 [00:00<00:01, 3.57it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 70: 67%|██████▋ | 6/9 [00:04<00:02, 1.38it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 81: 100%|██████████| 9/9 [00:05<00:00, 1.71it/s]
Cross Validation Time Series 82: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 94: 100%|██████████| 9/9 [00:02<00:00, 3.81it/s]
Cross Validation Time Series 70: 100%|██████████| 9/9 [00:08<00:00, 1.04it/s]
Cross Validation Time Series 84: 100%|██████████| 9/9 [00:04<00:00, 1.81it/s]
Cross Validation Time Series 71: 11%|█ | 1/9 [00:00<00:03, 2.51it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 82: 67%|██████▋ | 6/9 [00:04<00:02, 1.32it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 82: 100%|██████████| 9/9 [00:05<00:00, 1.57it/s]
Cross Validation Time Series 93: 100%|██████████| 9/9 [00:11<00:00, 1.26s/it]
Cross Validation Time Series 95: 100%|██████████| 9/9 [00:05<00:00, 1.72it/s]
Cross Validation Time Series 71: 100%|██████████| 9/9 [00:03<00:00, 2.92it/s]
Cross Validation Time Series 85: 22%|██▏ | 2/9 [00:03<00:10, 1.51s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 82: 100%|██████████| 9/9 [00:15<00:00, 1.78s/it]
Cross Validation Time Series 83: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 94: 100%|██████████| 9/9 [00:04<00:00, 2.09it/s]
Cross Validation Time Series 93: 100%|██████████| 9/9 [00:18<00:00, 2.02s/it]
Cross Validation Time Series 96: 100%|██████████| 9/9 [00:06<00:00, 1.49it/s]
Cross Validation Time Series 84: 100%|██████████| 9/9 [00:21<00:00, 2.40s/it]
Cross Validation Time Series 72: 67%|██████▋ | 6/9 [00:06<00:02, 1.09it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 83: 100%|██████████| 9/9 [00:07<00:00, 1.27it/s]
Cross Validation Time Series 97: 44%|████▍ | 4/9 [00:00<00:00, 7.82it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 84: 11%|█ | 1/9 [00:00<00:01, 5.42it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 95: 89%|████████▉ | 8/9 [00:03<00:00, 2.77it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 85: 56%|█████▌ | 5/9 [00:01<00:00, 4.55it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 84: 33%|███▎ | 3/9 [00:00<00:01, 4.04it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 95: 100%|██████████| 9/9 [00:03<00:00, 2.70it/s]
Cross Validation Time Series 96: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 97: 100%|██████████| 9/9 [00:02<00:00, 3.86it/s]
Cross Validation Time Series 83: 100%|██████████| 9/9 [00:07<00:00, 1.17it/s]
Cross Validation Time Series 85: 78%|███████▊ | 7/9 [00:02<00:01, 1.79it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 85: 100%|██████████| 9/9 [00:04<00:00, 1.96it/s]
Cross Validation Time Series 84: 100%|██████████| 9/9 [00:04<00:00, 1.92it/s]
Cross Validation Time Series 72: 100%|██████████| 9/9 [00:12<00:00, 1.34s/it]
Cross Validation Time Series 85: 100%|██████████| 9/9 [00:15<00:00, 1.76s/it]
Cross Validation Time Series 98: 100%|██████████| 9/9 [00:04<00:00, 1.84it/s]
Cross Validation Time Series 85: 100%|██████████| 9/9 [00:03<00:00, 2.51it/s]
Cross Validation Time Series 73: 100%|██████████| 9/9 [00:03<00:00, 2.87it/s]
Cross Validation Time Series 84: 89%|████████▉ | 8/9 [00:06<00:00, 1.43it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 74: 11%|█ | 1/9 [00:01<00:08, 1.05s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 84: 100%|██████████| 9/9 [00:09<00:00, 1.04s/it]
Cross Validation Time Series 86: 67%|██████▋ | 6/9 [00:07<00:04, 1.47s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 86: 78%|███████▊ | 7/9 [00:06<00:02, 1.13s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 94: 67%|██████▋ | 6/9 [00:15<00:07, 2.61s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 86: 100%|██████████| 9/9 [00:05<00:00, 1.68it/s]
Cross Validation Time Series 99: 100%|██████████| 9/9 [00:07<00:00, 1.28it/s]
Cross Validation Time Series 86: 89%|████████▉ | 8/9 [00:09<00:01, 1.27s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 74: 78%|███████▊ | 7/9 [00:05<00:01, 1.59it/s]]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 86: 100%|██████████| 9/9 [00:07<00:00, 1.16it/s]]
Cross Validation Time Series 86: 100%|██████████| 9/9 [00:10<00:00, 1.17s/it]]
Cross Validation Time Series 74: 100%|██████████| 9/9 [00:06<00:00, 1.42it/s]]
Cross Validation Time Series 100: 100%|██████████| 9/9 [00:03<00:00, 2.46it/s]
Cross Validation Time Series 96: 100%|██████████| 9/9 [00:18<00:00, 2.06s/it]]
Cross Validation Time Series 94: 100%|██████████| 9/9 [00:21<00:00, 2.44s/it]]
Cross Validation Time Series 87: 100%|██████████| 9/9 [00:05<00:00, 1.70it/s]
Cross Validation Time Series 87: 100%|██████████| 9/9 [00:06<00:00, 1.40it/s]
Cross Validation Time Series 101: 33%|███▎ | 3/9 [00:02<00:04, 1.23it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 75: 100%|██████████| 9/9 [00:05<00:00, 1.63it/s]
Cross Validation Time Series 88: 33%|███▎ | 3/9 [00:01<00:02, 2.70it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 88: 22%|██▏ | 2/9 [00:00<00:03, 2.23it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 95: 100%|██████████| 9/9 [00:02<00:00, 3.67it/s]]
Cross Validation Time Series 88: 100%|██████████| 9/9 [00:03<00:00, 2.65it/s]]
Cross Validation Time Series 85: 100%|██████████| 9/9 [00:14<00:00, 1.63s/it]]
Cross Validation Time Series 101: 100%|██████████| 9/9 [00:08<00:00, 1.04it/s]
Cross Validation Time Series 97: 89%|████████▉ | 8/9 [00:06<00:01, 1.04s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 76: 100%|██████████| 9/9 [00:05<00:00, 1.55it/s]
Cross Validation Time Series 102: 11%|█ | 1/9 [00:00<00:03, 2.12it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 88: 100%|██████████| 9/9 [00:06<00:00, 1.39it/s]
Cross Validation Time Series 97: 100%|██████████| 9/9 [00:07<00:00, 1.18it/s]
Cross Validation Time Series 87: 100%|██████████| 9/9 [00:12<00:00, 1.43s/it]]
Cross Validation Time Series 96: 100%|██████████| 9/9 [00:05<00:00, 1.60it/s]
Cross Validation Time Series 86: 56%|█████▌ | 5/9 [00:02<00:01, 2.66it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 86: 89%|████████▉ | 8/9 [00:02<00:00, 3.15it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 77: 100%|██████████| 9/9 [00:02<00:00, 3.60it/s]
Cross Validation Time Series 86: 100%|██████████| 9/9 [00:03<00:00, 2.51it/s]]
Cross Validation Time Series 89: 67%|██████▋ | 6/9 [00:03<00:01, 1.57it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 102: 100%|██████████| 9/9 [00:04<00:00, 1.96it/s]
Cross Validation Time Series 103: 0%| | 0/9 [00:00<?, ?it/s]06it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 89: 100%|██████████| 9/9 [00:06<00:00, 1.45it/s]]
Cross Validation Time Series 97: 100%|██████████| 9/9 [00:06<00:00, 1.39it/s]]
Cross Validation Time Series 78: 100%|██████████| 9/9 [00:05<00:00, 1.66it/s]]
Cross Validation Time Series 103: 100%|██████████| 9/9 [00:03<00:00, 2.32it/s]
Cross Validation Time Series 90: 67%|██████▋ | 6/9 [00:02<00:01, 2.96it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 87: 89%|████████▉ | 8/9 [00:05<00:00, 2.05it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 87: 100%|██████████| 9/9 [00:06<00:00, 1.42it/s]]
Cross Validation Time Series 90: 100%|██████████| 9/9 [00:04<00:00, 1.98it/s]]
Cross Validation Time Series 79: 100%|██████████| 9/9 [00:04<00:00, 2.06it/s]]
Cross Validation Time Series 80: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 80: 11%|█ | 1/9 [00:00<00:02, 3.04it/s]]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 104: 100%|██████████| 9/9 [00:04<00:00, 1.80it/s]
Cross Validation Time Series 105: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 88: 100%|██████████| 9/9 [00:12<00:00, 1.34s/it]
Cross Validation Time Series 88: 11%|█ | 1/9 [00:03<00:31, 3.95s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 89: 11%|█ | 1/9 [00:00<00:02, 3.35it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 98: 56%|█████▌ | 5/9 [00:13<00:10, 2.73s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 98: 100%|██████████| 9/9 [00:06<00:00, 1.32it/s]]
Cross Validation Time Series 89: 67%|██████▋ | 6/9 [00:01<00:00, 3.71it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 89: 78%|███████▊ | 7/9 [00:01<00:00, 3.71it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 91: 100%|██████████| 9/9 [00:04<00:00, 2.04it/s]
Cross Validation Time Series 89: 100%|██████████| 9/9 [00:18<00:00, 2.07s/it]]
Cross Validation Time Series 89: 100%|██████████| 9/9 [00:02<00:00, 3.71it/s]
Cross Validation Time Series 80: 100%|██████████| 9/9 [00:03<00:00, 2.57it/s]
Cross Validation Time Series 105: 78%|███████▊ | 7/9 [00:02<00:00, 2.49it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 99: 11%|█ | 1/9 [00:01<00:12, 1.52s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 90: 33%|███▎ | 3/9 [00:00<00:01, 3.64it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 105: 100%|██████████| 9/9 [00:03<00:00, 2.52it/s]
Cross Validation Time Series 81: 100%|██████████| 9/9 [00:01<00:00, 5.08it/s]]
Cross Validation Time Series 82: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 92: 100%|██████████| 9/9 [00:02<00:00, 3.13it/s]]
Cross Validation Time Series 90: 100%|██████████| 9/9 [00:02<00:00, 3.19it/s]
Cross Validation Time Series 82: 22%|██▏ | 2/9 [00:01<00:05, 1.27it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 82: 56%|█████▌ | 5/9 [00:03<00:02, 1.60it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 106: 78%|███████▊ | 7/9 [00:04<00:01, 1.40it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 98: 100%|██████████| 9/9 [00:21<00:00, 2.41s/it]]
Cross Validation Time Series 106: 100%|██████████| 9/9 [00:05<00:00, 1.54it/s]
Cross Validation Time Series 82: 100%|██████████| 9/9 [00:05<00:00, 1.77it/s]
Cross Validation Time Series 93: 100%|██████████| 9/9 [00:05<00:00, 1.54it/s]
Cross Validation Time Series 91: 100%|██████████| 9/9 [00:05<00:00, 1.54it/s]
Cross Validation Time Series 83: 100%|██████████| 9/9 [00:02<00:00, 4.06it/s]
Cross Validation Time Series 84: 11%|█ | 1/9 [00:00<00:00, 9.83it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 90: 100%|██████████| 9/9 [00:10<00:00, 1.18s/it]
Cross Validation Time Series 94: 56%|█████▌ | 5/9 [00:02<00:02, 1.71it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 84: 44%|████▍ | 4/9 [00:01<00:02, 1.85it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 84: 100%|██████████| 9/9 [00:04<00:00, 2.17it/s]
Cross Validation Time Series 94: 100%|██████████| 9/9 [00:05<00:00, 1.74it/s]
Cross Validation Time Series 88: 100%|██████████| 9/9 [00:21<00:00, 2.39s/it]
Cross Validation Time Series 99: 100%|██████████| 9/9 [00:16<00:00, 1.84s/it]
Cross Validation Time Series 95: 67%|██████▋ | 6/9 [00:02<00:01, 2.94it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 100: 11%|█ | 1/9 [00:00<00:02, 3.28it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 85: 100%|██████████| 9/9 [00:02<00:00, 3.21it/s]]
Cross Validation Time Series 89: 11%|█ | 1/9 [00:00<00:07, 1.04it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 100: 44%|████▍ | 4/9 [00:01<00:01, 3.04it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 95: 100%|██████████| 9/9 [00:03<00:00, 2.60it/s]
Cross Validation Time Series 96: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 89: 33%|███▎ | 3/9 [00:01<00:03, 1.94it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 92: 100%|██████████| 9/9 [00:09<00:00, 1.01s/it]]
Cross Validation Time Series 86: 67%|██████▋ | 6/9 [00:01<00:00, 3.95it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 99: 100%|██████████| 9/9 [00:11<00:00, 1.25s/it]]
Cross Validation Time Series 100: 89%|████████▉ | 8/9 [00:02<00:00, 3.00it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 86: 89%|████████▉ | 8/9 [00:02<00:00, 3.75it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 100: 100%|██████████| 9/9 [00:02<00:00, 3.11it/s]
Cross Validation Time Series 86: 100%|██████████| 9/9 [00:02<00:00, 3.25it/s]]
Cross Validation Time Series 96: 100%|██████████| 9/9 [00:03<00:00, 2.66it/s]]
Cross Validation Time Series 93: 100%|██████████| 9/9 [00:04<00:00, 1.95it/s]]
Cross Validation Time Series 89: 100%|██████████| 9/9 [00:07<00:00, 1.20it/s]]
Cross Validation Time Series 100: 100%|██████████| 9/9 [00:05<00:00, 1.66it/s]
Cross Validation Time Series 101: 100%|██████████| 9/9 [00:05<00:00, 1.69it/s]
Cross Validation Time Series 87: 100%|██████████| 9/9 [00:04<00:00, 1.80it/s]
Cross Validation Time Series 91: 100%|██████████| 9/9 [00:13<00:00, 1.49s/it]]
Cross Validation Time Series 97: 100%|██████████| 9/9 [00:04<00:00, 1.94it/s]]
Cross Validation Time Series 88: 100%|██████████| 9/9 [00:02<00:00, 4.18it/s]]
Cross Validation Time Series 90: 100%|██████████| 9/9 [00:03<00:00, 2.77it/s]
Cross Validation Time Series 98: 56%|█████▌ | 5/9 [00:01<00:01, 3.91it/s]]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 98: 67%|██████▋ | 6/9 [00:01<00:00, 3.71it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 101: 100%|██████████| 9/9 [00:03<00:00, 2.52it/s]
Cross Validation Time Series 91: 33%|███▎ | 3/9 [00:00<00:02, 3.00it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 102: 22%|██▏ | 2/9 [00:01<00:03, 1.86it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 91: 44%|████▍ | 4/9 [00:02<00:03, 1.44it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 89: 56%|█████▌ | 5/9 [00:02<00:03, 1.29it/s]]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 98: 100%|██████████| 9/9 [00:04<00:00, 1.92it/s]]
Cross Validation Time Series 102: 56%|█████▌ | 5/9 [00:03<00:02, 1.40it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 102: 67%|██████▋ | 6/9 [00:03<00:01, 1.60it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 89: 100%|██████████| 9/9 [00:04<00:00, 1.80it/s]
Cross Validation Time Series 91: 78%|███████▊ | 7/9 [00:05<00:01, 1.21it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 99: 56%|█████▌ | 5/9 [00:01<00:01, 2.45it/s]]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 102: 100%|██████████| 9/9 [00:04<00:00, 1.81it/s]
Cross Validation Time Series 91: 100%|██████████| 9/9 [00:06<00:00, 1.47it/s]]
Cross Validation Time Series 99: 100%|██████████| 9/9 [00:04<00:00, 1.94it/s]
Cross Validation Time Series 90: 100%|██████████| 9/9 [00:06<00:00, 1.30it/s]]
Cross Validation Time Series 102: 100%|██████████| 9/9 [00:15<00:00, 1.71s/it]
Cross Validation Time Series 91: 44%|████▍ | 4/9 [00:01<00:02, 2.47it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 103: 22%|██▏ | 2/9 [00:00<00:01, 3.84it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 94: 100%|██████████| 9/9 [00:18<00:00, 2.05s/it]]
Cross Validation Time Series 91: 100%|██████████| 9/9 [00:03<00:00, 2.45it/s]]
Cross Validation Time Series 92: 11%|█ | 1/9 [00:00<00:02, 2.98it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 103: 100%|██████████| 9/9 [00:03<00:00, 2.49it/s]
Cross Validation Time Series 92: 33%|███▎ | 3/9 [00:01<00:02, 2.40it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 104: 11%|█ | 1/9 [00:00<00:06, 1.17it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 100: 100%|██████████| 9/9 [00:09<00:00, 1.07s/it]
Cross Validation Time Series 92: 100%|██████████| 9/9 [00:12<00:00, 1.37s/it]]
Cross Validation Time Series 103: 100%|██████████| 9/9 [00:13<00:00, 1.48s/it]
Cross Validation Time Series 92: 100%|██████████| 9/9 [00:03<00:00, 2.60it/s]
Cross Validation Time Series 101: 44%|████▍ | 4/9 [00:01<00:02, 2.41it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 95: 100%|██████████| 9/9 [00:05<00:00, 1.68it/s]]
Cross Validation Time Series 104: 100%|██████████| 9/9 [00:03<00:00, 2.59it/s]
Cross Validation Time Series 104: 44%|████▍ | 4/9 [00:01<00:01, 3.85it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 93: 44%|████▍ | 4/9 [00:01<00:01, 3.40it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 96: 22%|██▏ | 2/9 [00:00<00:01, 4.47it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 93: 56%|█████▌ | 5/9 [00:01<00:01, 3.49it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 92: 100%|██████████| 9/9 [00:22<00:00, 2.51s/it]
Cross Validation Time Series 93: 67%|██████▋ | 6/9 [00:01<00:00, 3.33it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 93: 33%|███▎ | 3/9 [00:01<00:02, 2.19it/s]]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 104: 100%|██████████| 9/9 [00:03<00:00, 2.58it/s]
Cross Validation Time Series 93: 100%|██████████| 9/9 [00:04<00:00, 1.94it/s]]
Cross Validation Time Series 96: 78%|███████▊ | 7/9 [00:03<00:01, 1.39it/s]]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 101: 100%|██████████| 9/9 [00:06<00:00, 1.49it/s]
Cross Validation Time Series 94: 22%|██▏ | 2/9 [00:00<00:01, 4.63it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 105: 0%| | 0/9 [00:02<?, ?it/s]21it/s]
Cross Validation Time Series 96: 100%|██████████| 9/9 [00:05<00:00, 1.74it/s]]
Cross Validation Time Series 105: 100%|██████████| 9/9 [00:05<00:00, 1.78it/s]
Cross Validation Time Series 93: 100%|██████████| 9/9 [00:04<00:00, 1.94it/s]]
Cross Validation Time Series 94: 89%|████████▉ | 8/9 [00:02<00:00, 3.94it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 94: 100%|██████████| 9/9 [00:02<00:00, 3.63it/s]]
Cross Validation Time Series 95: 11%|█ | 1/9 [00:00<00:02, 3.88it/s]]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 93: 100%|██████████| 9/9 [00:08<00:00, 1.09it/s]]
Cross Validation Time Series 94: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 102: 100%|██████████| 9/9 [00:04<00:00, 2.10it/s]
Cross Validation Time Series 94: 100%|██████████| 9/9 [00:04<00:00, 2.00it/s]]
Cross Validation Time Series 97: 78%|███████▊ | 7/9 [00:04<00:01, 1.19it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 95: 100%|██████████| 9/9 [00:04<00:00, 1.85it/s]]
Cross Validation Time Series 96: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 97: 100%|██████████| 9/9 [00:06<00:00, 1.39it/s]]
Cross Validation Time Series 98: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 103: 100%|██████████| 9/9 [00:04<00:00, 2.24it/s]
Cross Validation Time Series 95: 100%|██████████| 9/9 [00:02<00:00, 3.06it/s]
Cross Validation Time Series 96: 100%|██████████| 9/9 [00:01<00:00, 5.06it/s]]
Cross Validation Time Series 98: 100%|██████████| 9/9 [00:02<00:00, 4.46it/s]]
Cross Validation Time Series 99: 100%|██████████| 9/9 [00:03<00:00, 2.97it/s]]
Cross Validation Time Series 94: 100%|██████████| 9/9 [00:10<00:00, 1.20s/it]]
Cross Validation Time Series 95: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 96: 100%|██████████| 9/9 [00:06<00:00, 1.44it/s]]
Cross Validation Time Series 104: 100%|██████████| 9/9 [00:06<00:00, 1.31it/s]
Cross Validation Time Series 97: 100%|██████████| 9/9 [00:09<00:00, 1.04s/it]]
Cross Validation Time Series 100: 100%|██████████| 9/9 [00:05<00:00, 1.52it/s]
Cross Validation Time Series 95: 100%|██████████| 9/9 [00:05<00:00, 1.78it/s]
Cross Validation Time Series 98: 100%|██████████| 9/9 [00:03<00:00, 2.54it/s]]
Cross Validation Time Series 97: 100%|██████████| 9/9 [00:08<00:00, 1.01it/s]]
Cross Validation Time Series 98: 100%|██████████| 9/9 [00:03<00:00, 2.81it/s]]
Cross Validation Time Series 96: 100%|██████████| 9/9 [00:08<00:00, 1.05it/s]
Cross Validation Time Series 97: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 105: 89%|████████▉ | 8/9 [00:12<00:01, 1.29s/it]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 99: 100%|██████████| 9/9 [00:06<00:00, 1.35it/s]
Cross Validation Time Series 105: 100%|██████████| 9/9 [00:14<00:00, 1.64s/it]
Cross Validation Time Series 99: 100%|██████████| 9/9 [00:03<00:00, 2.99it/s]]
Cross Validation Time Series 100: 0%| | 0/9 [00:00<?, ?it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 101: 100%|██████████| 9/9 [00:12<00:00, 1.34s/it]
Cross Validation Time Series 100: 33%|███▎ | 3/9 [00:00<00:01, 4.11it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 100: 44%|████▍ | 4/9 [00:00<00:01, 4.46it/s]/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ets.py:1038: RuntimeWarning: divide by zero encountered in double_scalars
sigma2 = np.sum(e**2) / (ny - np_ - 1)
Cross Validation Time Series 100: 100%|██████████| 9/9 [00:01<00:00, 4.72it/s]
Cross Validation Time Series 106: 100%|██████████| 9/9 [00:02<00:00, 3.67it/s]
Cross Validation Time Series 102: 100%|██████████| 9/9 [00:05<00:00, 1.75it/s]
Cross Validation Time Series 100: 100%|██████████| 9/9 [00:07<00:00, 1.21it/s]
Cross Validation Time Series 101: 100%|██████████| 9/9 [00:04<00:00, 1.97it/s]
Cross Validation Time Series 103: 100%|██████████| 9/9 [00:01<00:00, 5.16it/s]
Cross Validation Time Series 97: 100%|██████████| 9/9 [00:10<00:00, 1.20s/it]]
Cross Validation Time Series 101: 100%|██████████| 9/9 [00:03<00:00, 2.27it/s]
Cross Validation Time Series 102: 100%|██████████| 9/9 [00:04<00:00, 2.16it/s]
Cross Validation Time Series 102: 100%|██████████| 9/9 [00:02<00:00, 4.22it/s]
Cross Validation Time Series 104: 100%|██████████| 9/9 [00:04<00:00, 1.90it/s]
Cross Validation Time Series 103: 100%|██████████| 9/9 [00:03<00:00, 3.00it/s]
Cross Validation Time Series 103: 100%|██████████| 9/9 [00:03<00:00, 2.41it/s]
Cross Validation Time Series 105: 100%|██████████| 9/9 [00:04<00:00, 1.90it/s]
Cross Validation Time Series 104: 100%|██████████| 9/9 [00:02<00:00, 3.85it/s]
Cross Validation Time Series 98: 100%|██████████| 9/9 [00:10<00:00, 1.14s/it]]
Cross Validation Time Series 104: 100%|██████████| 9/9 [00:05<00:00, 1.62it/s]
Cross Validation Time Series 106: 100%|██████████| 9/9 [00:02<00:00, 3.77it/s]
Cross Validation Time Series 105: 100%|██████████| 9/9 [00:03<00:00, 2.67it/s]
Cross Validation Time Series 99: 100%|██████████| 9/9 [00:05<00:00, 1.58it/s]]
Cross Validation Time Series 105: 100%|██████████| 9/9 [00:05<00:00, 1.56it/s]
Cross Validation Time Series 106: 100%|██████████| 9/9 [00:02<00:00, 3.03it/s]
Cross Validation Time Series 100: 100%|██████████| 9/9 [00:05<00:00, 1.63it/s]
Cross Validation Time Series 101: 100%|██████████| 9/9 [00:05<00:00, 1.53it/s]
Cross Validation Time Series 102: 100%|██████████| 9/9 [00:02<00:00, 3.43it/s]
Cross Validation Time Series 103: 100%|██████████| 9/9 [00:04<00:00, 2.15it/s]
Cross Validation Time Series 104: 100%|██████████| 9/9 [00:05<00:00, 1.52it/s]
Cross Validation Time Series 105: 100%|██████████| 9/9 [00:03<00:00, 2.48it/s] ---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/tungnguyen/anaconda3/lib/python3.10/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/core.py", line 322, in cross_validation
raise error
File "/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/core.py", line 319, in cross_validation
res_i = model.forecast(**forecast_kwargs)
File "/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/models.py", line 1057, in forecast
mod = auto_ces(y, m=self.season_length, model=self.model)
File "/Users/tungnguyen/anaconda3/lib/python3.10/site-packages/statsforecast/ces.py", line 782, in auto_ces
raise Exception("no model able to be fitted")
Exception: no model able to be fitted
"""
The above exception was the direct cause of the following exception:
Exception Traceback (most recent call last)
Cell In[13], line 22
19 sf = StatsForecast(models=models, freq='1w', n_jobs=-1)
21 # Perform cross-validation with a step size of 1 to mimic an expanding window
---> 22 crossvalidation_df = sf.cross_validation(df=y_cl4_filtered, h=4, step_size=1, n_windows=9, sort_df=True)
24 # Calculate the ensemble mean
25 ensemble = crossvalidation_df[['AutoARIMA', 'AutoETS', 'CES', 'DynamicOptimizedTheta']].mean(axis=1)
File ~/anaconda3/lib/python3.10/site-packages/statsforecast/core.py:1616, in StatsForecast.cross_validation(self, h, df, n_windows, step_size, test_size, input_size, level, fitted, refit, sort_df, prediction_intervals, id_col, time_col, target_col)
1598 def cross_validation(
1599 self,
1600 h: int,
(...)
1613 target_col: str = "y",
1614 ):
1615 if self._is_native(df=df):
-> 1616 return super().cross_validation(
1617 h=h,
1618 df=df,
1619 n_windows=n_windows,
1620 step_size=step_size,
1621 test_size=test_size,
1622 input_size=input_size,
1623 level=level,
1624 fitted=fitted,
1625 refit=refit,
1626 sort_df=sort_df,
1627 prediction_intervals=prediction_intervals,
1628 id_col=id_col,
1629 time_col=time_col,
1630 target_col=target_col,
1631 )
1632 assert df is not None
1633 engine = make_execution_engine(infer_by=[df])
File ~/anaconda3/lib/python3.10/site-packages/statsforecast/core.py:1026, in _StatsForecast.cross_validation(self, h, df, n_windows, step_size, test_size, input_size, level, fitted, refit, sort_df, prediction_intervals, id_col, time_col, target_col)
1012 res_fcsts = self.ga.cross_validation(
1013 models=self.models,
1014 h=h,
(...)
1023 target_col=target_col,
1024 )
1025 else:
-> 1026 res_fcsts = self._cross_validation_parallel(
1027 h=h,
1028 test_size=test_size,
1029 step_size=step_size,
1030 input_size=input_size,
1031 fitted=fitted,
1032 level=level,
1033 refit=refit,
1034 target_col=target_col,
1035 )
1036 if fitted:
1037 self.cv_fitted_values_ = res_fcsts["fitted"]
File ~/anaconda3/lib/python3.10/site-packages/statsforecast/core.py:1248, in _StatsForecast._cross_validation_parallel(self, h, test_size, step_size, input_size, fitted, level, refit, target_col)
1232 future = executor.apply_async(
1233 ga.cross_validation,
1234 (
(...)
1245 ),
1246 )
1247 futures.append(future)
-> 1248 out = [f.get() for f in futures]
1249 fcsts = [d["forecasts"] for d in out]
1250 fcsts = np.vstack(fcsts)
File ~/anaconda3/lib/python3.10/site-packages/statsforecast/core.py:1248, in <listcomp>(.0)
1232 future = executor.apply_async(
1233 ga.cross_validation,
1234 (
(...)
1245 ),
1246 )
1247 futures.append(future)
-> 1248 out = [f.get() for f in futures]
1249 fcsts = [d["forecasts"] for d in out]
1250 fcsts = np.vstack(fcsts)
File ~/anaconda3/lib/python3.10/multiprocessing/pool.py:774, in ApplyResult.get(self, timeout)
772 return self._value
773 else:
--> 774 raise self._value
Exception: no model able to be fitted
|
79651eacd732a7300447e35cd372fe4d
|
{
"intermediate": 0.30097049474716187,
"beginner": 0.46484318375587463,
"expert": 0.23418638110160828
}
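The "no model able to be fitted" exception in the traceback above is raised by auto_ces, i.e. the CES model, when it cannot fit at least one series; with n_jobs=-1 the worker re-raises it and the whole cross-validation aborts. One way to keep the run alive is StatsForecast's fallback_model argument, which substitutes a simple model whenever a candidate model fails on a series. A minimal sketch, assuming the same model set and data (y_cl4_filtered) as the failing cell shown in the traceback:

from statsforecast import StatsForecast
from statsforecast.models import (
    AutoARIMA,
    AutoETS,
    AutoCES,
    DynamicOptimizedTheta,
    Naive,
)

# Presumed model set, matching the ensemble columns in the failing cell
# ("CES" is the default column name for AutoCES).
models = [AutoARIMA(), AutoETS(), AutoCES(), DynamicOptimizedTheta()]

sf = StatsForecast(
    models=models,
    freq="1w",
    n_jobs=-1,
    fallback_model=Naive(),  # used for any series where a model raises
)

# y_cl4_filtered is the dataframe from the failing cell above.
crossvalidation_df = sf.cross_validation(
    df=y_cl4_filtered, h=4, step_size=1, n_windows=9, sort_df=True
)

An alternative is to drop series too short for 9 windows of h=4 before calling cross_validation, since very short series are a common reason CES cannot be fitted.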
|
40,314
|
how do I check which SciPy version I have in Visual Studio Code
|
f32809e60100d64d6b707db76f4b6d95
|
{
"intermediate": 0.4437759816646576,
"beginner": 0.27392616868019104,
"expert": 0.282297819852829
}
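For the SciPy version question above (row 40,314), the check is editor-independent; a minimal sketch you can paste into a Python file or run from VS Code's integrated terminal:

import scipy

# Prints the version of SciPy installed in the interpreter VS Code is using.
print(scipy.__version__)

# Equivalent one-liners for the integrated terminal:
#   python -c "import scipy; print(scipy.__version__)"
#   pip show scipy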
|
40,315
|
how does Android run executables, in comparison with Windows
|
d93a0d2c04e9b50a009cfac779bbd49f
|
{
"intermediate": 0.3794935643672943,
"beginner": 0.391889750957489,
"expert": 0.2286166399717331
}
|
40,316
|
can an Android app be made in pure Rust
|
01d3ae389398a909d8e6e1b906e95749
|
{
"intermediate": 0.3146372139453888,
"beginner": 0.30509108304977417,
"expert": 0.38027170300483704
}
|
40,317
|
private void setCommsHubReturnOrderNumber(KafkaBeamRequestDTO kafkaBeamRequestDTO, CommsHubReq commsHubReq) {
RefundPromiseDateDTO refundPromiseDateDTO = getRefundPromiseDateDTO(kafkaBeamRequestDTO);
// CarrierData carrierData = wismrDetailsService.getCarrierDataForPromiseDateAndKiosk(refundPromiseDateDTO);
List<CarrierData> carrierDataList = null;
List<AttributeList> attributeList = commsHubReq.getPayload().getCommshub().getMessageDetails().getAttributeList();
AttributeList attribute = attributeList.get(0);
// attribute.setRefundPromisedDate(refundPromiseDate);
String returnNumber = null;
if (kafkaBeamRequestDTO.getReturnOrderNo() != null) {
returnNumber = kafkaBeamRequestDTO.getReturnOrderNo();
} else if (kafkaBeamRequestDTO.getParcelCode() != null) {
if ("STS001".equals(refundPromiseDateDTO.getStatusCode())) {
carrierDataList = wismrDetailsService.getCarrierDataByONoParCdStatCodeOldWism(refundPromiseDateDTO, true);
} else if ("STS002".equals(refundPromiseDateDTO.getStatusCode())) {
carrierDataList = wismrDetailsService.getCarrierDataByONoParCdStatCodeOldWism(refundPromiseDateDTO, false);
}
//Get the latest return number based on status date
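// NOTE: carrierDataList is still null when the status code matches neither
// branch above, so the size() call below can throw a NullPointerException;
// a null-and-empty guard (e.g. carrierDataList != null && !carrierDataList.isEmpty())
// would be safer.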
if (carrierDataList.size() > 0) {
returnNumber = carrierDataList.get(carrierDataList.size() - 1).getReturnOrderNo();
}
}
if (returnNumber == null) {
return;
}
attribute.setReturnNumber(returnNumber);
}
....refactor this code
|
932bf061631b2ed1608c3939cf8a3274
|
{
"intermediate": 0.40099576115608215,
"beginner": 0.36514174938201904,
"expert": 0.23386253416538239
}
|
40,318
|
How would I store a name, birthday, and last name for each user in SQL
|
564fe24e9f2be02133bf8b02c6c34307
|
{
"intermediate": 0.45362573862075806,
"beginner": 0.27380165457725525,
"expert": 0.27257269620895386
}
|
40,319
|
How would I store a bunch of classes for each user in SQL
|
24c486917342bbeba0cc2ec566e00be8
|
{
"intermediate": 0.2714460790157318,
"beginner": 0.6050046682357788,
"expert": 0.1235491931438446
}
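For the two schema questions above (rows 40,318 and 40,319: per-user name and birthday fields, plus classes for each user), one table per concern linked by the user id is the usual shape. A minimal sketch using Python's built-in sqlite3; all table and column names are illustrative, and MySQL DDL differs only in details:

import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for a persistent database
cur = conn.cursor()

# One row per user: name fields plus birthday.
cur.execute("""
    CREATE TABLE users (
        user_id    INTEGER PRIMARY KEY,
        first_name TEXT NOT NULL,
        last_name  TEXT NOT NULL,
        birthday   DATE
    )
""")

# Classes per user, keyed by user_id so each user has one classes blob
# (matching the JSON-text style used in the INSERT examples further down);
# a one-row-per-class child table would be the more normalized alternative.
cur.execute("""
    CREATE TABLE user_classes (
        user_id INTEGER PRIMARY KEY REFERENCES users(user_id),
        classes TEXT
    )
""")

cur.execute(
    "INSERT INTO users (first_name, last_name, birthday) VALUES (?, ?, ?)",
    ("Ada", "Lovelace", "1815-12-10"),
)
conn.commit()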
|
40,320
|
I have a SQL table with an int primary key called user_id. What SQL command do I need to get the associated "classes" text with that primary key
|
7cff00db8f486569f411553f6f9e28d9
|
{
"intermediate": 0.3406795263290405,
"beginner": 0.4269774854183197,
"expert": 0.23234301805496216
}
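For row 40,320 (fetching the classes text for a given user_id), a parameterized query avoids both quoting mistakes and SQL injection. A minimal standalone sqlite3 sketch; the transitu.db file name is illustrative, and the user_classes table from the schema sketch above is assumed to exist:

import sqlite3

conn = sqlite3.connect("transitu.db")  # hypothetical database file
cur = conn.cursor()

# Parameterized lookup: fetch the classes text for one primary key.
user_id = 0
cur.execute("SELECT classes FROM user_classes WHERE user_id = ?", (user_id,))
row = cur.fetchone()
print(row[0] if row else None)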
|
40,321
|
INSERT OR REPLACE command in SQL
|
5ec1402262c7b73b70e9419ae231b78b
|
{
"intermediate": 0.2941569983959198,
"beginner": 0.37170207500457764,
"expert": 0.3341408967971802
}
|
40,322
|
How do I turn this into an upsert command where I don't need to worry about whether there's a value already existing or not: INSERT INTO user_classes (user_id, classes) VALUES (0, '{"balls":"balls"}')
|
bd74f5f4277a3ac995d8dd94707ea3f0
|
{
"intermediate": 0.45395204424858093,
"beginner": 0.3015865981578827,
"expert": 0.24446135759353638
}
|
40,323
|
How do I resolve the duplicate entry problems in SQL with this command: INSERT INTO user_classes (user_id, classes) VALUES (0, '{"balls":"balls"}')
|
5d962fba1dbc74aa128d14cfe93933cc
|
{
"intermediate": 0.3406381905078888,
"beginner": 0.4297139644622803,
"expert": 0.22964787483215332
}
|
40,324
|
"How do I resolve the duplicate entry problems in SQL with this command: INSERT INTO user_classes (user_id, classes) VALUES (0, '{"balls":"balls"}')"
? If there's already an entry I just want the new data to override it
|
5e8ce7bcf576bba2f11dfc822ac36d0a
|
{
"intermediate": 0.5080468058586121,
"beginner": 0.2965191900730133,
"expert": 0.19543395936489105
}
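Rows 40,321 through 40,324 all circle the same upsert need: insert the row, or overwrite classes when user_id already exists. A minimal sqlite3 sketch (user_id must be declared PRIMARY KEY or UNIQUE for the conflict clause to fire; the transitu.db file name is again illustrative), with the MySQL equivalent in comments since the Express snippet later in this file uses MySQL:

import sqlite3

conn = sqlite3.connect("transitu.db")  # hypothetical file, as above
cur = conn.cursor()

# Upsert: insert, or overwrite classes when user_id already exists.
cur.execute(
    """
    INSERT INTO user_classes (user_id, classes)
    VALUES (?, ?)
    ON CONFLICT(user_id) DO UPDATE SET classes = excluded.classes
    """,
    (0, '{"balls":"balls"}'),
)
conn.commit()

# MySQL equivalent of the same idea:
#   INSERT INTO user_classes (user_id, classes)
#   VALUES (0, '{"balls":"balls"}')
#   ON DUPLICATE KEY UPDATE classes = VALUES(classes);
# Both dialects also support REPLACE INTO, which deletes then re-inserts.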
|
40,325
|
Can I import all of these from MUI?
import Table from "@mui/material/Table";
import TableBody from "@mui/material/TableBody";
import TableCell from "@mui/material/TableCell";
import TableContainer from "@mui/material/TableContainer";
import TableHead from "@mui/material/TableHead";
import TableRow from "@mui/material/TableRow";
|
0d7a40202d25810ce58ad5ca173c713e
|
{
"intermediate": 0.5771895051002502,
"beginner": 0.3313823640346527,
"expert": 0.09142816811800003
}
|
40,326
|
Can I import all of these components from MUI in a one-liner so I don't have to write this out?
import Table from "@mui/material/Table";
import TableBody from "@mui/material/TableBody";
import TableCell from "@mui/material/TableCell";
import TableContainer from "@mui/material/TableContainer";
import TableHead from "@mui/material/TableHead";
import TableRow from "@mui/material/TableRow";
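// Equivalent one-liner, using named exports from the package root:
// import { Table, TableBody, TableCell, TableContainer, TableHead, TableRow } from "@mui/material";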
|
d6a9859a3635c9cc13be40f5e4599ba6
|
{
"intermediate": 0.6486419439315796,
"beginner": 0.21895982325077057,
"expert": 0.13239827752113342
}
|
40,327
|
hi, ChatGPT. Can you please help me with my Unity project? I want to handle the crouching state, but I can't make the raycast work as I want it to. Here is my code snippet:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class FPSController : MonoBehaviour
{
[Header("Movement Speeds")]
[SerializeField] private float _walkSpeed = 5.0f;
[SerializeField] private float _sprintMultiplier = 2.0f;
[SerializeField] private float _crouchingSpeed = 3.0f;
[Header("Jump Parameters")]
[SerializeField] private float _jumpForce = 5.0f;
[SerializeField] private float _gravity = 9.81f;
[Header("Crouch Parameters")]
[SerializeField] private float _crouhingRaycastHeight = 1f;
[SerializeField] private LayerMask _layersToHit;
[SerializeField] private Transform _rayCastPoint;
[Header("Camera Parameters")]
[SerializeField] private float _mouseSensitivity = 1.0f;
[SerializeField] private float _upDownClampRange = 89f;
private Rigidbody _rigidbody;
private CharacterController characterController;
private Camera mainCamera;
private PlayerInputHandler inputHandler;
private Vector3 currentMovement;
private float verticalRotation;
private void Start()
{
characterController = GetComponent<CharacterController>();
mainCamera = Camera.main;
inputHandler = PlayerInputHandler.Instance;
if (inputHandler == null)
{
Debug.LogError("Input handler is not set!");
}
}
private void Update()
{
HandleMovement();
HandleRotation();
HandleJumping();
HandleCrouching();
}
void HandleMovement()
{
float speed = _walkSpeed * (inputHandler.SprintValue > 0 ? _sprintMultiplier : 1f);
Vector3 inputDirection = new Vector3(inputHandler.MoveInput.x, 0f, inputHandler.MoveInput.y);
Vector3 worldDirection = transform.TransformDirection(inputDirection);
worldDirection.Normalize();
// Handling crouching
if (inputHandler.CrouchTriggered)
{
currentMovement.x = worldDirection.x * _crouchingSpeed;
currentMovement.z = worldDirection.z * _crouchingSpeed;
}
else
{
currentMovement.x = worldDirection.x * speed;
currentMovement.z = worldDirection.z * speed;
}
characterController.Move(currentMovement * Time.deltaTime);
}
void HandleJumping()
{
if(characterController.isGrounded)
{
currentMovement.y = -0.5f;
if(inputHandler.JumpTriggered)
{
currentMovement.y = _jumpForce;
}
}
else
{
currentMovement.y -= _gravity * Time.deltaTime;
}
}
void HandleRotation()
{
float mouseXRotation = inputHandler.LookInput.x * _mouseSensitivity;
transform.Rotate(0f, mouseXRotation, 0f);
verticalRotation -= inputHandler.LookInput.y * _mouseSensitivity;
verticalRotation = Mathf.Clamp(verticalRotation, -_upDownClampRange, _upDownClampRange);
mainCamera.transform.localRotation = Quaternion.Euler(verticalRotation, 0f, 0f);
}
void HandleCrouching()
{
RaycastHit hit;
if (characterController.isGrounded & inputHandler.CrouchTriggered)
{
characterController.height = 1f;
}
else if (!inputHandler.CrouchTriggered & !Physics.Raycast(transform.position, _rayCastPoint.transform.up, out hit, _crouhingRaycastHeight))
{
characterController.height = 2f;
}
}
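A hedged sketch of the stand-up check: cast the ray straight up from the ray point and filter by _layersToHit so the player's own collider is ignored (whether _layersToHit is configured that way is an assumption):
void HandleCrouching()
{
    if (characterController.isGrounded && inputHandler.CrouchTriggered)
    {
        characterController.height = 1f;
    }
    // Only stand up when nothing is overhead within the raycast height.
    else if (!inputHandler.CrouchTriggered
             && !Physics.Raycast(_rayCastPoint.position, Vector3.up, _crouhingRaycastHeight, _layersToHit))
    {
        characterController.height = 2f;
    }
}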
|
33eb5f43ce3239e3139c1680fcf5704b
|
{
"intermediate": 0.5031343698501587,
"beginner": 0.40787503123283386,
"expert": 0.08899064362049103
}
|
40,328
|
I have a variable in my ReactJS project. How do I display that variable temporarily on the screen?
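A quick sketch, assuming a component with myVariable in scope (the name is illustrative); JSON.stringify also covers objects:
return (
  <div>
    <pre>{JSON.stringify(myVariable, null, 2)}</pre>
  </div>
);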
|
94af8b8b37021494cabec2641d6b996f
|
{
"intermediate": 0.42027443647384644,
"beginner": 0.457088828086853,
"expert": 0.12263676524162292
}
|
40,329
|
new Date() -> 9:00 AM, or 9:10 PM, etc. I want this formatting in JS
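A minimal sketch using the built-in locale formatting:
const label = new Date().toLocaleTimeString("en-US", { hour: "numeric", minute: "2-digit" });
// e.g. "9:00 AM" or "9:10 PM"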
|
b4a3ee860c365e289a18bf4a0073a04e
|
{
"intermediate": 0.4002532660961151,
"beginner": 0.30546247959136963,
"expert": 0.2942841947078705
}
|
40,330
|
As a CSR, enhance this to a more friendly tone:
"I understand you're considering canceling your VIP membership. Before you make your final decision, I would love to remind you of the exclusive perks you can enjoy as a VIP member, such as:
exclusive VIP prices,
the option to skip the month,
exclusive sales and promotions,
monthly curated selections,
a loyalty program <depends on the brand>,
free shipping and exchanges.
I'd hate for you to miss out on all that. But I'm here to listen – what's got you leaning towards canceling? Maybe we can sort it out together."
|
dcad50686cf32bba016f03ac33f08299
|
{
"intermediate": 0.3554961681365967,
"beginner": 0.3305346965789795,
"expert": 0.31396910548210144
}
|
40,331
|
I have a node / npm API with this JS configuration file:
const express = require('express');
const app = express();
const PORT = 8080;
// Connect to SQL
const mysql = require('mysql');
const db = mysql.createConnection({
host: 'localhost',
user: 'root',
password: '',
database: 'TransitU',
});
db.connect(err => {
if(err) { throw err; }
console.log("Connected to MySQL");
})
app.use(express.json());
app.listen(
PORT,
() => console.log(`API running on ${PORT}`)
);
// Get schedules
app.get('/classes/:id', (req, res) => {
const { id } = req.params;
const { data } = req.body;
console.log(req.params);
db.query(`SELECT * FROM user_classes WHERE user_id = ${id}`, (err, results) => {
if(err) res.status(400).send(err);
else res.status(200).send(results);
});
});
// Update schedules
app.post('/classes/:id', (req, res) => {
let data = req.body;
const { id } = req.params;
if(!data) res.status(400).send("You need to supply a JSON object");
else {
db.query(`INSERT INTO user_classes (user_id, classes) VALUES (${id}, '${JSON.stringify(data)}') ON DUPLICATE KEY UPDATE classes=VALUES(classes)`, (err, results) => {
console.log(err);
if(err) res.status(400).send(err);
else res.status(200).send(results);
});
}
});
How do I enable CORS
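A common sketch, assuming the cors middleware package is installed (npm install cors); register it before the routes:
const cors = require('cors');
app.use(cors()); // allows all origins; pass { origin: 'https://example.com' } to restrict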
|
9d1577ff8493ece51c0f8f5be49a8872
|
{
"intermediate": 0.5915569067001343,
"beginner": 0.2306765615940094,
"expert": 0.17776650190353394
}
|
40,332
|
Does this function return the JSON object or the Promise?
export function getSchedule(userID) {
return fetch(`${API_URL}/classes/1`).then(async response => await response.json());
}
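A short usage sketch: the function returns a Promise that resolves to the parsed JSON, so the caller still awaits it:
const schedule = await getSchedule(1); // schedule is the JSON object here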
|
8d812f85da6f8f80acae86bae3716f50
|
{
"intermediate": 0.6507217288017273,
"beginner": 0.2951582968235016,
"expert": 0.05411997437477112
}
|
40,333
|
Read through this website carefully and learn from it: https://nixtlaverse.nixtla.io/statsforecast/docs/tutorials/intermittentdata.html. See if I have implemented the code from the website correctly for my ensemble model:
from statsforecast import StatsForecast
from statsforecast.models import ADIDA, CrostonClassic, IMAPA, TSB
from statsforecast.utils import ConformalIntervals
import numpy as np
import polars as pl
# Polars option to display all rows
pl.Config.set_tbl_rows(None)
# Initialize the models
models = [
ADIDA(),
CrostonClassic(),
IMAPA(),
TSB(alpha_d = 0.2, alpha_p = 0.2)
]
# Initialize the StatsForecast model
sf = StatsForecast(models=models, freq='1w', n_jobs=-1)
# Perform cross-validation with a step size of 1 to mimic an expanding window
crossvalidation_df = sf.cross_validation(df=y_cl4_filtered, h=2, step_size=1, n_windows=8, sort_df=True)
# Calculate the ensemble mean
ensemble = crossvalidation_df[['ADIDA', 'CrostonClassic', 'IMAPA', 'TSB']].mean(axis=1)
# Create a Series for the ensemble mean
ensemble_series = pl.Series('Ensemble', ensemble)
# Add the ensemble mean as a new column to the DataFrame
crossvalidation_df = crossvalidation_df.with_columns(ensemble_series)
def wmape(y_true, y_pred):
return np.abs(y_true - y_pred).sum() / np.abs(y_true).sum()
# Calculate the WMAPE for the ensemble model
wmape_value = wmape(crossvalidation_df['y'], crossvalidation_df['Ensemble'])
print('Average WMAPE for Ensemble: ', round(wmape_value, 4))
# Calculate the errors for the ensemble model
errors = crossvalidation_df['y'] - crossvalidation_df['Ensemble']
# For an individual forecast
individual_accuracy = 1 - (abs(crossvalidation_df['y'] - crossvalidation_df['Ensemble']) / crossvalidation_df['y'])
individual_bias = (crossvalidation_df['Ensemble'] / crossvalidation_df['y']) - 1
# Add these calculations as new columns to DataFrame
crossvalidation_df = crossvalidation_df.with_columns([
individual_accuracy.alias("individual_accuracy"),
individual_bias.alias("individual_bias")
])
# Print the individual accuracy and bias for each week
for row in crossvalidation_df.to_dicts():
id = row['unique_id']
date = row['ds']
accuracy = row['individual_accuracy']
bias = row['individual_bias']
print(f"{id}, {date}, Individual Accuracy: {accuracy:.4f}, Individual Bias: {bias:.4f}")
# For groups of forecasts
group_accuracy = 1 - (errors.abs().sum() / crossvalidation_df['y'].sum())
group_bias = (crossvalidation_df['Ensemble'].sum() / crossvalidation_df['y'].sum()) - 1
# Print the average group accuracy and group bias over all folds for the ensemble model
print('Average Group Accuracy: ', round(group_accuracy, 4))
print('Average Group Bias: ', round(group_bias, 4))
# Fit the models on the entire dataset
sf.fit(y_cl4_fit_1)
# Instantiate the ConformalIntervals class
prediction_intervals = ConformalIntervals()
# Generate 24 months forecasts
forecasts_df = sf.forecast(h=52*2, prediction_intervals=prediction_intervals, level=[95], id_col='unique_id', sort_df=True)
# Apply the non-negative constraint to the forecasts of individual models
forecasts_df = forecasts_df.with_columns([
pl.when(pl.col('ADIDA') < 0).then(0).otherwise(pl.col('ADIDA')).alias('ADIDA'),
pl.when(pl.col('CrostonClassic') < 0).then(0).otherwise(pl.col('CrostonClassic')).alias('CrostonClassic'),
pl.when(pl.col('IMAPA') < 0).then(0).otherwise(pl.col('IMAPA')).alias('IMAPA'),
pl.when(pl.col('TSB') < 0).then(0).otherwise(pl.col('TSB')).alias('TSB'),
])
# Calculate the ensemble forecast
ensemble_forecast = forecasts_df[['ADIDA', 'CrostonClassic', 'IMAPA', 'TSB']].mean(axis=1)
# Calculate the lower and upper prediction intervals for the ensemble forecast
ensemble_lo_95 = forecasts_df.select(
[
pl.when(pl.col('ADIDA-lo-95') < 0).then(0).otherwise(pl.col('ADIDA-lo-95')).alias('ADIDA-lo-95'),
pl.when(pl.col('CrostonClassic-lo-95') < 0).then(0).otherwise(pl.col('CrostonClassic-lo-95')).alias('CrostonClassic-lo-95'),
pl.when(pl.col('IMAPA-lo-95') < 0).then(0).otherwise(pl.col('IMAPA-lo-95')).alias('IMAPA-lo-95'),
pl.when(pl.col('TSB-lo-95') < 0).then(0).otherwise(pl.col('TSB-lo-95')).alias('TSB-lo-95'),
]
).mean(axis=1)
ensemble_hi_95 = forecasts_df[['ADIDA-hi-95', 'CrostonClassic-hi-95', 'IMAPA-hi-95', 'TSB-hi-95']].mean(axis=1)
# Create Series for the ensemble forecast and its prediction intervals
ensemble_forecast_series = pl.Series('EnsembleForecast', ensemble_forecast)
ensemble_lo_95_series = pl.Series('Ensemble-lo-95', ensemble_lo_95)
ensemble_hi_95_series = pl.Series('Ensemble-hi-95', ensemble_hi_95)
# Add the ensemble forecast and its prediction intervals as new columns to the DataFrame
forecasts_df = forecasts_df.with_columns([ensemble_forecast_series, ensemble_lo_95_series, ensemble_hi_95_series])
# Round the ensemble forecast and prediction intervals and convert to integer
forecasts_df = forecasts_df.with_columns([
pl.col("EnsembleForecast").round().cast(pl.Int32),
pl.col("Ensemble-lo-95").round().cast(pl.Int32),
pl.col("Ensemble-hi-95").round().cast(pl.Int32)
])
# Reorder the columns
forecasts_df = forecasts_df.select([
"unique_id",
"ds",
"EnsembleForecast",
"Ensemble-lo-95",
"Ensemble-hi-95",
"ADIDA",
"ADIDA-lo-95",
"ADIDA-hi-95",
"CrostonClassic",
"CrostonClassic-lo-95",
"CrostonClassic-hi-95",
"IMAPA",
"IMAPA-lo-95",
"IMAPA-hi-95",
"TSB",
"TSB-lo-95",
"TSB-hi-95"
])
# Create an empty list
forecasts_list = []
# Append each row to the list
for row in forecasts_df.to_dicts():
forecasts_list.append(row)
# Print the list
for forecast in forecasts_list:
print(forecast)
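One spot worth double-checking against current polars: row-wise means. Newer polars versions drop DataFrame.mean(axis=1) in favor of a horizontal mean expression; a hedged equivalent for the ensemble column:
ensemble = crossvalidation_df.select(
    pl.mean_horizontal(['ADIDA', 'CrostonClassic', 'IMAPA', 'TSB']).alias('Ensemble')
)['Ensemble']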
|
d0551cf2b84de900620f257a8b2b1e10
|
{
"intermediate": 0.32175567746162415,
"beginner": 0.44684383273124695,
"expert": 0.2314005047082901
}
|
40,334
|
Show me the perfect steps and a full CRUD implementation in .NET Core.
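A minimal hedged sketch of just the controller layer with EF Core (TodoItem, TodoContext, and its DI registration are illustrative assumptions, not a full project):
[ApiController]
[Route("api/[controller]")]
public class TodoController : ControllerBase
{
    private readonly TodoContext _db;
    public TodoController(TodoContext db) => _db = db;
    [HttpGet]
    public async Task<IEnumerable<TodoItem>> GetAll() => await _db.Todos.ToListAsync();
    [HttpGet("{id}")]
    public async Task<ActionResult<TodoItem>> Get(long id)
    {
        var item = await _db.Todos.FindAsync(id);
        if (item is null) return NotFound();
        return item;
    }
    [HttpPost]
    public async Task<ActionResult<TodoItem>> Create(TodoItem item)
    {
        _db.Todos.Add(item);
        await _db.SaveChangesAsync();
        return CreatedAtAction(nameof(Get), new { id = item.Id }, item);
    }
    [HttpPut("{id}")]
    public async Task<IActionResult> Update(long id, TodoItem item)
    {
        if (id != item.Id) return BadRequest();
        _db.Entry(item).State = EntityState.Modified;
        await _db.SaveChangesAsync();
        return NoContent();
    }
    [HttpDelete("{id}")]
    public async Task<IActionResult> Delete(long id)
    {
        var item = await _db.Todos.FindAsync(id);
        if (item is null) return NotFound();
        _db.Todos.Remove(item);
        await _db.SaveChangesAsync();
        return NoContent();
    }
}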
|
b8582daa2b56d7b2b2147d57fa75ccc3
|
{
"intermediate": 0.518501877784729,
"beginner": 0.22172915935516357,
"expert": 0.2597689926624298
}
|
40,335
|
import acrcloud
import os
import eyed3
import requests
import json
from acrcloud.recognizer import ACRCloudRecognizer
# ACRCloud API credentials
ACR_HOST = ""
ACR_ACCESS_KEY = ""
ACR_ACCESS_SECRET = ""
# ACR Cloud setup
config = {
'host': ACR_HOST,
'access_key': ACR_ACCESS_KEY,
'access_secret': ACR_ACCESS_SECRET,
'timeout': 10 # seconds
}
dir(acrcloud)
# Initialize the ACRCloud recognizer
recognizer = ACRCloudRecognizer(config)
# Function to recognize the song from an audio file
def recognize_song(audio_file_path):
buffer = open(audio_file_path, 'rb').read()
result = recognizer.recognize_by_filebuffer(buffer, 0)
try:
result_dict = json.loads(result) # Parse the JSON string into a dictionary
return result_dict['metadata']['music'][0]
except (KeyError, IndexError, json.JSONDecodeError) as e:
print(f"Error while parsing result: {e}")
return None
# Function to set ID3 tags
def set_id3_tags_mp3(audio_file_path, tags):
audio_file = eyed3.load(audio_file_path)
if not audio_file.tag:
audio_file.initTag()
audio_file.tag.artist = tags.get('artists')[0].get('name')
audio_file.tag.album = tags.get('album').get('name')
audio_file.tag.album_artist = tags.get('artists')[0].get('name')
audio_file.tag.title = tags.get('title')
# Set the release year (if available)
release_date = tags.get('release_date')
if release_date and len(release_date) >= 4: # Check if release_date contains at least the year
year_string = release_date[:4]
try:
year = int(year_string)
# Some versions of eyeD3 require a Date object if available
if hasattr(eyed3.id3.tag, 'Date'):
audio_file.tag.recording_date = eyed3.id3.tag.Date(year)
else:
# Otherwise, set it as text_frame
audio_file.tag.setTextFrame("TDRC", year_string)
except ValueError:
print(f"Invalid date format in the tag: {release_date}")
# Add more tags here
audio_file.tag.genre = tags.get('genres')[0].get('name') # Assuming there's at least one genre
audio_file.tag.publisher = "Karthik" # Publisher tag set as 'karthik'
# To save the copyright label:
audio_file.tag.copyright = tags.get('label', '')
# To save the album cover page, you would need to download the image from a source
# and then do something like this:
# with open("path_to_cover_image.jpg", "rb") as album_art:
# audio_file.tag.images.set(3, album_art.read(), "image/jpeg", u"Description")
# Example of setting explicit tag in the comments (if you have explicit info):
audio_file.tag.comments.set(u"Explicit: Yes")
audio_file.tag.save(version=eyed3.id3.ID3_V2_3)
audio_file.tag.save()
# Replace 'path_to_your_audio_file.mp3' with the actual file path of the unknown song
if __name__ == "__main__":
audio_file_path = 'C:/Users/ILEG-i5-11/Downloads/Music/Unknown_file.mp3'
song_tags = recognize_song(audio_file_path)
if song_tags:
print(f'Song identified: {song_tags}')
set_id3_tags_mp3(audio_file_path, song_tags)
# Renaming the file after identifying the song and setting tags
artist_name = song_tags.get('artists')[0].get('name')
song_title = song_tags.get('title')
if artist_name and song_title:
new_file_name = f"{artist_name} - {song_title}.mp3"
new_file_path = os.path.join(os.path.dirname(audio_file_path), new_file_name)
os.rename(audio_file_path, new_file_path)
print(f"File has been renamed to: {new_file_name}")
else:
print('Could not identify the song.')
I need to add album art: saved externally at 1400x1400 and also embedded at 600x600,
by using Apple Music.
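A hedged sketch using Apple's public iTunes Search API plus Pillow for resizing; fetch_itunes_artwork is a new helper, and scaling the artworkUrl100 template up to 1400x1400 is an assumption about how that CDN behaves rather than a documented guarantee:
import io
import requests
from PIL import Image

def fetch_itunes_artwork(artist, title):
    # Search the iTunes catalog for the track and take the first hit.
    resp = requests.get("https://itunes.apple.com/search",
                        params={"term": f"{artist} {title}", "entity": "song", "limit": 1})
    results = resp.json().get("results", [])
    if not results:
        return None
    # artworkUrl100 is a 100x100 template; swapping the size segment usually
    # yields larger renditions (an assumption, not a documented guarantee).
    return results[0]["artworkUrl100"].replace("100x100", "1400x1400")

def save_and_embed_artwork(audio_file, artist, title, cover_path):
    url = fetch_itunes_artwork(artist, title)
    if not url:
        return
    data = requests.get(url).content
    # External copy at 1400x1400.
    Image.open(io.BytesIO(data)).convert("RGB").resize((1400, 1400)).save(cover_path, "JPEG")
    # Embedded copy at 600x600 (type 3 = front cover in ID3 terms).
    small = io.BytesIO()
    Image.open(io.BytesIO(data)).convert("RGB").resize((600, 600)).save(small, "JPEG")
    audio_file.tag.images.set(3, small.getvalue(), "image/jpeg", u"Front cover")
    audio_file.tag.save()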
|
33132d160b5383a6e919b504b4adb924
|
{
"intermediate": 0.4354809522628784,
"beginner": 0.38658344745635986,
"expert": 0.17793557047843933
}
|
40,336
|
I want to use my camera instead of the IP cam:
import camerawrapperToF
from arena_sdk_helper import *
import ctypes
import numpy as np
import torch
import cv2
import sklearn
import torch.nn as nn
import torch.nn.functional as F
import time  # used by the capture loop at the bottom (unless arena_sdk_helper already re-exports it)
global scale_torch, offset_torch, device
def get_scale_and_offet_from_hlt(hlt):
coordinates = ["CoordinateA", "CoordinateB", "CoordinateC"]
scale = np.ones((hlt.nodemap["Height"].value, hlt.nodemap["Width"].value, 4))
offset = np.zeros((hlt.nodemap["Height"].value, hlt.nodemap["Width"].value, 4))
for c in range(len(coordinates)):
hlt.nodemap["Scan3dCoordinateSelector"].value = coordinates[c]
scale[:, :, c] = hlt.nodemap["Scan3dCoordinateScale"].value
offset[:, :, c] = hlt.nodemap["Scan3dCoordinateOffset"].value
global scale_torch, offset_torch
scale_torch = torch.from_numpy(scale).float()
offset_torch = torch.from_numpy(offset).float()
return scale, offset
def convert_raw_buffer_to_xyz_i(buffer_hlt):
global scale_torch, offset_torch
pdata_16bit = ctypes.cast(buffer_hlt.pdata, ctypes.POINTER(ctypes.c_uint16))
image = np.ctypeslib.as_array(pdata_16bit, shape=(480, 640, 4))
image_torch = torch.from_numpy(image.astype(np.float32)).float()
converted_image = image_torch * scale_torch + offset_torch
image_xyz = converted_image[:, :, :3].numpy()
image_i = (image_torch[:, :, 3] / image_torch[:, :, 3].max() * 255).numpy().astype(np.uint8)
#cv2.imshow("Some Man",image_i)
#cv2.waitKey(0)
#print('image_xyz shape:', image_xyz.shape, " \ndata:", image_xyz[:, :, 2])
#cv2.imshow("Some Man",image_xyz)
#cv2.waitKey(0)
return image_xyz, image_i
def convert_xy_to_depth(x, y):
global device
buffer = device.get_buffer()
image_xyz, image_show = convert_raw_buffer_to_xyz_i(buffer)
return image_xyz[y, x, 2]
def initialize():
global device
# identifier can be i.e. SN, MAC address, IP address, User Defined Name
device = create_devices(["223400146"])
#device = devices[i]
device.nodemap["PtpEnable"].value = True
device.nodemap["AcquisitionStartMode"].value = "PTPSync"
device.nodemap["PTPSyncFrameRate"].value = 5.0
device.nodemap["GevSCPD"].value = 800
device.nodemap["GevSCFTD"].value = 1 * 50000000 # 50 ms in ns
device.nodemap["Scan3dCommunicationChannel"].value = 1
device.nodemap["Scan3dConfidenceThresholdEnable"].value = False
print('Scan3dOperatingMode:', device.nodemap['Scan3dOperatingMode'].value)
get_scale_and_offet_from_hlt(device)
device.start_stream()
buffer = device.get_buffer()
print('buffer:',buffer.width, buffer.height, buffer.bits_per_pixel)
print(convert_raw_buffer_to_xyz_i(buffer))
#if i < 2:
# only available for Helios cameras:
#free() # this lets the cameras go again to make it available for other applications.
if "__main__" == __name__:
initialize()
while True:
buffer = device.get_buffer()
image_xyz, image_show = convert_raw_buffer_to_xyz_i(buffer)
#cv2.imshow("Some Man",image_show)
#cv2.waitKey(0)
image_show = cv2.cvtColor(image_show, cv2.COLOR_GRAY2RGB)
for y in range(3*0):
for x in range(2):
#line = ' '.join([str(image_xyz[(y+1)*100, (x+1)*100, 2]) for x in range(5)])
#print(line)
YY = (y+1) *100
XX = (x+1) *200
cv2.rectangle(image_show, (XX - 3, YY - 3), (XX + 3, YY + 3), (0, 255, 0), 2)
cv2.putText(image_show, "{:.1f}".format(image_xyz[YY, XX, 2]/10, image_xyz[YY, XX, 0]/10, image_xyz[YY, XX, 1]/10),
(XX + 10, YY + 10),
cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)
XX = 310
YY = 100
for XX, YY in [[310, 100], [295, 105]]:
cv2.rectangle(image_show, (XX - 3, YY - 3), (XX + 3, YY + 3), (0, 255, 0), 2)
cv2.putText(image_show, "{:.1f}".format(image_xyz[YY, XX, 2]/10, image_xyz[YY, XX, 0]/10, image_xyz[YY, XX, 1]/10),
(XX + 10, YY + 10),
cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)
print(f"{XX}/{YY} -> {image_xyz[YY, XX, 0]/10, image_xyz[YY, XX, 1]/10, image_xyz[YY, XX, 2]/10}")
device.requeue_buffer(buffer)
cv2.imshow("Depth Distance",image_show)
cv2.waitKey(1)
time.sleep(0.1)
free()
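If the goal is simply a local webcam rather than the Helios ToF device, a minimal OpenCV sketch (device index 0 is an assumption) would replace the Arena SDK capture; note a plain webcam gives no depth channel, so convert_raw_buffer_to_xyz_i has no equivalent:
import cv2

cap = cv2.VideoCapture(0)  # first local camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Local camera", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()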
|
a87e8a31ca093ca6ed26b9c0fa92cb42
|
{
"intermediate": 0.3409009575843811,
"beginner": 0.4054265320301056,
"expert": 0.2536725699901581
}
|
40,337
|
Explain:
max() { $(echo -e $1 | tr " " "\n" | sort -n | head -1); }
min() { $(echo -e $1 | tr " " "\n" | sort -n | tail -1); }
log_dir="/tmp/fio"; res_file="/tmp/fio_results.csv"
echo "Size, Operation, IOPS, Throughput (MiB/s), Latency (usec)" > $res_file
for line in $(for i in $(ls -1v $log_dir/*/fio-*); do grep '^[rw].*_[0-9]\{1,4\}k' -A 4 $i | tr '\n' ' ' | sed -e 's/_/ /' -e 's/: *jobs=/ /' -e 's/ *IOPS=/ /' -e 's/,\sBW=/ /' -e 's/[MK]iB.*\slat\s(/ /' -e 's/):.*,\savg=//' -e 's/, .*$/\n/' | awk -F\  '{print $2, $3, $1, $4~/k$/?$4*1000:$4, $5, $6, $6=="msec"?$7*1000".00":$7}'; done | grep "\s\(read\|write\)\s" | awk -F\  '{print $1", "$3", "$4", "$5", "$7}'); do echo "$line" >> $res_file; done
for o in read write; do echo -e "OPERATION: $o"; for size in $(ls -1v $log_dir/*/fio-* | sed -e 's/^.*-//' -e 's/_.*$//' | sort -n | uniq); do unset iops; unset throughput; unset latency; iops=(); throughput=(); latency=(); res=$(grep "^$size, $o" /tmp/fio_results.csv); echo -e "\tBLOCK SIZE: $size"; for r in $res; do opsize=(${r//,/ }); iops+=(${opsize[2]}); throughput+=(${opsize[3]}); latency+=(${opsize[4]}); unset opsize; done; echo -e "\t\tMAX IOPS: $(echo -e ${iops[*]} | tr " " "\n" | sort -rn | head -1)\n\t\tMAX THROUGHPUT: $(max "${throughput[*]}")\n\t\tMIN LATENCY: $(echo -e ${latency[*]} | tr " " "\n" | sort -n | head -1)\n"; done; done
|
2892cc5df090815cd961f71215919f56
|
{
"intermediate": 0.37994584441185,
"beginner": 0.4232853949069977,
"expert": 0.19676876068115234
}
|
40,338
|
def get_album_artwork_url(apple_music_album_id):
"""
This is a placeholder function. You should replace its contents with code
that interacts with the Apple Music API to retrieve the URL for the album artwork.
"""
# For illustration purposes only
# You need to implement this API call according to Apple Music API documentation
return f"https://your.apple.music.api/album_artwork/{apple_music_album_id}"
User's token: I will get it from an Apple Music subscribed account via the web player.
Give me the full function.
I don't know how to do it.
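A hedged sketch against the Apple Music API's catalog albums endpoint; the developer token (plus any user token you pull from the web player), the storefront, and the 1400 size are values you must supply:
import requests

def get_album_artwork_url(apple_music_album_id, developer_token, storefront="us", size=1400):
    url = f"https://api.music.apple.com/v1/catalog/{storefront}/albums/{apple_music_album_id}"
    resp = requests.get(url, headers={"Authorization": f"Bearer {developer_token}"})
    resp.raise_for_status()
    # The artwork url is a template containing {w} and {h} placeholders.
    template = resp.json()["data"][0]["attributes"]["artwork"]["url"]
    return template.replace("{w}", str(size)).replace("{h}", str(size))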
|
f7bd41cc1a16007278f9fdcdd11dcf03
|
{
"intermediate": 0.6572792530059814,
"beginner": 0.25115135312080383,
"expert": 0.09156938642263412
}
|
40,339
|
max() { $(echo -e $1 | tr " " "\n" | sort -n | head -1); }
min() { $(echo -e $1 | tr " " "\n" | sort -n | tail -1); }
log_dir="/tmp/fio"; res_file="/tmp/fio_results.csv"
echo "Size, Operation, IOPS, Throughput (MiB/s), Latency (usec)" > $res_file
for line in $(for i in $(ls -1v $log_dir/*/fio-*); do grep '^[rw].*_[0-9]\{1,4\}k' -A 4 $i | tr '\n' ' ' | sed -e 's/_/ /' -e 's/: *jobs=/ /' -e 's/ *IOPS=/ /' -e 's/,\sBW=/ /' -e 's/[MK]iB.*\slat\s(/ /' -e 's/):.*,\savg=//' -e 's/, .*$/\n/' | awk -F\  '{print $2, $3, $1, $4~/k$/?$4*1000:$4, $5, $6, $6=="msec"?$7*1000".00":$7}'; done | grep "\s\(read\|write\)\s" | awk -F\  '{print $1", "$3", "$4", "$5", "$7}'); do echo "$line" >> $res_file; done
for o in read write; do echo -e "OPERATION: $o"; for size in $(ls -1v $log_dir/*/fio-* | sed -e 's/^.*-//' -e 's/_.*$//' | sort -n | uniq); do unset iops; unset throughput; unset latency; iops=(); throughput=(); latency=(); res=$(grep "^$size, $o" /tmp/fio_results.csv); echo -e "\tBLOCK SIZE: $size"; for r in $res; do opsize=(${r//,/ }); iops+=(${opsize[2]}); throughput+=(${opsize[3]}); latency+=(${opsize[4]}); unset opsize; done; echo -e "\t\tMAX IOPS: $(echo -e ${iops[*]} | tr " " "\n" | sort -rn | head -1)\n\t\tMAX THROUGHPUT: $(max "${throughput[*]}")\n\t\tMIN LATENCY: $(echo -e ${latency[*]} | tr " " "\n" | sort -n | head -1)\n"; done; done
|
591ab5386a08b5ebae55adcfc34c8d78
|
{
"intermediate": 0.4466013014316559,
"beginner": 0.33684808015823364,
"expert": 0.21655061841011047
}
|
40,340
|
private String getReturnOrderNoFromLatestStatusDate(List<CarrierData> carrierDataList) {
if (carrierDataList.size() == 1) {
return carrierDataList.get(0).getReturnOrderNo();
}
LocalDateTime latestDate = null;
CarrierData latestCarrierData = null;
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd");
for (CarrierData carrierData : carrierDataList) {
if (carrierData.getStatusDetails() != null && carrierData.getStatusDetails().getDate() != null) {
LocalDateTime statusDate;
try {
statusDate = LocalDateTime.parse(carrierData.getStatusDetails().getDate(), formatter);
} catch (DateTimeParseException e) {
log.debug("Invalid date format, cannot parse date '{}' for order ref {}", carrierData.getStatusDetails().getDate(),carrierData.getOrderRef());
continue;
}
if (statusDate != null && (latestDate == null || statusDate.isAfter(latestDate))) {
latestDate = statusDate;
latestCarrierData = carrierData;
}
}
}
return latestCarrierData == null ? null : latestCarrierData.getReturnOrderNo();
}
25-02-2024 16:32:14.711 DEBUG [http-nio-8080-exec-2] c.m.r.s.i.CommsHubForStatusCodeServiceImpl.getReturnOrderNoFromLatestStatusDate [CommsHubForStatusCodeServiceImpl.java : 170] : Invalid date format, cannot parse date '2023-08-28' for order ref 301-20230713-01362
Fix and update the code.
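A likely fix, inferred from the log: '2023-08-28' is a date without a time-of-day, so LocalDateTime.parse with pattern yyyy-MM-dd cannot succeed; LocalDate is the matching type (the sketch keeps the rest of the method unchanged):
LocalDate latestDate = null;
...
LocalDate statusDate;
try {
    // ISO_LOCAL_DATE is the default, so no explicit formatter is needed for yyyy-MM-dd
    statusDate = LocalDate.parse(carrierData.getStatusDetails().getDate());
} catch (DateTimeParseException e) {
    log.debug("Invalid date format, cannot parse date '{}' for order ref {}",
            carrierData.getStatusDetails().getDate(), carrierData.getOrderRef());
    continue;
}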
|
8bb4a3e788f059170ef43123b4de9023
|
{
"intermediate": 0.5723598003387451,
"beginner": 0.21069151163101196,
"expert": 0.21694862842559814
}
|
40,341
|
I want to wait for the user object to become available after loading so I can use it for the backend.
import { useState } from "react";
import { useUser } from "@auth0/nextjs-auth0/client";
import { makeStyles } from "@mui/styles"; // assumed import; makeStyles is used below but was not imported
// Get the user
const useStyles = makeStyles({
field: {
marginTop: 20,
marginBottom: 20,
display: "block",
},
});
export default function Schedule() {
const { user } = useUser();
const classes = useStyles();
const [classTitle, setClassTitle] = useState("");
const [start, setStart] = useState("");
const [end, setEnd] = useState("");
const [address, setAddress] = useState("");
const [roomNum, setRoomNum] = useState("");
const [classList, setClassList] = useState([]);
// Load current schedule
setClassList(Backend.getSchedule(user.sid));
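A sketch using useEffect so the call runs only once user has loaded, instead of on every render (Backend.getSchedule returning a Promise is an assumption based on the earlier fetch-based helper):
import { useEffect } from "react";
// inside Schedule():
useEffect(() => {
  if (!user) return; // user is undefined until Auth0 finishes loading
  Backend.getSchedule(user.sid).then(setClassList);
}, [user]);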
|
8744ad457bf00cd35fa4eb2a7521857e
|
{
"intermediate": 0.34395232796669006,
"beginner": 0.514305830001831,
"expert": 0.14174185693264008
}
|
40,342
|
private String getReturnOrderNoFromLatestStatusDate(List<CarrierData> carrierDataList) {
    if (carrierDataList.size() == 1) {
        return carrierDataList.get(0).getReturnOrderNo();
    }
    LocalDate latestDate = null;
    CarrierData latestCarrierData = null;
    DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyyMMdd");
    for (CarrierData carrierData : carrierDataList) {
        if (carrierData.getStatusDetails() != null && carrierData.getStatusDetails().getDate() != null) {
            String handOverDateStr = carrierData.getStatusDetails().getDate();
            LocalDate statusDate;
            try {
                statusDate = LocalDate.parse(handOverDateStr, formatter);
            } catch (DateTimeParseException e) {
                log.debug("Invalid date format, cannot parse date '{}' for order ref {}", handOverDateStr, carrierData.getOrderRef());
                continue;
            }
            if (statusDate != null && (latestDate == null || statusDate.isAfter(latestDate))) {
                latestDate = statusDate;
                latestCarrierData = carrierData;
            }
        }
    }
    return latestCarrierData == null ? null : latestCarrierData.getReturnOrderNo();
}
Format this.
|
4555fc21a2f3d6ded3e81bf5f09e580d
|
{
"intermediate": 0.47877150774002075,
"beginner": 0.3077932596206665,
"expert": 0.21343521773815155
}
|
40,343
|
private String getReturnOrderNoFromLatestStatusDate(List<CarrierData> carrierDataList) {
if (carrierDataList.size() == 1) {
return carrierDataList.get(0).getReturnOrderNo();
}
LocalDate latestDate = null;
CarrierData latestCarrierData = null;
for (CarrierData carrierData : carrierDataList) {
if (carrierData.getStatusDetails() != null && carrierData.getStatusDetails().getDate() != null) {
LocalDate statusDate;
try {
String handOverDateStr = carrierData.getStatusDetails().getDate();
statusDate= LocalDate.parse(handOverDateStr);
} catch (DateTimeParseException e) {
log.debug("Invalid date format, cannot parse date '{}' for order ref {}", carrierData.getStatusDetails().getDate(),carrierData.getOrderRef());
continue;
}
if (statusDate != null && (latestDate == null || statusDate.isAfter(latestDate))) {
latestDate = statusDate;
latestCarrierData = carrierData;
}
}
}
return latestCarrierData == null ? null : latestCarrierData.getReturnOrderNo();
}
Refactor this.
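One possible stream-based refactor (a sketch; assumes Java 9+ for Optional.stream and Map.entry, and the same CarrierData accessors):
private Optional<LocalDate> parseStatusDate(CarrierData carrierData) {
    try {
        return Optional.of(LocalDate.parse(carrierData.getStatusDetails().getDate()));
    } catch (DateTimeParseException e) {
        log.debug("Invalid date format, cannot parse date '{}' for order ref {}",
                carrierData.getStatusDetails().getDate(), carrierData.getOrderRef());
        return Optional.empty();
    }
}

private String getReturnOrderNoFromLatestStatusDate(List<CarrierData> carrierDataList) {
    if (carrierDataList.size() == 1) {
        return carrierDataList.get(0).getReturnOrderNo();
    }
    return carrierDataList.stream()
            .filter(cd -> cd.getStatusDetails() != null && cd.getStatusDetails().getDate() != null)
            .flatMap(cd -> parseStatusDate(cd).map(d -> Map.entry(d, cd)).stream())
            .max(Map.Entry.comparingByKey())
            .map(e -> e.getValue().getReturnOrderNo())
            .orElse(null);
}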
|
e75ea198c03ea02c7396b875e1ed1407
|
{
"intermediate": 0.4209676682949066,
"beginner": 0.30959686636924744,
"expert": 0.26943546533584595
}
|
40,344
|
class NeutralInject(nn.Module):
def __init__(self, input_dim, encoding_dim):
super().__init__()
self.encoder = nn.Sequential(
nn.Linear(input_dim, encoding_dim),
nn.Linear(encoding_dim, encoding_dim//2),
nn.Linear(encoding_dim//2, encoding_dim//4),
)
self.encoder_parallel_neutral_inject = nn.Sequential(
nn.Linear(input_dim, encoding_dim),
nn.Linear(encoding_dim, encoding_dim//2),
nn.Linear(encoding_dim//2, encoding_dim//4),
)
self.decoder = nn.Sequential(
nn.Linear(encoding_dim//4, encoding_dim//2),
nn.Linear(encoding_dim//2, encoding_dim),
nn.Linear(encoding_dim, input_dim),
)
def forward(self, x):
x1 = self.encoder(x)
#
n = self.encoder_parallel_neutral_inject(x)
if self.training: # during training; note self.train() would switch modes and is always truthy
x1 = x1 + n - n # this will not give a correct gradient via backward for encoder_parallel_neutral_inject.
x = self.decoder(x1)
return x
An unusual task:
Only the model.encoder_parallel_neutral_inject layer is trained. During training its target must equal the output of self.encoder,
while the input of self.decoder must also equal the output of self.encoder,
i.e. encoder_parallel_neutral_inject must not affect the pipeline and forward during training at all (or only minimally).
If it is inserted naively, it affects the pipeline during training and changes the output, which must not happen (or only minimally).
In other words, I need to insert a layer that does not change how the model works during fit, but still gets trained.
The model may be changed, except for the return x of def forward (changing the model output and fit is not allowed). The point is that there will be many such encoder_parallel_neutral_inject layers; the model is large, and this is a separate piece of it.
One option:
Is x1 = x1 + n - n really done so that the graph/pipeline is not broken? Will it work? Torch won't just cancel out + n - n, will it?
Or will it simply say there is nothing to update? How will it compute gradients if the layer's influence is 0? This will not give a correct gradient via backward for encoder_parallel_neutral_inject.
Explain what you would do about this in comments to the code. Code:
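For the record: autograd does not algebraically cancel + n - n; it backpropagates through both terms and the two gradient paths sum to exactly zero, so the parallel branch indeed learns nothing. A hedged sketch of one workaround that keeps the forward value bit-identical: add a zero-valued scalar whose backward carries the auxiliary MSE gradient. Caveat: that gradient is scaled by the sum of the output gradients (which can shrink or even flip its sign), so stashing self.aux_loss and adding it to the loss is cleaner if the training loop can ever be touched:
def forward(self, x):
    x1 = self.encoder(x)
    n = self.encoder_parallel_neutral_inject(x)
    if self.training:
        # Target: make the parallel branch match the (detached) main encoding.
        aux = F.mse_loss(n, x1.detach())
        # aux - aux.detach() is exactly 0 in value, so the decoder input and the
        # model output are unchanged, yet backward still produces d(aux)/d(theta)
        # for the parallel branch only.
        x1 = x1 + (aux - aux.detach())
    x = self.decoder(x1)
    return x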
|
be71e9ca0a1207995ba5c76f0b79022e
|
{
"intermediate": 0.19933465123176575,
"beginner": 0.5745478868484497,
"expert": 0.22611738741397858
}
|
40,345
|
import acrcloud
import os
import eyed3
import requests
import json
from acrcloud.recognizer import ACRCloudRecognizer
# ACRCloud API credentials
ACR_HOST = ""
ACR_ACCESS_KEY = ""
ACR_ACCESS_SECRET = ""
# ACR Cloud setup
config = {
'host': ACR_HOST,
'access_key': ACR_ACCESS_KEY,
'access_secret': ACR_ACCESS_SECRET,
'timeout': 10 # seconds
}
dir(acrcloud)
# Initialize the ACRCloud recognizer
recognizer = ACRCloudRecognizer(config)
# Function to recognize the song from an audio file
def recognize_song(audio_file_path):
buffer = open(audio_file_path, 'rb').read()
result = recognizer.recognize_by_filebuffer(buffer, 0)
try:
result_dict = json.loads(result) # Parse the JSON string into a dictionary
return result_dict['metadata']['music'][0]
except (KeyError, IndexError, json.JSONDecodeError) as e:
print(f"Error while parsing result: {e}")
return None
# Function to set ID3 tags
def set_id3_tags_mp3(audio_file_path, tags):
audio_file = eyed3.load(audio_file_path)
if not audio_file.tag:
audio_file.initTag()
audio_file.tag.artist = tags.get('artists')[0].get('name')
audio_file.tag.album = tags.get('album').get('name')
audio_file.tag.album_artist = tags.get('artists')[0].get('name')
audio_file.tag.title = tags.get('title')
# Set the release year (if available)
release_date = tags.get('release_date')
if release_date and len(release_date) >= 4: # Check if release_date contains at least the year
year_string = release_date[:4]
try:
year = int(year_string)
# Some versions of eyeD3 require a Date object if available
if hasattr(eyed3.id3.tag, 'Date'):
audio_file.tag.recording_date = eyed3.id3.tag.Date(year)
else:
# Otherwise, set it as text_frame
audio_file.tag.setTextFrame("TDRC", year_string)
except ValueError:
print(f"Invalid date format in the tag: {release_date}")
# Add more tags here
audio_file.tag.genre = tags.get('genres')[0].get('name') # Assuming there's at least one genre
audio_file.tag.publisher = "Karthik" # Publisher tag set as 'karthik'
# To save the copyright label:
audio_file.tag.copyright = tags.get('label', '')
# To save the album cover page, you would need to download the image from a source
# and then do something like this:
# with open("path_to_cover_image.jpg", "rb") as album_art:
# audio_file.tag.images.set(3, album_art.read(), "image/jpeg", u"Description")
# Example of setting explicit tag in the comments (if you have explicit info):
audio_file.tag.comments.set(u"Explicit: Yes")
audio_file.tag.save(version=eyed3.id3.ID3_V2_3)
audio_file.tag.save()
# Replace 'path_to_your_audio_file.mp3' with the actual file path of the unknown song
if __name__ == "__main__":
audio_file_path = 'C:/Users/ILEG-i5-11/Downloads/Music/Unknown_file.mp3'
song_tags = recognize_song(audio_file_path)
if song_tags:
print(f'Song identified: {song_tags}')
set_id3_tags_mp3(audio_file_path, song_tags)
# Renaming the file after identifying the song and setting tags
artist_name = song_tags.get('artists')[0].get('name')
song_title = song_tags.get('title')
if artist_name and song_title:
new_file_name = f"{artist_name} - {song_title}.mp3"
new_file_path = os.path.join(os.path.dirname(audio_file_path), new_file_name)
os.rename(audio_file_path, new_file_path)
print(f"File has been renamed to: {new_file_name}")
else:
print('Could not identify the song.')
I need to add the track number within the album,
and also parameters like all the artists who wrote that song and sang that song.
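A small sketch with eyeD3's documented fields; the track-number key and any per-artist role data are assumptions about what your ACRCloud payload actually contains:
track_no = tags.get('album', {}).get('track_no') or tags.get('track_number')
if track_no:
    audio_file.tag.track_num = int(track_no)  # eyeD3 also accepts a (track, total) tuple
contributing = ', '.join(a.get('name', '') for a in tags.get('artists', []))
if contributing:
    audio_file.tag.artist = contributing  # all credited artists, comma-separated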
|
ff32a37eb75cb3ea918b7e7e95b101a1
|
{
"intermediate": 0.4354809522628784,
"beginner": 0.38658344745635986,
"expert": 0.17793557047843933
}
|
40,346
|
How do I make this React Material UI table have less width?
<TableContainer component={Paper}>
<Table style={{ marginRight: "50%" }} aria-label="Classes">
<TableHead>
<TableRow>
<TableCell>Class</TableCell>
<TableCell align="right">Room Number</TableCell>
<TableCell align="right">Time</TableCell>
<TableCell align="right">Address</TableCell>
</TableRow>
</TableHead>
<TableBody>
{classList.map((item, index) => (
<TableRow key={index}>
<TableCell component="th" scope="row">
{index + 1}
</TableCell>
<TableCell align="right">{item.title}</TableCell>
<TableCell align="right">{item.time}</TableCell>
<TableCell align="right">{item.address}</TableCell>
<TableCell align="right">
<Button style={{width: "100%", marginTop: "15px", backgroundColor: "red", color: "white"}} className={styles.deleteButton} onClick={() => handleDelete(index)}>
Delete
</Button>
</TableCell>
</TableRow>
))}
</TableBody>
</Table>
</TableContainer>
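A couple of hedged options with MUI v5's sx prop (the 650px cap is illustrative); constraining the container is usually enough:
<TableContainer component={Paper} sx={{ maxWidth: 650 }}>
  <Table sx={{ minWidth: 300 }} size="small" aria-label="Classes">
    ...
  </Table>
</TableContainer>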
|
8534a4fbfe1067f2299653c729a9155b
|
{
"intermediate": 0.261080801486969,
"beginner": 0.610818088054657,
"expert": 0.1281011402606964
}
|
40,347
|
import acrcloud
import os
import eyed3
import requests
import json
from acrcloud.recognizer import ACRCloudRecognizer
# ACRCloud API credentials
ACR_HOST = ""
ACR_ACCESS_KEY = ""
ACR_ACCESS_SECRET = ""
# ACR Cloud setup
config = {
'host': ACR_HOST,
'access_key': ACR_ACCESS_KEY,
'access_secret': ACR_ACCESS_SECRET,
'timeout': 10 # seconds
}
dir(acrcloud)
# Initialize the ACRCloud recognizer
recognizer = ACRCloudRecognizer(config)
# Function to recognize the song from an audio file
def recognize_song(audio_file_path):
buffer = open(audio_file_path, 'rb').read()
result = recognizer.recognize_by_filebuffer(buffer, 0)
try:
result_dict = json.loads(result) # Parse the JSON string into a dictionary
return result_dict['metadata']['music'][0]
except (KeyError, IndexError, json.JSONDecodeError) as e:
print(f"Error while parsing result: {e}")
return None
# Function to set ID3 tags
def set_id3_tags_mp3(audio_file_path, tags):
audio_file = eyed3.load(audio_file_path)
if not audio_file.tag:
audio_file.initTag()
audio_file.tag.artist = tags.get('artists')[0].get('name')
audio_file.tag.album = tags.get('album').get('name')
audio_file.tag.album_artist = tags.get('artists')[0].get('name')
audio_file.tag.title = tags.get('title')
# Assuming that track number information is present in the response
track_str = tags.get('album').get('track_no')
if track_str:
try:
track_num = int(track_str)
audio_file.tag.track_num = track_num
except ValueError:
print(f"Invalid track number format in the tag: {track_str}")
# Assuming all artists information like 'songwriters' and 'performers' is available under the 'artists' key
all_artists = tags.get('artists')
if all_artists:
audio_file.tag.composer = ', '.join(artist.get('name') for artist in all_artists if artist.get('role') == 'Composer')
audio_file.tag.performer = ', '.join(artist.get('name') for artist in all_artists if artist.get('role') == 'Performer')
# Set the release year (if available)
release_date = tags.get('release_date')
if release_date and len(release_date) >= 4: # Check if release_date contains at least the year
year_string = release_date[:4]
try:
year = int(year_string)
# Some versions of eyeD3 require a Date object if available
if hasattr(eyed3.id3.tag, 'Date'):
audio_file.tag.recording_date = eyed3.id3.tag.Date(year)
else:
# Otherwise, set it as text_frame
audio_file.tag.setTextFrame("TDRC", year_string)
except ValueError:
print(f"Invalid date format in the tag: {release_date}")
# Add more tags here
audio_file.tag.genre = tags.get('genres')[0].get('name') # Assuming there's at least one genre
audio_file.tag.publisher = "Karthik" # Publisher tag set as 'karthik'
# To save the copyright label:
audio_file.tag.copyright = tags.get('label', '')
# To save the album cover page, you would need to download the image from a source
# and then do something like this:
# with open("path_to_cover_image.jpg", "rb") as album_art:
# audio_file.tag.images.set(3, album_art.read(), "image/jpeg", u"Description")
# Example of setting explicit tag in the comments (if you have explicit info):
audio_file.tag.comments.set(u"Explicit: Yes")
audio_file.tag.save(version=eyed3.id3.ID3_V2_3)
audio_file.tag.save()
# Replace 'path_to_your_audio_file.mp3' with the actual file path of the unknown song
if __name__ == "__main__":
audio_file_path = 'C:/Users/ILEG-i5-11/Downloads/Music/Unknown_file.mp3'
song_tags = recognize_song(audio_file_path)
if song_tags:
print(f'Song identified: {song_tags}')
set_id3_tags_mp3(audio_file_path, song_tags)
# Renaming the file after identifying the song and setting tags
artist_name = song_tags.get('artists')[0].get('name')
song_title = song_tags.get('title')
if artist_name and song_title:
new_file_name = f"{artist_name} - {song_title}.mp3"
new_file_path = os.path.join(os.path.dirname(audio_file_path), new_file_name)
os.rename(audio_file_path, new_file_path)
print(f"File has been renamed to: {new_file_name}")
else:
print('Could not identify the song.')
The album covers should save externally at 1400x1400 and embed into the song internally at 600x600, from MusicBrainz.
I need this in albumcoversprovider.py,
and it must integrate seamlessly with my main code.
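A hedged albumcoversprovider.py sketch against the public MusicBrainz search API and Cover Art Archive; looking the release up by artist and album name is an assumption about your metadata:
import io
import requests
from PIL import Image

def get_release_mbid(artist, album):
    # Query the MusicBrainz search API for the release MBID.
    resp = requests.get("https://musicbrainz.org/ws/2/release",
                        params={"query": f'artist:"{artist}" AND release:"{album}"',
                                "fmt": "json", "limit": 1},
                        headers={"User-Agent": "albumcoversprovider/0.1"})
    releases = resp.json().get("releases", [])
    return releases[0]["id"] if releases else None

def fetch_front_cover(mbid):
    # Cover Art Archive serves the front image for a release MBID.
    resp = requests.get(f"https://coverartarchive.org/release/{mbid}/front")
    return resp.content if resp.ok else None

def provide_covers(artist, album, external_path):
    mbid = get_release_mbid(artist, album)
    data = fetch_front_cover(mbid) if mbid else None
    if not data:
        return None
    Image.open(io.BytesIO(data)).convert("RGB").resize((1400, 1400)).save(external_path, "JPEG")
    small = io.BytesIO()
    Image.open(io.BytesIO(data)).convert("RGB").resize((600, 600)).save(small, "JPEG")
    return small.getvalue()  # hand this to eyed3's tag.images.set(3, ...)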
|
53cd05294dda40c2a9a59b95294e19c1
|
{
"intermediate": 0.45163407921791077,
"beginner": 0.41026192903518677,
"expert": 0.13810405135154724
}
|
40,348
|
import acrcloud
import os
import eyed3
import requests
import json
from acrcloud.recognizer import ACRCloudRecognizer
# ACRCloud API credentials
ACR_HOST = ""
ACR_ACCESS_KEY = ""
ACR_ACCESS_SECRET = ""
# ACR Cloud setup
config = {
'host': ACR_HOST,
'access_key': ACR_ACCESS_KEY,
'access_secret': ACR_ACCESS_SECRET,
'timeout': 10 # seconds
}
dir(acrcloud)
# Initialize the ACRCloud recognizer
recognizer = ACRCloudRecognizer(config)
# Function to recognize the song from an audio file
def recognize_song(audio_file_path):
buffer = open(audio_file_path, 'rb').read()
result = recognizer.recognize_by_filebuffer(buffer, 0)
try:
result_dict = json.loads(result) # Parse the JSON string into a dictionary
return result_dict['metadata']['music'][0]
except (KeyError, IndexError, json.JSONDecodeError) as e:
print(f"Error while parsing result: {e}")
return None
# Function to set ID3 tags
def set_id3_tags_mp3(audio_file_path, tags):
audio_file = eyed3.load(audio_file_path)
if not audio_file.tag:
audio_file.initTag()
audio_file.tag.artist = tags.get('artists')[0].get('name')
audio_file.tag.album = tags.get('album').get('name')
audio_file.tag.album_artist = tags.get('artists')[0].get('name')
audio_file.tag.title = tags.get('title')
# Set the release year (if available)
release_date = tags.get('release_date')
if release_date and len(release_date) >= 4: # Check if release_date contains at least the year
year_string = release_date[:4]
try:
year = int(year_string)
# Some versions of eyeD3 require a Date object if available
if hasattr(eyed3.id3.tag, 'Date'):
audio_file.tag.recording_date = eyed3.id3.tag.Date(year)
else:
# Otherwise, set it as text_frame
audio_file.tag.setTextFrame("TDRC", year_string)
except ValueError:
print(f"Invalid date format in the tag: {release_date}")
# Add more tags here
audio_file.tag.genre = tags.get('genres')[0].get('name') # Assuming there's at least one genre
audio_file.tag.publisher = "Karthik" # Publisher tag set as 'karthik'
# To save the copyright label:
audio_file.tag.copyright = tags.get('label', '')
# To save the album cover page, you would need to download the image from a source
# and then do something like this:
# with open("path_to_cover_image.jpg", "rb") as album_art:
# audio_file.tag.images.set(3, album_art.read(), "image/jpeg", u"Description")
# Example of setting explicit tag in the comments (if you have explicit info):
audio_file.tag.comments.set(u"Explicit: Yes")
audio_file.tag.save(version=eyed3.id3.ID3_V2_3)
audio_file.tag.save()
# Replace 'path_to_your_audio_file.mp3' with the actual file path of the unknown song
if __name__ == "__main__":
audio_file_path = 'C:/Users/ILEG-i5-11/Downloads/Music/Unknown_file.mp3'
song_tags = recognize_song(audio_file_path)
if song_tags:
print(f'Song identified: {song_tags}')
set_id3_tags_mp3(audio_file_path, song_tags)
# Renaming the file after identifying the song and setting tags
artist_name = song_tags.get('artists')[0].get('name')
song_title = song_tags.get('title')
if artist_name and song_title:
new_file_name = f"{artist_name} - {song_title}.mp3"
new_file_path = os.path.join(os.path.dirname(audio_file_path), new_file_name)
os.rename(audio_file_path, new_file_path)
print(f"File has been renamed to: {new_file_name}")
else:
print('Could not identify the song.')
Give me a Python program which embeds the album art internally at 600x600 and saves it externally at 1400x1400 (maximum high resolution).
Don't use MusicBrainz.
|
c4375448939c609f8e2de16f26e5898e
|
{
"intermediate": 0.4354809522628784,
"beginner": 0.38658344745635986,
"expert": 0.17793557047843933
}
|
40,349
|
How do I run a Hugging Face DALL-E model locally?
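A hedged local-inference sketch: OpenAI's DALL-E weights are not on the Hugging Face Hub, so in practice you load a community text-to-image checkpoint through diffusers (the model id below is an illustrative stand-in; swap in whichever checkpoint you mean):
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint, not DALL-E itself
    torch_dtype=torch.float16,
).to("cuda")  # requires a CUDA GPU; drop float16 and .to("cuda") for (slow) CPU
image = pipe("an astronaut riding a horse").images[0]
image.save("out.png")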
|
d0d65aaf5ea66459efe95c0f9e86cedc
|
{
"intermediate": 0.12245701998472214,
"beginner": 0.1715681105852127,
"expert": 0.705974817276001
}
|
40,350
|
import acrcloud
import os
import eyed3
import requests
import json
from acrcloud.recognizer import ACRCloudRecognizer
# ACRCloud API credentials
ACR_HOST = ""
ACR_ACCESS_KEY = ""
ACR_ACCESS_SECRET = ""
# ACR Cloud setup
config = {
'host': ACR_HOST,
'access_key': ACR_ACCESS_KEY,
'access_secret': ACR_ACCESS_SECRET,
'timeout': 10 # seconds
}
dir(acrcloud)
# Initialize the ACRCloud recognizer
recognizer = ACRCloudRecognizer(config)
# Function to recognize the song from an audio file
def recognize_song(audio_file_path):
buffer = open(audio_file_path, 'rb').read()
result = recognizer.recognize_by_filebuffer(buffer, 0)
try:
result_dict = json.loads(result) # Parse the JSON string into a dictionary
return result_dict['metadata']['music'][0]
except (KeyError, IndexError, json.JSONDecodeError) as e:
print(f"Error while parsing result: {e}")
return None
# Function to set ID3 tags
def set_id3_tags_mp3(audio_file_path, tags):
audio_file = eyed3.load(audio_file_path)
if not audio_file.tag:
audio_file.initTag()
audio_file.tag.artist = tags.get('artists')[0].get('name')
audio_file.tag.album = tags.get('album').get('name')
audio_file.tag.album_artist = tags.get('artists')[0].get('name')
audio_file.tag.title = tags.get('title')
# Set the release year (if available)
release_date = tags.get('release_date')
if release_date and len(release_date) >= 4: # Check if release_date contains at least the year
year_string = release_date[:4]
try:
year = int(year_string)
# Some versions of eyeD3 require a Date object if available
if hasattr(eyed3.id3.tag, 'Date'):
audio_file.tag.recording_date = eyed3.id3.tag.Date(year)
else:
# Otherwise, set it as text_frame
audio_file.tag.setTextFrame("TDRC", year_string)
except ValueError:
print(f"Invalid date format in the tag: {release_date}")
# Add more tags here
audio_file.tag.genre = tags.get('genres')[0].get('name') # Assuming there's at least one genre
audio_file.tag.publisher = "Karthik" # Publisher tag set as 'karthik'
# To save the copyright label:
audio_file.tag.copyright = tags.get('label', '')
# To save the album cover page, you would need to download the image from a source
# and then do something like this:
# with open("path_to_cover_image.jpg", "rb") as album_art:
# audio_file.tag.images.set(3, album_art.read(), "image/jpeg", u"Description")
# Example of setting explicit tag in the comments (if you have explicit info):
audio_file.tag.comments.set(u"Explicit: Yes")
audio_file.tag.save(version=eyed3.id3.ID3_V2_3)
audio_file.tag.save()
# Replace 'path_to_your_audio_file.mp3' with the actual file path of the unknown song
if __name__ == "__main__":
audio_file_path = 'C:/Users/ILEG-i5-11/Downloads/Music/Unknown_file.mp3'
song_tags = recognize_song(audio_file_path)
if song_tags:
print(f'Song identified: {song_tags}')
set_id3_tags_mp3(audio_file_path, song_tags)
# Renaming the file after identifying the song and setting tags
artist_name = song_tags.get('artists')[0].get('name')
song_title = song_tags.get('title')
if artist_name and song_title:
new_file_name = f"{artist_name} - {song_title}.mp3"
new_file_path = os.path.join(os.path.dirname(audio_file_path), new_file_name)
os.rename(audio_file_path, new_file_path)
print(f"File has been renamed to: {new_file_name}")
else:
print('Could not identify the song.')
Give me a Python program which gets the lyrics and saves them externally in LRC format.
Use Genius.
Idea: after recognizing the song, just pass the track name to get the lyrics.
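A hedged sketch with the lyricsgenius package (pip install lyricsgenius); the token is your own Genius API token, and since Genius returns untimed lyrics, the .lrc written here uses zero-timestamp placeholders:
import lyricsgenius

def save_lyrics_as_lrc(artist, title, out_path, token):
    genius = lyricsgenius.Genius(token)
    song = genius.search_song(title, artist)
    if song is None or not song.lyrics:
        return False
    with open(out_path, "w", encoding="utf-8") as f:
        # Untimed LRC: every line gets a placeholder timestamp.
        for line in song.lyrics.splitlines():
            f.write(f"[00:00.00]{line}\n")
    return True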
|
5b2893b5201ad4dbb5344a65ac4bb3c8
|
{
"intermediate": 0.4354809522628784,
"beginner": 0.38658344745635986,
"expert": 0.17793557047843933
}
|
40,351
|
How can I calculate the amount of time it takes a car to travel from point A to point B through some free online API?
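One free option is the public OSRM demo server (rate-limited, demo use only); the coordinates below are illustrative:
const from = [13.388860, 52.517037]; // lon, lat
const to = [13.428555, 52.523219];
const url = `https://router.project-osrm.org/route/v1/driving/${from.join(",")};${to.join(",")}?overview=false`;
const res = await fetch(url);
const json = await res.json();
console.log(`Driving time: ${json.routes[0].duration / 60} minutes`);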
|
58a4b712c25ebf83004206ea9e61bc9a
|
{
"intermediate": 0.711182713508606,
"beginner": 0.11668844521045685,
"expert": 0.1721287965774536
}
|
40,352
|
Write me a quicksort algorithm in C with a function that will generate an array of n random numbers.
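A compact sketch; the array length and value range are illustrative:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition: everything below the returned index is < pivot. */
static int partition(int arr[], int lo, int hi) {
    int pivot = arr[hi], i = lo - 1;
    for (int j = lo; j < hi; j++)
        if (arr[j] < pivot) swap(&arr[++i], &arr[j]);
    swap(&arr[i + 1], &arr[hi]);
    return i + 1;
}

static void quicksort(int arr[], int lo, int hi) {
    if (lo < hi) {
        int p = partition(arr, lo, hi);
        quicksort(arr, lo, p - 1);
        quicksort(arr, p + 1, hi);
    }
}

/* Allocates and fills an array with n random values in [0, max_value). */
static int *random_array(int n, int max_value) {
    int *arr = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) arr[i] = rand() % max_value;
    return arr;
}

int main(void) {
    srand((unsigned)time(NULL));
    int n = 10;
    int *arr = random_array(n, 100);
    quicksort(arr, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    printf("\n");
    free(arr);
    return 0;
}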
|
43d4855d9a006d527cf1688df18a7355
|
{
"intermediate": 0.16314710676670074,
"beginner": 0.07568106055259705,
"expert": 0.761171817779541
}
|
40,353
|
Modify the same script so that it posts to Twitter instead of Facebook, following the same working principle: import requests
from bs4 import BeautifulSoup
import facebook
import time
# URL of the vidiprinter page
VIDIPRINTER_URL = "https://www.footballwebpages.co.uk/vidiprinter"
# Facebook app credentials
FACEBOOK_PAGE_ID = '105781962458015' # put your Facebook page ID here
FACEBOOK_ACCESS_TOKEN = 'EAALZAdTh2WD4BOZCVCFiuHNwRP1z43DGyHj8Pv4ggCneKd7x5vr4nOt9ivAgVJd3acBMam1IqyE3ZARE983nSv2NaRGR7k8CLtbe2crg6lpGqnYBvfTwW9Rl0nSq1JZBwmaZCeIeYDsUgFqr6vMQzYAbFsgFEMuqs45Xkirzo9rxOrsUHQaphNp3OrIhcmcjBRzF9Nh2LPH2YJPlJu8Kzz30ZD' # put your page access token here
# Create the facebook-sdk GraphAPI object
graph = facebook.GraphAPI(access_token=FACEBOOK_ACCESS_TOKEN, version='3.1')
def get_first_news():
"""Fetch the first news item from the page."""
session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0'})
response = session.get(VIDIPRINTER_URL)
if response.ok:
soup = BeautifulSoup(response.text, 'html.parser')
first_news_item = soup.find('tbody').find('tr')
if first_news_item:
event_link = first_news_item.find('a', href=True)
event_text = event_link.get_text(strip=True) if event_link else 'event link unavailable'
link = f"https://www.footballwebpages.co.uk{event_link['href']}" if event_link else ""
return event_text, link
return None, None
def post_to_facebook(message, link):
"""Post the content to the Facebook page."""
try:
graph.put_object(parent_object=FACEBOOK_PAGE_ID, connection_name='feed',
message=message, )
print("Event posted to Facebook successfully.")
except facebook.GraphAPIError as e:
print("Error while posting the event to Facebook:", e)
def main_loop():
last_displayed_news = None
while True:
try:
news, link = get_first_news()
if news and news != last_displayed_news:
facebook_message = f"Latest news: {news}"
print(f"First event: {news}")
print('#sportlastnews')
post_to_facebook(facebook_message, link)
last_displayed_news = news
time.sleep(60)
except KeyboardInterrupt:
print("Script stopped by the user.")
break
except requests.exceptions.RequestException as e:
print("Request error:", e)
time.sleep(60)
except facebook.GraphAPIError as e:
print("Facebook API error:", e)
time.sleep(60)
if __name__ == "__main__":
main_loop()
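A hedged sketch of the Twitter side using tweepy's v2 Client (pip install tweepy); the four credentials are placeholders from the X developer portal, and the rest of the script would call post_to_twitter where it now calls post_to_facebook:
import tweepy

client = tweepy.Client(
    consumer_key="YOUR_API_KEY",
    consumer_secret="YOUR_API_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

def post_to_twitter(message):
    """Post the content to Twitter instead of the Facebook page."""
    try:
        client.create_tweet(text=message[:280])  # keep within the tweet limit
        print("Event posted to Twitter successfully.")
    except tweepy.TweepyException as e:
        print("Error while posting the event to Twitter:", e)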
|
76a9d5b3ebeb7e731f3f710a0bcb48c6
|
{
"intermediate": 0.2953854203224182,
"beginner": 0.5228865742683411,
"expert": 0.18172794580459595
}
|
40,354
|
Why isn't this waiting until position.coords is returned
let coords = await (navigator.geolocation.getCurrentPosition((position) => {
console.log(position);
return position.coords;
}));
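getCurrentPosition is callback-based and returns undefined, so there is nothing for await to wait on. A sketch wrapping it in a Promise:
function getCoords() {
  return new Promise((resolve, reject) => {
    navigator.geolocation.getCurrentPosition(
      (position) => resolve(position.coords),
      (err) => reject(err)
    );
  });
}
const coords = await getCoords();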
|
f4ff7daa2377c6b24e47d8372ca9000c
|
{
"intermediate": 0.3740217089653015,
"beginner": 0.43800753355026245,
"expert": 0.18797075748443604
}
|
40,355
|
Why isn't this waiting until position.coords is returned
getRunningBuses
|
54fd85008c998d16dbcd242bc63aca09
|
{
"intermediate": 0.39823031425476074,
"beginner": 0.3289079964160919,
"expert": 0.27286165952682495
}
|
40,356
|
Hi, I want to update the DDoS protection for my website. I use the Cloudflare platform, the WAF (Custom Rules) section. Here is an example of one of my rules for repelling DDoS attacks: (http.user_agent contains "OPR" and not any(http.request.headers["sec-ch-ua"][*] contains "Opera")) or (any(http.request.headers["sec-ch-ua"][*] contains "Google Chrome") and http.user_agent contains "Edg") or (any(http.request.headers["sec-ch-ua"][*] contains "Google Chrome") and http.user_agent contains "OPR") or (any(http.request.headers["sec-ch-ua"][*] contains "Google Chrome") and http.user_agent contains "Firefox") or (any(http.request.headers["sec-ch-ua"][*] contains "Google Chrome") and http.user_agent contains "YaBrowser") or (any(http.request.headers["sec-ch-ua"][*] contains "Microsoft Edge") and not http.user_agent contains "Edg") or (any(http.request.headers["sec-ch-ua"][*] contains "Microsoft Edge") and http.user_agent contains "Firefox") or (any(http.request.headers["sec-ch-ua"][*] contains "DuckDuckGo") and not http.user_agent contains "Edg") or (any(http.request.headers["sec-ch-ua"][*] contains "YaBrowser") and not http.user_agent contains "YaBrowser") or (any(http.request.headers["sec-ch-ua"][*] contains "Yowser") and not http.user_agent contains "YaBrowser") or (any(http.request.headers["x-requested-with"][*] contains "XMLHttpRequest") and not http.request.full_uri contains "api" and not ip.geoip.country in {"AM" "AZ" "BY" "CA" "KZ" "KG" "MD" "PL" "RU" "UA" "UZ" "RO"}) or (http.user_agent contains "Firefox") and any(http.request.headers.names[*] == "sec-ch-ua") or (http.user_agent contains "Firefox") and any(http.request.headers.names[*] == "content-dpr") or (http.user_agent contains "Firefox") and any(http.request.headers.names[*] == "dpr") or (http.user_agent contains "Firefox") and any(http.request.headers.names[*] == "device-memory") or (http.user_agent contains "Firefox") and any(http.request.headers.names[*] == "sec-ch-ua-arch") or (http.user_agent contains "Firefox") and any(http.request.headers.names[*] == "sec-ch-ua-model") or (http.user_agent contains "Firefox") and any(http.request.headers.names[*] == "sec-ch-ua-platform") or (http.user_agent contains "Firefox") and any(http.request.headers.names[*] == "width") or (http.user_agent contains "Firefox") and any(http.request.headers.names[*] == "sec-ch-ua-bitness") or (http.request.headers.truncated) or (cf.tls_client_auth.cert_revoked) or (ip.geoip.asnum eq 8075) or (http.request.uri contains "#") or (http.request.uri contains "$") or (http.request.uri contains "%") or (http.request.uri contains "@") or (http.request.uri eq "~") or (cf.threat_score ge 5) or (http.user_agent eq "Mozilla/5.0 (Linux; U; Android 10; pl-pl; Mi MIX 2S Build/QKQ1.190828.002) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/89.0.4389.116 Mobile Safari/537.36 XiaoMi/MiuiBrowser/13.10.0-gn") or (http.request.uri.query eq "d=1") or (http.request.uri.query eq "–+") or (http.user_agent contains "Chromium/4") or (http.user_agent contains "Chromium/2") or (http.user_agent contains "Chromium/3") or (http.user_agent contains "Chromium/5") or (http.user_agent contains "Chromium/6") or (http.user_agent contains "; 1.0") or (ip.geoip.asnum eq 213230) or ((any(http.request.headers["sec-ch-ua"][*] contains "(Not(A:Brand")) and (not any(http.request.headers["sec-ch-ua-platform"][*] contains "Linux"))) or (((lower(http.user_agent) contains "chrome/") and (any(http.request.headers.names[*] == "sec-ch-ua-mobile"))) and not any(http.request.headers.names[*] == "sec-gpc")) and (not
any(http.request.headers["sec-ch-ua"][*] contains "Chromium") and not lower(http.user_agent) contains "samsungbrowser" and not lower(http.user_agent) contains "mobile safari" and not lower(http.user_agent) contains "smart tv") or (http.user_agent eq "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36") or (http.user_agent eq "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36")
Using all available sources, I want you to make me one more such rule for repelling DDoS attacks with the Cloudflare WAF.
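A hedged example of one additional rule in the same expression language; every string and threshold below is illustrative, so test it in Log mode against your own traffic before switching the action to Block:
(http.user_agent eq "") or (lower(http.user_agent) contains "python-requests") or (lower(http.user_agent) contains "curl/") or (lower(http.user_agent) contains "go-http-client") or (http.request.version eq "HTTP/1.0" and cf.threat_score ge 2)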
|
350abe0fde2e9111028ce1b6c376a091
|
{
"intermediate": 0.34169530868530273,
"beginner": 0.49330341815948486,
"expert": 0.1650012731552124
}
|