To avoid overly long lines, you can do the following. Instead of:

```
with open(filename, 'wb') as file, request.urlopen(image.url) as response:
    pass
```

I do, for example:

```
f = lambda: open(filename, 'wb')
r = lambda: request.urlopen(image.url)
with r() as response, f() as file:
    pass
```
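For what it's worth, here is a self-contained sketch of the same deferral idea, using two local temp files instead of a URL (the file names are made up for the demo):

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "src.bin")
dst_path = os.path.join(tmp, "dst.bin")

# Prepare a source file to stand in for the URL response.
with open(src_path, "wb") as f:
    f.write(b"payload")

# Defer each open() behind a short lambda so the `with` line stays short.
open_response = lambda: open(src_path, "rb")
open_file = lambda: open(dst_path, "wb")

with open_response() as response, open_file() as file:
    file.write(response.read())

with open(dst_path, "rb") as f:
    print(f.read())  # b'payload'
```

Both files are still closed by the `with` statement exactly as in the one-liner version; only the construction of the context managers is moved out.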
Why is it that when I inject `ILogger<MyClassName>` and call `Logger.LogInformation` I don't see that in Application Insights, but when I call `Logger.LogError` I do?

Here is my code:

```c#
public async ValueTask DeleteMessageAsync(string chatId, int messageId)
{
    Logger.LogInformation("Deleting message: {chatId} {messageId}", chatId, messageId);
    try
    {
        await TelegramBotClient.DeleteMessageAsync(chatId, messageId);
    }
    catch (ApiRequestException ex) when (ex.ErrorCode == (int)HttpStatusCode.BadRequest)
    {
        Logger.LogError(ex, "Could not Delete message");
    }
}
```

Here is the logging section of my `host.json` file:

```json
"logging": {
    "applicationInsights": {
        "samplingSettings": {
            "isEnabled": true,
            "excludedTypes": "Request"
        },
        "enableLiveMetricsFilters": true
    },
    "DurableTask.AzureStorage": "Information",
    "DurableTask.Core": "Information"
}
```

Here is the host builder for my Azure Functions app:

```c#
IHost host = new HostBuilder()
    .AddServiceDefaults(useOtlpExporter: false, isDevelopment: true)
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        services.AddApplicationInsightsTelemetryWorkerService();
        services.ConfigureFunctionsApplicationInsights();
        services.AddSingleton<ITelegramBroadcastService, TelegramBroadcastService>();
        services.AddSingleton<IAuthorizationService, AuthorizationService>();
        services.AddDurableTaskClient(x => { });
        services.AddAzureClients(clientBuilder =>
        {
            clientBuilder.AddBlobServiceClient(Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
        });
    })
    .ConfigureAppConfiguration((context, config) =>
    {
        if (context.HostingEnvironment.IsDevelopment())
            config.AddUserSecrets<Program>();
    })
    .Build();

host.Run();
```

The `AddServiceDefaults` extension is the one added by default in a new Aspire app.
```c#
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.HealthChecks;
using Microsoft.Extensions.Logging;
using OpenTelemetry.Logs;
using OpenTelemetry.Metrics;
using OpenTelemetry.Trace;

namespace Microsoft.Extensions.Hosting;

public static class Extensions
{
    public static T AddServiceDefaults<T>(this T builder, bool useOtlpExporter, bool isDevelopment) where T : IHostBuilder
    {
        builder.ConfigureOpenTelemetry(useOtlpExporter: useOtlpExporter, isDevelopment: isDevelopment);
        builder.AddDefaultHealthChecks();
        builder.AddServiceDiscovery();
        return builder;
    }

    public static T AddServiceDiscovery<T>(this T builder) where T : IHostBuilder
    {
        builder.ConfigureServices(services =>
        {
            services.AddServiceDiscovery();
        });
        return builder;
    }

    public static T ConfigureHttpClientDefaults<T>(this T builder) where T : IHostBuilder
    {
        builder.ConfigureServices(services =>
        {
            services.ConfigureHttpClientDefaults(http =>
            {
                // Turn on resilience by default
                http.AddStandardResilienceHandler();

                // Turn on service discovery by default
                http.UseServiceDiscovery();
            });
        });
        return builder;
    }

    public static T ConfigureOpenTelemetry<T>(this T builder, bool useOtlpExporter, bool isDevelopment) where T : IHostBuilder
    {
        builder.ConfigureLogging(logging =>
        {
            logging.AddOpenTelemetry(telemetry =>
            {
                telemetry.IncludeFormattedMessage = true;
                telemetry.IncludeScopes = true;
            });
        });

        builder.ConfigureServices(services =>
        {
            services.AddOpenTelemetry()
                .WithMetrics(metrics =>
                {
                    metrics.AddRuntimeInstrumentation()
                        .AddBuiltInMeters();
                })
                .WithTracing(tracing =>
                {
                    if (isDevelopment)
                    {
                        // We want to view all traces in development
                        tracing.SetSampler(new AlwaysOnSampler());
                    }

                    tracing.AddAspNetCoreInstrumentation()
                        .AddGrpcClientInstrumentation()
                        .AddHttpClientInstrumentation();
                });
        });

        builder.AddOpenTelemetryExporters(useOtlpExporter);
        return builder;
    }

    private static T AddOpenTelemetryExporters<T>(this T builder, bool useOtlpExporter) where T : IHostBuilder
    {
        builder.ConfigureServices(services =>
        {
            if (useOtlpExporter)
            {
                services.Configure<OpenTelemetryLoggerOptions>(logging => logging.AddOtlpExporter());
                services.ConfigureOpenTelemetryMeterProvider(metrics => metrics.AddOtlpExporter());
                services.ConfigureOpenTelemetryTracerProvider(tracing => tracing.AddOtlpExporter());
            }

            // Uncomment the following lines to enable the Prometheus exporter (requires the OpenTelemetry.Exporter.Prometheus.AspNetCore package)
            //services
            //    .AddOpenTelemetry()
            //    .WithMetrics(metrics => metrics.AddPrometheusExporter());

            // Uncomment the following lines to enable the Azure Monitor exporter (requires the Azure.Monitor.OpenTelemetry.Exporter package)
            //services
            //    .AddOpenTelemetry()
            //    .UseAzureMonitor();
        });
        return builder;
    }

    public static T AddDefaultHealthChecks<T>(this T builder) where T : IHostBuilder
    {
        builder.ConfigureServices(services =>
        {
            services.AddHealthChecks()
                // Add a default liveness check to ensure app is responsive
                .AddCheck("self", () => HealthCheckResult.Healthy(), ["live"]);
        });
        return builder;
    }

    public static WebApplication MapDefaultHealthCheckEndpoints(this WebApplication app)
    {
        // Uncomment the following line to enable the Prometheus endpoint (requires the OpenTelemetry.Exporter.Prometheus.AspNetCore package)
        // app.MapPrometheusScrapingEndpoint();

        // All health checks must pass for app to be considered ready to accept traffic after starting
        app.MapHealthChecks("/health");

        // Only health checks tagged with the "live" tag must pass for app to be considered alive
        app.MapHealthChecks("/alive", new HealthCheckOptions
        {
            Predicate = r => r.Tags.Contains("live")
        });

        return app;
    }

    private static MeterProviderBuilder AddBuiltInMeters(this MeterProviderBuilder meterProviderBuilder) =>
        meterProviderBuilder.AddMeter(
            "Microsoft.AspNetCore.Hosting",
            "Microsoft.AspNetCore.Server.Kestrel",
            "System.Net.Http");
}
```

Here are the app settings
that are applied according to `portal.azure.com`:

```json
{
    "deployment_branch": "master",
    "SCM_TRACE_LEVEL": "Verbose",
    "SCM_COMMAND_IDLE_TIMEOUT": "60",
    "SCM_LOGSTREAM_TIMEOUT": "7200",
    "SCM_BUILD_ARGS": "",
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=...AccountKey=...=;EndpointSuffix=core.windows.net",
    "AzureWebJobsSecretStorageType": "Blob",
    "WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED": "1",
    "APPLICATIONINSIGHTS_CONNECTION_STRING": "InstrumentationKey=...IngestionEndpoint=https://eastus-8.in.applicationinsights.azure.com/;LiveEndpoint=https://eastus.livediagnostics.monitor.azure.com/",
    "WEBSITE_SLOT_NAME": "Production",
    "SCM_USE_LIBGIT2SHARP_REPOSITORY": "0",
    "WEBSITE_SITE_NAME": "...",
    "FUNCTIONS_EXTENSION_VERSION": "~4",
    "WEBSITE_AUTH_ENABLED": "False",
    "ScmType": "None",
    "WEBSITE_RUN_FROM_PACKAGE": "1",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated"
}
```
Why is ILogger.LogError working but not ILogger.LogInformation?
|c#|azure|azure-functions|azure-application-insights|
A repository can have any number of image tags. There is no stated limit on size. In practice, if an image layer gets much larger than 10 or 20 gigabytes, you may experience problems with push and pull. Your mileage may vary.

There is no limit on builds or tags for public images. (This can change over time.)

As long as you just want to play around, the Docker public repository will do. But if you are looking for a private repository, I would recommend hosting your own binary repository manager such as JFrog or Nexus to maintain all kinds of artifacts, including Docker images.

Source: https://forums.docker.com/t/does-docker-hub-have-a-size-limitation-on-repos-or-images/10154

**UPDATE 2**

Inactive images are defined as images that have not been pulled or pushed in 6 months.

Starting November 1, 2020:

1. Free accounts may retain inactive images for up to 6 months
2. Anonymous users will have an upper limit of 100 image pulls in a six hour period
3. Authenticated users will have an upper limit of 200 image pulls in a six hour period

A pull is defined as up to two GET requests to the registry URL path `/v2/*/manifests/*`.

**PS:** These are all policies set by Docker, Inc. They can change over time.
I wrote a small `struct` used to sort dependencies between projects (it could be used to sort anything really; the interface used is just a `std::string`) in a container/solution. The dependencies are defined and fed from a JSON object (parsing is done using `boost::json`).

Example dependencies:

```json
"container_dependency_tree": {
    "abc": ["def", "hello", "world"],
    "def": ["xyz", "x", "y"],
    "xyz": []
},
```

Header:

```c++
#pragma once

#include <boost/json.hpp>
#include <map>
#include <string>
#include <vector>
#include <set>

namespace tmake {
    struct container_dependency_tree_t {
    private:
        std::map<std::string, std::vector<std::string>> m_flat;
        bool m_compare(const std::string& lhs, const std::string& rhs) const; // returns true if lhs depends on rhs
        std::set<std::string, decltype(&m_compare)> m_sorted = decltype(m_sorted)(&container_dependency_tree_t::m_compare);
    public:
        container_dependency_tree_t() {}
        container_dependency_tree_t(const boost::json::object& container_dependency_tree);
    public:
        const decltype(m_flat)& flat() const;
        const decltype(m_sorted)& sorted() const;
    };
}
```

Implementation:

```c++
#include <tmake/container_dependency_tree_t.h>

namespace tmake {
    container_dependency_tree_t::container_dependency_tree_t(const boost::json::object& container_dependency_tree) {
        for (const auto& kv : container_dependency_tree) {
            std::vector<std::string> dependencies;
            const boost::json::array& dd = kv.value().as_array();
            for (const auto& d : dd) {
                dependencies.push_back(d.as_string().c_str());
            }
            m_flat.emplace(kv.key(), dependencies);
        }
        for (const auto& f : m_flat) {
            m_sorted.insert(f.first); // ***ISSUE HERE***
        }
    }

    bool container_dependency_tree_t::m_compare(const std::string& lhs, const std::string& rhs) const {
        auto find = m_flat.find(lhs);
        if (find == m_flat.end()) return false;
        for (const auto& dependency : find->second) {
            if (rhs == dependency || m_compare(dependency, rhs)) {
                return true;
            }
        }
        return false;
    }

    const decltype(container_dependency_tree_t::m_flat)& container_dependency_tree_t::flat() const {
        return m_flat;
    }

    const decltype(container_dependency_tree_t::m_sorted)& container_dependency_tree_t::sorted() const {
        return m_sorted;
    }
}
```

The issue is with the instruction `m_sorted.insert(f.first);`. I get thrown some compiler mumbo-jumbo that I don't understand; the error is somewhere within the STL implementation files (MSVC xutility(1372,19): error C2064: "le terme ne correspond pas à une fonction qui prend 2 arguments", i.e. the term does not evaluate to a function taking 2 arguments). What am I doing wrong?
Setting up Oracle Enterprise Edition on a k8s cluster using Helm charts
It is recommended to rewrite the DTO with a constructor taking the required parameters. If rewriting is not possible, implement a custom denormalizer. The following approach ensures strict validation during deserialization.

```php
final class SimpleDtoDenormalizer implements DenormalizerInterface
{
    public function denormalize(mixed $data, string $type, ?string $format = null, array $context = []): mixed
    {
        if (!is_array($data)) {
            throw new \InvalidArgumentException('Expected an array, got ' . get_debug_type($data));
        }

        $dto = new SimpleDto();
        $dto->name = $data['name'] ?? throw new \InvalidArgumentException('Missing "name"');
        $dto->value = $data['value'] ?? throw new \InvalidArgumentException('Missing "value"');

        return $dto;
    }

    // Other methods as per Symfony documentation
}
```
I'm trying to publish to the local repository but to set the artifact name explicitly. For example, say the org is "quick.fox" and the module is "core" with version 1.1.

What I get is:

```
<repo>/quick.fox/core/1.1/core-1.1.jar
```

What I'd like it to be:

```
<repo>/quick.fox/core/1.1/prefix-core.jar
```

A basic code sample would be:

```
apply plugin: 'maven-publish'

publishing {
    publications {
        maven(MavenPublication) {
            group = "quick.fox"
            artifactId = "core"
            version = "1.1"
        }
    }
}
```

How would I do that? I'm using Gradle 6.5.1, and I am open to using either the maven or ivy publish plugins.
Defining an artifact name explicitly when publishing with Gradle
|maven|gradle|ivy|
I created a standalone SOAP web server in Delphi. I use JavaScript on the front end to connect to the web server. I know how to send a username and password in the header of each request in JavaScript, but I don't know how to read them on the server side in Delphi. Can anyone help me read the headers of the requests?
How can I read the headers of requests to the web server
|delphi|soap|header|webserver|
I have a simple console application that references a small 3rd party COM interop .NET assembly. I want to trim unused code when publishing.

[![enter image description here][1]][1]

However, when I run the trimmed published application I get this error:

> Unhandled exception. System.IO.FileNotFoundException: File name:
> '3rdPartyInteropLibrary, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null'
> at ClassLibrary1.Class1.Print() at Program.Main()

Even when I manually copy `3rdPartyInteropLibrary.dll` to the output folder I get the error. How do I configure trimming so that it only trims my code and leaves the external library untouched?

[1]: https://i.stack.imgur.com/VhPr0.png
It appears that it's important to express that you want the 7 specific items extracted by asking for `tokens=1-7`. Token 1 will be `%%G`, with each successive, stipulated token using `%%H`, `%%I`, etc.

To get one line per token, echo them one `echo` command at a time for each token's variable, either one per line stacked, or grouped parenthetically and chained with `&`s:

```
do (echo %%G) & (echo %%H) & (echo %%I) ... etc
```

In total, the following worked for me (run directly on the command line, hence the single `%`):

```
for /f "tokens=1-7" %G in ("1 2 7 16 21 26 688") do (
    echo %G
    echo %H
    echo %I
    echo %J
    echo %K
    echo %L
    echo %M
)
```
Is there a way to uniquely identify the current Windows Terminal viewport within a WSL2 Ubuntu bash environment?
|gnu-screen|windows-terminal|
The program is placing the files at the path `Users/username/` instead of in the `Oil and Cells` folder. Can I fix this issue with the compile command? I used `gfortran -o Planar_Surfactant Planar_Surfactant.f` while inside the `Oil and Cells` folder to compile the executable.

`~ % /Users/username/Documents/BMEN\ Research/Oil\ And\ Cells/Planar_Surfactant ; exit;`

This is what shows up when I run the program, along with some information about the base values. I'm pretty new to Fortran, so I'm not quite sure what to try or how to resolve this issue.
My journey through creating a PDF form (upgrading the old Zend PDF library to be able to do it) continues. I am struggling with checkboxes and radio buttons, trying to get them displayed consistently. I followed several guides and tips here, and still there are differences in how the button is displayed in Chrome, Mozilla and Acrobat Reader.

It seems no matter what I do, Acrobat Reader ignores the defined content stream and replaces it with its own for the checked and unchecked states of the button. Strangely enough, it displays the defined stream when the checked button has focus. So when I set up the stream so the checkbox displays, say, a cross (either through a ZaDb character or just simply two crossed lines), Acrobat Reader only displays the cross when I click and check the button. As soon as I click elsewhere, the cross is replaced with the default check mark. (Mozilla completely ignores any settings and just always displays its own icons, so at least the user doesn't see two sets of icons.)

So is there any way I can force Acrobat Reader to show the set stream content? Or is there a way to set the stream content exactly the same as Acrobat Reader's default? I have no problem using the default check mark, but I can't get the position right with a possibly dynamic size of the checkbox.
Here is my source code with checkbox:

```
%PDF-1.5
%����
1 0 obj
<</Type /Catalog /Version /1.5 /Pages 2 0 R /Acroform <</NeedAppearances false /Fields [7 0 R ] >> /Names 10 0 R >>
endobj
2 0 obj
<</Type /Pages /Kids [13 0 R ] /Count 1 >>
endobj
3 0 obj
<</Type /Font /Subtype /Type1 /Encoding /WinAnsiEncoding /BaseFont /Helvetica >>
endobj
4 0 obj
<</Type /Font /Subtype /Type1 /Encoding /WinAnsiEncoding /BaseFont /ZapfDingbats >>
endobj
5 0 obj
<</Length 58 /Type /XObject /Subtype /Form /BBox [0 0 15 15 ] /Resources <</ProcSet [/PDF /Text /ImageC /ImageB /ImageI ] /Font <</ZaDb 4 0 R >> >> /Matrix [1 0 0 1 0 0 ] >>
stream
/Tx BMC
q
BT
0 g
/ZaDb 15 Tf
1 1 Td
(5) Tj
ET
Q
EMC
endstream
endobj
6 0 obj
<</Length 17 /Type /XObject /Subtype /Form /BBox [0 0 15 15 ] /Resources <</ProcSet [/PDF /Text /ImageC /ImageB /ImageI ] /Font <</ZaDb 4 0 R >> >> /Matrix [1 0 0 1 0 0 ] >>
stream
Tx BMC
q
Q
EMC
endstream
endobj
7 0 obj
<</Type /Annot /Subtype /Widget /FT /Btn /Rect [200 350 215 365 ] /T (awesome) /DA (0 g /ZaDb 15 Tf) /F 4 /Ff 0 /V /Yes /AS /Yes /DR <</Font <</ZaDb 4 0 R >> >> /AP <</N <</Yes 5 0 R /Off 6 0 R >> >> /P 13 0 R >>
endobj
8 0 obj
[]
endobj
9 0 obj
<</Names 8 0 R >>
endobj
10 0 obj
<</Dests 12 0 R >>
endobj
11 0 obj
[]
endobj
12 0 obj
<</Names 11 0 R >>
endobj
13 0 obj
<</Type /Page /LastModified (D:20240331174748+02'00') /Resources <</ProcSet [/Text /PDF /ImageC /ImageB /ImageI ] /Font <</F1 15 0 R /ZaDb 4 0 R >> >> /MediaBox [0 0 595 842 ] /Contents [14 0 R ] /Annots [7 0 R ] /Parent 2 0 R >>
endobj
14 0 obj
<</Length 81 >>
stream
/F1 14 Tf
0 g
0 g
BT
40 350 Td
(Everything is awesome) Tj
ET
200 350 15 15 re S
endstream
endobj
15 0 obj
<</Type /Font /Encoding /WinAnsiEncoding /Subtype /Type1 /BaseFont /Helvetica >>
endobj
xref
0 16
0000000000 65535 f
0000000015 00000 n
0000000147 00000 n
0000000206 00000 n
0000000303 00000 n
0000000403 00000 n
0000000670 00000 n
0000000896 00000 n
0000001126 00000 n
0000001145 00000 n
0000001179 00000 n
0000001215 00000 n
0000001235 00000 n
0000001271 00000 n
0000001519 00000 n
0000001651 00000 n
trailer
<</ID [<61666330663231353135323634656134> <37623763336532323165643662643432> ] /Size 16 /Root 1 0 R >>
startxref
1749
%%EOF
```

And here is the example PDF: https://filetransfer.io/data-package/eMwB3qGa#link

Of course I have tried toggling `NeedAppearances` on and off and followed several other guidelines about similar topics here.
Updated max input vars but table still shows error
|php|sql|database|phpmyadmin|
I'm currently outlining a dialogue system in Unreal Blueprint. I'm collapsing graph nodes per dialogue chunk to clean up the overview. I found that you can name Input and Output pins for those collapsed nodes, and I would like to use those to define dialogue lines and responses for convenience. To achieve this I have to somehow access the Output Pin names from a C++ function and then integrate it as a node to feed into my Blueprint logic. How do I best do this?

[Overview](https://i.stack.imgur.com/BGDcs.png)
[Inside the Collapsed Graph](https://i.stack.imgur.com/pfi4W.png)

If anyone can help me out, it would be much appreciated! I have tried to write a C++ plugin but don't know the right libraries to grab pin names in such a specific context. I have tried looking it up but found nothing.
Found the answer in a related post. Took me hours of searching to find it...

https://stackoverflow.com/questions/67423907/possible-to-use-lay-out-a-compose-view-in-an-activity-written-in-java/76612457#76612457?newreg=7e15da165ac149f2a61ff9685af9be59

> You don't necessarily need the AbstractComposeView. I was able to do this just with the following:
>
> Add ComposeView to your layout.xml just as you would any other View:
>
>     <androidx.compose.ui.platform.ComposeView
>         android:id="@+id/compose_view"
>         android:layout_width="match_parent"
>         android:layout_height="match_parent"/>
>
> Create a new kt file, for example ComposeFunctions.kt, that has a function to set the content to the ComposeView:
>
>     @file:JvmName("ComposeFunctions")
>     package (your package goes here)
>
>     fun setContent(composeView: ComposeView) {
>         composeView.setContent { composable kt function goes here }
>     }
>
> Now from your java Activity/Fragment:
>
>     ComposeView composeView = view.findViewById(R.id.compose_view);
>     ComposeFunctions.setContent(composeView);
>
> I have used this successfully on WearOS for androidx.wear.compose.material.TimeText:
>
>     composeView.setContent { TimeText() }
I am trying to speed up my implementation of a rolling z-score.

What I'm doing: I have a matrix `features` (n×m), and to each column of `features` I want to apply a rolling window that differs from one column to the other. So right now, I'm doing:

- for each row >= maximum rolling window
  - for each column of features
    - get the vector from i - rolling window to i
    - compute mean, std, z-score

What I've done:

```c++
#include <Eigen/Dense>

using namespace std;
using namespace Eigen;

double computeMean(const VectorXf &v) {
    double sum = 0;
    for (auto e : v) {
        sum += e;
    }
    return sum / v.rows();
}

double computeStdDeviation(const VectorXf &v, double mean) {
    double varianceSum = 0;
    for (auto x : v) {
        varianceSum += (x - mean) * (x - mean);
    }
    double variance = varianceSum / v.rows();
    return sqrt(variance);
}

double zScore(double x, double stdDev, double mean) {
    if (stdDev > 0)
        return (x - mean) / stdDev;
    return -5;
}

MatrixXf NormalizeData(const MatrixXf& features, const vector<int> shifts) {
    int max = *max_element(shifts.begin(), shifts.end());
    MatrixXf df(features.rows() - max, features.cols());
    for (size_t i = max; i < features.rows(); ++i) {
        for (size_t j = 0; j < features.cols(); ++j) {
            VectorXf vec = features(seq(i - shifts[j] + 1, i), j); // get a vector of size shifts[j]
            double elem = vec(last);
            double mean = computeMean(vec);
            double std = computeStdDeviation(vec, mean);
            df(i - max, j) = zScore(elem, std, mean);
        }
    }
    return df;
}
```

`shifts` being a vector like [10, 20, 30, ...].

But even with as low as 500k rows, this takes a very long time. Does anyone have an idea on how to speed this up, please?
Update: I tried

```c++
std::for_each(std::execution::par, features.rowwise().begin() + max - 1, features.rowwise().end(),
    [](auto&& row) {
        int index = row.index - max + 1;
        if (index % 100000 == 0)
            std::cout << "NormalizeData Filename: " << filename << " - " << index << " / " << df.rows() << endl;
        df(index, 0) = row(0);
        for (size_t j = 0; j < shifts.size(); ++j) { ... }
    });
```

But it says "max" is not captured, "df" is not captured, "shifts" is not captured...
I had the same issue in one of my projects. I discovered that in the build process, cx_Freeze was taking all the packages installed on the computer, so I set up a virtual environment in the root directory of the project to install only the packages needed for my project. Once this was done, cx_Freeze chose only the packages in the venv, and the error disappeared.

I found out about this in the common errors section of the cx_Freeze documentation: https://cx-freeze.readthedocs.io/en/latest/faq.html

Have a nice day.
Why don't you create your JSON manually instead of using a serialization library?

```java
public String singleKeyValueToJson(String key, String value) {
    JSONObject json = new JSONObject();
    json.put(key, value);
    return json.toString();
}
```
I have two .3gp (or .wav) audio files that I have saved from the user's microphone. How can I concatenate these two audio files into a single file in code?
In Hoppscotch v2024.3.0 how can I get or set the newly introduced request variables from a pre-request script? I tried using `pw.env.get("my_reqest_variable")` but it returns `undefined`.
get or set request variable in pre-request script in Hoppscotch
|hoppscotch|
Add this at the top of your controller:

```php
/**
 *
 * @OA\Info(
 *     version="1.0.0",
 *     title="Organization",
 *     description="test description",
 *     @OA\Contact(
 *         name="test",
 *         email="test@test"
 *     ),
 * ),
 * @OA\Server(
 *     url="/api/v1",
 * ),
 */
class YourController extends Controller
```

Then you have to write documentation like this for each of your controller methods.

Suggestion: if you are using PHP 8.1 or higher and Laravel 8.x or higher, instead of using `L5-Swagger`, use `dedoc/scramble`. It will generate the documentation for you automatically, and it's built on top of Swagger. [More info][1]

[1]: https://scramble.dedoc.co/installation
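For instance, a per-method annotation could look like the following (the route path, summary, and response are hypothetical — adjust them to your API):

```php
/**
 * @OA\Get(
 *     path="/organizations",
 *     summary="List organizations",
 *     @OA\Response(
 *         response=200,
 *         description="Successful response"
 *     )
 * )
 */
public function index()
{
    // ...
}
```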
Interesting problem. This would pull out the duration between two rows in your format:

[![enter image description here][1]][1]

```
let
    Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
    #"Convert" = Table.TransformColumns(Source, {{"Column1", each
        let
            a = _,
            b = Text.Replace(a, "Days", "*24*60*60+"),
            c = Text.Replace(b, "Day", "*24*60*60+"),
            d = Text.Replace(c, "Hours", "*60*60+"),
            e = Text.Replace(d, "Hour", "*60*60+"),
            f = Text.Replace(e, "Minutes", "*60+"),
            g = Text.Replace(f, "Minute", "*60+"),
            h = Text.Replace(g, "Seconds", "*1+"),
            i = Text.Replace(h, "Second", "*1+"),
            j = i & "0"
        in
            Expression.Evaluate(j), type number}}),
    #"Added Custom" = Table.AddColumn(#"Convert", "Duration", each #duration(0, 0, 0, List.Max(#"Convert"[Column1]) - List.Min(#"Convert"[Column1]))),
    #"CleanUp" = Table.FirstN(#"Added Custom", 1),
    #"Removed Columns" = Table.RemoveColumns(CleanUp, {"Column1"})
in
    #"Removed Columns"
```

[1]: https://i.stack.imgur.com/T5RfY.png
When a user right-clicks and drags a file into a different directory, they are presented with a small context menu that includes entries such as **Copy here** and **Move here**:

![popup](https://i.stack.imgur.com/r7Z0N.png)

Is there a way to query these menu items yourself?

I was able to trigger the default implementation for this right drag-and-drop popup by following these steps:

- Register your app with `RegisterDragDrop`.
- In the `IDropTarget::Drop` method:
  - Obtain the `IShellFolder` interface for the drop path.
  - Get the `IDropTarget` interface from this folder.
  - Forward the `IDataObject` to the `DragEnter` and `Drop` methods.
  - The `Drop` method will display the correct context menu.

Here's the simplified version of the code:

```
HRESULT Win_DragDropTargetDrop(IDropTarget *iTarget, IDataObject *iData, DWORD keys, POINTL cursor, DWORD *effect)
{
    HRESULT hr = 0;
    IShellFolder *folder = null;
    IDropTarget *folderDropTarget = null;
    POINTL pt = {0};

    hr = Win_GetIShellFolder(hwnd, dragDropPath, &folder);
    hr = folder->lpVtbl->CreateViewObject(folder, hwnd, &IID_IDropTarget, &folderDropTarget);
    hr = folderDropTarget->lpVtbl->DragEnter(folderDropTarget, iData, MK_RBUTTON, pt, effect);
    hr = folderDropTarget->lpVtbl->Drop(folderDropTarget, iData, MK_RBUTTON, pt, effect);
}
```

However, what I want is to enumerate these items within `IContextMenu` myself, using functions such as `CreatePopupMenu`, `QueryContextMenu`, and then iterating through that menu via `GetMenuItemCount` and `GetMenuItemInfo`. The reason is that I'm rendering the context menu myself (instead of using the Windows built-in UI framework). I'm already doing this for the regular right-click context menu on a directory background.

![context menu](https://i.stack.imgur.com/np8dt.png)
In the application I made using the library [react-native-tcp-socket](https://github.com/Rapsssito/react-native-tcp-socket), the server is created and the functions in the listen event work, but `server.on('connection', () => {})` does not fire.

**Server.js**

```
import TcpSocket from 'react-native-tcp-socket';
import NetInfo from '@react-native-community/netinfo';

export const server = new TcpSocket.Server();
export let address = null;

const getWifi = async () => {
  try {
    const state = await NetInfo.fetch();
    if (state.isConnected) {
      const netInfo = state;
      const newIpAddress = netInfo.details && netInfo.details.ipAddress;
      return newIpAddress;
    }
  } catch (error) {
    console.log(error);
  }
};

export const closeServer = () => {
  if (server) {
    console.log(`server closed`);
    server.close();
  } else {
    console.log('server not found');
  }
};

export const init = async () => {
  const ip = await getWifi();
  server.on('connection', socket => {
    socket.on('data', () => {
      socket.write('Echo server\r\n');
    });
  });
  server.listen({port: 0, host: `127.0.0.1`, reuseAddress: true}, () => {
    const port = server.address()?.port;
    if (!port) throw new Error('Server port not found');
    address = server.address();
    console.log(address);
  });
};
```

**Client.js**

```
import TcpSocket from 'react-native-tcp-socket';
import NetInfo from '@react-native-community/netinfo';

export const client = new TcpSocket.Socket();

const getWifi = async () => {
  try {
    const state = await NetInfo.fetch();
    if (state.isConnected) {
      const netInfo = state;
      const newIpAddress = netInfo.details && netInfo.details.ipAddress;
      return newIpAddress;
    }
  } catch (error) {
    console.log(error);
  }
};

export const init = async port => {
  const ip = await getWifi();
  const options = {
    port: port,
    host: `127.0.0.1`,
    localAddress: `127.0.0.1`,
    reuseAddress: true,
    localPort: port,
    interface: "wifi",
  };
  client.connect(options, () => {
    client.write(`connected to server`);
  });
  client.on('data', data => {
    console.log(`new data`, data.toString());
  });
};
```
**SocketProvider.jsx** to start the server:

```
import React, {useEffect, useState} from 'react';
import {SocketContext} from './SocketContext';
import {closeServer, server, init} from '../server/Server';

export default function SocketProvider({children}) {
  const [isInitServer, setIsInitServer] = useState(false);

  useEffect(() => {
    if (isInitServer) {
      server.on('connection', socket => {
        console.log(socket.address());
        socket.on('data', data => {
          console.log(`in server `, data.toString());
        });
      });
      console.log(server.eventNames());
      init();
    }
    return () => {
      if (isInitServer) {
        closeServer();
        setIsInitServer(false);
      }
    };
  }, [isInitServer]);

  return (
    <SocketContext.Provider
      value={{
        isInitServer,
        setIsInitServer,
      }}>
      {children}
    </SocketContext.Provider>
  );
}
```

**ClientProvider.js**

```
import React, {useEffect, useState} from 'react';
import {ClientContext} from './ClientContext';
import {init, client} from '../server/Client';

export default function ClientProvider({children}) {
  const [isJoinedClient, setIsJoinedClient] = useState({
    isConnectected: false,
    port: null,
  });

  useEffect(() => {
    if (isJoinedClient.isConnectected && isJoinedClient.port !== null) {
      client.on('data', data => {
        console.log(`client data`, data.toString());
      });
      client.on('error', err => {
        console.log(`client error: `, err);
        client.destroy();
        beginIsJoined();
      });
      init(isJoinedClient.port);
    }
    return () => {
      if (isJoinedClient.isConnectected && isJoinedClient.port !== null) {
        client.destroy();
        beginIsJoined();
      }
    };
  }, [isJoinedClient]);

  const handleIsJoined = (boolVal, port) => {
    setIsJoinedClient(prev => ({
      ...prev,
      isConnectected: boolVal,
      port: port,
    }));
  };

  const beginIsJoined = () => {
    setIsJoinedClient(prev => ({
      ...prev,
      isConnectected: false,
      port: null,
    }));
  };

  return (
    <ClientContext.Provider
      value={{isJoinedClient, setIsJoinedClient: handleIsJoined}}>
      {children}
    </ClientContext.Provider>
  );
}
```

```
server.on('connection', socket => {
  console.log(socket.address());
  socket.on('data', data => {
    console.log(`in server `, data.toString());
  });
});
```

This handler should log the socket's address when there is a connection, or log incoming data, but neither is happening. Can you help me figure out what I might be doing wrong? Thanks in advance.
react-native-tcp-socket connection event not working
|react-native|tcp|tcpserver|tcpsocket|
null
I am new to Hadoop, so I am assuming (and it seems obvious) that Hadoop has to keep track of the free space in each datanode: when a request to save a new file comes in, the Hadoop system has to decide which datanodes to choose to store the incoming file.

Now, each datanode keeps its data at the location given by the config property `dfs.datanode.data.dir`, so I can set any directory location in this config. Let us assume I set `D:/data/hadoop`.

How does Hadoop calculate the free space in this directory, given that there is no such concept as "free space in a directory"? Does Hadoop consider the free space available on the D: drive in this example as the total free space of the datanode?
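To ground the "free space in a directory" point at the OS level: asking for the free space of a directory really reports the free space of the filesystem (partition/volume) that the directory lives on. A small illustration in Python (not Hadoop's actual code, just the underlying OS behavior):

```python
import os
import shutil
import tempfile

# There is no per-directory free space: the OS reports the free space
# of the partition the directory is on.
d = tempfile.mkdtemp()
usage = shutil.disk_usage(d)
print(usage.total, usage.free)

# A subdirectory on the same partition sees the same total capacity.
sub = os.path.join(d, "sub")
os.makedirs(sub)
print(shutil.disk_usage(sub).total == usage.total)  # True
```

So any per-directory accounting a storage system does on top of this has to come from its own bookkeeping, not from the filesystem.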
How does Hadoop determine DataNode free space?
|hadoop|
I am using uuid as id for the MySQL db. But when I try to retrieve using id, it's returning null rather than the correct object. I crossed check the db using workbench, the record is there but its not retrieving when trying to do usig JPA. I have following classes BaseEntity: ``` @Getter(AccessLevel.PUBLIC) @MappedSuperclass public abstract class BaseEntity { @Id() @GeneratedValue(strategy = GenerationType.UUID) UUID id; @CreationTimestamp Date createdAt; } ``` ArticleRepo: ``` @Repository public interface ArticlesRepository extends JpaRepository<ArticleEntity, UUID> { ArticleEntity findBySlug(String slug); Optional<ArticleEntity> findById(UUID id); } ``` ArticleService: ``` @Service public class ArticlesService { public final ArticlesRepository articlesRepository; public final UsersRepository usersRepository; public ArticlesService( ArticlesRepository articlesRepository, UsersRepository usersRepository ) { this.articlesRepository = articlesRepository; this.usersRepository = usersRepository; } public List<ArticleEntity> getAllArticles(){ List<ArticleEntity> articlesIterable = articlesRepository.findAll(); return StreamSupport.stream(articlesIterable.spliterator(),false) .collect(Collectors.toList()); } public ArticleEntity getArticleBySlug(String slug){ var article = articlesRepository.findBySlug(slug); if(slug==null){ return null; } return article; } public ArticleEntity createArticle(CreateArticleDTO createArticleDTO, String authorUsername){ var author = usersRepository.findByUsername(authorUsername); return articlesRepository.save( ArticleEntity.builder() .title(createArticleDTO.getTitle()) .slug(createArticleDTO.getTitle().toLowerCase().replaceAll(" ","-")) .subtitle(createArticleDTO.getSubtitle()) .body(createArticleDTO.getBody()) .author(author) .build() ); } public Optional<ArticleEntity> getArticleById(UUID id){ return articlesRepository.findById(id); } } Whenever I try to get a record using the UUID, I am getting null as opposed to the record. 
```

I also tried storing the UUID as VARCHAR in the MySQL db, but that didn't work either:

```
public abstract class BaseEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.UUID)
    @JdbcTypeCode(SqlTypes.BINARY)
    UUID id;

    @CreationTimestamp
    Date createdAt;
}
```
I am using Polars (`{ version = "0.38.3", features = ["lazy", "streaming", "parquet", "fmt", "polars-io", "json"] }`) with Rust (`v1.77.0`) to process a large dataset (larger than available memory) inside a Docker container. The Docker container's memory is intentionally limited to 6GB using `--memory=20gb` and `--shm-size=20gb`. I am encountering an out of memory error while performing calculations on the dataset. Here's an overview of my workflow: 1- Load the dataset from a Parquet file using scan_parquet to create a LazyDataframe. 2- Perform transformations on the dataframe, which is unnesting. 4- Write the resulting data to disk as a Parquet file using sink_parquet. Here is a code snippet that demonstrates the relevant parts of my Rust code: ```rust use jemallocator::Jemalloc; use polars::{ prelude::*, }; use std::time::Instant; #[global_allocator] static GLOBAL: Jemalloc = Jemalloc; fn main() { let now = Instant::now(); let mut lf = LazyFrame::scan_parquet( "./dataset.parquet", ScanArgsParquet { low_memory: true, ..Default::default() }, ) .unwrap(); lf = lf.with_streaming(true).unnest(["fields"]); let query_plan = lf.clone().explain(true).unwrap(); println!("{}", query_plan); lf.sink_parquet("./result.parquet".into(), Default::default()) .unwrap(); let elapsed = now.elapsed(); println!("Elapsed: {:.2?}", elapsed); } ``` Despite using LazyFrame and enabling low_memory mode in ScanArgsParquet, I still encounter an out of memory error during the execution of the code. I have tried the following: - Using the jemallocator crate as the global allocator. - Enabling streaming mode using with_streaming(true) for the LazyFrame operations. - Using the `low_memory: true` in the scan_parquet function. 
The printed plan indicates that every operation should run in the streaming engine:

```
--- STREAMING
UNNEST by:[fields]
  Parquet SCAN ./resources/dataset.parquet
  PROJECT */2 COLUMNS
--- END STREAMING

DF []; PROJECT */0 COLUMNS; SELECTION: "None"
```

However, I still run into memory issues when processing the large dataset (Parquet file size = 20GB). My questions are:

- Why am I getting the OOM error when everything indicates the streaming engine is being used?
- Is there another way to leverage disk-based processing, or to chunk the data, to handle datasets larger than memory?

Any guidance or suggestions on how to resolve this issue would be greatly appreciated. Thank you in advance!
I am making a website with the option to change font-size. Every time when you refresh the page though, the font-size you selected returns to the original font-size. The code below shows the html and JavaScript code for how this function works: <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-js --> let buttons = document.querySelector('.buttons'); let btn = buttons.querySelectorAll('.btn'); for (var i = 0; i <btn.length; i++){ btn[i].addEventListener('click', function(){ let current = document.getElementsByClassName('clicked'); current[0].className = current[0].className.replace("clicked", ""); this.className += " clicked"; }) } <!-- language: lang-css --> :root { --secondary-color: #4978ff; } .sec .buttons { width: 100%; display: flex; justify-content: flex-end; align-items: flex-end; margin-bottom: 20px; } .sec .buttons .btn { padding: 0 10px; display: inline-flex; background: #ddd; color: var(--primary-color); margin-left: 10px; cursor: pointer; } .sec .buttons .btn.clicked { background: var(--secondary-color); } .sec .buttons .btn:nth-child(2) { font-size: 1.5em; } .sec .buttons .btn:nth-child(3) { font-size: 2em; } <!-- language: lang-html --> <section class="sec"> <div class="content"> <div class="buttons"> <span class="btn clicked" onclick="document.getElementById('text').style.fontSize = '1em'">A</span> <span class="btn" onclick="document.getElementById('text').style.fontSize = '1.25em'">A</span> <span class="btn" onclick="document.getElementById('text').style.fontSize = '1.75em'">A</span> </div> <div class="text" id="text"> <h3>i'm an h3</h3> <p>i'm a paragraph</p> </div> </div> </section> <!-- end snippet --> I was wondering if it was possible to store the chosen font-size in the localStorage so that it stays the same after you refresh the page?
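What I have in mind is something like the following sketch, but I'm not sure how to wire it up (the storage shim is only there so the logic runs outside a browser, and the key name is my own):

```javascript
// Sketch: persist the chosen font size. The in-memory shim stands in for
// window.localStorage so the logic can run outside a browser.
const storage = (typeof localStorage !== 'undefined') ? localStorage : (() => {
  const m = new Map();
  return {
    getItem: (k) => (m.has(k) ? m.get(k) : null),
    setItem: (k, v) => { m.set(k, String(v)); },
  };
})();

const KEY = 'preferredFontSize';

function saveFontSize(size) {
  storage.setItem(KEY, size); // e.g. '1.25em'
}

function loadFontSize(fallback = '1em') {
  // getItem returns null on the first visit, so fall back to the default
  return storage.getItem(KEY) || fallback;
}

// In the browser this would run on page load:
// document.getElementById('text').style.fontSize = loadFontSize();
```

The idea would be to call saveFontSize inside each button's click handler and loadFontSize once when the page loads.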
Windows 11, Visual Studio 2015. I would like to filter Double values in a DataView.

[enter image description here][1]
[enter image description here][2]

[1]: https://i.stack.imgur.com/ZckkI.png
[2]: https://i.stack.imgur.com/ICCrr.png

1. testo = text
2. numerico = numeric
3. valuta = currency

I can filter Strings:

    Dim srchStr As String = Me.TextBox1.Text
    Dim strFilter As String = "MyCol1 LIKE '*" & srchStr.Replace("'", "''") & "*'"
    dv.RowFilter = strFilter

I can filter Integers:

    Dim srchStr As String = Me.TextBox1.Text
    Dim id As Integer
    If Integer.TryParse(srchStr, id) Then
        dv.RowFilter = "code = " & id
    Else
        MessageBox.Show("Error: ........")
    End If
    Dim strFilter As String = "code = " & id
    dv.RowFilter = strFilter

But I cannot filter a Double value. I currently use this code to filter strings in my DataGridView:

    Private Sub MyTabDataGridView_DoubleClick(ByVal sender As Object, ByVal e As System.EventArgs) Handles MyTabDataGridView.DoubleClick
        Try
            'My row
            Dim row As Integer = MyTabDataGridView.CurrentRow.Index
            'My column
            Dim column As Integer = MyTabDataGridView.CurrentCell.ColumnIndex
            'Value at my column and row
            Dim ColumnRow As String = MyTabDataGridView(column, row).FormattedValue.ToString
            'Header text
            Dim HeaderText As String = MyTabDataGridView.Columns(column).HeaderText
            'Exclude the columns that would cause errors
            If HeaderText = "id" Or HeaderText = "MyCol3" Or HeaderText = "MyCol4" Or HeaderText = "MyCol5" Then
                Exit Sub
            End If
            'Ready to filter
            Dim strFilter As String = HeaderText & " Like '*" & ColumnRow.Replace("'", "''") & "*'"
            dv.RowFilter = strFilter
        Catch ex As Exception
        End Try
    End Sub

Any suggestion will be highly appreciated.
|vba|filtering|
I am getting this error:

    error CS1061: 'NavMeshSurface' does not contain a definition for 'IsPartOfPrefab' and no accessible extension method 'IsPartOfPrefab' accepting a first argument of type 'NavMeshSurface' could be found (are you missing a using directive or an assembly reference?)

I don't know what to try next.
Unity: 'NavMeshSurface' does not contain a definition for 'IsPartOfPrefab'
|debugging|
I have a website for car towing and roadside-assistance services for passenger cars, including assistance for damaged cars, in the north of the country. The pages I create are not getting indexed for a long time. Please take a look: emdadrodbar.ir. I used the Yoast plugin and registered my sitemap in Google Search Console, but it didn't change much. Is there a better solution to get my pages indexed sooner?
Why does it take so long for pages on emdadrodbar.ir to be indexed?
I want to update or insert 100,000 records into a database in a single batch process, using an effective design pattern. Example: my table has 1 million records with a date range from 1/1/2020 to 1/1/2024. If a user updates a transaction in the middle, e.g. on 1/1/2022, then based on that date I need to update all the records that come after 1/1/2022. I am using Java and Hibernate.
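To make the requirement concrete, the chunked processing I have in mind looks roughly like this sketch (the chunk size is a placeholder; with Hibernate I assume each chunk would end with a flush and clear of the session):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a large update into fixed-size chunks. With Hibernate the
// idea would be to call session.flush() + session.clear() after each chunk
// so the persistence context (and memory use) stays small.
class BatchUpdateSketch {

    // Returns the list of [startIndex, endIndexExclusive) chunks.
    static List<int[]> chunks(int totalRecords, int batchSize) {
        List<int[]> result = new ArrayList<>();
        for (int start = 0; start < totalRecords; start += batchSize) {
            result.add(new int[] { start, Math.min(start + batchSize, totalRecords) });
        }
        return result;
    }

    public static void main(String[] args) {
        // 100,000 records in chunks of 1,000 -> 100 flush/clear cycles
        System.out.println(chunks(100_000, 1_000).size());
    }
}
```

Each chunk would then load the records after the changed date (paged by the same start/end indexes) and update them before moving on.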
I'm attempting to display a list of movies on a website using Jinja2 (a template engine for Python) and Bootstrap (a front-end framework), but the movie cards aren't being displayed as expected. I'm having trouble getting the card's background image to show correctly and making sure the movie information is displayed in a clear, organized way.

```
<!--{% extends 'base.html' %}
{% block conteudo %}
<h2 style="text-align: center;">Teste de filmes</h2>
<hr>
<ul class="list-group">
    {% for filme in filmes %}
    <li>{{filme.title}}</li>
    <p>{{ filme.overview }}</p>
    <p>Release Date: {{ filme.release_date }}</p>
    <p>Vote Average: {{ filme.vote_average }}</p>
    <p>Vote Count: {{ filme.vote_count }}</p>
    <hr>
    {% endfor %}
</ul>
{% endblock conteudo %}-->

{% extends 'base.html' %}
{% block conteudo %}
<h2 style="text-align:center;">Lista de Filmes</h2>
<hr>
<div class="row">
    {% for filme in filmes %}
    <div class="col-md-4">
        <div class="card" style="width: 18rem;">
            <img src="http://image.tmdb.org/t/p/w500{{filme.backdrop_path}}" class="card-img-top" alt="...">
            <div class="card-body">
                <h5 class="card-title">{{filme.title}}</h5>
                <p class="card-text">{{filme.overview}}</p>
                <hr>
                <h4>Nota mΓ©dia <span class="badge bg-secondary">{{filme.vote_average}}</span></h4>
            </div>
        </div>
    </div>
    {% if loop.index % 3 == 0 %}
</div>
<div class="row">
    {% endif %}
    {% endfor %}
</div>
{% endblock %}
```

What I have tried:

- Checking that the URL of the movie's background image is correct and accessible.
- Ensuring that all Bootstrap classes are applied correctly.
- Verifying that the `filmes` variable is passed correctly to the template.

Any help or suggestions would be greatly appreciated. Thank you!

[enter image description here][1]

[1]: https://i.stack.imgur.com/sCFAk.png
I'm trying to develop a school schedule generator in JavaScript for a high school with various subjects, teachers, and classes. The goal is to create a balanced, efficient schedule that minimizes conflicts while considering teacher preferences and availability.

The subjects each teacher teaches are predetermined, and I have already distributed each teacher's lectures evenly over their working days, based on the number of weekly lectures per subject and the teacher's number of working days. However, when it comes to distributing a teacher's lectures among the different classes, I have trouble finding a method that spreads the lectures evenly across all classes without being unfair to any teacher (I am not looking for an ideal method, just an acceptable one). If I only had to distribute lectures to one class, or if no teacher taught more than one subject in more than one class, it would be easy; the difficulty is distributing all teachers' lectures across all classes as fairly as possible.

**Constraints:**

I'm struggling with the subject-distribution algorithm. Ideally, I'd like an efficient algorithm that can handle these constraints:

1. Balanced subject distribution across classes throughout the week.
2. Respecting teachers' unavailable days.
3. Distributing teachers' lectures to classes in an organized, fair manner for everyone.
4. Distributing lectures equally over workdays, with one day on which the teacher finishes their lectures early.

The constraints may seem complicated, but I wanted to make them as clear as possible.

**Additional Information:**

I'm familiar with basic object-oriented programming concepts in JavaScript and have looked at some online resources on scheduling algorithms, but they seem too complex for my needs. Are there any efficient algorithms suitable for this scenario, or can you suggest an approach for distributing subjects effectively while considering the mentioned constraints?
**Example Data:** ``` const teachers = [ { name: "Math-Teacher", id: "T1", workDays: 6, unavailableDays:[], subjects: ["M1", "M2", "M3"], }, { name: "Quilting-Teacher", id: "T2", workDays: 2, unavailableDays:['Mon'], subjects: ["Q1", "Q2", "Q3"], }, { name: "Italian-Teacher", id: "T3", workDays: 6, unavailableDays:[], subjects: ["I1", "I2", "I3"], }, { name: "Biology-Teacher", id: "T4", workDays: 4, unavailableDays:[], subjects: ["B1", "B2", "B3"], }, { name: "history-Teacher", id: "T5", workDays: 2, unavailableDays:['Sat', 'Tue'], subjects: ["H1"], }, { name: "Phasics-Teacher", id: "T6", workDays: 5, unavailableDays:[], subjects: ["P1", "P2", "P3"], }, { name: "Italian-Teacher", id: "T7", workDays: 3, unavailableDays:[], subjects: ["I1", "I2", "I3"], }, { name: "Chemistry-Teacher", id: "T8", workDays: 3, unavailableDays:[], subjects: ["C1", "C2", "C3"], }, { name: "English-Teacher", id: "T9", workDays: 4, unavailableDays:[], subjects: ["M1"], }, { name: "Arabic-Teacher", id: "T10", workDays: 6, unavailableDays:[], subjects: ["A1", "A2"], }, ]; const subjects = [ //1-sec subjects { name: "Math", class: "1-sec", id: "M1", weeklyLectures: 7, }, { name: "Biology", class: "1-sec", id: "B1", weeklyLectures: 4, }, { name: " Quilting", class: "1-sec", id: "Q1", weeklyLectures: 3, }, { name: "Isramic Culture", class: "1-sec", id: "I1", weeklyLectures: 3, }, { name: "Phasics", class: "1-sec", id: "P1", weeklyLectures: 5, }, { name: "History", class: "1-sec", id: "H1", weeklyLectures: 3, }, { name: "English", class: "1-sec", id: "E1", weeklyLectures: 5, }, { name: "Arabic", class: "1-sec", id: "A1", weeklyLectures: 6, }, { name: "Chemistry", class: "1-sec", id: "C1", weeklyLectures: 3, }, //2-sec subjects { name: "Math", class: "2-sec", id: "M2", weeklyLectures: 7, }, { name: "Biology", class: "2-sec", id: "B2", weeklyLectures: 4, }, { name: " Quilting", class: "2-sec", id: "Q2", weeklyLectures: 3, }, { name: "Isramic Culture", class: "2-sec", id: "I2", weeklyLectures: 3, }, 
{ name: "Phasics", class: "2-sec", id: "P2", weeklyLectures: 5, }, { name: "English", class: "2-sec", id: "E2", weeklyLectures: 5, }, { name: "Arabic", class: "2-sec", id: "A2", weeklyLectures: 6, }, { name: "Chemistry", class: "2-sec", id: "C2", weeklyLectures: 3, }, //3-sec subjects { name: "Math", class: "3-sec", id: "M3", weeklyLectures: 7, }, { name: "Biology", class: "3-sec", id: "B3", weeklyLectures: 4, }, { name: " Quilting", class: "3-sec", id: "Q3", weeklyLectures: 3, }, { name: "Isramic Culture", class: "3-sec", id: "I3", weeklyLectures: 3, }, { name: "Phasics", class: "3-sec", id: "P3", weeklyLectures: 5, }, { name: "English", class: "3-sec", id: "E3", weeklyLectures: 5, }, { name: "Arabic", class: "3-sec", id: "A3", weeklyLectures: 6, }, { name: "Chemistry", class: "3-sec", id: "C3", weeklyLectures: 3, }, ]; const classes = [ { name: "1-secondary", id: "1-sec", DailyLectures: 7, subjects: ["M1", "Q1", "I1", "A1", "E1", "H1", "C1", "B1", "P1"], }, { name: "2-secondary", id: "2-sec", DailyLectures: 7, subjects: ["M2", "Q2", "I2", "A2", "E2", "C2", "B2", "P2"], }, { name: "3-secondary", id: "3-sec", DailyLectures: 7, subjects: ["M3", "Q3", "I3", "A3", "E3", "C3", "B3", "P3"], }, ]; const daysOfWeek = ["Sat", "Sun", "Mon", "Tue", "Wed", "Thr"]; ``` **Expected Output**: I expect the output to be a weekly schedule for each class, with lectures of teachers evenly distributed across the working days. 
For example(like this but in a efficient way): ``` 1-sec: { Sat: [ { name: 'Math', class: '1-sec', id: 'M1', weeklyLectures: 7 }, { name: 'Math', class: '1-sec', id: 'M1', weeklyLectures: 7 }, { name: ' Quilting', class: '1-sec', id: 'Q1', weeklyLectures: 3 }, { name: ' Quilting', class: '1-sec', id: 'Q1', weeklyLectures: 3 }, { name: 'Isramic Culture', class: '1-sec', id: 'I1', weeklyLectures: 3 }, { name: 'Biology', class: '1-sec', id: 'B1', weeklyLectures: 4 }, { name: 'History', class: '1-sec', id: 'H1', weeklyLectures: 3 }, { name: 'History', class: '1-sec', id: 'H1', weeklyLectures: 3 } ], Sun: [ { name: 'Math', class: '1-sec', id: 'M1', weeklyLectures: 7 }, { name: ' Quilting', class: '1-sec', id: 'Q1', weeklyLectures: 3 }, { name: 'Isramic Culture', class: '1-sec', id: 'I1', weeklyLectures: 3 }, { name: 'Biology', class: '1-sec', id: 'B1', weeklyLectures: 4 }, { name: 'History', class: '1-sec', id: 'H1', weeklyLectures: 3 }, { name: 'Phasics', class: '1-sec', id: 'P1', weeklyLectures: 5 }, { name: 'Chemistry', class: '1-sec', id: 'C1', weeklyLectures: 3 } ], Mon: [ { name: 'Math', class: '1-sec', id: 'M1', weeklyLectures: 7 }, { name: 'Isramic Culture', class: '1-sec', id: 'I1', weeklyLectures: 3 }, { name: 'Biology', class: '1-sec', id: 'B1', weeklyLectures: 4 }, { name: 'Phasics', class: '1-sec', id: 'P1', weeklyLectures: 5 }, { name: 'Chemistry', class: '1-sec', id: 'C1', weeklyLectures: 3 }, { name: 'Math', class: '1-sec', id: 'M1', weeklyLectures: 7 }, { name: 'Math', class: '1-sec', id: 'M1', weeklyLectures: 7 } ], Tue: [ { name: 'Math', class: '1-sec', id: 'M1', weeklyLectures: 7 }, { name: 'Biology', class: '1-sec', id: 'B1', weeklyLectures: 4 }, { name: 'Phasics', class: '1-sec', id: 'P1', weeklyLectures: 5 }, { name: 'Math', class: '1-sec', id: 'M1', weeklyLectures: 7 }, { name: 'Arabic', class: '1-sec', id: 'A1', weeklyLectures: 6 } ], Wed: [ { name: 'Math', class: '1-sec', id: 'M1', weeklyLectures: 7 }, { name: 'Phasics', class: '1-sec', id: 
'P1', weeklyLectures: 5 }, { name: 'Arabic', class: '1-sec', id: 'A1', weeklyLectures: 6 } ], Thr: [ { name: 'Math', class: '1-sec', id: 'M1', weeklyLectures: 7 }, { name: 'Arabic', class: '1-sec', id: 'A1', weeklyLectures: 6 } ] } ```
The problem in your code is that `winsound` accepts `.wav` files, not `.mp3`. You can use any converter to turn your `.mp3` audio file into a `.wav`; then it should work.
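For completeness, a small sketch of a guarded call (winsound is a Windows-only stdlib module, so the snippet checks the platform; the file name is a placeholder):

```python
import sys

def play_wav(path):
    """Play a .wav file with winsound; refuse other extensions."""
    if not path.lower().endswith(".wav"):
        # winsound silently misbehaves on non-wav input, so fail early
        raise ValueError("winsound.PlaySound only accepts .wav files")
    if sys.platform != "win32":
        raise OSError("winsound is only available on Windows")
    import winsound  # stdlib, Windows-only
    winsound.PlaySound(path, winsound.SND_FILENAME)
```

Calling `play_wav("mysound.mp3")` then fails loudly instead of producing silence or an obscure error.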
I am having trouble reading a large .dat file into R. I am using ``` data <- read.table("...2018029_ascii/FRSS108PUF.dat", fill=TRUE) ``` This results in a large dataframe with V1, V2 as column names. I am using the ASCII file at this link: https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2018029 "...nameoffolder/2018029_ascii/FRSS108PUF.dat"
To transform a Sales Order into an Item Fulfillment:

    var objItemFulfillment = record.transform({
        fromType: record.Type.SALES_ORDER,
        fromId: salesOrderID,
        toType: record.Type.ITEM_FULFILLMENT,
        isDynamic: false
    });

Inventory Details are their own subrecords, which may exist on each line. While looping through the lines of your transformed Item Fulfillment, you can get the Inventory Details subrecord. If it doesn't exist, this will create one; otherwise you will be editing the existing one. The `fieldId` for the inventory details subrecord is `inventorydetail`.

    let intIFLineCount = objItemFulfillment.getLineCount({sublistId: 'item'});
    for (let i = 0; i < intIFLineCount; i++) {
        // this is your inventory details record
        let objSubrecord = objItemFulfillment.getSublistSubrecord({
            sublistId: 'item',
            fieldId: 'inventorydetail',
            line: i
        });
    }

Inventory Details is a record like any other, with its own sublist `inventoryassignment` that contains various fields, which you can find here: [https://system.netsuite.com/help/helpcenter/en_US/srbrowser/Browser2023_2/script/record/inventorydetail.html][1]

  [1]: https://system.netsuite.com/help/helpcenter/en_US/srbrowser/Browser2023_2/script/record/inventorydetail.html

Into this sublist you can set values like you normally would:

    objSubrecord.setSublistValue({sublistId: 'inventoryassignment', fieldId: 'binnumber', value: 123456, line: x});
    objSubrecord.setSublistValue({sublistId: 'inventoryassignment', fieldId: 'quantity', value: 20, line: x});

You can of course retrieve the values from existing inventory details too:

    let binNumber = objSubrecord.getSublistValue({sublistId: 'inventoryassignment', fieldId: 'binnumber', line: x});
    let quantity = objSubrecord.getSublistValue({sublistId: 'inventoryassignment', fieldId: 'quantity', line: x});

So the only annoying part is that you will be looping through the item sublist on the Item Fulfillment record, and on each line you will loop through another sublist, this time on the Inventory Details record.
Storing the preferred font-size in localStorage
|javascript|html|css|
Yes, it is. At WWDC 23, new ScreenCaptureKit APIs were introduced for capturing screenshots programmatically. Please refer to: **What’s new in ScreenCaptureKit** https://developer.apple.com/videos/play/wwdc2023/10136/?time=586
There is a feature that comes with `iOS 17`: [defaultScrollAnchor(_:)][1] ScrollView { ForEach(0..<50) { i in Text("Item \(i)") .frame(maxWidth: .infinity) .padding() .background(.blue) .clipShape(.rect(cornerRadius: 25)) } } .defaultScrollAnchor(.bottom) [1]: https://developer.apple.com/documentation/swiftui/view/defaultscrollanchor(_:)
|database-design|database-normalization|first-normal-form|
|database-design|database-normalization|
I'd like to create an online quiz application. I have in mind several types of exercises, including those for filling in the blank: An exercise can require a user to enter the text: ``` Chemical energy produced by the ____ is stored in a small molecule called ____ ``` Or to pick an option from the given list: ``` Mitochondria contain their own small (genes/cells/chromosomes) ``` I've been wondering about the best way to represent it in SQL schema. The blanks can appear in any part of the sentence, and there can be any number of blanks. So I thought to represent it like this: ```java class BlankExercise { @ManyToOne private List<Part> parts; } ``` Where a `Part` can either be normal text or can be the blank. Thus each exercise is a sum of its parts: ``` Chemical energy produced by the ____ is stored in a small molecule called ____ ``` would have 4 parts: ``` 1. Chemical energy produced by the 2. (Mitochondria) 3. is stored in a small molecule called 4. (adenosine) ``` But that would require using a form of inheritance strategy - the `Part` would be a common parent class, which would be extended by `NormalPart` and `BlankEmptyPart` and `BlankOptionsPart`. But I've read that one should abstain from using inheritance with database entities, if at all possible, but it's difficult for me to see a solution that wouldn't involve inheritance. 
The only other solution I could think of would use several tables: a separate table for the 'normal' text, and separate tables for each type of blank:

```
exercise_texts
  exercise_id
  text
```

For the blanks where the user has to type the whole word:

```
exercise_blanks_empty
  exercise_id
  solution_text
  blank_index
```

And for the blanks where the user picks an existing option:

```
exercise_blanks_options
  blank_index
  exercise_id
  blank_id
```

with

```
exercise_blanks_options_list
  blank_id
  option_text
  is_correct
```

Each blank table would have a `blank_index` column corresponding to the blank's position in the sentence: for instance, `Mitochondria` would have an index of `1` and `adenosine` would have an index of `3`. I was wondering which design is better, or whether neither is feasible and there is a better one?
Combining two wav files: import java.io.File; import java.io.IOException; import java.io.SequenceInputStream; import javax.sound.sampled.AudioFileFormat; import javax.sound.sampled.AudioInputStream; import javax.sound.sampled.AudioSystem; public class WavAppender { public static void main(String[] args) { String wavFile1 = "D:\\wav1.wav"; String wavFile2 = "D:\\wav2.wav"; try { AudioInputStream clip1 = AudioSystem.getAudioInputStream(new File(wavFile1)); AudioInputStream clip2 = AudioSystem.getAudioInputStream(new File(wavFile2)); AudioInputStream appendedFiles = new AudioInputStream( new SequenceInputStream(clip1, clip2), clip1.getFormat(), clip1.getFrameLength() + clip2.getFrameLength()); AudioSystem.write(appendedFiles, AudioFileFormat.Type.WAVE, new File("D:\\wavAppended.wav")); } catch (Exception e) { e.printStackTrace(); } } }
I'm student in practicing PintOS Project. In Programming Project 3(Virtual Memory), I got ploblems about "preprocess in compiling" (C program). I had tried all attempt that do my best, but I'm absolutely lost at this point on how to fix it. Finally i come to here, had to ask you about this issue. **error** I finished stack growth part, so I am modifying `syscall.c` to implement `mmap`, but got problems in buid process. **the `spt` field is being recognized as an incomplete type and is not being excluded.** **current situation** The thread structure in question is declared in `thread.h`, and the type `supplemental_page_table` of spt, an element in the `thread` structure, is declared in `vm.h`. Above the thread structure in current thread.h `#ifdef VM #include "vm/vm.h"` is preprocessing the format vm.h. I am currently using the EC2 server(ubuntu 18.04) via SSH connection to VS code, and have tried solutions such as make clean, make, inserting and changing the order of #include preprocessing and forward declaration code, and rebooting + reinstalling EC2, but there is no progress. **Questions** 1. If `vm.h`, where the `spt` structure is declared in `thread.h`, is included before the thread structure, shouldn't it be able to be used without problems? ``` ... #ifdef VM // I'm in project3(VM) #include "vm/vm.h" ... struct thread { ... #ifdef VM /* Table for whole virtual memory owned by thread. */ struct supplemental_page_table spt; // The spt structure is defined in vm.h. ... ``` ``` /* Print in terminal */ In file included from ../../include/userprog/process.h:4:0, from ../../include/vm/vm.h:7, from ../../vm/vm.c:4: ../../include/threads/thread.h:151:33: error: field β€˜spt’ has incomplete type struct supplemental_page_table spt; ^~~ In file included from ../../vm/vm.c:4:0: ../../include/vm/vm.h:200:1: warning: "/*" within comment [-Wcomment] /* ν˜„μž¬ ν”„λ‘œμ„ΈμŠ€μ˜ λ©”λͺ¨λ¦¬ 곡간을 λ‚˜νƒ€λ‚΄λŠ” κ΅¬μ‘°μ²΄μž…λ‹ˆλ‹€. 
../../vm/vm.c: In function β€˜vm_init’: ../../vm/vm.c:21:23: warning: unused variable β€˜start’ [-Wunused-variable] struct list_elem *start = list_begin(&frame_table); ^~~~~ ../../vm/vm.c: In function β€˜vm_alloc_page_with_initializer’: ../../vm/vm.c:84:1: warning: label β€˜err’ defined but not used [-Wunused-label] err: ^~~ ../../vm/vm.c: In function β€˜spt_insert_page’: ../../vm/vm.c:105:6: warning: unused variable β€˜succ’ [-Wunused-variable] int succ = false; ^~~~ ../../vm/vm.c: In function β€˜spt_remove_page’: ../../vm/vm.c:111:55: warning: unused parameter β€˜spt’ [-Wunused-parameter] void spt_remove_page (struct supplemental_page_table *spt, struct page *page) { ^~~ ... ``` 2. Additionally, I have a customization called `tid_t` in `process.h` that is also defined in `thread.h`, and I've included it, but it doesn't reference it. I solved this problem by defining it again in process.h ,it is repeated, but I thought I'd ask along the same problem-context as above. For reference, I have the above issue after this. ``` /* Code in process.h */ #ifndef USERPROG_PROCESS_H #define USERPROG_PROCESS_H #include "threads/thread.h" bool install_page (void *upage, void *kpage, bool writable); // typedef int tid_t; // If i remove annotation in this line, going to problem mentioned above tid_t process_create_initd (const char *file_name); tid_t process_fork (const char *name, struct intr_frame *if_); int process_exec (void *f_name); int process_wait (tid_t); void process_exit (void); void process_activate (struct thread *next); #endif /* userprog/process.h */ /* Print */ In file included from ../../include/vm/vm.h:7:0, from ../../include/threads/thread.h:12, from ../../threads/init.c:24: ../../include/userprog/process.h:9:1: error: unknown type name β€˜tid_t’; did you mean β€˜size_t’? 
tid_t process_create_initd (const char *file_name); ^~~~~ size_t ``` ``` /* here is modified code i did, but i think it is not problem */ void *mmap (void *addr, size_t length, int writable, int fd, off_t offset) { if (offset % PGSIZE != 0) { return NULL; } if (pg_round_down(addr) != addr || is_kernel_vaddr(addr) || addr == NULL || (long long)length <= 0) return NULL; if (fd == 0 || fd == 1) { exit(-1); } if (spt_find_page(&thread_current()->spt, addr)) return NULL; struct file *target = find_file_by_fd(fd); if (target == NULL) return NULL; void * ret = do_mmap(addr, length, writable, target, offset); return ret; } void munmap (void *addr) { do_munmap(addr); } /* Do the mmap */ void *do_mmap (void *addr, size_t length, int writable, struct file *file, off_t offset) { struct file *mfile = file_reopen(file); void * ori_addr = addr; size_t read_bytes = length > file_length(file) ? file_length(file) : length; size_t zero_bytes = PGSIZE - read_bytes % PGSIZE; while (read_bytes > 0 || zero_bytes > 0) { size_t page_read_bytes = read_bytes < PGSIZE ? 
read_bytes : PGSIZE;
        size_t page_zero_bytes = PGSIZE - page_read_bytes;

        struct supplemental_page_table *spt = (struct supplemental_page_table*)malloc(sizeof(struct supplemental_page_table));
        spt->file = mfile;
        spt->offset = offset;
        spt->read_bytes = page_read_bytes;

        if (!vm_alloc_page_with_initializer(VM_FILE, addr, writable, lazy_load_segment, spt)) {
            return NULL;
        }
        read_bytes -= page_read_bytes;
        zero_bytes -= page_zero_bytes;
        addr += PGSIZE;
        offset += page_read_bytes;
    }
    return ori_addr;
}

/* Do the munmap */
void do_munmap (void *addr) {
    while (true) {
        struct page *page = spt_find_page(&thread_current()->spt, addr);
        if (page == NULL)
            break;
        struct supplemental_page_table *aux = (struct supplemental_page_table *) page->uninit.aux;

        // check the dirty (previously used) bit
        if (pml4_is_dirty(thread_current()->pml4, page->va)) {
            file_write_at(aux->file, addr, aux->read_bytes, aux->offset);
            pml4_set_dirty(thread_current()->pml4, page->va, 0);
        }
        pml4_clear_page(thread_current()->pml4, page->va);
        addr += PGSIZE;
    }
}
```

[Here is my Team git-repository][3]

For now, we've saved this git repository in an intact (error-free) state, but we'll push a state with the same errors soon.

Thank you very much for your time!

  [1]: https://i.stack.imgur.com/Z3yIc.png
  [2]: https://i.stack.imgur.com/jmvKL.png
  [3]: https://github.com/KraftonJungle4th/Classroom5_Week10-11_Team3_PintOS/tree/DJ
This approach will work (tested with Go 1.22) ```go func (sh StreamHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request) { go func(done <-chan struct{}) { <-done fmt.Println("message", "client connection has gone away, request got cancelled") }(req.Context().Done()) // .. the rest of your code here } ```
The namespace at the beginning of your controller file should match the hierarchy of the directory structure; it should implement the [PSR-4 autoloading standard][1]. Check the [Laravel docs][2] for more details.

So the namespace for DokterController should be:

    namespace App\Http\Controllers\dokter;

and the use statement should be:

    use App\Http\Controllers\dokter\DokterController;

Further, try to refactor the code in your route file. Instead of naming the controller separately in each route definition, you can group the routes by controller:

    Route::controller(controllername::class)->group(function () {
        Route::get('routeurl', 'controllerMethodToCallForThisRoute');
    });

  [1]: https://www.php-fig.org/psr/psr-4/
  [2]: https://laravel.com/docs/11.x/structure#the-app-directory
I am having problems with the gcc-11 C++ compiler when using multiple inheritance. I want to call a specific method of the base class, so I write `using A::m1;` explicitly. Visual C++ 2022 accepts this code, but gcc-11 rejects it with an error. This is the code:

```
struct Base
{
    void m1(){}
    void m2(){}
    void m3(){}
};

struct A: Base{};
struct B: Base{};

struct C: A,B
{
    using A::m1;

    void mc()
    {
        m1();
    }
};
```

MSVC++ 2022 compiles it fine, but gcc-11 fails with this error:

```
In member function 'void C::mc()':
'Base' is an ambiguous base of 'C'
    m1();
```

Is this a bug in the GCC 11 compiler? How can I overcome it? Why isn't the `using` declaration taken into account? Do newer GCC versions fix this issue? (Note: I know that replacing `m1()` with `A::m1()` works, but I want to take advantage of `using`.)

Probably many of you think that using virtual inheritance with `Base` solves the problem. That is correct, but for various reasons I cannot use virtual inheritance in this case.

Another question: is there any way to give preference to ALL members of one base class instead of writing a `using` declaration one by one, i.e. instead of:

```
using A::m1;
using A::m2;
using A::m3;
```

to write something like:

```
using A::Base;
```

Any help is welcome. Regards, Pedro

I tried coding the `using` declaration in several ways, but all of them failed with the GCC 11 compiler. The answers in the linked post do not explain why gcc rejects this while MSVC++ compiles it. Is there no alternative in gcc for compiling this exact piece of code?
I'm trying to implement a batch gradient descent algorithm in Python that takes in the training set, the learning rate, and the number of iterations as input arguments, and returns the weights. However, when I run it, within a few iterations the values for the parameters get exponentially large and eventually it returns `nan`.

```
x = [[2104] [1600] [2400] [1416] [3000] [1985] [1534] [1427]
     [1380] [1494] [1940] [2000] [1890] [4478] [1268] [2300]
     [1320] [1236] [2609] [3031] [1767] [1888] [1604] [1962]
     [3890] [1100] [1458] [2526] [2200] [2637] [1839] [1000]
     [2040] [3137] [1811] [1437] [1239] [2132] [4215] [2162]
     [1664] [2238] [2567] [1200] [ 852] [1852] [1203]]

y = [399900 329900 369000 232000 539900 299900 314900 198999
     212000 242500 239999 347000 329999 699900 259900 449900
     299900 199900 499998 599000 252900 255000 242900 259900
     573900 249900 464500 469000 475000 299900 349900 169900
     314900 579900 285900 249900 229900 345000 549000 287000
     368500 329900 314000 299000 179900 299900 239500]

a = 0.01
num_iter = 100
```

```
def BGD(x, y, a, num_iter):
    m = len(x)      # number of samples
    n = x.shape[1]  # number of features
    p = np.zeros(n)
    b = 0
    for _ in range(num_iter):
        sum_p = np.zeros(n)
        sum_b = 0
        for i in range(m):
            sum_p = sum_p + ((np.dot(p, x[i]) + b) - y[i]) * x[i]
            sum_b = sum_b + ((np.dot(p, x[i]) + b) - y[i])
        p = p - (a * (1/m) * sum_p)
        b = b - (a * (1/m) * sum_b)
    return p, b

p, b = BGD(x, y, 0.01, 100)
print(p)
print(b)
```

I get the following:

```
RuntimeWarning: overflow encountered in add
  sum_p = sum_p + ((np.dot(p,x[i])+b) - y[i]) * x[i]
RuntimeWarning: invalid value encountered in subtract
  p = p - (a * (1/m) * sum_p)
[nan]
nan
```
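For what it's worth, the same update loop stays finite once the feature is scaled first, which suggests the raw square-footage values (in the thousands) make the step `a * error * x[i]` overshoot. A minimal sketch on a subset of the data (vectorized; the variable names and the standardization step are mine, not from the original code):

```python
import numpy as np

x = np.array([[2104], [1600], [2400], [1416], [3000]], dtype=float)
y = np.array([399900, 329900, 369000, 232000, 539900], dtype=float)

# Standardize the single feature so the gradient steps stay bounded
x_scaled = (x - x.mean()) / x.std()

def bgd(x, y, a, num_iter):
    m, n = x.shape
    p, b = np.zeros(n), 0.0
    for _ in range(num_iter):
        err = x @ p + b - y          # residuals for all samples at once
        p -= a / m * (x.T @ err)
        b -= a / m * err.sum()
    return p, b

p, b = bgd(x_scaled, y, 0.01, 1000)
print(np.isfinite(p).all(), np.isfinite(b))  # stays finite after scaling
```

Running the same loop on the unscaled `x` with `a = 0.01` diverges in a handful of iterations, which matches the overflow warnings above.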
Batch Gradient Descent algorithm in python is returning huge values
|machine-learning|linear-regression|gradient-descent|
Our ASP.NET website uses `FormsAuthentication`. After a successful login, it adds a cookie on the client side. Here is the code:

```
FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, username, DateTime.Now, expiration, remember, roles, FormsAuthentication.FormsCookieName);
HttpCookie authCookie = new HttpCookie(FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(ticket));
Response.Cookies.Add(authCookie);
```

I can then see that a cookie named `.ASPXAUTH` is created in the client browser. The next time I visit the website, I can see that `HttpContext.Current.Request.IsAuthenticated` is set to `true` automatically. I guess ASP.NET decrypts the cookie and gets the user information from it. How can I debug this process? I want to see the source code to find out how `HttpContext.Current.Request.IsAuthenticated` gets assigned `true`.
How does ASP.NET FormsAuthentication work?
|asp.net|webforms|owin|asp.net-authorization|asp.net-authentication|
The app crashes any time I interact with just about any Compose control — only the release build; the debug build works fine. The error says:

    androidx.compose.ui.R$id is missing hide_in_inspector_tag

The stack trace is below. This is definitely some sort of R8 issue. I've been adding things to the ProGuard config and got to the point where I can at least see the class and field names, but so far I can't get it to keep the field. I didn't have this problem on a release done a week ago. I updated to Android Studio Iguana | 2023.2.1 Patch 1 with:

```
gradleplugin = "8.3.1"
gradleAndroidCommandPlugin = "1.6.2"
```

I feel like I must have some version mismatch, but I can't seem to find one. I tried the following in ProGuard without luck:

```
-keep class androidx.compose.ui.R$id { *; }
-keepclassmembers class androidx.compose.ui.R$id {
    <init>(...);
    <fields>;
}
-keep class androidx.compose.ui.R$id { public static <fields>; }
-keep class androidx.compose.ui.R$id { static int hide_in_inspector_tag; }
```

```
E java.lang.NoSuchFieldError: No static field hide_in_inspector_tag of type I in class Landroidx/compose/ui/R$id; or its superclasses (declaration of 'androidx.compose.ui.R$id' appears ....ExHFPZftc_jp0b694EN84A==/base.apk)
E at androidx.compose.material.ripple.RippleContainer.<init>(SourceFile:49)
E at androidx.compose.material.ripple.AndroidRippleIndicationInstance.getOrCreateRippleContainer(SourceFile:48)
E at androidx.compose.material.ripple.AndroidRippleIndicationInstance.addRipple(SourceFile:1)
E at androidx.compose.material.ripple.Ripple$rememberUpdatedInstance$1$1.emit(SourceFile:2)
E at androidx.compose.material.ripple.Ripple$rememberUpdatedInstance$1$1.emit(SourceFile:1)
E at z7.F.B(SourceFile:214)
E at z7.F$c.invokeSuspend(SourceFile:13)
E at kotlin.coroutines.jvm.internal.a.resumeWith(SourceFile:12)
E at w7.W.run(SourceFile:129)
E at androidx.compose.ui.platform.AndroidUiDispatcher.performTrampolineDispatch(SourceFile:7)
E at androidx.compose.ui.platform.AndroidUiDispatcher.access$performTrampolineDispatch(SourceFile:1)
E at androidx.compose.ui.platform.AndroidUiDispatcher$dispatchCallback$1.run(SourceFile:3)
E at android.os.Handler.handleCallback(Handler.java:938)
E at android.os.Handler.dispatchMessage(Handler.java:99)
E at android.os.Looper.loopOnce(Looper.java:201)
E at android.os.Looper.loop(Looper.java:288)
E at android.app.ActivityThread.main(ActivityThread.java:7839)
E at java.lang.reflect.Method.invoke(Native Method)
E at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:548)
E at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1003)
E Suppressed: kotlinx.coroutines.internal.DiagnosticCoroutineContextException: [androidx.compose.ui.platform.MotionDurationScaleImpl@9784809, androidx.compose.runtime.h@e1f9a0e, N0{Cancelling}@9390c2f, Androi```
|azure|oauth-2.0|microsoft-graph-api|azure-app-registration|microsoft-oauth|
The key is to tell `for /f` that you want the 7 specific items extracted by asking for `tokens=1-7`. Token 1 goes into `%%G`, and each successive token uses the next letter: `%%H`, `%%I`, and so on.

Getting one line per token comes from echoing each token's variable with its own `echo` command — either stacked one per line, or chained with `&`s:

```
do (echo %%G) & (echo %%H) & (echo %%I) ...
```

In total, the following worked for me (command-line form, so single `%` rather than the `%%` used in batch files):

```
for /f "tokens=1-7" %G in ("1 2 7 16 21 26 688") do (
    echo %G
    echo %H
    echo %I
    echo %J
    echo %K
    echo %L
    echo %M
)
```
You can create a `RandomRange` class; you can create as many ranges as you want and put them into a list:

```
public class RandomRange
{
    public int Start { get; set; }
    public int End { get; set; }
    public int Exception { get; set; }

    public RandomRange(int start, int end, int exception)
    {
        Start = start;
        End = end;
        Exception = exception;
    }

    public int GetRandomIndex()
    {
        Random random = new Random();
        // Draw one extra slot beyond End; if it lands there, return the
        // Exception value instead. (With Next(Start, End + 1) the
        // `index > End` branch could never fire.)
        int index = random.Next(Start, End + 2);
        if (index > End)
        {
            index = Exception;
        }
        return index;
    }
}
```

In your main program, simply call it:

```
static void Main(string[] args)
{
    Random random = new Random();

    // Define your ranges
    List<RandomRange> ranges = new List<RandomRange>
    {
        new RandomRange(2, 10, 15),   // Represents indices from 2 to 10
        new RandomRange(15, 20, 25),  // Represents indices from 15 to 20
        new RandomRange(25, 30, 35)   // Represents indices from 25 to 30
    };

    // A random pick is used here, but you can simply change it to a user input
    RandomRange selectedRange = ranges[random.Next(ranges.Count)];
    Console.WriteLine($"Selected Index: {selectedRange.GetRandomIndex()}");
}
```

You can then use `selectedRange.GetRandomIndex()` to get your list item.

Note that there is no check for an index-out-of-bounds exception; I presume you have already done all the list-indexing exception handling. I hope the `RandomRange` class is easy enough to follow.
The reason I was having trouble is that I was redirecting through my nav.php, which is an include file containing only the navbar markup. When I put the `window.location.href` call within my index.php, I was able to redirect with no issues.
I'm trying to restrict my solution from `vpasolve()` to integers only, but I can't seem to find any option or parameter that does that. Here is the code below, if it helps; the variable I'm trying to restrict is `N`. Any guidance would be much appreciated.

```
clc
clear
syms t L S N
height = 0.7;
width = 0.1;
Tdiff = 100;
h = 17;
Qmult = 6;
eff = 0.7;
k = 45;
EQN1 = height==N*(t+S);
m = ((2*height)/(k*t))^0.5;
Lc = L+t/2;
EQN2 = eff==(tanh(m*Lc)/(m*Lc));
EQN3 = Qmult * h*(width*height)*Tdiff==(eff*h*(2*3*Lc)*(Tdiff)*N)+(h*(width*(height-N*t))*(Tdiff));
EQN4 = 0.5== m*L;
vpasolve([EQN1, EQN2, EQN3, EQN4], [t L S N], [0.005 0.3 0.35 60])
```

I tried using different solve functions and different initial guesses, but nothing seems to help.
SQL schema for a fill-in-the-blank exercise
|sql|hibernate|database-design|orm|database-schema|
**Problem** * Merge `fr` and `events` based on a match between `events['Earnings_Date']` and the exact or *next* date in `fr['Date']` (so: 'looking backward' from `fr`'s point of view), grouped by column 'Symbol'. Keep only the first match. **Setup** Let's add 2 symbols (one expecting a match, one expecting zero). Also, it is perhaps easier to use [`pl.Expr.str.to_date`](https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.Expr.str.to_date.html) instead of your [`pl.Expr.str.strptime`](https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.Expr.str.strptime.html). * `fr` ```python import polars as pl fr = ( pl.DataFrame( { 'Symbol': ['A', 'A', 'A', 'B', 'B', 'C'], 'Date': ['2010-08-29', '2010-09-01', '2010-11-30', '2010-09-05', '2010-12-01', '2010-09-01'], } ) .with_columns(pl.col('Date').str.to_date('%Y-%m-%d')) .set_sorted(('Symbol', 'Date')) ) fr shape: (6, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Symbol ┆ Date β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ date β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════║ β”‚ A ┆ 2010-08-29 β”‚ β”‚ A ┆ 2010-09-01 β”‚ β”‚ A ┆ 2010-11-30 β”‚ β”‚ B ┆ 2010-09-05 β”‚ β”‚ B ┆ 2010-12-01 β”‚ β”‚ C ┆ 2010-09-01 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ ``` * `events` ```python events = ( pl.DataFrame( { 'Symbol': ['A', 'A', 'B'], 'Earnings_Date': ['2010-06-01', '2010-09-01', '2010-12-01'], 'Event': [1, 4, 7], } ) .with_columns(pl.col('Earnings_Date').str.to_date('%Y-%m-%d')) .set_sorted(('Symbol', 'Earnings_Date')) ) events shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ Symbol ┆ Earnings_Date ┆ Event β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ date ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════════════β•ͺ═══════║ β”‚ A ┆ 2010-06-01 ┆ 1 β”‚ β”‚ A ┆ 2010-09-01 ┆ 4 β”‚ β”‚ B ┆ 2010-12-01 ┆ 7 β”‚ 
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
```

**Code**

```python
fr = (
    fr.join_asof(
        events, left_on='Date', right_on='Earnings_Date', by='Symbol'
    )
    .with_columns(
        pl.when(pl.struct('Symbol', 'Earnings_Date').is_first_distinct())
        .then(pl.col('Earnings_Date', 'Event'))
    )
)
fr
shape: (6, 4)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Symbol ┆ Date       ┆ Earnings_Date ┆ Event β”‚
β”‚ ---    ┆ ---        ┆ ---           ┆ ---   β”‚
β”‚ str    ┆ date       ┆ date          ┆ i64   β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════════════β•ͺ═══════║
β”‚ A      ┆ 2010-08-29 ┆ 2010-06-01    ┆ 1     β”‚
β”‚ A      ┆ 2010-09-01 ┆ 2010-09-01    ┆ 4     β”‚
β”‚ A      ┆ 2010-11-30 ┆ null          ┆ null  β”‚
β”‚ B      ┆ 2010-09-05 ┆ null          ┆ null  β”‚
β”‚ B      ┆ 2010-12-01 ┆ 2010-12-01    ┆ 7     β”‚
β”‚ C      ┆ 2010-09-01 ┆ null          ┆ null  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
```

**Explanation**

* Use [`pl.DataFrame.join_asof`](https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.join_asof.html) (with default `strategy='backward'`).
* Next, inside [`pl.DataFrame.with_columns`](https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.with_columns.html) apply [`pl.when`](https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.when.html) to set duplicates to `None` (`null`):
    * Use [`pl.struct`](https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.struct.html) to create a single 'key' for the combination `['Symbol', 'Earnings_Date']` and check [`pl.Expr.is_first_distinct`](https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.Expr.is_first_distinct.html).
    * Where `True` we want to keep the values for `['Earnings_Date', 'Event']`, so we can pass [`pl.col`](https://docs.pola.rs/py-polars/html/reference/expressions/col.html) to `.then`.
Where `False`, we will get `None`.
I have upgraded the ClickHouse version from 22.6.1.1985 to 24.3.1.2672, which has resolved the issue. The cause: prior to the upgrade, ClickHouse (with its 2022-era timezone data) shifted the Asia/Tehran timezone to UTC+4:30 after March 20th to accommodate daylight saving time. However, after 2022 Iran discontinued the observance of daylight saving time. With the upgraded version's current timezone data, Asia/Tehran stays at UTC+3:30 after March 20th, as it is before that date.
I am trying to display a video on the PDP page and wondering whether it's actually supported by Spartacus 6.5. I don't see any specific documentation about it, but I can see a `<cx-video>` element in the Spartacus code, which makes me believe that video files are supported. But I get the error:

**'cx-video' is not a known element: "If 'cx-video' is an Angular component, then verify that it is part of this module."**

Do I have to import anything else in my modules to get it to work? Normal images are working fine with `<cx-media>`. Are video files supported by Spartacus, and where can I find any documentation or a sample example of it?
Use video files in Spartacus PDP page using `<cx-media>`