**Searching forward using `scasb`** requires setting the direction flag DF to 0. The `cld` instruction does that. It further requires that you load the AL register with the character to be found, and the RDI register with the address of the **first** character of the string.

**Searching backward using `scasb`** requires setting the direction flag DF to 1. The `std` instruction does that. It also requires that you load the AL register with the character to be found, and the RDI register with the address of the **last** character of the string. The formula to use is:

    RDI = AddressFirstChar + LengthString - 1

And if we plan to use the `repne` prefix, we also need to load the RCX register with the length of the string. You can write:

```
std                           ; Set DF to 1 (backward direction)
mov al, byte_to_find          ; Load byte to find in AL
mov ecx, 13                   ; Load length of the string in RCX
lea rdi, [source + rcx - 1]   ; Load address of last character in RDI
```

> jne NotFound ; Jump if not equal (byte not found)
> jmp ByteFound ; Jump if equal (byte found)
> NotFound:

You don't need both these jumps. Because the zero flag ZF has but two states, either 0 or 1, you can jump to *ByteFound* if ZF=1, and just fall through into *NotFound* if ZF=0.

> SearchLoop:
> scasb ; Compare AL with byte at [RSI]
> jne NotFound ; Jump if not equal (byte not found)
> jmp ByteFound ; Jump if equal (byte found)

Although you named this 'Search**Loop**', there's no actual loop here! After `scasb` has finished comparing the single byte in AL to the byte in memory at `es:[rdi]`, you immediately jump to one of the two possible exits of the procedure.
You need to repeat the `scasb` instruction, either by prefixing it with the `repne` prefix (REPeat while Not Equal):

```
    repne scasb
    je    ByteFound
NotFound:
```

or else by writing a normal loop:

```
SearchLoop:
    scasb
    je   ByteFound
    dec  rcx
    jnz  SearchLoop
NotFound:
```

> NotFound:
> mov rax, 0 ; Set RAX to 0 (byte not found)
> jmp Done
> ByteFound:
> mov rax, rdi ; Store the address of the found byte in RAX
> dec rax ; Adjust RAX to point to the byte before the found byte
> jmp Done
> Done:
> ret

Some cleaning is required here, but your comment 'Adjust RAX to point to the byte *before* the found byte' is especially noteworthy. I don't know why you would want to return the address **before** the find, but if that is indeed what you need, then know that RDI is already pointing to before the found byte, so you wouldn't need that `dec rax`! And should you need to point at the find itself, you would rather have to increment the value:

```
NotFound:
    xor eax, eax        ; Set RAX to 0 (byte not found)
    ret
ByteFound:
    lea rax, [rdi + 1]  ; Set RAX to point to the found byte
    ret
```

-----

**Summary**

### Searching backward

```
    std                           ; Set DF to 1 (backward direction)
    mov   al, byte_to_find        ; Byte to find in AL
    mov   ecx, 13                 ; Length of the string in RCX
    lea   rdi, [source + rcx - 1] ; Address of last character in RDI
    repne scasb
    cld                           ; (*)
    je    ByteFound
NotFound:
    xor   eax, eax                ; Set RAX to 0 (byte not found)
    ret
ByteFound:
    lea   rax, [rdi + 1]          ; Set RAX to point to the find
    ret
```

(*) Whenever you need to use DF=1 locally, you should always return the direction flag to the clear state. Everybody expects to find DF=0, so we shouldn't have to bother using `cld` all the time! See https://stackoverflow.com/questions/48490225/the-probability-of-selected-eflags-bits.
### Searching forward

```
                                  ; Safe to assume DF=0 (forward direction)
    mov   al, byte_to_find        ; Byte to find in AL
    mov   ecx, 13                 ; Length of the string in RCX
    lea   rdi, [source]           ; Address of first character in RDI
    repne scasb
    lea   rax, [rdi - 1]          ; Optimistically set RAX to point to the find
    je    Done                    ; Byte found
NotFound:
    xor   eax, eax                ; Set RAX to 0 (byte not found)
Done:
    ret
```
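As a sanity check on the address arithmetic above, here is a rough high-level equivalent of the backward search (a Python sketch for illustration only; indices stand in for addresses, and the function name is mine, not part of the original code):

```python
def search_backward(data: bytes, byte_to_find: int):
    """High-level equivalent of the std / repne scasb routine.

    The scan starts at address_of_first_char + length - 1 (the last
    character) and walks backward; on a hit, the routine returns the
    index of the find (the assembly returns rdi + 1 in RAX).
    """
    for i in range(len(data) - 1, -1, -1):  # last char down to first
        if data[i] == byte_to_find:
            return i   # ByteFound: index of the found byte
    return None        # NotFound: RAX = 0
```

The sketch also makes the `dec rax` discussion concrete: after `scasb` decrements RDI past the match, the found byte sits at `rdi + 1`, which is exactly the index returned here.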
I'm using the extension function below to upsert data with EF Core and `EFCore.BulkExtensions`, but when I tried to insert 2 million records the execution took around 17 minutes and eventually threw this exception:

> Could not allocate space for object 'dbo.SORT temporary run storage: 140737501921280' in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.\r\nThe transaction log for database 'tempdb' is full due to 'ACTIVE_TRANSACTION' and the holdup lsn is (41:136:347)

I can see the free space on the "C" partition decreasing while this function executes:

[![enter image description here][1]][1]

When I restart SQL Server, the free space goes back to around 30 GB. I tried to use multithreading (parallel) with the insert, with no noticeable time change. What do you recommend, or is there any issue in the code shown here?

**Note:** the for loop is not taking much time, even with 2 million records.
```
public static async Task<OperationResultDto> AddOrUpdateBulkByTransactionAsync<TEntity>(this DbContext _myDatabaseContext, List<TEntity> data) where TEntity : class
{
    using (var transaction = await _myDatabaseContext.Database.BeginTransactionAsync())
    {
        try
        {
            _myDatabaseContext.Database.SetCommandTimeout(0);
            var currentTime = DateTime.Now;

            // Disable change tracking
            _myDatabaseContext.ChangeTracker.AutoDetectChangesEnabled = false;

            // Set CreatedDate and UpdatedDate for each entity
            foreach (var entity in data)
            {
                var createdDateProperty = entity.GetType().GetProperty("CreatedDate");
                if (createdDateProperty != null &&
                    (createdDateProperty.GetValue(entity) == null ||
                     createdDateProperty.GetValue(entity).Equals(DateTime.MinValue)))
                {
                    // Set CreatedDate only if it's not already set
                    createdDateProperty.SetValue(entity, currentTime);
                }

                var updatedDateProperty = entity.GetType().GetProperty("UpdatedDate");
                if (updatedDateProperty != null)
                {
                    updatedDateProperty.SetValue(entity, currentTime);
                }
            }

            // Bulk insert or update
            var updateByProperties = GetUpdateByProperties<TEntity>();
            var bulkConfig = new BulkConfig()
            {
                UpdateByProperties = updateByProperties,
                CalculateStats = true,
                SetOutputIdentity = false
            };

            // Batch size for processing
            int batchSize = 50000;
            for (int i = 0; i < data.Count; i += batchSize)
            {
                var batch = data.Skip(i).Take(batchSize).ToList();
                await _myDatabaseContext.BulkInsertOrUpdateAsync(batch, bulkConfig);
            }

            // Commit the transaction if everything succeeds
            await transaction.CommitAsync();

            return new OperationResultDto { OperationResult = bulkConfig.StatsInfo };
        }
        catch (Exception ex)
        {
            // Handle exceptions and roll back the transaction if something goes wrong
            transaction.Rollback();
            return new OperationResultDto
            {
                Error = new ErrorDto { Details = ex.Message + ex.InnerException?.Message }
            };
        }
        finally
        {
            // Re-enable change tracking
            _myDatabaseContext.ChangeTracker.AutoDetectChangesEnabled = true;
        }
    }
}
```

  [1]: https://i.stack.imgur.com/ts9IF.png
|c#|sql-server|performance|entity-framework-core|efcore.bulkextensions|
IIS Rewrite Module exclude bots but allow GoogleBot
I want to create a simple bar chart for a specific key and its values from JSON data in FastAPI. The data is shown below. I want to make a chart where the indexes are on the x-axis and the values of Blue are on the y-axis. Can you please help me or point me somewhere where I can learn that?

```
{"1": {"Red": 14, "Blue": 12}, "2": {"Red": 58, "Blue": 54}, "3": {"Red": 26, "Blue": 65}}
```
Creating bar chart in FastAPI
|python|html|json|charts|fastapi|
I have read recently that C# uses the Quicksort algorithm to sort an array. What I would like to know is whether C# uses a recursive or an iterative approach. What I have found is [this link, and it looks to me like they use an iterative approach][1]. Is it true that C# uses an iterative implementation of the QuickSort algorithm?

  [1]: https://referencesource.microsoft.com/#mscorlib/system/array.cs,78c62950ae211cd3,references
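For context, an "iterative" quicksort replaces the recursive calls with an explicit stack of index ranges. A rough sketch of the idea (in Python, purely for illustration; this is not the actual .NET implementation):

```python
def quicksort_iterative(a):
    """Quicksort without recursion: an explicit stack of (lo, hi)
    index ranges takes the place of the recursive calls."""
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot = a[hi]                  # Lomuto partition around the last element
        i = lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]      # pivot moves into its final position
        stack.append((lo, i - 1))      # "recurse" on the left part...
        stack.append((i + 1, hi))      # ...and on the right part
    return a
```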
Does Sort() method in C# use recursion?
|c#|sorting|quicksort|
I'm trying to make a NASM program which reads two 64-bit integers. My code returns SIGSEGV even before allowing me to input numbers, and I can't figure out why.

```
global main
extern printf, scanf

section .text
main:
    push dword B
    push dword A
    push dword Input_format
    call scanf
    add  esp, 12
    xor  eax, eax
    ret

section .data
Input_format: db "%d %d", 0

section .bss
A: resd 1
B: resd 1
```

The code is expected to run successfully without any output, but instead it terminates with: `terminated by signal SIGSEGV (Address boundary error)`
scanf in x64 NASM returns segfault
|segmentation-fault|nasm|
The right way to do it is to create a new `Scanner`. The `Scanner` doesn't expose its internal construct for reading files and so there's no public way to seek back to the beginning. If for some reason you really don't want to do this you could *try* using the `ReadableByteChannel` constructor of the `Scanner` with a `FileChannel`, and then seek to position 0 and `reset` the `Scanner`, but I don't know if this would work, and it's easier just to make a new `Scanner`. Alternatively, as suggested in the comments (although you state that this isn't possible in your case), you could read the contents into some data structure in memory first and then operate on that data. --- That said, looking at your code, you could probably just combine `edgesSet` and `verticesSet` into one function, so you only need to read the file once, e.g.: - Read the file one line at a time. For each line: - Store the whole line in the edge set as you do now - Split the line you just read into vertex indices and store them in the vertex set And just do that for every line, to build both sets at once.
Just set the major unit to 1 so the axis counts 1 - 8 by 1s. Change the x-axis setting from

```
chart.set_x_axis({'name': 'X Axis'})
```

to

```
chart.set_x_axis({
    'name': 'X Axis',
    'major_unit': 1,  # set major unit to 1
})
```

[![Chart with major unit set to 1][1]][1]

<br>

You could also add a label to the series, like:

```
# Add line series to the chart
chart.add_series({
    'categories': '=Sheet1!$C$1:$C$2',  # Adjusted for new data range
    'values': '=Sheet1!$D$1:$D$2',      # Adjusted for new data range
    'line': {'type': 'linear'},         # Adding linear trendline
    'data_labels': {'value': False,     # Add Data Label
                    'position': 'below',
                    'category': True,
                    }
})
```

All this is doing is labeling your trend line, with the lower label sitting on the X-Axis.<br> You'll notice the value is set at start and end points, but I don't think there is any means to remove the top textbox using XlsxWriter. It can be removed manually, however, simply by clicking the textbox twice and then using the delete key.<br> And for that matter, you could manually move the bottom textbox to align with the other numbers too if you like.

[![enter image description here][2]][2]

  [1]: https://i.stack.imgur.com/cZLFl.png
  [2]: https://i.stack.imgur.com/wvyu6.png

<br>

**Other options to add a vertical line**

<br>

You could add a vertical line using Error Bars, but it doesn't give you any additional features. In fact, you would have to use the major unit change to see the 3 on the x-axis since it has no data label.<br> Apart from that, I suppose you could just add a drawing line/textbox onto the Chart.
My problem is related to pitch-shifting audio in Python. I have the following modules installed: numpy, scipy, pygame, and the scikits "samplerate" API. My goal is to take a stereo file and play it back at a different pitch in as few steps as possible. Currently, I load the file into an array using `pygame.sndarray`, then apply a samplerate conversion using `scikits.samplerate.resample`, then convert the output back to a sound object for playback using pygame. The problem is that garbage audio comes out of my speakers. Surely I'm missing a few steps (in addition to not knowing anything about math and audio).

```
import time, numpy, pygame.mixer, pygame.sndarray
from scikits.samplerate import resample

pygame.mixer.init(44100, -16, 2, 4096)

# choose a file and make a sound object
sound_file = "tone.wav"
sound = pygame.mixer.Sound(sound_file)

# load the sound into an array
snd_array = pygame.sndarray.array(sound)

# resample. args: (target array, ratio, mode), outputs ratio * target array.
# this outputs a bunch of garbage and I don't know why.
snd_resample = resample(snd_array, 1.5, "sinc_fastest")

# take the resampled array, make it an object and stop playing after 2 seconds.
snd_out = pygame.sndarray.make_sound(snd_resample)
snd_out.play()
time.sleep(2)
```
Trying to change pitch of audio file with scikits.samplerate.resample results in garbage audio from pygame
I get the following error with the repository design pattern. I get this error from `EfEntityRepositoryBase` while inheriting it in `EfCarDal` in the Data Access layer. I couldn't overcome the problem.

> Severity Code Description Project File Line Suppression State Error CS0311 The type 'Carebook.Dal.Concrete.EntityFramework.Conetext.AppDbContext' cannot be used as type parameter 'TContext' in the generic type or method 'EfEntityRepositoryBase<TEntity, TContext>'. There is no implicit reference conversion from 'Carebook.Dal.Concrete.EntityFramework.Conetext.AppDbContext' to 'Microsoft.AspNetCore.Identity.EntityFrameworkCore.IdentityDbContext`

---

```
namespace Carebook.Core.DataAccess.Concrete
{
    public class EfEntityRepositoryBase<TEntity, TContext> : IEntityRepository<TEntity>, IDisposable
        where TEntity : class, IEntity, new()
        where TContext : IdentityDbContext, new()
    {
        public void Add(TEntity entity)
        {
            using (TContext context = new TContext())
            {
                var addEntity = context.Entry(entity);
                addEntity.State = EntityState.Added;
                context.SaveChanges();
            }
        }

        public void Delete(TEntity entity)
        {
            using (TContext context = new TContext())
            {
                var deleteEntity = context.Entry(entity);
                deleteEntity.State = EntityState.Deleted;
                context.SaveChanges();
            }
        }

        public void Dispose()
        {
            throw new NotImplementedException();
        }

        public TEntity Get(Expression<Func<TEntity, bool>>? filter)
        {
            using (TContext context = new TContext())
            {
                return context.Set<TEntity>().SingleOrDefault(filter);
            }
        }

        public List<TEntity> GetAll(Expression<Func<TEntity, bool>>? filter = null)
        {
            using (TContext context = new TContext())
            {
                return filter == null
                    ? context.Set<TEntity>().ToList()
                    : context.Set<TEntity>().Where(filter).ToList();
            }
        }

        public void Update(TEntity entity)
        {
            using (TContext context = new TContext())
            {
                var updateEntity = context.Entry(entity);
                updateEntity.State = EntityState.Modified;
                context.SaveChanges();
            }
        }
    }
}

namespace Carebook.Dal.Concrete.EntityFramework.Conetext
{
    public class AppDbContext : IdentityDbContext<User, Role, int>
    {
        public AppDbContext(DbContextOptions options) : base(options)
        {
        }

        protected override void OnModelCreating(ModelBuilder builder)
        {
            builder.ApplyConfigurationsFromAssembly(Assembly.GetExecutingAssembly());
            base.OnModelCreating(builder);
        }

        public virtual DbSet<Car> Cars { get; set; }
        public virtual DbSet<CarPicture> CarPictures { get; set; }
        public virtual DbSet<Feature> Features { get; set; }
        public virtual DbSet<Reservation> Reservations { get; set; }
        public virtual DbSet<Pricing> Pricings { get; set; }
        public virtual DbSet<Contact> Contacts { get; set; }
    }
}

namespace Carebook.Dal.Concrete.EntityFramework
{
    public class EfCareDal : EfEntityRepositoryBase<Car, AppDbContext>, ICarDal
    {
    }
}
```
I had the same error when trying to run the application:

```
warning: default scripting plugin is disabled: The provided plugin org.jetbrains.kotlin.scripting.compiler.plugin.ScriptingCompilerConfigurationComponentRegistrar is not compatible with this version of compiler
error: unable to evaluate script, no scripting plugin loaded
```

The following steps worked for me (Android Studio Iguana | 2023.2.1 Patch 1):

**1.** Open the `.idea\workspace.xml` file under your project.

**2.** Search the "workspace.xml" file for "RunManager". The `<component>` tag in the XML file looked like this:

```
<component name="RunManager" selected="Kotlin script (Beta).build.gradle.kts">
  <configuration name="app" type="AndroidRunConfigurationType" factoryName="Android App">
```

Change the XML to the following:

```
<component name="RunManager" selected="Android App.app">
  <configuration name="app" type="AndroidRunConfigurationType" factoryName="Android App">
```

**Another option for Step 2** is to delete the "selected" attribute; the IDE's auto-update feature will add the correct "selected" attribute to the "component" tag.

**3.** Then perform "clean project", "build project" and "run" the application. The error went away and the app started running from the IDE.
You should see HTML pages as completely isolated programs. Every time you click a link or reload a page, the following happens:

- the current page is removed
- all the JavaScript code in the current page stops running
- all the variables your code created stop existing
- the new HTML page is downloaded and rendered
- all the JavaScript files in the new HTML page are executed, from scratch
- the code doesn't know anything about what was in the previous page; it doesn't even know what is in other browser tabs

Your issue comes from the fact that you have two separate HTML pages. Even though you included the same JavaScript file in both of them, this does not mean that there is shared data between them. The global variable in the second page will not contain the value set in the first page.

The correct solution for your case is to pass the data about the image you want to open in the URL.

In index.html, modify the URLs to be like this:

```html
<div id="extrait">
    <a href="gallery.html#1"><img src="assets/image1.jpg" alt="Image 1" onclick="updateIndex(1)"></a>
    <a href="gallery.html#2"><img src="assets/image2.jpg" alt="Image 2" onclick="updateIndex(2)"></a>
    <a href="gallery.html#3"><img src="assets/image3.jpg" alt="Image 3" onclick="updateIndex(3)"></a>
</div>
```

In the JavaScript code, you can add some code that, as soon as the page loads:

- checks the URL and retrieves the hash, which is the part of the URL after the # symbol
- sets the current index based on that hash

```js
const galleryImages = document.querySelectorAll('#carousel img');
let currentIndex = 0;

document.addEventListener("DOMContentLoaded", function() {
    // Get the hash from the URL
    var hash = window.location.hash.substr(1);

    // Show the image corresponding to the hash
    if (hash) {
        showImage(Number(hash));
    }
});

// the rest of your code ...
```
Your problem can be solved without using negative lookbehind.

```none
\bty(?:[a-z]*[a-su-z])?\b
```

The [regex](https://regex101.com/r/2WVfc7/3) can be broken down as follows.

```none
\b           the boundary between a word char (\w) and something that is not a word char
-----------------------------------------------------
ty           'ty'
-----------------------------------------------------
(?:          group, but do not capture (optional (matching the most amount possible)):
-----------------------------------------------------
  [a-z]*     any character of: 'a' to 'z' (0 or more times (matching the most amount possible))
-----------------------------------------------------
  [a-su-z]   any character of: 'a' to 's', 'u' to 'z'
-----------------------------------------------------
)?           end of grouping
-----------------------------------------------------
\b           the boundary between a word char (\w) and something that is not a word char
```
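A quick way to convince yourself the pattern behaves as intended is to run it over a few sample words (my own hypothetical test words, assuming the goal is to match words that start with 'ty' but do not end in 't'):

```python
import re

pattern = r"\bty(?:[a-z]*[a-su-z])?\b"
text = "ty type typist tyrant tycoon"

# Words starting with 'ty' match unless they end in 't', because the
# last character of the match must come from [a-su-z] (or be 'ty' itself).
print(re.findall(pattern, text))  # → ['ty', 'type', 'tycoon']
```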
You can make use of the `concurrent.futures.wait()` function. This function allows you to wait for a collection of futures to complete.

```
import os
from concurrent.futures import ProcessPoolExecutor, as_completed, wait
from time import time
from multiprocessing import Manager

LIMIT = 1  # Assuming you have this defined somewhere
MAX_LENGTH = 10  # Assuming you have this defined somewhere
SECRET_STRING = "secret"  # Assuming you have this defined somewhere
CHARACTERS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890"  # Assuming you have this defined somewhere


def main():
    start_time = time()
    cpu_cores = os.cpu_count() - LIMIT
    part_size = len(CHARACTERS) // cpu_cores

    for length in range(MAX_LENGTH):
        with Manager() as manager:
            event = manager.Event()
            with ProcessPoolExecutor() as executor:
                attacks = {}
                for part in range(cpu_cores):
                    if part == cpu_cores - 1:
                        first_bit = CHARACTERS[part_size * part:]
                    else:
                        first_bit = CHARACTERS[part_size * part: part_size * (part + 1)]
                    future = executor.submit(
                        brute_force, SECRET_STRING, first_bit, event, *([CHARACTERS] * length)
                    )
                    attacks[future] = f"Brute Force {length} - {first_bit}"

                # Wait for all tasks to complete
                wait(attacks)

                for attack in as_completed(attacks):
                    password = attack.result()
                    if password:
                        attack_type = attacks[attack]
                        print(f"Password found by {attack_type} attack: {password}")
                        event.set()
                        break

        if not password:
            print(f"Password not found on length of {length}")
        elif password:
            break


def brute_force(secret, first_bit, event, *args):
    # Your brute force logic goes here
    pass


if __name__ == "__main__":
    main()
```

Output:

```
Password not found on length of 0
Password not found on length of 1
Password not found on length of 2
Password not found on length of 3
Password not found on length of 4
Password not found on length of 5
Password not found on length of 6
Password not found on length of 7
Password not found on length of 8
Password not found on length of 9
```
Use `optString`, `optBoolean` and so on:

```java
for (int i = 0; i < data.length(); i++) {
    JSONObject jsonObject = data.getJSONObject(i);
    jsonObject.optString("employer_name", "default_value");
}
```

Check the [documentation][1] of JSONObject.

For your other question, there are libraries for turning a JSON string into a class; [check this guide][2] and this other [SO question][3].

  [1]: https://www.javadoc.io/static/org.json/json/20171018/index.html?org/json/JSONObject.html
  [2]: https://mkyong.com/java/how-do-convert-java-object-to-from-json-format-gson-api/
  [3]: https://stackoverflow.com/a/28957612/17234028
*This solution requires a fairly new version of tox. I tested and it works with 4.12.2 and 4.8.0, but not on 4.6.0.*

The tox 4.12.2 documentation on the [External Package builder](https://tox.wiki/en/4.14.2/config.html#external-package-builder) shows that it is possible to define an `external` package option (thanks for the comment, @JürgenGmach). The external [package](https://tox.wiki/en/4.14.2/config.html#package) option means that you set

```ini
[testenv]
...
package = external
```

In addition to this, one must create a section called `[.pkg_external]` (or `<package_env>_external` if you have edited your [package_env](https://tox.wiki/en/4.14.2/config.html#package_env), which has an alias `isolated_build_env`). In this section, one should define *at least* the [`package_glob`](https://tox.wiki/en/4.14.2/config.html#package_glob), which tells tox where to find the wheel to install. If you also want to *create* the wheel, then you can do that in the `commands` option of the `[.pkg_external]` section.

## Simple approach (multiple builds)

Example of a working configuration (tox 4.12.2):

```ini
[testenv:.pkg_external]
deps = build==1.1.1
commands =
    python -c 'import shutil; shutil.rmtree("{toxinidir}/dist", ignore_errors=True)'
    python -m build -o {toxinidir}/dist
package_glob = {toxinidir}{/}dist{/}wakepy-*-py3-none-any.whl
```

- Pros: Pretty simple to implement
- Cons: This approach has the downside that you will trigger the build (`python -m build`) for each of your environments which does not have `skip_install=True`. This has an open issue: [tox #2729](https://github.com/tox-dev/tox/issues/2729).

## Building the wheel only once

It is also possible to make tox 4.14.2 build the wheel only once, using the tox [hooks](https://tox.wiki/en/4.14.2/plugins.html). As can be seen from the *Order of tox execution* (in the Appendix), one hook which can be used for this is `tox_on_install` for ".pkg_external" (either "requires" or "deps").
I use it to place a dummy file (`/dist/.TOX-ASKS-REBUILD`) which means that a build should be done. If that `.TOX-ASKS-REBUILD` file exists when the build script is run, the `/dist` folder with all of its contents is removed, and a new `/dist` folder with a .tar.gz and a .whl file is created.

- Pros:
  - Faster to run tox, as sdist and wheel are built only as many times as required.
  - Will also build if using tox with a single env, like `tox -e py311` (if not `skip_install=True`)
- Cons:
  - More involved
  - Will not work in parallel mode. For that, you would probably need a separate build command run each time before tox (unless the parallelization plugin supports a common pre-command).

Hopefully this solution will become unnecessary at some point (when #2729 gets resolved).

### The hook

- Located at `tests/tox_utils/tox_hooks.py`

```python
from __future__ import annotations

import typing
from pathlib import Path
from typing import Any

from tox.plugin import impl

if typing.TYPE_CHECKING:
    from tox.tox_env.api import ToxEnv

dist_dir = Path(__file__).resolve().parent.parent.parent / "dist"
tox_asks_rebuild = dist_dir / ".TOX-ASKS-REBUILD"


@impl
def tox_on_install(tox_env: ToxEnv, arguments: Any, section: str, of_type: str):
    if (tox_env.name != ".pkg_external") or (of_type != "requires"):
        return
    # This signals to the build script that the package should be built.
    tox_asks_rebuild.parent.mkdir(parents=True, exist_ok=True)
    tox_asks_rebuild.touch()
```

### pyproject.toml

The hook must be registered somewhere. I use pyproject.toml for this:

```
[project.entry-points.tox]
mypkg_tox_hooks = "tests.tox_utils.tox_hooks"
```

### The build_mypkg.py

- Located at `/tests/tox_utils/build_mypkg.py`

```python
import shutil
import subprocess
from pathlib import Path

dist_dir = Path(__file__).resolve().parent.parent.parent / "dist"


def build():
    if not (dist_dir / ".TOX-ASKS-REBUILD").exists():
        print("Build already done. Skipping.")
        return

    print(f"Building sdist and wheel into {dist_dir}")

    # Cleanup. Remove all older builds; the /dist folder and its contents.
    # Note that tox would crash if there were two files with .whl extension.
    # This also resets the TOX-ASKS-REBUILD so we build only once.
    shutil.rmtree(dist_dir, ignore_errors=True)

    out = subprocess.run(
        f"python -m build -o {dist_dir}", capture_output=True, shell=True
    )
    if out.stderr:
        raise RuntimeError(out.stderr.decode("utf-8"))
    print(out.stdout.decode("utf-8"))


if __name__ == "__main__":
    build()
```

### The tox.ini

```ini
[testenv]
; The following makes the packaging use the external builder defined in
; [testenv:.pkg_external] instead of using tox to create sdist/wheel.
; https://tox.wiki/en/latest/config.html#external-package-builder
package = external

[testenv:.pkg_external]
; This is a special environment which is used to build the sdist and wheel.
; After running this environment, the *.whl and *.tar.gz are available in the
; dist/ folder.
deps =
    ; The build package from PyPA. See: https://build.pypa.io/en/stable/
    build==1.1.1
commands =
    python tests/tox_utils/build_mypkg.py
; This determines which files tox may use to install mypkg in the test
; environments. This should match the file created while running the
; command defined in this section.
package_glob = {toxinidir}{/}dist{/}mypkg-*-py3-none-any.whl
```

## Appendix

### Order of tox execution

The order of execution within tox can be reverse-engineered by using the dummy hook file defined below (`tox_print_hooks.py`) and the bullet point list about the order of execution in the [System Overview](https://tox.wiki/en/4.14.2/user_guide.html#system-overview). Note that I have set `package = external` already, which has some effect on the output.
Here is what tox does:

```
1) CONFIGURATION
   tox_register_tox_env
   tox_add_core_config
   tox_add_env_config (N+2 times[1])

2) ENVIRONMENT (for each environment)
   tox_on_install (envname, deps)
   envname: install_deps (if not cached)

   If not all(skip_install) AND first time: [2]
       tox_on_install (.pkg_external, requires)
       .pkg_external: install_requires (if not cached)
       tox_on_install (.pkg_external, deps)
       .pkg_external: install_deps (if not cached)

   If not skip_install:
       .pkg_external: commands
       tox_on_install (envname, package)
       envname: install_package [3]

   tox_before_run_commands (envname)
   envname: commands
   tox_after_run_commands (envname)
   tox_env_teardown (envname)
```

------

<sup>[1]</sup> N = number of environments in the tox config file. The "2" comes from .pkg_external and .pkg_external_sdist_meta <br>
<sup>[2]</sup> "First time" means: first time in this tox call. This is done only if there is at least one selected environment which does not have `skip_install=True`. <br>
<sup>[3]</sup> This installs the package from the wheel. If using `package = external` in [testenv], it takes the wheel from the place defined by the `package_glob` in the `[testenv:.pkg_external]` section.

<br>

### The dummy hook file `tox_print_hooks.py`

```python
from typing import Any

from tox.config.sets import ConfigSet, EnvConfigSet
from tox.execute.api import Outcome
from tox.plugin import impl
from tox.session.state import State
from tox.tox_env.api import ToxEnv
from tox.tox_env.register import ToxEnvRegister


@impl
def tox_register_tox_env(register: ToxEnvRegister) -> None:
    print("tox_register_tox_env", register)


@impl
def tox_add_core_config(core_conf: ConfigSet, state: State) -> None:
    print("tox_add_core_config", core_conf, state)


@impl
def tox_add_env_config(env_conf: EnvConfigSet, state: State) -> None:
    print("tox_add_env_config", env_conf, state)


@impl
def tox_on_install(tox_env: ToxEnv, arguments: Any, section: str, of_type: str):
    print("tox_on_install", tox_env, arguments, section, of_type)


@impl
def tox_before_run_commands(tox_env: ToxEnv):
    print("tox_before_run_commands", tox_env)


@impl
def tox_after_run_commands(tox_env: ToxEnv, exit_code: int, outcomes: list[Outcome]):
    print("tox_after_run_commands", tox_env, exit_code, outcomes)


@impl
def tox_env_teardown(tox_env: ToxEnv):
    print("tox_env_teardown", tox_env)
```
You could try using the following formulas; this assumes there are no `Excel Constraints` as per the tags posted:

[![enter image description here][1]][1]

----------

    =TEXT(MAX(--TEXTAFTER(B$2:B$7,"VUAM")*($A2=A$2:A$7)),"V\U\A\M\00000")

----------

Or, using the following:

    =TEXT(MAX((--RIGHT(B$2:B$7,5)*($A2=A$2:A$7))),"V\U\A\M\00000")

----------

Or, you could use the following as well, using `XLOOKUP()` & `SORTBY()`:

[![enter image description here][2]][2]

----------

    =LET(
        x, SORTBY(A2:B7,--RIGHT(B2:B7,5),-1),
        y, TAKE(x,,1),
        XLOOKUP(A2:A7, y, TAKE(x,,-1)))

----------

The above can be made a bit shorter:

    =LET(_z, A2:A7,
        XLOOKUP(_z,_z, TAKE(SORTBY(A2:B7,--RIGHT(B2:B7,5),-1),,-1)))

----------

<sup> § Notes on **`Escape Characters`**: The use of a `backslash` before the `V`, `U`, `A` & `M` is an **`escape character`**. Because the `V`, `U`, `A` & `M` on their own serve a different purpose, we are escaping them, hence asking Excel to **`literally form text`** with those characters. </sup>

----------

Here is the **`Quick Fix`** to your **`existing formula`**; the escape characters are not placed correctly (for info on the same, refer to `§`):

[![enter image description here][3]][3]

----------

    =TEXT(MAX(IF($A$2:$A$100=A2, MID($B$2:$B$100, 5, 5)+0)),"V\U\A\M\00000")

----------

  [1]: https://i.stack.imgur.com/K0j2K.png
  [2]: https://i.stack.imgur.com/ovCkj.png
  [3]: https://i.stack.imgur.com/82vOL.png
I need to produce XML out of my table in the format below. Can you advise which method I should go with (PATH, ELEMENTS, RAW), or could it be a combination of several? Please find below my test data and some code I just started. Also see the requested XML format with a lot of nesting. Can this be done in a single `SELECT`, or can I compose various parts by level?

```sql
SELECT * INTO #t FROM (
    SELECT 2020 yy, 'Alpha' CompanyName, 'BossA' FirstName_Owner, 'Smith' LastName_Owner, 'John1' FirstName_Client, 'Dow1' LasttName_Client, 'Yes' Active
    UNION
    SELECT 2020 yy, 'Alpha' CompanyName, 'BossA' FirstName_Owner, 'Smith' LastName_Owner, 'Peter2' FirstName_Client, 'Redd2' LasttName_Client, 'No' Active
    UNION
    SELECT 2020 yy, 'Alpha' CompanyName, 'BossyBB' FirstName_Owner, 'Green' LastName_Owner, 'Mary' FirstName_Client, 'Robbins3' LasttName_Client, 'Yes' Active
) a
--- SELECT TOP 100 * FROM #t
-- DROP TABLE IF EXISTS #t

SELECT yy, CompanyName
--SELECT
--    FirstName_Owner, LastName_Owner,
--    FirstName_Client, LasttName_Client, Active
FROM #t
FOR XML RAW('CompanyInfo'), ELEMENTS -- Path
--FOR XML RAW('Client'), ROOT ('Clients')
```

Requested output in XML format for my test data:

```
<GemReport>
  <Version>2.0</Version>
  <Company>
    <CompanyInfo>
      <Year>2020</Year>
      <CompanyName>Alpha</CompanyName>
      <Owners>
        <Owner>
          <FirstName>John</FirstName>
          <LastName>Doe</LastName>
          <Clients>
            <Client>
              <FirstName>John1</FirstName>
              <LastName>Doe1</LastName>
              <Contract>
                <Active>Yes</Active>
              </Contract>
            </Client>
            <Client>
              <FirstName>Peter2</FirstName>
              <LastName>Redd2</LastName>
              <Contract>
                <Active>NO</Active>
              </Contract>
            </Client>
          </Clients>
        </Owner>
        <Owner>
          <FirstName>BossyBB</FirstName>
          <LastName>Green</LastName>
          <Clients>
            <Client>
              <FirstName>Mary</FirstName>
              <LastName>Robbins3</LastName>
              <Contract>
                <Active>Yes</Active>
              </Contract>
            </Client>
          </Clients>
        </Owner>
      </Owners>
    </CompanyInfo>
  </Company>
</GemReport>
```

[![enter image description here][1]][1]

  [1]: https://i.stack.imgur.com/larY9.jpg
I try to run this on macOS:

    pip3.8 install turicreate

I have Python 3.8.6 and pip 24.0 (from python 3.8). I am running this in a terminal opened using Rosetta. I still get this error message:

    ********************************************************************************
    Please avoid running ``setup.py`` directly.
    Instead, use pypa/build, pypa/installer or other
    standards-based tools.
    See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
    ********************************************************************************
    !!
    self.initialize_options()
    installing to build/bdist.macosx-14-arm64/wheel
    running install

    ==================================================================================
    TURICREATE ERROR

    If you see this message, pip install did not find an available binary package
    for your system.

    Supported Platforms:
    * macOS 10.12+ x86_64.
    * Linux x86_64 (including WSL on Windows 10).

    Support Python Versions:
    * 2.7
    * 3.5
    * 3.6
    * 3.7
    * 3.8

    Another possible cause of this error is an outdated pip version. Try:
    `pip install -U pip`

    ==================================================================================
    [end of output]

I got this error even after updating both my initial Python and pip versions to Turi Create's requirements. I tried using Rosetta and checked my Anaconda version, which is x86_64. I also tried `pip install wheel` and `pip install -U turicreate`.
How do I install Turicreate on macOS, Python 3.8?
|python|pip|turi-create|
null
**Explanation**

I have Visual Studio v17.9.5. When I publish an existing project or create a new one and publish it, it shows me only the HTTP port 5000. It does not run on the cloud either:

    500 - Internal server error.
    There is a problem with the resource you are looking for, and it cannot be displayed.

**Question**

Why is the HTTPS port disabled on publish? How can it be enabled?

The output I get when I run the app after publishing:

    info: Microsoft.Hosting.Lifetime[14]
          Now listening on: http://localhost:5000
    info: Microsoft.Hosting.Lifetime[0]
          Application started. Press Ctrl+C to shut down.
    info: Microsoft.Hosting.Lifetime[0]
          Hosting environment: Production
    info: Microsoft.Hosting.Lifetime[0]
**Description:** I'm encountering an unexpected behavior in my Django e-commerce website where the quantity of items in the cart increments at an exponential rate instead of a linear one. Here's a breakdown of the problem and the relevant code snippets: **Problem Description:** When adding an item to the cart, the quantity appears to double with each click of the "add" button. For instance, if I add an item once, it shows a quantity of 3 instead of 1. Then, upon adding it again, the quantity jumps to 6 instead of 2, and so forth. Similarly, when decreasing the quantity using the "remove" button, it doesn't decrement linearly but exhibits similar exponential behavior. **Code Snippets:** ***cart.html:*** This snippet shows the HTML code responsible for displaying the quantity and buttons to add or remove items from the cart. <div class="li-cartstyle cartbodyinformation-quantity"> <p class="quantity-input">{{ item.quantity }}</p> <div class="cartquantity-buttons"> <i data-product={{ item.product.id }} data-action="add" class="update-cart change-quantity fa-solid fa-angle-up"></i> <i data-product={{ item.product.id }} data-action="remove" class="update-cart change-quantity fa-solid fa-angle-down"></i> </div> </div> ***cart.js:*** This JavaScript code handles the interactions with the cart, such as adding or removing items. // var user = '{{request.user}}' var updateBtns = document.getElementsByClassName('update-cart') var user = isAuthenticated ? 
'AuthenticatedUser' : 'AnonymousUser'; for(var i=0; i < updateBtns.length; i++){ updateBtns[i].addEventListener('click', function(){ var productId = this.dataset.product var action = this.dataset.action console.log('productId:', productId, '\naction: ', action) console.log('USER: ',user) if(user === 'AnonymousUser'){ addCookieItem(productId, action) } else{ updateUserOrder(productId, action) } }) } function addCookieItem(productId, action){ console.log('Not Logged in') if (action == 'add'){ if (cart[productId] == undefined){ cart[productId] = {'quantity': 1} } else { cart[productId]['quantity'] += 1 } } if (action == 'remove'){ cart[productId]['quantity'] -= 1 if (cart[productId]['quantity'] <= 0){ console.log('Remove Item') delete cart[productId] } } console.log('Cart:', cart) document.cookie = 'cart=' + JSON.stringify(cart) + ";domain=;path=/" location.reload() } function updateUserOrder(productId, action){ console.log('User is logged in, sending data..') var url = '/update_item/' fetch(url, { method:'POST', headers:{ 'Content-Type':'application/json', 'X-CSRFToken': csrftoken, }, body:JSON.stringify({'productId': productId, 'action': action }) }) .then((response) => { return response.json() }) .then((data) => { console.log('data:', data) location.reload() }) } ***views.py:*** This Python code contains the backend logic for updating the cart items. 
def updateItem(request): data = json.loads(request.body) productId = data['productId'] action = data['action'] print('Action:', action) print('productId:', productId) customer = request.user.customer product = Product.objects.get(id=productId) order, created = Order.objects.get_or_create(customer=customer, complete=False) orderItem, created = OrderItem.objects.get_or_create(order=order, product=product) if action == 'add': orderItem.quantity = (orderItem.quantity + 1) elif action == 'remove': orderItem.quantity = (orderItem.quantity - 1) orderItem.save() if orderItem.quantity <= 0: orderItem.delete() return JsonResponse('Item was added', safe=False) **Additional Context:** The website is built using Django framework. The issue seems to persist regardless of whether the user is authenticated or not. Upon inspecting the network requests, I notice that the updateItem view in ***views.py*** is being called correctly, but the quantity manipulation there seems to result in unexpected behavior. **Desired Outcome:** I expect the quantity of items in the cart to increment or decrement by one unit upon clicking the respective buttons. I would appreciate any insights or suggestions on how to resolve this issue and ensure the quantity adjustments behave as expected. Thank you in advance for your assistance! If further clarification or code snippets are needed, please let me know.
Issue with Quantity Increment in Django E-commerce Cart
|javascript|python|django|e-commerce|shopping-cart|
Try running `python -m pip install playsound`. If you do not have pip installed already, run `sudo apt-get install python-pip` (or `sudo apt-get install python3-pip` if you are using Python 3).
How to Set Expiry Dates for Google Drive
|google-drive-api|drive|
null
Here is a solution. The error I was getting said something similar to:

> my worker node has python3.10 while the driver has python3.11

What I had to do was navigate to `/usr/bin/` and execute `ls -l`. I got the following output:

```
lrwxrwxrwx 1 root root 7 Feb 12 19:50 python -> python3
lrwxrwxrwx 1 root root 9 Oct 11 2021 python2 -> python2.7
-rwxr-xr-x 1 root root 14K Oct 11 2021 python2.7
-rwxr-xr-x 1 root root 1.7K Oct 11 2021 python2.7-config
lrwxrwxrwx 1 root root 16 Oct 11 2021 python2-config -> python2.7-config
lrwxrwxrwx 1 root root 10 Feb 12 19:50 python3 -> python3.10
-rwxr-xr-x 1 root root 15K Feb 12 19:50 python3.11
-rwxr-xr-x 1 root root 3.2K Feb 12 19:50 python3.11-config
lrwxrwxrwx 1 root root 17 Feb 12 19:50 python3-config -> python3.11-config
-rwxr-xr-x 1 root root 2.5K Apr 8 2023 python-argcomplete-check-easy-install-script
-rwxr-xr-x 1 root root 383 Apr 8 2023 python-argcomplete-tcsh
lrwxrwxrwx 1 root root 14 Feb 12 19:50 python-config -> python3-config
```

Notice the line `lrwxrwxrwx 1 root root 10 Feb 12 19:50 python3 -> python3.10`. I realized that my `python3` was pointing to python3.10 even though I had python3.11 installed. If that's the case with you, then the following fix should work:

1. **Locate Python 3.11:** First, ensure that Python 3.11 is installed on your system. You can usually find it in `/usr/bin/` or `/usr/local/bin/`. Let's assume it's in `/usr/bin/python3.11`.

2. **Update the Symbolic Link:** Open a terminal and run:<br> `sudo ln -sf /usr/bin/python3.11 /usr/bin/python3`

3. **Verify the Update:** In the terminal, run: <br> `ls -l /usr/bin/python3` <br> <br> This should show something like:<br> `lrwxrwxrwx 1 root root XX XXX XX:XX /usr/bin/python3 -> /usr/bin/python3.11` <br><br>This indicates that `python3` now points to Python 3.11.

Now, when you run your PySpark code, it should use Python 3.11 on both the worker nodes and the driver.
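As an alternative to changing the system-wide symlink, Spark also lets you pin the interpreter for both the driver and the workers via environment variables. A minimal sketch (the interpreter path here is an assumption — point it at wherever Python 3.11 actually lives on your machines):

```python
import os

# Hypothetical path -- adjust to your actual Python 3.11 location.
python_path = "/usr/bin/python3.11"

# Spark reads these variables when launching the driver and the executors,
# so both ends of the job use the same interpreter.
os.environ["PYSPARK_PYTHON"] = python_path
os.environ["PYSPARK_DRIVER_PYTHON"] = python_path

# These must be set *before* the SparkSession is created, e.g.:
# from pyspark.sql import SparkSession
# spark = SparkSession.builder.getOrCreate()
```

This avoids touching `/usr/bin/python3`, which other system tools may rely on.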
I wanted to know if it is possible to add dynamic HTML to a folium map using TimestampedGeoJson so that the title of the map gets updated on every frame (e.g. with the date). When the visualization is played, the title of the map should update with the new date on each frame. I am following the example below, which constructs a basic time-based visualization.

```
m = folium.Map(location=[35.68159659061569, 139.76451516151428], zoom_start=16)

# Lon, Lat order.
lines = [
    {
        "coordinates": [
            [139.76451516151428, 35.68159659061569],
            [139.75964426994324, 35.682590062684206],
        ],
        "dates": ["2017-06-02T00:00:00", "2017-06-02T00:10:00"],
        "color": "red",
    },
    {
        "coordinates": [
            [139.75964426994324, 35.682590062684206],
            [139.7575843334198, 35.679505030038506],
        ],
        "dates": ["2017-06-02T00:10:00", "2017-06-02T00:20:00"],
        "color": "blue",
    },
    {
        "coordinates": [
            [139.7575843334198, 35.679505030038506],
            [139.76337790489197, 35.678040905014065],
        ],
        "dates": ["2017-06-02T00:20:00", "2017-06-02T00:30:00"],
        "color": "green",
        "weight": 15,
    },
    {
        "coordinates": [
            [139.76337790489197, 35.678040905014065],
            [139.76451516151428, 35.68159659061569],
        ],
        "dates": ["2017-06-02T00:30:00", "2017-06-02T00:40:00"],
        "color": "#FFFFFF",
    },
]

features = [
    {
        "type": "Feature",
        "geometry": {
            "type": "LineString",
            "coordinates": line["coordinates"],
        },
        "properties": {
            "times": line["dates"],
            "style": {
                "color": line["color"],
                "weight": line["weight"] if "weight" in line else 5,
            },
        },
    }
    for line in lines
]

folium.plugins.TimestampedGeoJson(
    {
        "type": "FeatureCollection",
        "features": features,
    },
    period="PT1M",
    add_last_point=True,
).add_to(m)

m
```
I have input like this and I need an output map like below. How can I do it?

**Input**

    AccountDTO
    UserDAO
    TransferDTO
    DepositDAO

**Output**

    DAO
        UserDAO
        DepositDAO
    DTO
        AccountDTO
        TransferDTO
Segregate class names using regular expressions
|java|regex|
OK, for me the fix was to uninstall my Flutter and Dart extensions in VS Code and then reinstall them.
I am very new to gym-anytrading. I have a pandas dataframe with a column containing lists of lists of different lengths, and I am trying to figure out how to feed that into the gym-anytrading environment. Below is a link to a CSV of the sample data in the dataframe, and a code snippet. I keep getting this error:

    TypeError: cannot unpack non-iterable NoneType object

<!-- -->

    import gym
    import pandas as pd
    import numpy as np
    from gym_anytrading.envs import TradingEnv

    class CustomTradingEnv(TradingEnv):
        def __init__(self, df):
            super().__init__(df, window_size=10)
            self.reward_range = (0, 1)

        def _process_data(self):
            # Process your DataFrame to ensure it's in the correct format
            # Here, you can perform any necessary preprocessing steps
            pass

        def reset(self):
            # Initialize the environment with the data from the DataFrame
            self._process_data()
            return super().reset()

    env = CustomTradingEnv(df)
    observation = env.reset()

    for _ in range(100):  # Run for 100 steps
        action = env.action_space.sample()  # Sample a random action
        observation, reward, done, info = env.step(action)
        if done:
            break

https://docs.google.com/spreadsheets/d/1-LFNzZKXUG44smSYOy2rgVVnqiygLfs00lAl2vFdsxM/edit?usp=sharing
|python|pandas|dataframe|openai-gym|
null
{"Voters":[{"Id":3074564,"DisplayName":"Mofi"},{"Id":7318120,"DisplayName":"D.L"},{"Id":6738015,"DisplayName":"Compo"}],"SiteSpecificCloseReasonIds":[18]}
Try uninstalling your Flutter and Dart extensions in VS Code and then reinstalling them.
This seems like the perfect opportunity to do something I have been thinking about for a while. Since the dataset is so small, I would consider having a Redis database, for fast access, sorting and general reading, that is a duplicate of a persistent database of your choice. That makes reads lightning fast and writes not so much, but the impact is reduced by the relative frequency of both operations. Please note that this is like using a chainsaw to cut flowers, but it seemed like a very open question.
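The read-through pattern described above can be sketched roughly like this (a plain dict stands in for the Redis client so the sketch is self-contained; with a real setup you'd swap it for `redis.Redis()` and likely add a TTL — the class and method names here are mine):

```python
class CachedStore:
    """Cache-aside sketch: reads hit the fast cache first; writes go to the
    persistent store and keep the cache in sync."""

    def __init__(self, persistent):
        self.persistent = persistent   # e.g. your SQL/Mongo layer
        self.cache = {}                # stand-in for a Redis client

    def get(self, key):
        if key in self.cache:              # fast path: cache hit
            return self.cache[key]
        value = self.persistent.get(key)   # slow path: read the DB...
        if value is not None:
            self.cache[key] = value        # ...and populate the cache
        return value

    def set(self, key, value):
        self.persistent[key] = value   # write to the source of truth first
        self.cache[key] = value        # then update the duplicate


store = CachedStore(persistent={"user:1": "alice"})
print(store.get("user:1"))   # read-through populates the cache
store.set("user:2", "bob")
print(store.get("user:2"))   # served from the cache from now on
```

The write path is the slower one, as noted above, but with read-heavy traffic the cache absorbs most of the load.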
I have been battling this same issue for 2 days. Try declaring the selectedCategory in the State but outside the build function. This line: var selectedCategory = state.revenueCategories[2].category; This worked for me
In the previous version of Blazor I could use: await SignOutManager.SetSignOutState(); Navigation.NavigateTo($"authentication/logout"); Now, I have to use: Navigation.NavigateToLogout("authentication/logout"); But the following code redirects me to Not found. Should I create a page "authentication/logout" or what? @using Microsoft.AspNetCore.Components.WebAssembly.Authentication @inject NavigationManager Navigation <AuthorizeView> <Authorized> <button class="nav-link btn btn-link" @onclick="BeginLogOut">Log out</button> </Authorized> <NotAuthorized> <a href="Account/Login">Log in</a> </NotAuthorized> </AuthorizeView> @code{ private void BeginLogOut() { Navigation.NavigateToLogout("authentication/logout"); } } [![enter image description here][1]][1] There is no single example anywhere :/ [1]: https://i.stack.imgur.com/r1NKy.png
How to logout using link (HTTP GET) in Blazor and .NET 8?
|blazor|blazor-server-side|blazor-webassembly|
I had just forgotten to initialize `stmtRaw`. Cringe. Fixed variant of `executeQuery`:

```cpp
std::shared_ptr<IQueryResult> executeQuery(std::string_view query, const std::vector<std::string> &params)
{
    std::cerr << "SQLiteDatabaseManager::executeQuery" << std::endl;
    std::cerr << "SQLiteDatabaseManager::executeQuery | query = \"" << query << "\"" << std::endl;

    sqlite3_stmt *stmtRaw = nullptr;
    if (sqlite3_prepare_v2(m_db.get(), query.data(), -1, &stmtRaw, nullptr) != SQLITE_OK)
    {
        const std::string errorMsg = "Failed to prepare statement: " + std::string(sqlite3_errmsg(m_db.get()));
        std::cerr << errorMsg << std::endl;
        return nullptr;
    }

    // Ensure stmtRaw is valid before proceeding
    if (!stmtRaw)
    {
        std::cerr << "Failed to create the statement, stmtRaw is null." << std::endl;
        return nullptr;
    }

    // Binding parameters
    for (size_t i = 0; i < params.size(); ++i)
    {
        if (sqlite3_bind_text(stmtRaw, static_cast<int>(i + 1), params[i].c_str(), -1, SQLITE_TRANSIENT) != SQLITE_OK)
        {
            const std::string errorMsg = "Failed to bind parameter at position " + std::to_string(i + 1) + ": " + sqlite3_errmsg(m_db.get());
            std::cerr << errorMsg << std::endl;
            sqlite3_finalize(stmtRaw); // Clean up before returning
            return nullptr;
        }
    }

    return std::make_shared<SqliteQueryResult>(stmtRaw);
}
```
Given h, k, and r, the (x, y) values in the circle will satisfy `h - r <= x <= h + r` and `k - r <= y <= k + r`. Hence, the simplest way to do this is a double for-loop over the x and y values in that range. Inside the loop, use `if` to check whether (x - h)^2 + (y - k)^2 <= r^2. Alternatively, you can loop over the x values only, and compute k ± sqrt(r^2 - (x - h)^2) with floor/ceil functions to find y's range for each specific x.
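A minimal sketch of both approaches, assuming integer grid points (the function names are mine):

```python
import math

def points_in_circle(h, k, r):
    """Brute force: scan the bounding box and keep points satisfying
    (x - h)^2 + (y - k)^2 <= r^2."""
    points = []
    for x in range(h - r, h + r + 1):
        for y in range(k - r, k + r + 1):
            if (x - h) ** 2 + (y - k) ** 2 <= r * r:
                points.append((x, y))
    return points

def points_in_circle_fast(h, k, r):
    """Per-column: for each x, solve for y's valid range directly,
    using the integer floor of sqrt(r^2 - (x - h)^2)."""
    points = []
    for x in range(h - r, h + r + 1):
        dy = math.isqrt(r * r - (x - h) ** 2)  # floor of the square root
        for y in range(k - dy, k + dy + 1):
            points.append((x, y))
    return points

print(len(points_in_circle(0, 0, 1)))  # 5: (0,0), (1,0), (-1,0), (0,1), (0,-1)
```

The second version skips the per-point distance check, which matters once r gets large and the bounding box is mostly empty corners.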
I'm new to matplotlib and can't find out how to update my plot once I enter new values for the lines displayed in the plot. Showing the plots with the values that I entered first works without an issue. main.py ``` from two_dof_plotter import MechanismPlotter import time # Create an object of MechanismPlotter two_dof_robot_plot = MechanismPlotter(0.25, 0.3, 0.1, 0, 0) two_dof_robot_plot.start_plot() two_dof_robot_plot.draw_required_work_envelope(0, 0) two_dof_robot_plot.draw_mechanism_with_inverse_kinematic(0.5, -0.0375) two_dof_robot_plot.show_plot() time.sleep(2) two_dof_robot_plot.draw_mechanism_with_inverse_kinematic(0.2, -0.0375) ``` two_dof_plotter.py ``` import matplotlib.pyplot as plt import numpy as np class MechanismPlotter: def __init__(self, upper_arm_length, lower_arm_length, lever_arm_length, gripper_x_offset, gripper_y_offset): self.lower_arm_length = lower_arm_length self.upper_arm_length = upper_arm_length self.lever_arm_length = lever_arm_length self.gripper_x_offset = gripper_x_offset self.gripper_y_offset = gripper_y_offset self.ploted_lines = [] self.fig, self.ax = plt.subplots() def start_plot(self): self.ax.set_aspect('equal') self.ax.set_xlim(-0.6, 0.6) self.ax.set_ylim(-0.6,0.6) self.ax.set_xlabel('X') self.ax.set_ylabel('Y') plt.grid(True) def draw_mechanism_with_forward_kinematic(self, upper_arm_angle, lower_arm_angle): # Calculate the coordinates of the lower and upper arm endpoints upper_arm_x = self.upper_arm_length * np.cos(upper_arm_angle) upper_arm_y = self.upper_arm_length * np.sin(upper_arm_angle) lower_arm_x = upper_arm_x + self.lower_arm_length * np.cos(lower_arm_angle) lower_arm_y = upper_arm_y + self.lower_arm_length * np.sin(lower_arm_angle) lever_arm_x = -(self.lever_arm_length * np.cos(lower_arm_angle)) lever_arm_y = -(self.lever_arm_length * np.sin(lower_arm_angle)) # Plot the lines if len(self.ploted_lines) > 0: self.ploted_lines[0].set_data([0, upper_arm_x], [0, upper_arm_y]) self.ploted_lines[1].set_data([upper_arm_x, 
lower_arm_x], [upper_arm_y, lower_arm_y]) self.ploted_lines[2].set_data([0, lever_arm_x], [0, lever_arm_y]) self.ploted_lines[3].set_data([upper_arm_x, upper_arm_x + lever_arm_x], [upper_arm_y, upper_arm_y + lever_arm_y]) self.ploted_lines[4].set_data([lever_arm_x, upper_arm_x + lever_arm_x], [lever_arm_y, upper_arm_y + lever_arm_y]) # Draw the updated plot self.fig.canvas.draw() else: # Plot the lines line1, = self.ax.plot([0, upper_arm_x], [0, upper_arm_y], 'b-', linewidth=2) line2, = self.ax.plot([upper_arm_x, lower_arm_x], [upper_arm_y, lower_arm_y], 'r-', linewidth=2) line3, = self.ax.plot([0, lever_arm_x], [0, lever_arm_y], 'g-', linewidth=1) line4, = self.ax.plot([upper_arm_x, upper_arm_x + lever_arm_x], [upper_arm_y, upper_arm_y + lever_arm_y], 'g-', linewidth=1) line5, = self.ax.plot([lever_arm_x, upper_arm_x + lever_arm_x], [lever_arm_y, upper_arm_y + lever_arm_y], 'g-', linewidth=1) # Add the lines to the list of lines to remove self.ploted_lines.extend([line1, line2, line3, line4, line5]) # Calculate the distance between the upper arm and the mechanism for the lower arm distance = np.sin(np.pi-(-lower_arm_angle)-upper_arm_angle) * self.lever_arm_length def draw_mechanism_with_inverse_kinematic(self, x, y): # Calculate the inverse kinematic angles x = x - self.gripper_x_offset y = y - self.gripper_y_offset c = np.sqrt(x**2 + y**2) upper_arm_angle = np.arccos((self.upper_arm_length**2 + c**2 - self.lower_arm_length**2) / (2 * self.upper_arm_length * c)) + np.arctan(y/x) lower_arm_angle = upper_arm_angle + np.arccos((self.lower_arm_length**2 + self.upper_arm_length**2 - c**2) / (2 * self.lower_arm_length * self.upper_arm_length)) - np.pi # Draw the mechanism ploted_lines = self.draw_mechanism_with_forward_kinematic(upper_arm_angle, lower_arm_angle) def remove_ploted_mechanism(self, ploted_lines): for line in ploted_lines: line.remove() self.ax.clear() self.fig.canvas.draw() def draw_required_work_envelope(self, x_por_offset, y_por_offset): # Define the 
coordinates of the work envelope points
        x = [0.16+x_por_offset, 0.16+x_por_offset, 0.5+x_por_offset, 0.32+x_por_offset, 0.16+x_por_offset,]
        y = [0.06-y_por_offset, -0.0375-y_por_offset, -0.0375-y_por_offset, 0.06-y_por_offset, 0.06-y_por_offset,]

        # Plot the work envelope
        self.ax.fill_between(x, y, color='skyblue', alpha=0.5)

    def show_plot(self):
        plt.show()
```

What do I have to do to update the plot correctly after calling the method draw_mechanism_with_forward_kinematic a second time? I have already tried different implementations of the draw() function and tried plt.ion(), but so far I haven't found where my issue is.
matplotlib plot not updating after the plot is first shown with .show
Folium Timestampedgeojson - How to add dynamic html for the title of the map
|python|visualization|geospatial|folium|folium-plugins|
My MERN app is trying to connect to the Atlas service to read and write some data. Multiple times while firing up the backend server, I get this error:

`Error: connect ECONNREFUSED 43.205.72.30:27017`

When it connects successfully, it works fine for a while, but then it throws the error again. Here is the complete response I get:

```
{
    "message": "connect ECONNREFUSED 43.205.72.30:27017",
    "stack": "MongoNetworkError: connect ECONNREFUSED 43.205.72.30:27017\n    at connectionFailureError (D:\\Projects\\MERN\\munch-lane\\node_modules\\mongodb\\lib\\cmap\\connect.js:379:20)\n    at TLSSocket.<anonymous> (D:\\Projects\\MERN\\munch-lane\\node_modules\\mongodb\\lib\\cmap\\connect.js:285:22)\n    at Object.onceWrapper (node:events:629:26)\n    at TLSSocket.emit (node:events:514:28)\n    at emitErrorNT (node:internal/streams/destroy:151:8)\n    at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)"
}
```

Initially, I was working on my WiFi, but other articles on the internet described the same issue with WiFi, possibly due to port 27017 not being open on the router. So I tried using my mobile hotspot, and it worked fine for a while before throwing the same error again.

I have added 0.0.0.0 to the whitelisted IPs. The Node.js version is `v20.10.0`. The connection string is:

`mongodb+srv://techmirtz:<password>@cluster0.yqhzzuj.mongodb.net/dbname?retryWrites=true&w=majority&appName=Cluster0`

I see many similar questions addressed, but most of them are about localhost, not Atlas.
connect ECONNREFUSED 43.205.72.30:27017 while connecting to Atlas
|mongodb|mongoose|backend|mern|
null
I am wondering if there is any way to replace something such as `entities.entities[playerData].` with something shorter like `enti.` while the code still recognizes it all the same. I've looked it up, and I didn't find an answer on the internet. I assume that's just because I don't know how to word the question correctly.
Processing substitute long variables for something shorter
|processing|
null
`bundle exec pod install --project-directory=ios`

This is similar to `cd ios && pod install`, which means you have to run it before `npm run ios`. You can use either one, but the above command has to be run from the root directory of your project; as you can see in your folder structure, there is an `ios` folder there. Here is the full explanation of this command:

`bundle exec`: This part of the command ensures that the `pod` command is executed within the context of a Ruby bundle. It's a way to ensure that the correct version of CocoaPods (if specified in the project's Gemfile) is used.

`pod install`: This is the CocoaPods command that installs the dependencies specified in the Podfile of the project. It resolves dependencies and downloads the necessary libraries.

`--project-directory=ios`: This flag specifies the directory where the Podfile is located. In this case, it's telling CocoaPods to look for the Podfile in the `ios` directory. This is useful in projects where the iOS code is organized into a subdirectory, commonly named `ios`.

As for the error you are trying to solve, follow these steps:

`Method 1:`
```
Step 1: cd ios
Step 2: pod repo update
Step 3: pod install
```
Move on to Method 2 if this doesn't work.

`Method 2:`

Step 1: If you are using `react-native-flipper`, your iOS build will fail when `NO_FLIPPER=1` is set, because `react-native-flipper` depends on packages (FlipperKit, ...) that will be excluded. To fix this, you can also exclude `react-native-flipper` using a `react-native.config.js`:
```
module.exports = {
  ..., // other configs
  dependencies: {
    ...(process.env.NO_FLIPPER ? { 'react-native-flipper': { platforms: { ios: null } } } : {}),
  }
};
```
Step 2: Run one of these commands from the root directory of the project:

```NO_FLIPPER=1 bundle exec pod install --project-directory=ios```

or

```cd ios && NO_FLIPPER=1 pod install```
On one hand, I desire the ability to paste stickers into my TextField. While images are already functional, when attempting to copy an image from my pictures and convert it into a sticker, I do not receive the option to paste it into my TextField. ``` TextField( contentInsertionConfiguration: ContentInsertionConfiguration( onContentInserted: (_) {}, allowedMimeTypes: ['*'], ), decoration: const InputDecoration( border: OutlineInputBorder(), labelText: 'Enter your name', ), ), ``` The second issue pertains to having stickers appear alongside emojis: [Random App Where the stickers are displayed](https://i.stack.imgur.com/Zievq.jpg) However, in my Flutter app, I do not encounter this keyboard layout, as shown in [my app in flutter](https://i.stack.imgur.com/q7ai7.jpg)
I want to paste stickers into to my TextField and to show the stickers beside the emojis
|flutter|
null
If everything seems to be configured properly, the cause may lie somewhere else. In my case some of the Application Insights __domains were blocked by my ad-blocker__. This resulted in failed requests to the Application Insights APIs, making it appear as if the invocations were not loading on the monitor page. The invocation logs were visible in the Azure Portal again after whitelisting these domains: ``` *.applicationinsights.io *.loganalytics.io ```
I have this program #include <stdio.h> int main(int argc, char *argv[]) { int i; for (i=0; i < argc;i++) { printf("argv[%d] = %s\n", i, argv[i]); } } I am wondering about the argument `char *argv[]`. My first thought was that this was an **array of character pointers**? But then it should compile if we write: `(char *argv)[]`? But then I get the error meassage testargs.c:3:20: error: expected declaration specifiers or '...' before '(' token 3 | int main(int argc, (char *argv)[]) { However it will compile and run correctly if I have `char *(argv[])`. But what is this actually in terms of pointers and arrays? Is it a character pointer to an array, or what is it?
What does: "char *argv[]" mean?
|arrays|c|string|pointers|char|
Looking at your user model, and assuming you're using a more recent version of Mongoose: something you should know before using MongoDB or Mongoose is that you don't need to add `_id` to the model, because at runtime it is automatically added to every new document that is created. You only add it if you want to override it with your own unique value, and you must be very careful doing that.

As for `createdAt` and `updatedAt`, you also don't need those, simply because if you enable the `timestamps: true` option, Mongoose automatically adds the timestamps to every document you create and updates the time whenever a document is updated.

I have refactored your User model for you. That's the first place to begin your debugging. Your model should look like this:

    import { Schema, model } from 'mongoose';

    const userSchema = new Schema({
        firstname: {
            type: String,
            required: true,
            minLength: 2,
            maxLength: 50
        },
        lastname: {
            type: String,
            required: true,
            minLength: 2,
            maxLength: 50
        },
        email: {
            type: String,
            maxLength: 50,
            unique: true,
            required: true
        },
        password: {
            type: String,
            minLength: 6,
            required: true
        },
        picturepath: {
            type: String,
            default: '' // There's no point making a default value if it'll be null or an empty string
        },
        friends: [],
        location: String,
        occupation: String,
        viewedprofile: Number,
        impression: Number
    }, { timestamps: true });

    const User = model('User', userSchema);

    export default User;
Build gcc with a different name
|gcc|makefile|gnu-make|open-source|gnu|
null
Store UUIDs as data type [**`uuid`**][1], not as `varchar` or `text`. That would require more space in RAM and on disk, be slower to process and more error prone. See: - [What is the optimal data type for an MD5 field?][2] - https://stackoverflow.com/questions/33836749/postgresql-using-uuid-vs-text-as-primary-key/33838373#33838373 [1]: https://www.postgresql.org/docs/current/datatype-uuid.html [2]: https://dba.stackexchange.com/a/115316/3684
{"OriginalQuestionIds":[218384],"Voters":[{"Id":6782707,"DisplayName":"Edric"},{"Id":10871900,"DisplayName":"dan1st might be happy again"},{"Id":207421,"DisplayName":"user207421","BindingReason":{"GoldTagBadge":"android"}}]}
My frontend is a React app that I compiled using `npm run build`. Once the build folder is copied to the Django project, I go into my Django virtualenv and run `python3 manage.py collectstatic` and `python3 manage.py runserver`.

When I run the server, I can read two errors in the console:

    Loading module from “http://127.0.0.1:8000/assets/index-sLPpAV_Z.js” was blocked because of a disallowed MIME type (“text/html”).

    The resource from “http://127.0.0.1:8000/assets/index-6ReKyqhx.css” was blocked due to MIME type (“text/html”) mismatch (X-Content-Type-Options: nosniff).

and one warning:

    Loading failed for the module with source “http://127.0.0.1:8000/assets/index-sLPpAV_Z.js”.

(Django) settings.py

    STATIC_URL = 'static/'
    STATICFILES_DIRS = [
        os.path.join(BASE_DIR, 'build/static'),
    ]
    STATIC_ROOT = os.path.join(BASE_DIR, 'static')

package.json

    ...
    "scripts": {
        "dev": "vite",
        "build": "rm -rf ../backend/build && tsc && vite build && cp -r build ../backend/build",
        "lint": "eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 0",
        "preview": "vite preview"
    },
    ...

I am confused because there is no static folder in the build folder. Django gives me this warning:

    WARNINGS:
    ?: (staticfiles.W004) The directory '/home/user/project/backend/build/static' in the STATICFILES_DIRS setting does not exist.

The build folder looks like this:

    build/
    ----assets/
    --------index-<randomString>.css
    --------index-<anotherString>.js
    ----index.html
    ----vite.svg
Cannot make Django run the frontend from Vite's build ("was blocked because of a disallowed MIME type (“text/html”)")
|django|build|static|vite|
null
In your GitHub code, you changed the `@Controller` annotation to `@RestController` on the `AccountController`. Did that solve the problem? `@RestController` returns JSON. The next step is to try removing the mapper bean that you created in the `SimpleBankingApplication` class, because Spring manages it with the Jackson JSON mapper by default.
Pandas concat function giving "FutureWarning: The behavior of DataFrame concatenation with empty or all-NA entries is deprecated"
|python|pandas|future-warning|
I have this code in Angular 12 and it works perfectly. The problem comes when changing the version from 12 to 16. ``` import { ComponentRef, ComponentFactoryResolver, ViewContainerRef, ViewChild, Component, ViewRef } from "@angular/core"; import { ChildComponent } from "../child/child.component"; @Component({ selector: "app-parent", templateUrl: "./parent.component.html", styleUrls: ["./parent.component.css"] }) export class ParentComponent { @ViewChild("viewContainerRef", { read: ViewContainerRef }) VCR: ViewContainerRef; child_unique_key: number = 0; componentsReferences = Array<ComponentRef<ChildComponent>>() constructor(private CFR: ComponentFactoryResolver) {} createComponent() { let componentFactory = this.CFR.resolveComponentFactory(ChildComponent); let childComponentRef = this.VCR.createComponent(componentFactory); let childComponent = childComponentRef.instance; childComponent.unique_key = ++this.child_unique_key; childComponent.parentRef = this; // add reference for newly created component this.componentsReferences.push(childComponentRef); } remove(key: number) { if (this.VCR.length < 1) return; let componentRef = this.componentsReferences.filter( x => x.instance.unique_key == key )[0]; let vcrIndex: number = this.VCR.indexOf(componentRef as any); // removing component from container this.VCR.remove(vcrIndex); // removing component from the list this.componentsReferences = this.componentsReferences.filter( x => x.instance.unique_key !== key ); } } ``` Example: https://stackblitz.com/edit/add-or-remove-dynamic-component?file=src%2Fapp%2Fparent%2Fparent.component.ts The problem in Angular 16 is that vcrIndex is always -1 when not found. ``` let vcrIndex: number = this.VCR.indexOf(componentRef as any); // removing component from container this.VCR.remove(vcrIndex); ``` Any suggestions or how can I fix this without having to modify the way dynamic components are created? 
I have tried several things, such as adding an index or an identifier, but I still have not been able to solve the problem. Thanks.

I have looked to see if the VCR structure is different, so that I could add an identifier and search for it manually, but I couldn't find one.
Add and remove dynamic component Angular
|angular|indexing|dynamic|components|
null
This was an error on their end and should be fixed now: https://github.com/netlify/netlify-plugin-nextjs/issues/255
There are 2 ways to hide add to cart for variations that have a specific attribute value: 1st Way: ```php add_filter( 'woocommerce_available_variation', 'hide_add_to_cart_for_specific_attribute_value', 10, 3); function hide_add_to_cart_for_specific_attribute_value( $data, $product, $variation ) { $attributes = $variation->get_attributes(); $taxonomy = 'pa_badge-print'; if ( isset($attributes[$taxonomy]) && $attributes[$taxonomy] === 'yes' ) { $data['is_purchasable'] = false; } return $data; } ``` 2nd way: ```php add_filter( 'woocommerce_variation_is_purchasable', 'hide_add_to_cart_for_specific_attribute_value', 10, 2 ); function hide_add_to_cart_for_specific_attribute_value( $is_purchasable, $variation ) { $attributes = $variation->get_attributes(); $taxonomy = 'pa_badge-print'; if ( isset($attributes[$taxonomy]) && $attributes[$taxonomy] === 'yes' ) { $is_purchasable = false; } return $is_purchasable; } ``` Code goes in functions.php file of your active child theme (or active theme). Both ways work.