Inherit Maui.SplashTheme and customize it in MAUI app
|android|maui|styling|
The video_player plugin offers a crucial property, mixWithOthers. Set it to true during video initialization:
```
videoController = VideoPlayerController.asset(
  'path/to/asset',
  videoPlayerOptions: VideoPlayerOptions(mixWithOthers: true))
  ..initialize()
```
Note that mixWithOthers might not work on all platforms (especially web). You might need to explore platform-specific solutions for web in such cases.
I'm currently working on a project where I'm able to generate a sitemap successfully. I've created several sitemaps, and one of them is named "videos". After some research, I discovered that Google recommends using a dedicated sitemap for videos. Here's an example of the structure recommended by Google for a video sitemap:
```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>https://www.example.com/videos/some_video_landing_page.html</loc>
    <video:video>
      <video:thumbnail_loc>https://www.example.com/thumbs/123.jpg</video:thumbnail_loc>
      <video:title>Lizzi is painting the wall</video:title>
      <video:description>
        Gary is watching the paint dry on the wall Lizzi painted.
      </video:description>
      <video:player_loc>
        https://player.vimeo.com/video/987654321
      </video:player_loc>
    </video:video>
  </url>
</urlset>
```
After many hours of research, I couldn't find any documentation regarding generating a video sitemap with Next.js that allows customizing the tags. Currently, my code looks like this:
```
if (id === "videos") {
  if (Array.isArray(video)) {
    return video.map((video) => ({
      url: `${baseUrl}/tv/${video.id}/${helpers.formatTextUrl(video.title)}`,
      hello: new Date().toISOString(),
      changeFrequency: "daily",
      priority: 0.5,
    }));
  }
}
```
The resulting output looks like this:
```
<url>
  <loc>http://exemple.com/tv/68/070324-le-journal-de-maritima-tv</loc>
  <changefreq>daily</changefreq>
  <priority>0.5</priority>
</url>
```
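For illustration, since the object-based sitemap API in the code above doesn't appear to expose the `video:*` tags, one workaround is to assemble the XML by hand and serve it from a custom route. The sketch below is an assumption, not a documented Next.js API: the `buildVideoSitemap` helper and the video field names (`id`, `slug`, `thumbnail`, `title`, `description`) are hypothetical.

```javascript
// Hypothetical helper: assembles a Google-style video sitemap as a string.
// The field names on each video object are assumptions, not from the original code.
const buildVideoSitemap = (baseUrl, videos) => {
  const entries = videos
    .map(
      (v) => `
  <url>
    <loc>${baseUrl}/tv/${v.id}/${v.slug}</loc>
    <video:video>
      <video:thumbnail_loc>${v.thumbnail}</video:thumbnail_loc>
      <video:title>${v.title}</video:title>
      <video:description>${v.description}</video:description>
    </video:video>
  </url>`
    )
    .join("");
  return `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">${entries}
</urlset>`;
};
```

Such a string could then be returned from a route handler with a `Content-Type` of `application/xml`, though the exact wiring depends on the project setup.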
I need to read a NetCDF file in R using terra package. Here is a snapshot of the NetCDF ``` (NC <- nc_open("Test11.nc")) File Test11.nc (NC_FORMAT_NETCDF4): 1 variables (excluding dimension variables): float Var[x,y,lambert_azimuthal_equal_area] (Contiguous storage) units: UNITS _FillValue: -99999 long_name: Map1 Description grid_mapping: lambert_azimuthal_equal_area 3 dimensions: x Size:390 units: m standard_name: projection_x_coordinate long_name: x coordinate of projection y Size:404 units: m standard_name: projection_y_coordinate long_name: y coordinate of projection lambert_azimuthal_equal_area Size:1 grid_mapping_name: lambert_azimuthal_equal_area false_easting: 4321000 false_northing: 3210000 latitude_of_projection_origin: 52 longitude_of_projection_origin: 10 long_name: CRS definition longitude_of_prime_meridian: 0 semi_major_axis: 6378137 inverse_flattening: 298.257222101 spatial_ref: PROJCS["ETRS89-extended / LAEA Europe",GEOGCS["ETRS89",DATUM["European_Terrestrial_Reference_System_1989",SPHEROID["GRS 1980",6378137,298.257222101]],PRIMEM["Greenwich",0],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4258"]],PROJECTION["Lambert_Azimuthal_Equal_Area"],PARAMETER["latitude_of_center",52],PARAMETER["longitude_of_center",10],PARAMETER["false_easting",4321000],PARAMETER["false_northing",3210000],UNIT["metre",1],AXIS["Northing",NORTH],AXIS["Easting",EAST],AUTHORITY["EPSG","3035"]] crs_wkt: PROJCS["ETRS89-extended / LAEA Europe",GEOGCS["ETRS89",DATUM["European_Terrestrial_Reference_System_1989",SPHEROID["GRS 1980",6378137,298.257222101]],PRIMEM["Greenwich",0],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4258"]],PROJECTION["Lambert_Azimuthal_Equal_Area"],PARAMETER["latitude_of_center",52],PARAMETER["longitude_of_center",10],PARAMETER["false_easting",4321000],PARAMETER["false_northing",3210000],UNIT["metre",1],AXIS["Northing",NORTH],AXIS["Easting",EAST],AUTHORITY["EPSG","3035"]] GeoTransform: 2630000 10000 
0 5420000 0 -10000 ``` -------------------------- Reading the file using raster: ``` raster::raster("Test11.nc", var = "Var") class : RasterLayer dimensions : 404, 390, 157560 (nrow, ncol, ncell) resolution : 10000, 10000 (x, y) extent : 2630000, 6530000, 1380000, 5420000 (xmin, xmax, ymin, ymax) crs : +proj=laea +lat_0=52 +lon_0=10 +x_0=4321000 +y_0=3210000 +ellps=GRS80 +units=m +no_defs source : Test11.nc names : Map1.Description z-value : 1 zvar : Var ``` The name of the layer is the `long_name` attribute of the `Var` variable, not the variable name itself. Is it possible to directly force raster to assign the name of the layer to the variable of interest (i.e., without using `names(R) = "Var"`)? ``` names(raster::raster("Test11.nc", var = "Var")) # "Map1.Description" ``` ------------------------ When reading it by terra the name of the layer is incorrect. ``` terra::rast("Test11.nc") class : SpatRaster dimensions : 404, 390, 1 (nrow, ncol, nlyr) resolution : 10000, 10000 (x, y) extent : 2630000, 6530000, 1380000, 5420000 (xmin, xmax, ymin, ymax) coord. ref. : ETRS89-extended / LAEA Europe (EPSG:3035) source : Test11.nc varname : Var (Map1 Description) name : Var_lambert_azimuthal_equal_area=1 unit : UNITS ``` ``` > varnames(terra::rast("Test11.nc")) [1] "Var" > names(terra::rast("Test11.nc")) [1] "Var_lambert_azimuthal_equal_area=1" > longnames(terra::rast("Test11.nc")) [1] "Map1 Description" ``` The name of the layer is the name of the variable concatenated with the CRS dimension `lambert_azimuthal_equal_area` followed by "=1". Which attributes are read by default by terra and raster? I tried explicitly assigning attributes like `varname` or name to the `Var` variable without affecting how terra reads layer name. Any explanation or suggestions?
Unexpected layer name when reading NetCDF file using terra and raster R packages
|r|netcdf|r-raster|terra|ncdf4|
I have an Angular workspace where I build and compile 5 packages. I then install those packages in a separate project. When referencing these builds within the same workspace, everything works without a problem. However, after I pack, publish, and install them in a new project, it no longer works. I have narrowed the code down to this try/catch: no errors are thrown, and no navigation is performed. This code is in a component inside my library.
```typescript
try {
  await this._oidcService.handleCallback();
  const url = sessionStorage.getItem('redirectUrl') ?? '';
  sessionStorage.removeItem('redirectUrl');
  this._router.navigate([url]).then(success => {
    console.log("Navigation successful:", success)
  }).catch(error => {
    console.error("Navigation Error: ", error);
  });
} catch (error) {
  console.error('Error handling sign-in callback:', error);
  // Handle errors, e.g., show a message, log out, navigate to an error page, etc.
  this._router.navigate(['/error'], { queryParams: { error: 'Sign-in callback error' } })
}
```
Neither "Navigation successful" nor "Navigation Error" gets triggered, but I can see that session storage is updating and removing the redirect URL. I have attempted to add "preserveSymlinks": true with no difference. I have logged Navigation Start/End/Error/Cancel; nothing indicates that the router is trying to change routes. The strangest part is that it works within my library workspace; I have a sandbox application set up to use the dist folder. Is anyone able to assist me with this? Is this even a supported feature, i.e. router navigation from a library?
TypeError: unsupported operand type(s) for /: 'property' and 'complex'
|python|python-3.x|oop|debugging|
```
$.ajax({
  url: url,
  type: "POST",
  data: JSON.stringify(data),
  contentType: "application/json; charset=utf-8",
  dataType: "json",
  success: function(){ ... }
})
```
See: [jQuery.ajax()](http://api.jquery.com/jQuery.ajax/)
I've tried to set up Django with Channels to provide notifications to React. I have put my project code here: https://github.com/axilaris/react-django-channels

In backend/backend/settings.py:
```
INSTALLED_APPS = [
    ..
    'daphne',
    'channels',
]

CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels.layers.InMemoryChannelLayer'
    }
}

ASGI_APPLICATION = 'user_api.routing.application'
```
In backend/backend/asgi.py (I didn't touch anything):
```
import os
from django.core.asgi import get_asgi_application
from django.urls import path

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'backend.settings')
application = get_asgi_application()
```
In backend/user_api/routing.py:
```
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path
from . import consumers
from django.core.asgi import get_asgi_application

django_asgi_app = get_asgi_application()

application = ProtocolTypeRouter({
    "http": django_asgi_app,
    "websocket": URLRouter([
        path("ws/notifications/", consumers.NotificationConsumer.as_asgi()),
    ]),
})
```
In backend/user_api/consumers.py:
```
from channels.generic.websocket import AsyncWebsocketConsumer
import json

class NotificationConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        print("XXX connect")
        await self.accept()

    async def disconnect(self, close_code):
        print("XXX disconnect")
        pass

    async def receive(self, text_data):
        print("XXX receive")
        text_data_json = json.loads(text_data)
        message = text_data_json['message']
        await self.send(text_data=json.dumps({
            'message': message
        }))

    async def notification_message(self, event):
        print("XXX notification_message")
        await self.send(text_data=json.dumps(event["text"]))
```
Finally, in React, App.js:
```
useEffect(() => {
    const ws = new WebSocket('ws://localhost:8000/ws/notification/');
    ws.onopen = () => {
        console.log('Connected to notification websocket');
    };
    ws.onmessage = e => {
        const data = JSON.parse(e.data);
        setMessage(data.message);
    };
    ws.onerror = e => {
        console.error('WebSocket error', e);
    };
    ws.onclose = e => {
        console.error('WebSocket closed', e);
    };
    return () => {
        ws.close();
    };
}, []);
```
In views.py (pressing the login submit button should trigger a notification over the websocket to React):
```
class UserLogin(APIView):
    permission_classes = (permissions.AllowAny,)
    authentication_classes = (SessionAuthentication,)

    def post(self, request):
        print("YYY UserLogin")
        logging.debug("XXX UserLogin")
        channel_layer = get_channel_layer()
        async_to_sync(channel_layer.group_send)(
            'notifications',
            {
                'type': 'notification_message',
                'text': 'test send',
            }
        )
        return Response({"email": "a@gmail.com"}, status=status.HTTP_200_OK)
```
Note that React is running on port 3000 and Django on port 8000:
```
% npm start                    <-- React
% python manage.py runserver   <-- Django
```
Logs from Django and React: https://gist.github.com/axilaris/3e3498ae670514c45ba6a36d8511c797

React logs:
```
App.js:79 WebSocket connection to 'ws://localhost:8000/ws/notifications/' failed: WebSocket is closed before the connection is established.
App.js:71 WebSocket error Event {isTrusted: true, type: 'error', target: WebSocket, currentTarget: WebSocket, eventPhase: 2, …}
App.js:75 WebSocket closed CloseEvent {isTrusted: true, wasClean: false, code: 1006, reason: '', type: 'close', …}
localhost/:1 Error while trying to use the following icon from the Manifest: http://localhost:3000/logo192.png (Download error or resource isn't a valid image)
App.js:62 Connected to notification websocket
```
Django logs:
```
System check identified 1 issue (0 silenced).
March 28, 2024 - 11:01:17
Django version 4.1.5, using settings 'backend.settings'
Starting ASGI/Daphne version 4.1.0 development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
WebSocket HANDSHAKING /ws/notifications/ [127.0.0.1:57763]
INFO:django.channels.server:WebSocket HANDSHAKING /ws/notifications/ [127.0.0.1:57763]
WebSocket DISCONNECT /ws/notifications/ [127.0.0.1:57763]
INFO:django.channels.server:WebSocket DISCONNECT /ws/notifications/ [127.0.0.1:57763]
XXX connect
XXX disconnect
WebSocket HANDSHAKING /ws/notifications/ [127.0.0.1:57767]
INFO:django.channels.server:WebSocket HANDSHAKING /ws/notifications/ [127.0.0.1:57767]
XXX connect
WebSocket CONNECT /ws/notifications/ [127.0.0.1:57767]
INFO:django.channels.server:WebSocket CONNECT /ws/notifications/ [127.0.0.1:57767]
YYY UserView
HTTP GET /api/user 200 [0.02, 127.0.0.1:57761]
INFO:django.channels.server:HTTP GET /api/user 200 [0.02, 127.0.0.1:57761]
YYY UserView
HTTP GET /api/user 200 [0.00, 127.0.0.1:57761]
INFO:django.channels.server:HTTP GET /api/user 200 [0.00, 127.0.0.1:57761]
YYY UserLogout
HTTP POST /api/logout 200 [0.01, 127.0.0.1:57761]
INFO:django.channels.server:HTTP POST /api/logout 200 [0.01, 127.0.0.1:57761]
YYY UserLogin
HTTP POST /api/login 200 [0.00, 127.0.0.1:57761]
INFO:django.channels.server:HTTP POST /api/login 200 [0.00, 127.0.0.1:57761]
```
UPDATE: I think I got something working. Daphne starts up and there seems to be a connection, but it gets aborted. Pressing the login button should send the websocket notification, but I don't see any message received on the React side.
The problem was caused by invalid global Gradle caches; when I cleaned them up, everything worked. To clean up the global Gradle caches, use:
```
rm -rf $HOME/.gradle/caches/
```
I've got a problem running Invoke-Command on a remote computer with WinRM up and running. I've configured and started WinRM, but an Invoke-Command task performing a silent install only succeeds when a user is logged in on the remote PC. Without an active user session, the task simply hangs and nothing happens; with a user session, everything works as expected. I've checked everything inside the script block and it's correct. I've tried running the task after adding the user to the local Administrators group, and as a Domain Admin; the result was the same. Any suggestions? Thanks. Part of my code is below:
```
# Create session
$session = New-PSSession -ComputerName $computer

# Install the application on the remote computer
Invoke-Command -Session $session -ScriptBlock {
    Start-Process -FilePath "c:\VMware-Horizon-Agent-x86_64-2312-8.12.0-23142606.exe" -ArgumentList "/s /v`"/qn ADDLOCAL=Core,USB,RTAV,VmwVaudio,GEOREDIR,V4V,PerfTracker,HelpDesk,PrintRedir REMOVE=ClientDriveRedirection`"" -Wait
}
```
I tried to silently install the software via Invoke-Command and expected the command to run successfully.
Invoke-Command works only when a user is logged in
|powershell|invoke-command|winrm|silent-installer|
I have a PDF document with radio responses like the attached screenshot. I want to extract only the selected response, through Python or any OCR technique. Is there any way of doing it? (https://i.stack.imgur.com/3fXu6.png) I have tried pdfplumber, pdfminer, and pytesseract, but they are not able to extract only the selected response.
Is there any OCR or technique that can recognize/identify radio buttons printed out in the form of pdf document?
|python|nlp|ocr|large-language-model|information-extraction|
I wish to determine, at runtime, whether CUDA-aware MPI is about to `sendrecv` data directly between VRAM of my GPUs, or whether it is going to silently fall-back to first cloning the data to RAM. I wish to avoid the latter scenario, because I can do that cloning _faster_. I need to perform this check in a robust way for integration into a library. # Context My distributed `C++` application involves swapping significant amounts of data (e.g. `64 GiB`) between the VRAM of (very beefy) CUDA GPUs. A user can compile with "regular" MPI, *or* CUDA-aware MPI, and the communicating code logic resembles: ``` function swapGPUArrays(): if MPI is CUDA-aware: exchange VRAM pointers directly else: cudaMemcpy VRAM to RAM exchange RAM pointers cudaMemcpy RAM to VRAM ``` ``` function exchangeArrays(): partition arrays into maximum-sized MPI messages (about `16 GiB`) asynchronously send/recv each message wait for all asynchs to finish ``` The code uses [this check](https://www.open-mpi.org/faq/?category=runcuda#mpi-cuda-aware-support) to determine if the MPI-compiler is CUDA-aware (and can ergo directly sendrecv CUDA device memory) and otherwise falls back to copying the device memory to permanent RAM arrays, which are then exchanged. Note that because the exchanged memory is too large for a single MPI message, it is divided into several messages; these are asynchronously exchanged (so their transmission can occur simultaneously), as per [this work](https://arxiv.org/abs/2308.07402). This code works great in the following scenarios: - a non-CUDA aware MPI compiler is used; the memory is exchanged through RAM - a UCX-enabled CUDA-aware MPI compiler is used; the VRAM pointers are directly exchanged, and behind the scenes, are done so using optimised methods (e.g. peer-to-peer direct inter-GPU communication, when permitted by things like NVLink). 
# Problem Consider the scenario where a user compiles this code _with_ CUDA-aware MPI, but their GPUs are _not_ directly connected to the network/interconnect. This means that at runtime, the calls to the CUDA-aware MPI's `sendrecv` will secretly route the messages through RAM, in a similar spirit to my code above. This runs correctly, but alas, they are at a performance disadvantage! - In my manual copy, I `cudaMemcpy` the _entirety_ of the data from VRAM to RAM in one call. I then subsequently split the data into separate inter-RAM messages. - The CUDA-aware MPI makes the VRAM-to-RAM copies _for each_ message. Execution has already reached function `exchangeArrays()` and divided the payload into messages before MPI opts to copy them to RAM. This means that letting CUDA-aware MPI "take the wheel" results in a slower RAM-to-VRAM copy, by virtue of being split into many smaller copies. For my testing, this is about `2x` as slow. # Sought solution If I knew in advance that the CUDA-aware MPI was going to route through RAM anyway, I could opt to do it myself through a _single_ `cudaMemcpy` call. So I seek a function like ``` isMpiGoingToRouteThroughRAM() ``` which I could then query to change my original `if` statement to: ``` if (MPI is CUDA-aware) and (not isMpiGoingToRouteThroughRAM()): ... ``` I ideally need to perform this check at runtime, rather than compile time, since a user may compile and deploy on distinct machines. I appreciate such a check may prove MPI-compiler specific. It is my intention to support _all_ major MPI compilers (certainly at least OpenMPI and MPICH) using pre-processor guards. All suggestions welcome! # Related question When a CUDA-aware MPI falls back to routing through RAM, it surely needs RAM buffers to send/receive the messages to/from. I do not pass any RAM pointer to the `sendrecv` call - is it wastefully allocating and destroying temporary RAM? That'd be another catastrophe I wish to avoid!
The mistake is that the result set from that query has _two columns_. Otherwise, this dataset is a poor choice for demonstrating the JOIN types, because there are no values that are unique to A and B, so the OUTER join types will always closely resemble the INNER join types. Really, the _only_ difference for the INNER join is excluding the `null` rows, because `null` is never equal to itself. You might also change the ordering of the sets for left vs right, but without an `ORDER BY` clause the order is meaningless and you are free to write the results in whatever order you want... and many databases will often silently rewrite a RIGHT JOIN as the equivalent LEFT JOIN and give you the results as if you'd written it that way in the first place... ordering and all. Finally, if I were to list the five basic join types, I would write: "left, right, inner, cross, and full". I would NOT list "outer", because it's already included as part of left, right, and full. More recently I might also include a sixth LATERAL type (APPLY in SQL Server).
Use case: I have an SPSC queue in a multi-threaded setup, where I want to prefetch the write_index's mempool on a successful pop. The following is my original implementation:
```c++
void process() {
    if (spsc_queue->pop()) {
        // Do some pre-processing
        auto r_index = spsc_queue->read_index.load(memory_order_relaxed);
        auto val = mempool_[r_index + 1];
        // logic to use val follows
    }
}
```
I have seen latency improvements when I change the setup to include data prefetching:
```c++
void process() {
    if (spsc_queue->pop()) {
        auto r_index = spsc_queue->read_index.load(memory_order_relaxed);
        __builtin_prefetch(mempool_ + r_index + 1, 0, 0);
        // Do some pre-processing
        auto val = mempool_[r_index + 1];
        // logic to use val follows
    }
}
```
The issue: given that pop will be inlined, I am not sure whether compiler / processor reordering is shifting the prefetch instruction to execute even if pop fails. Consider:
```c++
bool pop() {
    auto w_index = write_index.load(memory_order_acquire);
    auto r_index = read_index.load(memory_order_relaxed);
    // What if __builtin_prefetch is reordered to before this return?
    if (w_index == r_index) return false; // early return for pop fail
    // update other structs
    return true;
}
```
I want to ensure that the prefetch only occurs if pop() returns true, not otherwise.

Things I tried:

1. Using mfence: if I place mfence just after the early return, it is bound to prevent reordering at both the compiler and processor level. But that is going to cost me additional latency, since mfence is a costly instruction.
2. Measuring the cache accesses on the pop-fail path. If they rise, it is possible that the prefetch is being reordered at the processor level (not conclusive).
3. Compiler reordering can be prevented using compiler barriers. The remaining issue is processor-level reordering.
4. Accessing a volatile after the early return, and then issuing the prefetch instruction. From my reading, it seems that volatile accesses only synchronize with other volatile accesses, so in my case this won't be helpful.

Is there any deterministic way I can be sure that the prefetch instruction isn't executed on the pop-fail path?

Compiler:
```
$ gcc -v
gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Target: x86_64-linux-gnu
```
TIA.
Need to get a number using only one for loop and `charCodeAt()`. We can't use native methods or concatenation: no parseInt, parseFloat, Number, +str, 1 * str, 1 / str, 0 + str, etc.

```
const text = "In this market I lost 0,0000695 USDT, today is not my day";

const parseBalance = (str) => {
  const zero = "0".charCodeAt(0);
  const nine = "9".charCodeAt(0);
  const coma = ",".charCodeAt(0);
  let num = 0, factor = 1;
  for (let i = str.length - 1; i >= 0; i--) {
    const char = str.charCodeAt(i);
    if (char >= zero && char <= nine) {
      num += (char - 48) * factor;
      factor *= 10;
    }
  }
  return num;
};

console.log(parseBalance(text));
```

Need result: `0,0000695`. My current result is `695`. Tell me how to correct the formula to account for the zeros.
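For what it's worth, one way the loop above could be extended to honour the comma is sketched below, under the same constraints (a single for loop, `charCodeAt()`, no native parsing). The idea: while scanning from the right, when the decimal comma is reached, everything accumulated so far is treated as the fractional part by dividing it by the current factor. This is a sketch with assumptions: it returns the numeric value 0.0000695 (not the string "0,0000695", since concatenation is off-limits), and it assumes the string holds a single comma-decimal number; a comma with no digits accumulated to its right (like the one after "USDT") is harmless because num is still scaled by 1, but thousands-grouping commas would misparse.

```javascript
const parseBalance = (str) => {
  const zero = "0".charCodeAt(0);
  const nine = "9".charCodeAt(0);
  const comma = ",".charCodeAt(0);
  let num = 0, factor = 1;
  for (let i = str.length - 1; i >= 0; i--) {
    const char = str.charCodeAt(i);
    if (char >= zero && char <= nine) {
      num += (char - zero) * factor;
      factor *= 10;
    } else if (char === comma) {
      // Digits gathered so far were the fractional part: scale them down.
      num /= factor;
      factor = 1;
    }
  }
  return num;
};
```

Dividing by `factor` (a number) stays within the stated rules, which only forbid arithmetic on the string itself.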
Yeah, you just have to use a forEach (in your OrderServiceImpl getAllOrders() function) so that you can apply entityToResponseModel to each of your entries individually, instead of using entityListToResponseModelList, in order to properly and completely map each order.
Ended up just getting rid of the damned function and implementing it in my service implementation. Leaving the code here for anyone who might need it! entityListToResponseModelList seems to get confused whenever it needs to list multiple aggregates/entities.
```
@Override
public List<OrderResponseModel> getOrders() {
    List<OrderResponseModel> orderResponseModelList = new ArrayList<>();
    List<Order> orders = orderRepository.findAll();
    orders.forEach(order -> {
        if (!productRepository.existsProductByProductIdentifier_ProductId(order.getProductIdentifier().getProductId()))
            throw new InvalidInputException("Invalid input for productId: " + order.getProductIdentifier().getProductId());
        if (!customerRepository.existsCustomerByCustomerIdentifier_CustomerId(order.getCustomerIdentifier().getCustomerId()))
            throw new InvalidInputException("Invalid input for customerId: " + order.getCustomerIdentifier().getCustomerId());
        if (!employeeRepository.existsEmployeeByEmployeeIdentifier_EmployeeId(order.getEmployeeIdentifier().getEmployeeId()))
            throw new InvalidInputException("Invalid input for employeeId: " + order.getEmployeeIdentifier().getEmployeeId());
        orderResponseModelList.add(orderResponseMapper.entityToResponseModel(
                order,
                productRepository.findByProductIdentifier_ProductId(order.getProductIdentifier().getProductId()),
                customerRepository.findByCustomerIdentifier_CustomerId(order.getCustomerIdentifier().getCustomerId()),
                employeeRepository.findByEmployeeIdentifier_EmployeeId(order.getEmployeeIdentifier().getEmployeeId())));
    });
    return orderResponseModelList;
}
```
I want to add an information column on the right side of the stacked bar area (see [the attached image][2]). For example, **Do at all** combines **Often** and **Occasionally**. I want to do this in R.

# Create the data frame
```
library(tidyverse)

data <- data.frame(
  event = c("Talking with family", "Donating good", "Posting on social media",
            "Give interview", "Volunteering for a charity",
            "Speaking with a journalist", "Participating in protest"),
  lk1 = c(23, 20, 21, 40, 36, 40, 23),
  lk2 = c(10, 36, 30, 12, 12, 12, 25),
  lk3 = c(36, 20, 37, 36, 40, 36, 40),
  lk4 = c(31, 24, 12, 12, 12, 12, 12),
  lk1_lk2 = c(33, 56, 51, 52, 48, 52, 48)
)
```
# Plotting
```
ggplot(data_long, aes(x = value, y = event, fill = likert)) +
  geom_bar(stat = "identity") +
  geom_text(aes(label = value), position = position_stack(vjust = 0.5),
            color = "white", size = 3) +
  geom_text(data = data_long %>% filter(likert == "lk1"),
            aes(label = lk1_lk2, x = max(value) + 2, y = event),
            hjust = -50, color = "black", size = 3) +
  scale_x_continuous(labels = scales::percent_format(),
                     expand = expand_scale(add = c(1, 10))) + # Adjusting x-axis limits
  labs(title = "Preference Distribution for Various Events",
       x = "Percentage", y = "Event") +
  theme_minimal() +
  theme(legend.position = "top") +
  geom_text(data = NULL, aes(x = Inf, y = Inf, label = "Do at All"),
            hjust = 1, vjust = 1, color = "black") + # Add text at right-top corner
  scale_fill_discrete(labels = c("lk1" = "Often", "lk2" = "Occasionally",
                                 "lk3" = "Never", "lk4" = "Prefer not to say"))
```
[1]: https://i.stack.imgur.com/ojtZy.jpg
[2]: https://i.stack.imgur.com/TbGJG.jpg
I am trying to add the code below to an existing application that uses GStreamer 1.22.1. It compiles but gives a link error: >/home/src/webrtc/gst/gst.c:99: undefined reference to `gst_rtp_buffer_map' What am I missing? This is the build command: ``` meson \ -Dbuildtype=release \ -Drs=disabled \ -Dtests=disabled \ -Dexamples=disabled \ -Drtsp_server=disabled \ -Dges=disabled \ -Dgst-examples=disabled \ -Ddoc=disabled \ -Dgtk_doc=disabled \ -Dpython=disabled \ -Dqt5=disabled \ -Dgstrtp=enabled \ -Dgst-plugins-bad:openh264=disabled \ -Dugly=${x264_flag} \ -Dgpl=${x264_flag} \ -Dlibav=enabled \ -Dbase=enabled \ -Dgood=enabled \ -Dbad=enabled \ -Dgst-plugins-good:rtp=enabled \ -Dgst-plugins-base:ogg=disabled \ -Dgst-plugins-base:vorbis=disabled \ -Dgst-plugins-good:jpeg=disabled \ -Dgst-plugins-good:lame=disabled \ -Dgst-plugins-bad:rtp=enabled \ -Dgst-plugins-bad:webrtc=enabled \ -Dgst-plugins-bad:va=enabled \ -Dgst-plugins-bad:dvb=disabled \ -Ddevtools=disabled \ --prefix=/opt/local/build/gst/ builddir && ninja -C builddir && ninja -C builddir install && ldconfig ``` And this is the new code: ``` #include <gst/rtp/gstrtpbuffer.h> ... if (gst_rtp_buffer_map (buffer, GST_MAP_READ, &rtp)) { rtptime = gst_rtp_buffer_get_timestamp (&rtp); seqnum = gst_rtp_buffer_get_seq (&rtp); gst_rtp_buffer_unmap (&rtp); } ```
Gstreamer + meson: Undefined reference to `gst_rtp_buffer_map'
|gstreamer|meson-build|
I have an AWS auto-scaling group using the docker-autoscaler executor, with the desired capacity set to 1, min set to 0, and max set to 15. For some reason my GitLab jobs are not executing concurrently on the same instance. Once the EC2 instance finishes running a job, the instance terminates and the desired capacity is decremented to 0 (the min). The remaining jobs in my GitLab pipeline are left waiting for an instance until they time out. I also receive an error from my gitlab-runner.service that shows the following:

*builds=0 error=failed to update executor: reserving taskscaler capacity: no capacity: no immediately available capacity executor=docker-autoscaler*

In order to get the other jobs to run, I have to manually update the desired capacity again. I've also tried setting the min to 1 and the desired to 1, but it still terminates my instance, and the GitLab runner logs an error that it tried to decrement the desired capacity but couldn't because it is already at the minimum value.
AWS ASG desired capacity decrements to minimum and terminates instance before completing all gitlab jobs
|amazon-web-services|gitlab-ci-runner|
I'm trying to run a Gamma analysis on self-paced reading data. However, the model repeatedly fails to converge. I've seen some answers here trying to solve this problem for other people, but none of the solutions was feasible for me. I couldn't include an example of my data because it's too long (more than 9k rows). It comes from a self-paced reading experiment in which TR is the response time, and I want a Gamma model (since the data are not normal at all, but right-skewed) to model TR as a function of cond (condition; there were 3 in the experiment: C1, C2 and C3) and occurrence (there were also 3: OC1, OC2 and OC3). I've already tried using the Nelder_Mead optimizer instead of the default optimizer, but it doesn't seem to work either. Take a look at my models:

`mod_cond*oc <- glmer(TR ~ cond*oc + (1|item) + (1|sujeito), data=dadoslimpos5, family=Gamma(link="identity"))`

`mod_cond*oc <- glmer(TR ~ cond*oc + (1|item) + (1|sujeito), data=dadoslimpos5, family=Gamma(link="identity"), control = glmerControl(optimizer ="Nelder_Mead"))`

Neither of them seems to work. I got these messages for each of them, respectively:

Warning message: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model failed to converge with max|grad| = 0.0484605 (tol = 0.002, component 1)

Warning message: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model failed to converge with max|grad| = 0.0471255 (tol = 0.002, component 1)

Is there any solution to this problem? If someone needs my data to take a look at, I've posted the clean data (only regions of interest) [here](https://docs.google.com/document/d/1HS6RByo0E7eyBTJv0-DGQfgrhWS1Vxq8Ld2R3n5Xtpk/edit?usp=sharing). I've already published these data before using mixed models, but now I need to analyze them with a Gamma model, and I expect the model to converge without warnings. Thanks a lot!
I'd like to know how to insert a column into an array. I know the push function to add a row, but not how to add a column like this:
```none
A ; 1
B ; 2
C ; 3
D ; 4
```
To:
```none
A ; A1 ; 1
B ; B2 ; 2
C ; C3 ; 3
D ; D4 ; 4
```
I guess I will have to loop over each row and insert an item between two items, but is there a way to do it in bulk?
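Assuming the array is an array of rows (as it would be coming from a spreadsheet-style getValues() call, which is an assumption here), a single map can build the new column in bulk instead of splicing row by row. This is only a sketch; the rule for the inserted value (first cell joined with the second cell's value) is inferred from the example above:

```javascript
const rows = [
  ["A", 1],
  ["B", 2],
  ["C", 3],
  ["D", 4],
];

// Build a new 2-D array with an extra column inserted at index 1.
// The inserted value combines the first cell with the second cell's value.
const withColumn = rows.map(([letter, n]) => [letter, letter + n, n]);
// withColumn → [["A", "A1", 1], ["B", "B2", 2], ["C", "C3", 3], ["D", "D4", 4]]
```

For an arbitrary column index and value rule, `row.toSpliced(index, 0, value)` (in newer runtimes) or a slice/concat of each row would generalize the same one-pass approach.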
I have a static website on Firebase with a domain connected. How can I deploy another small website to the same domain so the URL looks something like example.com/weather? Should I create another app inside the same Firebase project? Is it required to integrate this weather app inside the main website, or is it impossible to achieve it like this?
Hosting another static website to firebase
|firebase|
I am including my code for your reference, along with a screenshot of the app when launched.
```
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/main"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:gravity="bottom"
    android:orientation="vertical"
    android:padding="0dp"
    tools:context=".dashboard">

    <com.google.android.material.bottomnavigation.BottomNavigationView
        android:layout_width="match_parent"
        android:layout_height="60dp"
        android:background="#6000BCD4"
        android:padding="0dp"
        app:itemIconSize="20dp"
        app:itemPaddingBottom="0dp"
        app:itemPaddingTop="0dp"
        app:menu="@menu/bottom_nav_value" />

</LinearLayout>
```
[This is what I am getting](https://i.stack.imgur.com/49iuA.jpg)

I want to remove the extra spacing below the navigation view icons.
I am not able to remove the space below the navigation view icon in Android Studio. What's wrong with my code?
|java|android|xml|
null
Just type this in your terminal:

    flutter config --android-sdk <path-of-your-sdk>

For example:

    flutter config --android-sdk c:\android\sdk

Let me know whether this solves your problem.

**Happy Coding :)**
It is possible using a [lateral subquery](https://www.postgresql.org/docs/current/queries-table-expressions.html#QUERIES-LATERAL) and `jsonb_array_elements`. I assume that your `events` column type is JSONB rather than JSON.

```sql
with t as
(
  select brand, (j ->> 'timestamp')::timestamp last_bar_timestamp
  from the_table, lateral jsonb_array_elements(events -> 'data') j
  where j ->> 'event' = 'bar'
)
select distinct on (brand) *
from t
order by brand, last_bar_timestamp desc;
```

[DB Fiddle](https://www.db-fiddle.com/f/sejUN3iVUgLgoYo5WkBnu9/1)

Unrelated, but this would be way easier and faster with a normalized data design.

**Edit**

For "order by last_bar_timestamp only" you need a subquery, because `order by brand, last_bar_timestamp desc` determines [which record is picked](https://stackoverflow.com/questions/9795660/postgresql-distinct-on-with-different-order-by) for each brand, i.e. the latest. The CTE that was there for clarity in the first version is removed.

```sql
select *
from
(
  select distinct on (brand)
    brand, (j ->> 'timestamp')::timestamp last_bar_timestamp
  from the_table, lateral jsonb_array_elements(events -> 'data') j
  where j ->> 'event' = 'bar'
  order by brand, last_bar_timestamp desc
) t
order by last_bar_timestamp;
```
Router.Navigate from Angular Library Doesn't Work
|angular|typescript|routes|router|
I'm currently working on a project that involves processing large volumes of textual data for natural language processing tasks. One critical aspect of my pipeline involves string matching, where I need to efficiently match substrings within sentences against a predefined set of patterns. Here's a mock example to illustrate the problem with following list of sentences: ```python sentences = [ "the quick brown fox jumps over the lazy dog", "a watched pot never boils", "actions speak louder than words" ] ``` And I have a set of patterns: ```python patterns = [ "quick brown fox", "pot never boils", "actions speak" ] ``` My goal is to efficiently identify sentences that contain any of these patterns. Additionally, I need to tokenize each sentence and perform further analysis on the matched substrings. Currently, I'm using a brute-force approach with nested loops, but it's not scalable for large datasets. I'm looking for more sophisticated techniques or algorithms to optimize this process. How can I implement string matching for this scenario, considering scalability and performance? Any suggestions would be highly appreciated!
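For large pattern sets, the classic answer is an Aho-Corasick automaton (available in Python via the third-party `pyahocorasick` package), which scans each sentence once regardless of how many patterns there are. As a dependency-free starting point, a single compiled regex alternation already removes the nested loops, since every sentence is scanned once against all patterns combined:

```python
import re

sentences = [
    "the quick brown fox jumps over the lazy dog",
    "a watched pot never boils",
    "actions speak louder than words",
]
patterns = [
    "quick brown fox",
    "pot never boils",
    "actions speak",
]

# One compiled pattern: each sentence is scanned a single time,
# instead of once per pattern as in the brute-force nested loop.
combined = re.compile("|".join(re.escape(p) for p in patterns))

matches = {}
for sentence in sentences:
    hit = combined.search(sentence)
    if hit:
        # hit.group(0) is the matched substring, available for
        # tokenization and further analysis.
        matches[sentence] = hit.group(0)

print(matches)
```

`search` returns only the first match per sentence; use `combined.finditer(sentence)` if a sentence may contain several patterns and you need all of them.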
How to make pattern matching efficient for large text datasets in Python
|python|substring|
I've got this problem when calling a Firestore function from AngularFire. After I'm successfully authenticated, I query Firestore for the current user's document for additional data associated with the user.

```typescript
// In App.Component
ngOnInit(): void {
    this.afAuth.onAuthStateChanged((user) => {
      if (user) {
        // Some firestore call like
        // await this.firestore.collection<T>(collectionName).ref.where(fieldName, '==', fieldValue).get();
        // --> FirebaseError: [code=permission-denied]: Missing or insufficient permissions.
      }
    });
}
```

I don't understand, because my Firestore rules are:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if true;
    }
  }
}
```

so read and write are permitted to everyone. I've been facing this error since I enforced security as the Firebase console suggested. As far as I remember, only specific applications are accepted now, but I don't know how or where to register recognized applications, or where to find the documentation about that.

Is it related to https://firebase.google.com/docs/projects/iam/permissions ?
FirebaseError: [code=permission-denied]: Missing or insufficient permissions. even if firestore rules read and write are permitted
|firebase|google-cloud-firestore|angularfire2|
Model failed to converge (gamma model, self-paced reading data)
|r|statistics|convergence|gamma|
null
I have been getting into Angular, and this site has really helped me build an application. I have built a dynamic form in Angular from JSON data, which creates a form dynamically with different controls, validations, properties, etc. However, I want to make it even more dynamic. For example, one of the control types in the JSON is, say, "Picklist"; in this case I want to create a dropdown, pull its values from the DB, and bind them to the dropdown control (assuming that I already have the APIs).

The question is: how do I store the dropdown values dynamically across the application? Say my JSON has 2 picklist dropdowns, Country and State, whose values will be fetched from APIs. I need to dynamically call the respective APIs and store the arrays of values somewhere so they load when the form loads. In this case I think I have to use arrays and make the subscribe call non-async so that I can store the values at the respective array index.
I have been trying to disable the Compute Engine API and Notebooks API but have had no luck. I have tried the following:

> via the API services page
> via gcloud commands (gcloud services disable compute.googleapis.com --force)

Neither works. I got the attached error when I tried via the API services page:

[![enter image description here][1]][1]

All resources have been deleted in both Compute Engine and Notebooks, so I am not sure what is still holding it up. Has anyone experienced this before?

[1]: https://i.stack.imgur.com/B9RXd.png
Unable to disable the Compute Engine and Notebooks APIs on GCP
|google-cloud-platform|google-compute-engine|
In my case, the imported module name collided with a local variable of the same name:

```
# import from the module
from expiry_type import get_expiry_type

# local variable with the same name as the imported module
expiry_type
```
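A minimal, runnable reproduction of that kind of shadowing (the module and variable names here are illustrative, using the standard library's `json`):

```python
import json

def parse_broken(text):
    # The local assignment shadows the imported json module inside this
    # function, so the call below fails at runtime instead of parsing.
    json = None
    return json.loads(text)

try:
    parse_broken('{"a": 1}')
except AttributeError as exc:
    print("shadowed:", exc)  # 'NoneType' object has no attribute 'loads'
```

Renaming either the local variable or the import (e.g. `import json as json_lib`) resolves the collision.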
We have a lot of code like the following:

```
int functionName(int arg)
{ // Some comment
    if (condition)
    { // Why are we here?
        ...
    }
}
```

Is there a setting to make clang-format put the comments following the open braces on a new line? E.g.:

```
int functionName(int arg)
{
    // Some comment
    if (condition)
    {
        // Why are we here?
        ...
    }
}
```

We already use clang-format, but I can't see how to achieve this. Any help much appreciated!
Make clang-format put trailing comments onto newline after opening brace
|c++|clang-format|
I use Doctrine (version 3.1) and I have some EXTRA_LAZY associations. OneToMany and OneToOne associations work well, but when I try an EXTRA_LAZY ManyToMany association, the ->matching($criteria) method behaves strangely. Criteria::expr()->contains creates invalid SQL: it adds "AND te.name CONTAINS ?" to the SQL request. The other associations (OneToOne and OneToMany) create valid SQL with a LIKE '%...%'. Do you know how to solve this problem?

After declaring a ManyToMany association in my class A:

```
#[ORM\JoinTable(name: 'association_A_B')]
#[ORM\JoinColumn(name: 'A_id', referencedColumnName: 'A_id')]
#[ORM\InverseJoinColumn(name: 'B_id', referencedColumnName: 'B_id')]
#[ORM\ManyToMany(targetEntity: B::class, fetch: "EXTRA_LAZY")]
private Collection $b;
```

this code:

```
$criteria = Criteria::create();
$criteria->where(Criteria::expr()->contains("name", "searchString"));
return $this->b->matching($criteria);
```

fails with this error:

> An exception occurred while executing a query: SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CONTAINS

To better understand the problem, I modified the Doctrine code to hard-replace the CONTAINS with a LIKE: it seems to work fine. I don't understand why Doctrine doesn't handle CONTAINS correctly in this case.
Hopefully this will save you some time as well.

Ensure that you have all of these installed: firebase_messaging, firebase_core, http, googleapis_auth.

Also, download the service account file from the Firebase console by following these steps:

1. Navigate to your project on the Firebase console
2. Select the gear icon (Project settings)
3. Select the Service accounts tab
4. Generate the file
5. Locate the file on your local machine
6. Finally, use those values to fill the "FILL_UP" placeholders in the generateFCMAccessToken method below

```dart
static Future<String> generateFCMAccessToken() async {
  try {
    /* Get these details from the service account file you downloaded
       (generated) from the Firebase console */
    String type = "FILL_UP";
    String project_id = "FILL_UP";
    String private_key_id = "FILL_UP";
    String private_key = "FILL_UP";
    String client_email = "FILL_UP";
    String client_id = "FILL_UP";
    String auth_uri = "FILL_UP";
    String token_uri = "FILL_UP";
    String auth_provider_x509_cert_url = "FILL_UP";
    String client_x509_cert_url = "FILL_UP";
    String universe_domain = "FILL_UP";

    final credentials = ServiceAccountCredentials.fromJson({
      "type": type,
      "project_id": project_id,
      "private_key_id": private_key_id,
      "client_email": client_email,
      "private_key": private_key,
      "client_id": client_id,
      "auth_uri": auth_uri,
      "token_uri": token_uri,
      "auth_provider_x509_cert_url": auth_provider_x509_cert_url,
      "client_x509_cert_url": client_x509_cert_url,
      "universe_domain": universe_domain
    });

    List<String> scopes = [
      "https://www.googleapis.com/auth/firebase.messaging"
    ];

    final client = await obtainAccessCredentialsViaServiceAccount(
        credentials, scopes, http.Client());

    final accessToken = client;

    // Refresh the token shortly before it expires (tokens last 60 minutes)
    Timer.periodic(const Duration(minutes: 59), (timer) {
      accessToken.refreshToken;
    });

    return accessToken.accessToken.data;
  } catch (e) {
    Reuse.logger.i("This is the error: $e");
  }
  return "";
}
```
I want to create an Android application that, while my own TTS (text-to-speech) function is speaking, listens to the surrounding sounds and triggers an event when it detects someone else speaking. My idea is to use an FFT to compare the original TTS audio with external user voices. Is there any method to achieve this?

I have tried to use the logic below to compare the FFTs generated by JTransforms:

```
private fun calculateCosineSimilarity(vector1: DoubleArray, vector2: DoubleArray): Double {
    if(vector1.size != vector2.size){
        return 0.0
    }
    var dotProduct = 0.0
    var norm1 = 0.0
    var norm2 = 0.0

    var i = 0
    while (i < vector1.size) {
        dotProduct += vector1[i] * vector2[i] + vector1[i + 1] * vector2[i + 1]
        norm1 += Math.pow(vector1[i], 2.0) + Math.pow(vector1[i + 1], 2.0)
        norm2 += Math.pow(vector2[i], 2.0) + Math.pow(vector2[i + 1], 2.0)
        i += 2
    }
    return dotProduct / (Math.sqrt(norm1) * Math.sqrt(norm2))
}
```
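As a cross-check of the similarity math (shown in Python rather than Kotlin, purely for brevity), note that the Kotlin loop steps by 2 and reads `vector[i + 1]`, so it implicitly assumes even-length arrays in JTransforms' interleaved real/imaginary layout — an odd length would index out of bounds. A sketch with that assumption made explicit, and with a guard against zero-norm vectors:

```python
import math

def cosine_similarity(v1, v2):
    # Assumes both vectors have the same, even length, matching the
    # interleaved real/imaginary layout produced by an FFT library.
    if len(v1) != len(v2) or len(v1) % 2 != 0:
        return 0.0
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    if norm1 == 0.0 or norm2 == 0.0:
        return 0.0  # avoid division by zero on silent frames
    return dot / (norm1 * norm2)

print(cosine_similarity([1.0, 0.0, 1.0, 0.0], [1.0, 0.0, 1.0, 0.0]))  # 1.0
```

Summing term by term over the interleaved pairs gives the same result as the paired loop in the Kotlin version, since cosine similarity does not care how the components are grouped.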
Detect the voices of multiple people speaking
Try this query example. Since I don't know your DB structure (it was not provided) or example data, I made my own assumptions. I think this is what you are looking for:

```
New With {Key.a = af.AppfolderiD.Value, Key.v = isLoggedInUser_ID_Role}
```

Code:

```vbnet
Dim isLoggedInUser_ID_Role As Integer = 1

Dim GBItems = (
    From af In ApplicationFolders
    Join ar In AppFolderRoleConfig On New With {Key.a = af.AppfolderiD.Value, Key.v = isLoggedInUser_ID_Role} Equals New With {Key.a = ar.AppFolderID.Value, Key.v = ar.RoleID.Value}
    Where af.ParentID = 0 AndAlso af.TypeID = 1
    Select New HomeTree() With {
        .ID_Key = af.ID_Key,
        .AppFolderId = af.AppfolderiD,
        .Name = af.Name
        '...
    })

GBItems.Dump()
```
Which ESLint rules enforce the following correction?

```typescript
type Incorrect = Array<
    number
>;

type Correct = Array<number>;
```

Every @typescript-eslint / @stylistic rule I have tried dodges it.
null
I don't quite understand your code or your example, but I think (hope?) the following code sample shows (the principle of) how you can accomplish what you need: from matplotlib import pyplot as plt import matplotlib.colors as mcolors import shapely from shapely.plotting import plot_line, plot_polygon poly1 = shapely.box(2, 0, 4, 3) poly2 = shapely.box(0, 1, 2, 2) lines = [] # Intersecting lines intersecting_lines = poly1.boundary.intersection(poly2.boundary) lines.extend(shapely.get_parts(shapely.line_merge(intersecting_lines))) # Non intersecting boundaries lines.extend( shapely.get_parts(shapely.line_merge(poly1.boundary.difference(intersecting_lines))) ) lines.extend( shapely.get_parts(shapely.line_merge(poly2.boundary.difference(intersecting_lines))) ) # Plot fig, ax = plt.subplots(ncols=2, figsize=(15, 15)) plot_polygon(poly1, ax=ax[0], color="red") plot_polygon(poly2, ax=ax[0]) colors = [] for line, color in zip(lines, mcolors.TABLEAU_COLORS): plot_line(line, ax=ax[1], color=color) plt.show() Plotted image with input left, output right: [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/uhtSQ.png
Google App Engine no longer supports Java 8. I am using Eclipse and used it in combination with the Google App Engine tools; testing and deploying was very simple. I am looking for a method to easily migrate my current code to Java 21 while still using Eclipse.

Is there a way to migrate a Java Eclipse project to Java 21 and keep the testing and deployment process simple? Looking at the documentation, I see a lot of frameworks and tools like Maven and Spring. I am hesitant to adopt several new tools that might not work with my setup. Is there an easy and simple way to keep making changes to my Java application under Eclipse or any other IDE? I am happy to change code and do the migration work; I just don't want to learn 100 deployment steps and then figure out that a change to Spring will not work with my setup. Thank you.

I tried to install Eclipse with the new Google Cloud tools, but I cannot figure out how to simply migrate my application.
Migrating Google App Engine - Eclipse Java 8
How to detect at runtime when CUDA-aware MPI will transmit through RAM?
|c++|cuda|mpi|ucx|
The reason that your property is never set and the `OnFilenameChanged()` method never gets called is that the `QueryProperty` doesn't match any of the properties in your *EditViewModel*. The problem specifically is in this line: ```c# await Shell.Current.GoToAsync($"{"Edit"}?Filename_{s}"); ``` Note how you are missing an `=` character. It should be ```c# await Shell.Current.GoToAsync($"{"Edit"}?Filename_={s}"); ``` Also note that only the value of `s` will be passed, the argument will not include `Filename_` in it. Generally, I would recommend to not use the underscore (`_`) and change the `QueryProperty` to this: ```c# [QueryProperty(nameof(Filename), nameof(Filename))] public partial class EditViewModel : ObservableObject { [ObservableProperty] string filename; partial void OnFilenameChanged(string value) { ReadFile(); } } ``` Then call it like this, for example: ```c# await Shell.Current.GoToAsync($"/Edit?Filename={s}"); ```
![enter image description here](https://i.stack.imgur.com/6bm05.png)

I also followed the step below, but it's still not working:

> If SWC continues to fail to load you can opt-out by disabling swcMinify in your next.config.js or by adding a .babelrc to your project with the following content

Please help me out of this problem.

[1]: https://i.stack.imgur.com/LvYmJ.png
I have an HTML/CSS/JS website where I'm embedding facebook posts, and adding some text (with various info and action buttons) to the top of each post. I'm adding the text that goes above each post dynamically in Javascript, so that the embedded post (iframe) is in a separate container than the text that goes above it. As you can see in the image below, the text is positioned as `absolute` so that it appears above the post, and not next to it. However, I can't seem to get the post to move *down* nor the text to move up in order to create some separation between them. Any help or ideas would be greatly appreciated!! UPDATE: I meant to include a codepen that reproduces the issue with minimal code: https://codepen.io/Mickey_Vershbow/pen/dyLzLNV?editors=1111 See below for code as well: <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-js --> // Array of facebook post ID's facebookArr = [{ post_id: "pfbid0cm7x6wS3jCgFK5hdFadprTDMqx1oYr6m1o8CC93AxoE1Z3Fjodpmri7y2Qf1VgURl" }, { post_id: "pfbid0azgTbbrM5bTYFEzVAjkVoa4vwc5Fr3Ewt8ej8LVS1hMzPquktzQFFXfUrFedLyTql" } ]; // Variables to store post ID, embed code, parent container let postId = ""; let embedCode = ""; let facebookContainer = document.getElementById("facebook-feed-container"); $(facebookContainer).empty(); // Loop through data to display posts facebookArr.forEach((post) => { let relativeContainer = document.createElement("div"); postId = post.post_id; postLink = `${postId}/?utm_source=ig_embed&amp;utm_campaign=loading`; // ---> UPDATE: separate container element let iframeContainer = document.createElement("div"); embedCode = `<iframe src="https://www.facebook.com/plugins/post.php?href=https%3A%2F%2Fwww.facebook.com%2FIconicCool%2Fposts%2F${postId}&show_text=true&width=500" width="200" height="389" style="border:none;overflow:hidden" scrolling="no" frameborder="0" allowfullscreen="true" allow="autoplay; clipboard-write; encrypted-media; picture-in-picture; web-share" 
id=fb-post__${postId}></iframe>`; // Update the DOM iframeContainer.innerHTML = embedCode; // ADDITIONAL TEXT let additionalText = document.createElement("div"); additionalText.className = "absolute"; additionalText.innerText = "additional text to append"; relativeContainer.append(additionalText, iframeContainer); facebookContainer.append(relativeContainer); }); <!-- language: lang-css --> #facebook-feed-container { display: flex; flex-direction: row; row-gap: 1rem; column-gap: 3rem; padding: 1rem; } .absolute { position: absolute; margin-bottom: 5rem; color: red; } .risk-container { margin-top: 3.5rem; } <!-- language: lang-html --> <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script> <div id="facebook-feed-container"> </div> <!-- end snippet --> [1]: https://i.stack.imgur.com/33joo.jpg
|java|eclipse|google-app-engine|
null
I start with an initial content page with a size of 1285 x 800. I'm using Shell navigation: I call `Shell.Current.Navigation.PushModalAsync(new NewDb());`, and in the NewDb content page I have:

    protected override void OnAppearing()
    {
        base.OnAppearing();
        Window.Height = 450;
    }

In a button of the NewDb page I have `Shell.Current.Navigation.PopAsync();`, but when I return to the first page, it keeps the height of 450. How do I resize the height back to 800? I tried putting code in the OnAppearing() code-behind of the first page, but that doesn't seem to work.
How do I change the size of a window?
|c#|xaml|maui|code-behind|
Say you have the following code: ```cpp // a.cpp int get() { return 0; } ``` ```cpp // b.cpp int get(); // usually, one doesn't write this directly, but gets these // declarations from included header files int x = get(); ``` When compiling `b.cpp`, the compiler simply assumes that `get()` symbol was defined *somewhere*, but it doesn't yet care where. The linking phase is responsible for finding the symbol and correctly linking the object files produced from `a.cpp` and `b.cpp`. If `a.cpp` didn't define `get`, you would get a linker error saying "undefined reference" or "unresolved external symbol". ### C++ Standard Wording Compiling a C++ program takes place in several phases specified in [[lex.phases]](https://eel.is/c++draft/lex.phases), the last of which is relevant: > **9.** All external entity references are resolved. Library components are linked to satisfy external references to entities not defined in the current translation. All such translator output is collected into a program image which contains information needed for execution in its execution environment. See [Keith Thompson's answer](https://stackoverflow.com/a/8834196/5740428) for a summary of these phases. The specified errors occur during this last stage of compilation, most commonly referred to as linking. It basically means that you compiled a bunch of source files into object files or libraries, and now you want to get them to work together. ### Linker Errors in Practice If you're using Microsoft Visual Studio, you'll see that projects generate `.lib` files. These contain a table of exported symbols, and a table of imported symbols. The imported symbols are resolved against the libraries you link against, and the exported symbols are provided for the libraries that use that `.lib` (if any). Similar mechanisms exist for other compilers/ platforms. 
Common error messages are `error LNK2001`, `error LNK1120`, `error LNK2019` for **Microsoft Visual Studio** and `undefined reference to` *symbolName* for **GCC**. The code: struct X { virtual void foo(); }; struct Y : X { void foo() {} }; struct A { virtual ~A() = 0; }; struct B: A { virtual ~B(){} }; extern int x; void foo(); int main() { x = 0; foo(); Y y; B b; } will generate the following errors with **GCC**: /home/AbiSfw/ccvvuHoX.o: In function `main': prog.cpp:(.text+0x10): undefined reference to `x' prog.cpp:(.text+0x19): undefined reference to `foo()' prog.cpp:(.text+0x2d): undefined reference to `A::~A()' /home/AbiSfw/ccvvuHoX.o: In function `B::~B()': prog.cpp:(.text._ZN1BD1Ev[B::~B()]+0xb): undefined reference to `A::~A()' /home/AbiSfw/ccvvuHoX.o: In function `B::~B()': prog.cpp:(.text._ZN1BD0Ev[B::~B()]+0x12): undefined reference to `A::~A()' /home/AbiSfw/ccvvuHoX.o:(.rodata._ZTI1Y[typeinfo for Y]+0x8): undefined reference to `typeinfo for X' /home/AbiSfw/ccvvuHoX.o:(.rodata._ZTI1B[typeinfo for B]+0x8): undefined reference to `typeinfo for A' collect2: ld returned 1 exit status and similar errors with **Microsoft Visual Studio**: 1>test2.obj : error LNK2001: unresolved external symbol "void __cdecl foo(void)" (?foo@@YAXXZ) 1>test2.obj : error LNK2001: unresolved external symbol "int x" (?x@@3HA) 1>test2.obj : error LNK2001: unresolved external symbol "public: virtual __thiscall A::~A(void)" (??1A@@UAE@XZ) 1>test2.obj : error LNK2001: unresolved external symbol "public: virtual void __thiscall X::foo(void)" (?foo@X@@UAEXXZ) 1>...\test2.exe : fatal error LNK1120: 4 unresolved externals ### Common Causes - [Failure to link against appropriate libraries/object files or compile implementation files][2] - [Declared and undefined variable or function.][3] - [Common issues with class-type members][4] - [Template implementations not visible.][5] - [Symbols were defined in a C program and used in C++ code.][6] - [Incorrectly importing/exporting methods/classes 
across modules/dll. (MSVS specific)][7] - [Circular library dependency][8] - [undefined reference to `WinMain@16'][9] - [Interdependent library order][10] - [Multiple source files of the same name][11] - [Mistyping or not including the .lib extension when using the `#pragma` (Microsoft Visual Studio)][12] - [Problems with template friends][13] - [Inconsistent `UNICODE` definitions][14] - [Missing "extern" in const variable declarations/definitions (C++ only)][15] - [Visual Studio Code not configured for a multiple file project][16] - [Errors on Mac OS X when building a dylib, but a .so on other Unix-y systems is OK][17] - [Your linkage consumes libraries before the object files that refer to them][18] [1]: https://stackoverflow.com/a/8834196 [2]: https://stackoverflow.com/a/12574400 [3]: https://stackoverflow.com/a/12574403 [4]: https://stackoverflow.com/a/12574407 [5]: https://stackoverflow.com/a/12574417 [6]: https://stackoverflow.com/a/12574420 [7]: https://stackoverflow.com/a/12574423 [8]: https://stackoverflow.com/a/20358542 [9]: https://stackoverflow.com/questions/5259714/undefined-reference-to-winmain16/5260237#5260237 [10]: https://stackoverflow.com/a/24675715 [11]: https://stackoverflow.com/questions/14364362/visualstudio-project-with-multiple-sourcefiles-of-the-same-name [12]: https://stackoverflow.com/a/25744263 [13]: https://stackoverflow.com/a/35891188 [14]: https://stackoverflow.com/a/36475406 [15]: https://stackoverflow.com/a/45478255 [16]: https://stackoverflow.com/a/72328407 [17]: https://stackoverflow.com/a/74949274 [18]: https://stackoverflow.com/a/43305704
I'm trying to animate a sine wave made out of a Shape struct. To do this, I need to animate the phase of the wave as a continuous looping animation - and then animate the frequency and strength of the wave based on audio input. As far as I understand it, the `animatableData` property on the Shape seems unable to handle multiple animation transactions. I've tried using `AnimatablePair` to be able set/get more values than one, but it seems they need to come from the same animation instance (and for my case I need two different instances). The code goes something like this: <!-- language: swiftui --> ``` @Binding var externalInput: Double @State private var phase: Double = 0.0 @State private var strength: Double = 0.0 struct MyView: View { var body: some View { MyShape(phase: self.phase, strength: self.strength) .onAppear { /// The first animation (that needs to be continuous) withAnimation(.linear(duration: 1000).repeatForever()) { self.phase = .pi * 2 } } .onChange(self.externalInput) { /// The second animation (that is reactive) withAnimation(.default) { self.strength = self.externalInput } } } } ``` <!-- language: swiftui --> ``` struct MyShape: Shape { var phase: Double var strength: Double var animatableData: AnimatablePair<Double, Double> { get { (phase, strength) } set { phase = newValue.first strength = newValue.second } } /// Drawing the bezier curve that integrates the animated values func path(in rect: CGRect) -> Path { ... } } ``` This approach however seems to only animate the value from the last animation transaction that's been initialised. I am guessing both animated values goes in, but when the second animation transaction fires, it sets the first value to the target value (without any animations) - as those animated values are part of the first transaction (that gets overridden by the second one). 
My solution right now is to use an internal `Timer`, in a wrapper View for the Shape struct, to take care of updating the value of the `phase` value - but this is far from optimal, and quite ugly. When setting the values in `animatableData`, is there a way to access animated values from other transactions - or is there another way to solve this? I've also tried with implicit animations, but that seems to only render the last set animation as well - and with a lot of other weird things happening (like the whole view zooming across the screen on a repeated loop...).
null
**Note**: I'm new to C# and .NET. I understand middleware is designed to work with API calls; I'm trying to build something that works with all methods.

I'm working on recording logs and metrics for APIs and specific operations. To achieve this, I'm planning to use middleware. I've implemented a basic version of middleware along with a custom attribute, and that works fine* with API methods like `PostUsers` in the code snippet below.

    namespace MyApp.Namespace1
    {
        [ApiController]
        public class Controller : ControllerBase
        {
            [HttpPost]
            [Route( "users" )]
            [OperationName( "PostUsers" )] // Custom attribute to send the operation name to middleware
            public async Task PostUsers( [FromBody] UsersRequest request )
            {
                try
                {
                    ...
                }
                catch ( Exception ex )
                {
                    ... // This error (ex.Message, etc.) should be available in middleware
                }
            }
        }
    }

But when I apply the attribute and middleware to non-controller methods like `HandleUsers` and `CreateStorage` in the code snippet below, it doesn't work:

    internal class UsersProvider
    {
        [OperationName( "HandleUsers" )]
        private async Task HandleUsers()
        {
        }
    }

    public class Storage
    {
        [OperationName( "CreateStorage" )]
        private static void CreateStorage( )
        {
        }
    }

Is there any way to achieve a similar pattern so that it can be applied to all types of methods?

*: Currently, I'm passing the required details from the controller to the middleware by using HttpContext.Items. Is there any better way of doing this? My goal is to offload all the log- and metric-related logic to middleware and keep the controller/target method as clean as possible.
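Middleware only sees HTTP requests, so non-controller methods need call-site interception (a decorator/AOP style) rather than the request pipeline. As a language-neutral sketch of that interception idea — shown in Python for brevity, since the same shape maps to C# via `DispatchProxy` or an AOP library — a wrapper that records an operation name and outcome around any call looks like:

```python
import functools

LOG = []  # stand-in for your metrics/logging sink

def operation(name):
    # Wraps any function -- not just HTTP handlers -- and records
    # the operation name plus success/failure around the call.
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                result = func(*args, **kwargs)
                LOG.append((name, "ok"))
                return result
            except Exception as exc:
                LOG.append((name, f"error: {exc}"))
                raise
        return wrapper
    return decorate

@operation("HandleUsers")
def handle_users():
    return "done"

handle_users()
print(LOG)  # [('HandleUsers', 'ok')]
```

The key design point carries over to .NET: the logging concern is attached at the method boundary, so the method body stays clean, and exceptions are recorded and re-raised rather than swallowed.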
It depends on the device you're using. Try adding a `touchstart` event listener for mobile.

In your case:

    <div ontouchstart="toggleForm()" id="toggleButton">
        <h3>TITLE</h3>
    </div>

*The most efficient way is to have an if statement in your script and add a `touchstart` or `click` event listener based on the user's device (desktop or mobile).
My website preloader is not showing properly: first the website appears for a few milliseconds, then the preloader, then the website again. I am using Elementor on WordPress, and I have added the preloader code in my theme file editor. Here is the code snippet:

```
add_action('wp_footer', 'Allprocoding_Preloader');

function Allprocoding_Preloader() {
    if (!is_admin() && $GLOBALS["pagenow"] !== "wp-login.php") {
        $delay = 1; // seconds
        $desktop_loader = 'https://spoky.co/wp-content/uploads/2024/02/Group-1171275193-1.gif';
        $mobile_loader = 'https://spoky.co/wp-content/uploads/2024/02/mobile-.gif';
        $overlayColor = '#ffffff';
        ?>
        <script>
            document.addEventListener("DOMContentLoaded", function () {
                var preloader = document.createElement('div');
                preloader.style.position = 'fixed';
                preloader.style.top = '0';
                preloader.style.bottom = '0';
                preloader.style.left = '0';
                preloader.style.right = '0';
                preloader.style.backgroundColor = '<?php echo esc_js($overlayColor); ?>';
                preloader.style.zIndex = '100000';
                preloader.style.display = 'flex';
                preloader.style.alignItems = 'center';
                preloader.style.justifyContent = 'space-around';

                var loaderImg = document.createElement('img');
                loaderImg.src = '<?php echo esc_url($desktop_loader); ?>';
                loaderImg.alt = '';
                loaderImg.style.height = '30vw';

                preloader.appendChild(loaderImg);
                document.body.appendChild(preloader);
                document.body.style.overflow = "hidden";

                var mediaQuery = window.matchMedia("only screen and (max-width: 760px)");

                function handleMediaChange(event) {
                    if (event.matches) {
                        loaderImg.src = '<?php echo esc_url($mobile_loader); ?>';
                        loaderImg.style.height = '20vw';
                    } else {
                        loaderImg.src = '<?php echo esc_url($desktop_loader); ?>';
                        loaderImg.style.height = '15vw';
                    }
                }

                mediaQuery.addListener(handleMediaChange);
                handleMediaChange(mediaQuery);

                window.addEventListener("load", function () {
                    // Remove the preloader after the delay
                    setTimeout(function () {
                        preloader.remove();
                        document.body.style.overflow = "visible";
                    }, <?php echo esc_js($delay * 1000); ?>);
                });
            });
        </script>
        <?php
    }
}
```

What should I do to make the preloader display properly? I have tried adding the CSS inline and preloading the CSS and JavaScript files, but there is no change. What are other possible ways to fix this issue?
Sometimes this happens due to a firewall issue. You can download the plugin from GitHub and install it manually. Follow the steps below:

1. Go to https://github.com/cucumber/cucumber-eclipse
2. Click on a release [![enter image description here][1]][1]
3. Download the plugin. [![enter image description here][2]][2]
4. Go to Eclipse -> Help -> Install New Software -> Click on 'Add' -> Click on Archive -> Select the plugin location -> Click Open -> Click OK.
5. Once it's done, restart your Eclipse.

[1]: https://i.stack.imgur.com/EtFvd.png
[2]: https://i.stack.imgur.com/ZWSmJ.png