I am working on an Android project with Jetpack Compose and Firestore. I have written the code to upload a PDF file from the admin panel, but I am not able to retrieve the uploaded file in the user panel. Can anyone suggest a library, or share code, for loading a PDF file from Firestore into the user panel with Jetpack Compose?
Can anyone help me load a PDF file from Firebase into the user app by URL in Jetpack Compose?
|android|firebase|google-cloud-firestore|firebase-realtime-database|android-jetpack-compose|
This is how I want the end result for `LeadershipTeam`:

```
atGroupsDefaultColors: {
    LeadershipTeam: "rgba(255, 3, 3, 1)",
    DeveloperTeam: "rgba(230, 83, 0, 0.5)",
    ManagementTeam: "rgba(0, 255, 145, 0.5)",
    SnrAdmin: "rgba(11, 175, 255, 0.5)",
    Admin: "rgba(11, 175, 255, 0.5)",
    SnrModerator: "rgba(11, 175, 255, 0.5)",
    Moderator: "rgba(11, 175, 255, 0.5)",
    trialMod: "rgba(11, 175, 255, 0.5)",
},
staffTeamPage: {
    LeadershipTeam: [
```

I tried ChatGPT, but it didn't come up with anything useful.
I am assuming you are referring to an XSS problem? No XSS defenses are perfect, but you could try the OWASP Java HTML Sanitizer.
I am creating a Docker image with an `env.yml` file. This YAML file lists a Python package named `mohit-housing-price-prediction==0.0.2` which I created locally. When building the image with `docker build -t mohitsharmatigeranalytics/tamlep:0.3 .` I get:

```
164.0 ERROR: Could not find a version that satisfies the requirement mohit-housing-price-prediction==0.0.2 (from versions: none)
```

My Dockerfile is:

```
FROM continuumio/miniconda3
LABEL maintainer="Mohit Sharma"
WORKDIR /app

# Copy project files
COPY . /app

# Install dependencies using Conda
RUN conda env create -f docker_environment.yml

# Activate the Conda environment
SHELL ["conda", "run", "-n", "mle-dev", "/bin/bash", "-c"]

# Set executable permissions for the Python scripts
RUN chmod +x src/housing_price_prediction/components/ingest_data.py \
    && chmod +x src/housing_price_prediction/components/train.py \
    && chmod +x src/housing_price_prediction/components/score.py

# Set executable permissions for the .sh file
RUN chmod +x /app/run_scripts.sh

# Set entrypoint and default command
ENTRYPOINT [ "conda", "run", "-n", "mle-dev" ]
CMD ["./run_scripts.sh"]
```

and my `run_scripts.sh` file is:

```
#!/bin/bash
cd /app/src/housing_price_prediction/components

# Run ingest_data.py
echo "Running ingest_data.py..."
python ingest_data.py

# Run train.py
echo "Running train.py..."
python train.py

# Run score.py
echo "Running score.py..."
python score.py
```

Since I have already created the package locally, shouldn't the environment be created without any errors in the Docker image?
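For what it's worth, the build runs in a fresh container, so `conda env create` can only resolve packages from its configured channels and PyPI; a package that exists only in a local environment cannot be found there. One common workaround (a sketch; the `dist/` path and wheel filename are assumptions) is to drop the package from `docker_environment.yml` and install it from a wheel copied into the image:

```dockerfile
FROM continuumio/miniconda3
WORKDIR /app
COPY . /app
# Create the env WITHOUT mohit-housing-price-prediction listed in the yml...
RUN conda env create -f docker_environment.yml
# ...then install the locally built wheel that came in with the build context.
RUN conda run -n mle-dev pip install dist/mohit_housing_price_prediction-0.0.2-py3-none-any.whl
```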
By default, Mongoose buffers commands; this is to handle intermittent disconnections from the MongoDB server. Under a disconnection, Mongoose waits 10 seconds (10000 milliseconds) before throwing an error. This is what happens in your case: Mongoose throws an error after buffering the given insert command for 10 seconds. Therefore the main problem is that Mongoose could not establish a server connection within 10 seconds, and that is what needs further investigation. To diagnose the connection issue, attach an error handler to the connection statement shown below:

    let conn = mongoose.connect("mongodb://localhost:27017/todo")

@jQueeny has already given you a code snippet for doing exactly that. Once you have trapped the error, its message will tell you why the connection is failing. The hostname `localhost` in the connection string is a likely culprit for this kind of connection issue. A similar case, described in the link below, suggests replacing `localhost` with the IP address:

[Why can I connect to Mongo in node using 127.0.0.1, but not localhost?][1]

[1]: https://stackoverflow.com/questions/73133094/why-can-i-connect-to-mongo-in-node-using-127-0-0-1-but-not-localhost
I created a custom HTML element that creates a new element from the following template:

    <template id="track">
        <div class="record">Nothing yet</div>
    </template>

Here is the JS code that defines the custom element:

    class Track extends HTMLElement {
        constructor() {
            super();
            this.template = document.getElementById("track");
            this.clone = this.template.content.cloneNode(true);
        }
        connectedCallback() {
            this.append( this.clone );
        }
        set record(html) {
            // Case 1: Works only for element 1 (before appending to the DOM)
            this.clone.querySelector('.record').innerHTML = html;
            // Case 2: Works only for element 2 (after appending to the DOM)
            //this.querySelector('.record').innerHTML = html;
        }
    }
    customElements.define('my-track', Track);

Note the `record` property that changes the HTML content of the `<div>`. I would like to be able to update the element content whether or not it has been attached to the DOM, but I can't write code that works in both cases (before and after appending the custom element to the DOM). The following code only works in case 1:

    const el1 = document.createElement('my-track');
    el1.record = "Bla<b>bla</b> #1";
    document.body.append(el1);

The following code only works in case 2:

    const el2 = document.createElement('my-track');
    document.body.append(el2);
    el2.record = "Bla<b>bla</b> #2";

Of course, in the `set record()` method I could check whether the element has already been attached to the DOM and switch between case 1 and case 2 accordingly, but I feel this is not the right way to build custom elements. What is the best option to update the element content in `set record()` whether or not the element has been attached to the DOM?

I created a JSFiddle here: https://jsfiddle.net/Alphonsio/gL8a239o/13/
Change custom element content before and after attaching to the DOM
|javascript|custom-element|html-templates|
Summarize pods not running, by Namespace and Reason - I'm having trouble finding the reason
|kubectl|
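One way to approach the summary (a sketch; the `kubectl` flags are assumptions about where the reason surfaces, since `.status.reason` can be empty and the useful value may instead live in the container statuses):

```shell
# Sketch: list non-Running pods as "namespace reason" lines, then count them.
# The kubectl step would look something like:
#   kubectl get pods -A --field-selector=status.phase!=Running --no-headers \
#     -o custom-columns='NS:.metadata.namespace,REASON:.status.reason'
# The aggregation step, demonstrated here on sample output:
printf '%s\n' \
  'kube-system Evicted' \
  'default ImagePullBackOff' \
  'kube-system Evicted' \
  | sort | uniq -c | sort -rn
```

The `sort | uniq -c | sort -rn` tail gives a count per (namespace, reason) pair, most frequent first.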
I'm using .NET Core 8.0, and for this issue I had to install the `Microsoft.AspNetCore.Identity.UI` package and add `.AddDefaultUI()` to my `builder.Services.AddIdentity<AppUser, IdentityRole>` call:

```c#
// Identity
builder.Services.AddIdentity<AppUser, IdentityRole>(options =>
{
    options.User.RequireUniqueEmail = true;
    options.Password.RequireDigit = true;
    options.Password.RequireLowercase = true;
    options.Password.RequireUppercase = true;
    options.Password.RequireNonAlphanumeric = true;
    options.Password.RequiredLength = 12;
    options.SignIn.RequireConfirmedEmail = true;
})
.AddEntityFrameworkStores<ApplicationDBContext>()
.AddDefaultUI();
```

I also have `app.MapIdentityApi<AppUser>();` to map all the auth endpoints. This fixed my issue.
Try using Scala 2 syntax:

    fastparse.parse("1234", implicit p => parseAll(MyParser.int(p)))

https://scastie.scala-lang.org/DmytroMitin/MrFZ0EhiSPeFDHd1IyBhrA

(I'll try to find the SO question with Scala 3 syntax.)
I've been working on a Python Discord bot and wanted to containerize it, which has worked pretty well, but while testing one of the features (bot -> open API) over HTTPS I'm getting the following error:

`ssl.SSLError: Cannot create a client socket with a PROTOCOL_TLS_SERVER context (_ssl.c:811)`

I've read various articles and tutorials online, but they either half answer my question or relate to other applications altogether, such as configuring Nginx, which I think is just muddying the water a little. So far I've seen people mention creating and moving certs, and one answer saying to add `--network host`, but it doesn't seem like there is any issue with the network connectivity itself.

I was tempted to just change the request URL to use HTTP instead, as no credentials or sensitive data are being transmitted, but I would feel a lot more comfortable knowing it's using HTTPS.

My Dockerfile is below (note: I added the `RUN apt-get update` block after my investigations, hoping that it would generate a certificate and the error would magically clear up, but that's not the case):

```
FROM python:3.10-bullseye
COPY requirements.txt /app/
COPY ./bot/ /app
RUN apt-get update \
    && apt-get install openssl \
    && apt-get install ca-certificates
RUN update-ca-certificates
WORKDIR /app
RUN pip install -r requirements.txt
COPY . .
CMD ["python3", "-u", "v1.py"]
```

I tried a little basic diagnostics inside the container, like checking the directories for certs and trying to curl an HTTPS URL, but being brand new to Docker I'm not really sure what I'm looking for or how to progress any further, so any help would be appreciated. What I've tried:

- Googling tutorials
- Googling Stack Overflow and Reddit questions
- Basic (networking) diagnostics
I'm encountering an issue while trying to run a binary file using both SPWN or Pwntools. Here's the context: **SPWN Logs:** ```plaintext [*] Binary: baskin [*] Libc: libc-2.27.so [*] Loader: ld-linux-x86-64.so.2 [*] file baskin ELF 64-bit LSB executable x86-64 dynamically linked not stripped [*] checksec baskin RELRO: Partial RELRO Stack: No canary found NX: NX enabled PIE: No PIE (0x400000) Libc version: 2.27 [+] Trying to unstrip libc [*] Libc unstripped -- ldd of the original binary linux-vdso.so.1 (0x00007fffec961000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fba54613000) /lib64/ld-linux-x86-64.so.2 (0x00007fba54843000) -- ldd of the patched binary linux-vdso.so.1 (0x00007ffdd1f1c000) libc.so.6 => ./debug_dir/libc.so.6 (0x00007f648de00000) ./debug_dir/ld-linux.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f648e343000) ``` **Pwntools Code:** ```python from pwn import * p = process('baskin', env={'LD_PRELOAD':'./libc-2.27.so'}) libc = ELF('./libc-2.27.so') p.interactive() ``` **Error Encountered:** ``` Inconsistency detected by ld.so: dl-call-libc-early-init.c: 37: _dl_call_libc_early_init: Assertion `sym != NULL' failed! ``` It seems like both SPWN and Pwntools are unable to run the binary file `baskin` with the specified libc version (`libc-2.27.so`). Despite attempting to unstrip the libc in SPWN, the error persists. I've checked the environment variables and paths, but I couldn't identify any obvious issues. Any insights or suggestions on how to resolve this inconsistency would be greatly appreciated.
I was solving this problem on LeetCode but I am unable to understand why my solution is wrong. I am using the hash-map approach; the time complexity is O(n), and the error is not related to exceeding the time limit.

Question:

```
You are given two strings s and t.
String t is generated by randomly shuffling string s and then adding one more letter at a random position.
Return the letter that was added to t.

Example 1:
Input: s = "abcd", t = "abcde"
Output: "e"
Explanation: 'e' is the letter that was added.

Example 2:
Input: s = "", t = "y"
Output: "y"

Constraints:
0 <= s.length <= 1000
```

**My solution:**

```
var findTheDifference = function (s, t) {
    let mapSet = {}
    let final = ""
    s.split('').forEach((elem) => {
        mapSet[elem] === undefined ? mapSet[elem] = 1 : mapSet[elem]++
    })
    console.log(mapSet)
    t.split('').forEach((elem) => {
        mapSet[elem] === undefined ? mapSet[elem] = 1 : mapSet[elem]--
        if (mapSet[elem] != 0) {
            console.log(mapSet, mapSet[elem])
            final = elem
        }
    })
    console.log(mapSet)
    return final
};
```
389. Find the Difference LeetCode
|hashmap|
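The check `mapSet[elem] != 0` in the question's code runs while the counts are still mid-update, so `final` is overwritten by any letter whose count is momentarily non-zero after the added letter was already found. For example, with `s = "bb"` and `t = "abb"`, the code first records `final = "a"` (correct), but then sees `b`'s count drop from 2 to 1 and overwrites it with `"b"`. Also, a letter absent from `s` gets set to `1` instead of `-1` by the ternary. A sketch of the same counting idea, with the decision deferred until all counts are final:

```javascript
// Count letters of s, subtract while scanning t; after both passes every
// letter from s cancels to zero and only the added letter is negative.
var findTheDifference = function (s, t) {
    const counts = {};
    for (const ch of s) counts[ch] = (counts[ch] || 0) + 1;
    for (const ch of t) counts[ch] = (counts[ch] || 0) - 1;
    for (const ch of Object.keys(counts)) {
        if (counts[ch] < 0) return ch;  // the letter added to t
    }
    return "";
};
```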
I'm trying to deploy my Django REST framework app using Vercel. While building, I got the error "Error: Unable to find any supported Python versions." My `vercel.json` is below:

    {
        "version": 2,
        "regions": ["hnd1"],
        "builds": [
            {
                "src": "myproject/wsgi.py",
                "use": "@vercel/python",
                "config": { "maxLambdaSize": "15mb" }
            },
            {
                "src": "build_files.sh",
                "use": "@vercel/static-build",
                "config": { "distDir": "static" }
            }
        ],
        "routes": [
            { "src": "/static/(.*)", "dest": "/static/$1" },
            { "src": "/(.*)", "dest": "myproject/wsgi.py" }
        ]
    }

I confirmed that I could successfully download Django and the other packages (meaning `build_files.sh` is executed correctly). I think I might have a problem with static files. I added the static root and URL to `settings.py`:

    STATIC_URL = '/static/'
    STATIC_ROOT = os.path.join(BASE_DIR, 'static')

and added the URL patterns to `urls.py`:

    urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)

Why does this error occur? I appreciate your opinions and advice!
Deploy my Django rest framework app to Vercel
|python|django-rest-framework|vercel|
In my case, this was fixed by running `exit` in the current sbt shell (in IntelliJ):

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/eMX9n.png
Inside your Canvas game object, create a Panel that will be the area in which the floating joystick can be spawned. Below the Panel, add your Button. Now create a global `TouchInputManager` component/script which detects which UI element has been touched by performing a raycast when the user touches the screen. You can attach it to your Canvas GO.

```
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.UI; // needed for GraphicRaycaster
using UnityEngine.InputSystem.EnhancedTouch;
using ETouch = UnityEngine.InputSystem.EnhancedTouch;

public class TouchInputManager : MonoBehaviour
{
    private GraphicRaycaster graphicRaycaster;
    private PointerEventData pointerEvtData = new(null);
    private List<RaycastResult> raycastResults = new();

    private void Awake()
    {
        graphicRaycaster = GetComponent<GraphicRaycaster>();
    }

    private void HandleFingerDown(Finger finger)
    {
        pointerEvtData.position = finger.screenPosition;
        raycastResults.Clear();

        // Perform a raycast to find the UI element touched by the finger
        graphicRaycaster.Raycast(pointerEvtData, raycastResults);

        if (raycastResults.Count > 0)
        {
            var gameObj = raycastResults[0].gameObject;
            // Notify the UI element about the touch
            gameObj.SendMessage("OnFingerDown", finger, SendMessageOptions.DontRequireReceiver);
        }
    }

    private void OnEnable()
    {
        EnhancedTouchSupport.Enable();
        ETouch.Touch.onFingerDown += HandleFingerDown;
        ETouch.Touch.onFingerUp += HandleFingerUp;
        ETouch.Touch.onFingerMove += HandleFingerMove;
    }

    private void OnDisable()
    {
        EnhancedTouchSupport.Disable();
        ETouch.Touch.onFingerDown -= HandleFingerDown;
        ETouch.Touch.onFingerUp -= HandleFingerUp;
        ETouch.Touch.onFingerMove -= HandleFingerMove;
    }

    // Other methods ...
}
```
Validation error for TruCustomApp from trulens_eval - Input should be an instance of Queue [type=is_instance_of, input_value=<queue.Queue object ...>
|python|
I've recently updated my app to Material 3 and replaced the `ActionBar`s in my app with `MaterialToolbar`s. My app has a `ViewPager2` where each `Fragment` in the pager has a `RecyclerView`. Previously, scrolling down a list would hide the `TabLayout`, but now it doesn't. I found that if I remove the `MaterialToolbar`, the tabs hide like they should.

    <androidx.coordinatorlayout.widget.CoordinatorLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        android:id="@+id/directory_layout"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <com.google.android.material.appbar.AppBarLayout
            android:id="@+id/tabs_layout"
            android:layout_width="match_parent"
            android:layout_height="wrap_content">

            <!-- Removing this fixes the scrolling issue -->
            <com.google.android.material.appbar.MaterialToolbar
                android:id="@+id/toolbar"
                android:layout_width="match_parent"
                android:layout_height="?attr/actionBarSize"
                android:elevation="4dp" />

            <com.google.android.material.tabs.TabLayout
                android:id="@+id/tabs"
                android:layout_width="match_parent"
                android:layout_height="match_parent"
                app:layout_scrollFlags="scroll|enterAlways"
                app:tabBackground="@drawable/tabs_background"
                app:tabIndicatorHeight="0dp"
                app:tabMode="scrollable" />

        </com.google.android.material.appbar.AppBarLayout>

        <androidx.viewpager2.widget.ViewPager2
            android:id="@+id/pager"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            app:layout_behavior="@string/appbar_scrolling_view_behavior" />

    </androidx.coordinatorlayout.widget.CoordinatorLayout>

    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_list);
        setSupportActionBar(findViewById(R.id.toolbar));
    }

What is also strange is that if I put a `LinearLayout` inside the `CoordinatorLayout` (with no `Toolbar`), the scrolling also does not work.
<androidx.coordinatorlayout.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/layout" xmlns:app="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent"> <LinearLayout android:id="@+id/layout2" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical"> <com.google.android.material.appbar.AppBarLayout android:id="@+id/tabs_layout" android:layout_width="match_parent" android:layout_height="wrap_content"> <com.google.android.material.tabs.TabLayout android:id="@+id/tabs" android:layout_width="match_parent" android:layout_height="match_parent" android:paddingBottom="4dp" android:visibility="gone" app:layout_scrollFlags="scroll|enterAlways" app:tabBackground="@drawable/tabs_background" app:tabIndicatorHeight="0dp" app:tabMode="scrollable" /> </com.google.android.material.appbar.AppBarLayout> <androidx.viewpager2.widget.ViewPager2 android:id="@+id/pager" android:layout_width="match_parent" android:layout_height="wrap_content" app:layout_behavior="@string/appbar_scrolling_view_behavior" /> </LinearLayout> </androidx.coordinatorlayout.widget.CoordinatorLayout>
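A likely cause (an assumption based on `AppBarLayout`'s documented behavior that children using the `scroll` flag must be declared before children that don't use it): the flag-less `MaterialToolbar` sits above the scrolling `TabLayout`, which disables the scroll behavior for everything after it. Giving the toolbar scroll flags as well is one way out:

```xml
<!-- Sketch of a possible fix: the toolbar also gets scroll flags so the
     TabLayout's scroll|enterAlways behavior is honored. -->
<com.google.android.material.appbar.MaterialToolbar
    android:id="@+id/toolbar"
    android:layout_width="match_parent"
    android:layout_height="?attr/actionBarSize"
    android:elevation="4dp"
    app:layout_scrollFlags="scroll|enterAlways" />
```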
I'm using [Multipass][1] on an Ubuntu host to launch multiple local VMs, with the goal to create a docker swarm over multiple VMs. Everything with `multipass` itself and the installation of Docker (I've used the scripts at https://get.docker.com/) has gone well. I've also been able to initialize the swarm by setting up `node1` as the manager through `docker swarm init --advertise-addr <VARIOUS_IPs>`, where `VARIOUS_IPs` has been any one of various I.Ps: 1. `127.0.0.1` (as per [this][2] SO post) 2. `172.17.0.1`, as per the output that I get when logging into `node1` through `multipass shell node` for the `docker0` interface: ```bash jason@jason-ubuntu-desktop:~$ multipass shell node1 Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-101-generic x86_64) . . . System load: 0.0 Processes: 97 Usage of /: 45.4% of 4.67GB Users logged in: 0 Memory usage: 31% IPv4 address for docker0: 172.17.0.1 Swap usage: 0% IPv4 address for ens3: 10.126.204.207 Expanded Security Maintenance for Applications is not enabled. . . . ``` 3. `10.126.204.207`, which is the I.P assigned to the interface `ens3` as you can see in the command above. 4. 
`192.168.2.6`, which is what `ifconfig -a` gives the interface `enp5s0` on the **HOST** machine: ```bash jason@jason-ubuntu-desktop:~$ ifconfig -a enp5s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.2.6 netmask 255.255.255.0 broadcast 192.168.2.255 inet6 2a02:85f:e0c9:8900:3523:25bb:2763:1923 prefixlen 64 scopeid 0x0<global> inet6 2a02:85f:e0c9:8900:b0e8:1b53:b5c4:eddc prefixlen 64 scopeid 0x0<global> inet6 fe80::4277:203e:fe56:63a8 prefixlen 64 scopeid 0x20<link> ether 08:bf:b8:75:50:9b txqueuelen 1000 (Ethernet) RX packets 3135402 bytes 4280258798 (4.2 GB) RX errors 0 dropped 19 overruns 0 frame 0 TX packets 2289436 bytes 233651001 (233.6 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 10189 bytes 1551793 (1.5 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 10189 bytes 1551793 (1.5 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 mpqemubr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.126.204.1 netmask 255.255.255.0 broadcast 10.126.204.255 inet6 fe80::5054:ff:fe50:214b prefixlen 64 scopeid 0x20<link> ether 52:54:00:50:21:4b txqueuelen 1000 (Ethernet) RX packets 616652 bytes 38339106 (38.3 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 815903 bytes 1211324469 (1.2 GB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 tap-7d21c24c2a2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::3cd8:e7ff:fe95:b7b7 prefixlen 64 scopeid 0x20<link> ether 3e:d8:e7:95:b7:b7 txqueuelen 1000 (Ethernet) RX packets 78108 bytes 5871486 (5.8 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 106416 bytes 158301533 (158.3 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 tap-9f0a4d14af6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::6073:fcff:fe8d:fe22 prefixlen 64 scopeid 0x20<link> ether 
62:73:fc:8d:fe:22 txqueuelen 1000 (Ethernet) RX packets 80889 bytes 6172614 (6.1 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 106378 bytes 158520504 (158.5 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 tap-f33ea83d210: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::f433:24ff:feb1:3f5f prefixlen 64 scopeid 0x20<link> ether f6:33:24:b1:3f:5f txqueuelen 1000 (Ethernet) RX packets 79189 bytes 5937738 (5.9 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 106441 bytes 158499276 (158.4 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ``` 5. `192.168.2.255`, which is what `ifconfig -a` returns as `broadcast` for the `enp5s0` interface, as you can see above. 6. `10.126.204.1`, which is what `ifconfig -a` returns for the `mpqemubr0` interface, as you can see above. 7. The public I.P associated with my router (not pasting that one for security reasons). No matter which I.P I use, I successfully start up a swarm, e.g, for `172.17.0.1`: ``` ubuntu@node1:~$ docker swarm init --advertise-addr 172.17.0.1 Swarm initialized: current node (1uih27t5jrmoe56hg6ko6zc7u) is now a manager. To add a worker to this swarm, run the following command: docker swarm join --token <TOKEN>172.17.0.1:2377 To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions. ``` However, when I paste the generated `swarm join` command in either one of the other two nodes, after a wait of about 5 seconds, I get: ``` Error response from daemon: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.17.0.1:2377: connect: connection refused" ``` I'm wondering what I.P I should be advertising in order to make the worker nodes join the swarm started from the manager node. 
I also wanted to point out that the Ubuntu default firewall seems to be disabled: ```bash root@jason-ubuntu-desktop:/home/jason# ufw status Status: inactive ``` [1]: https://multipass.run/ [2]: https://stackoverflow.com/questions/69573675/getting-error-when-try-to-add-docker-swarm-manager-into-multipass-vm
Make your function public so you can call it everywhere in your code. After that, create a public boolean that tells you whether to show the report or not. It could look like this:

```
var shouldShowReport : boolean = false;
```

Then change your function a bit by adding a boolean parameter:

```
function ShowReport(param1, param2...; showReport : boolean) : integer
```

Then change your logic based on the passed boolean so that you can either show the report as you do now, or just return an integer without the preview modal. This is an example:

```
function ShowReport(param1, param2...; showReport : boolean) : integer
begin
  //If you need to show the report then you call the PreviewModal
  if showReport then
  begin
    ...
    QuickRep1.PreviewModal;
  end;
  //Then you get the result anyway
  Result := ReportResult;
end;
```

After that, figure out where you are going to call the function and set **shouldShowReport** based on whether you need to show the report in that logic. Then pass the boolean as a parameter when calling the function:

```
//If this is the part where the customer doesn't want to see the whole report
shouldShowReport := false;
RepResult := ShowReport(param1, param2..., shouldShowReport); //RepResult is an integer like you've shown already
```

Finally, you need to display the result somehow. I don't know how you plan to do it, but the simplest way is with a label. Take one from the component palette, place it on your form, and adjust its properties as you like. Then assign the result to the label's Caption property. Don't forget that your result is an integer; passing an integer directly to a Caption would give an error, so convert it to a string first. For the example I assume you name your label **lblResult**:

```
lblResult.Caption := IntToStr(RepResult);
```
I have an activity with four fragments as tabs. I have a ViewModel (`FJViewModel`) for sharing filters between the activity and the fragments (tabs). In my activity, I initialize the filters table with four rows like this (unrelated details are omitted for simplicity):

```
private FJViewModel fjViewModel;
Filterj filterj;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    fjViewModel = new ViewModelProvider(this).get(FJViewModel.class);
    ...
    fjViewModel.getAllFilters().observe(this, new Observer<List<Filterj>>() {
        @Override
        public void onChanged(List<Filterj> filterjs) {
            if (filterjs.size() == 0) {
                filterj = new Filterj();
                filterj.setId(0); //id=0 is for tab0
                filterjs.add(filterj);
                fjViewModel.insert(filterjs.get(0));

                filterj = new Filterj();
                filterj.setId(1); //id=1 is for tab1
                filterjs.add(filterj);
                fjViewModel.insert(filterjs.get(1));
            }
        }
    });
    ...
}
```

Inside my fragment (tab0), this is how I try to get the initial filters from the `FJViewModel`, but I get a null value for `filterj` inside the observer, although I checked that the filters were changed inside the `onChanged` method in the activity above:
```
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
    fjViewModel = new ViewModelProvider(requireActivity()).get(FJViewModel.class);
    fjViewModel.getFiltersById(0).observe(getViewLifecycleOwner(), new Observer<Filterj>() {
        @Override
        public void onChanged(Filterj filterj) {
            //Here the filterj is null when I check it
            filterj.setWithSoftware(false);
            fjViewModel.update(filterj);
        }
    });
```

And this is my `FJViewModel` (all columns except the `id` in the `Filterj` entity are nullable in the Room database):

```
public class FJViewModel extends AndroidViewModel {
    private FJRepository repo;
    private LiveData<List<Filterj>> allFilters;

    public FJViewModel(@NonNull Application application) {
        super(application);
        repo = new FJRepository(application);
        allFilters = repo.getAllFilters();
    }

    public void insert(Filterj filter) {
        repo.insert(filter);
    }

    public void update(Filterj filter) {
        repo.update(filter);
    }

    public void delete(Filterj filter) {
        repo.delete(filter);
    }

    public LiveData<List<Filterj>> getAllFilters() {
        return allFilters;
    }

    public LiveData<Filterj> getFiltersById(int id) {
        return repo.getFiltersById(id);
    }
}
```

Now I am a bit confused, as I am new to `LiveData`: why do I get null for the initial value of `filterj`? Also, when I change the `filterj` with e.g. a `Spinner` inside the fragment, this problem goes away and I can get the filters correctly inside the `Spinner`'s `onItemSelected`. Where am I going wrong?
ELF binary has inconsistency detected by ld.so: dl-call-libc-early-init.c: 37: Assertion `sym != NULL' failed
|binary|ld|elf|pwntools|
|sql|
To resolve issues related to Python environments being externally managed because of a Homebrew-installed Python version, execute the command below:

```
rm /opt/homebrew/Cellar/python\@3*/**/EXTERNALLY-MANAGED
```
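Note that this deletes the PEP 668 marker that stops `pip` from modifying the Homebrew-managed interpreter, and Homebrew may recreate it on the next Python upgrade. A less invasive alternative is a virtual environment (the `~/.venvs/demo` path is just an example):

```shell
# Create an isolated environment; pip inside it is not affected by the
# EXTERNALLY-MANAGED marker on the system interpreter.
python3 -m venv ~/.venvs/demo
~/.venvs/demo/bin/pip --version   # this pip is scoped to the venv
# Or activate it for the current shell:
# source ~/.venvs/demo/bin/activate
```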
Failed to upload a table from an SQL file
I initialize my ViewModel in the activity with several fragments as tabs, but the fragments (tabs) return null for the updated LiveData
|java|android|null|android-livedata|
I need to add socket support in the backend for real-time chat. Initially, I implemented an automatic reply feature in the API for a mobile app. This feature automatically replies after a user sends a message to another person. Although the queue is working, it's not real-time. Now, I want to add a socket in the backend so the user can receive the automatic message in real time.

```php
use ElephantIO\Client;
use ElephantIO\Engine\SocketIO\Version2X;

class UserChatbotAuto implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $request;
    protected $user;
    protected $curentUser;
    protected $token;

    public function __construct(array $request, $token, $curentUser, $user)
    {
        //
        $this->request = $request;
        $this->user = $user;
        $this->curentUser = $curentUser;
        $this->token = $token;
    }

    public function handle()
    {
        $user = $this->user;
        $curentUser = $this->curentUser;
        $request = $this->request;
        $token = $this->token;

        try {
            Log::info('start queue');

            $thread = $user->getThreadWithUser($curentUser);

            $option = [
                // 'handshake' => [
                'auth' => [
                    'token' => 'Bearer ' . $token,
                    'threadWith' => $thread->id
                ]
                // ]
            ];

            $yourApiKey = config('services.openai.secret');
            $client = OpenAI::client($yourApiKey);
            $result = $client->chat()->create([
                'model' => 'gpt-4',
                'messages' => [
                    [
                        "role" => "system",
                        "content" => "You are a mental health adviser, skilled in giving mental health related advice. Please answer in the language after the word question. No yapping"
                    ],
                    ['role' => 'user', 'content' => "give mental health advice for given question. my name is: " . $curentUser->name . ", only give the advice text don't write anything else. question: " . $request['message']],
                ],
            ]);

            $content_response = $result->choices[0]->message->content;
            Log::info('content_response: ' . $content_response);

            $message = Message::create([
                'thread_id' => $thread->id,
                'user_id' => $user->id,
                'body' => $content_response,
            ]);

            $client = new Client(new Version2X('http://19.......11:8001', $option));
            $client->initialize();
            $client->emit('sendMessageToUser', ['userId' => $user->id, 'message' => $content_response]);
            $client->close();
        } catch (\Exception $e) {
            Log::error($e);
        }

        Log::info('done queue');
    }
}
```

This is the socket configuration file, named server.js. On mobile it still works when a real user chats with another real user; both users see the chat in real time:

```javascript
require("dotenv").config();
const express = require("express");
const fetch = require("node-fetch");
const app = express();
const server = require("http").createServer(app);
const mysql = require("mysql2");
const baseUrl = "http://1........1:8001";

const io = require("socket.io")(server, {
  cors: {
    origin: "*",
  },
});

// Create a connection pool
const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USERNAME,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_DATABASE,
  waitForConnections: true,
  connectionLimit: 10,
  queueLimit: 0,
});

const userConnectionDb = [];

const findConnectionIndexById = (socketId) => {
  const index = userConnectionDb.findIndex(
    (user) => user.socketId === socketId
  );
  if (index === -1) {
    return null;
  }
  return index;
};

const findByUserId = (userId) => {
  const index = userConnectionDb.findIndex((user) => user.userId === userId);
  if (index === -1) {
    return null;
  }
  return index;
};

const getSocketIdByUserId = (userId) => {
  const index = userConnectionDb.findIndex((user) => user.userId === userId);
  if (index !== -1) {
    return userConnectionDb[index].socketId;
  } else {
    return null;
  }
};

const validateUser = async (authToken) => {
  try {
    let user = null;
    //console.lo;
    const endPoint = baseUrl + "/api/profile/socket-profile";
    const options = {
      method: "GET",
      headers: {
        "Content-Type": "application/json",
        Accept: "application/json",
        Authorization: authToken,
      },
    };
    const response = await fetch(endPoint, options);
    if (!response.ok) {
      console.log({ status: response.status });
      throw new Error("Network response was not OK");
    }
    const responseData = await response.json();
    const userData = {
      userId: responseData.id,
    };
    user = userData;
    return { user, error: null };
  } catch (error) {
    console.log(error);
    return { user: null, error: error };
  }
};

const sendMessageToServer = async (senderToken, receiver, message) => {
  try {
    let user = null;
    const endPoint = baseUrl + "/api/message/send-socket-message";
    const options = {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Accept: "application/json",
        Authorization: senderToken,
      },
      body: JSON.stringify({ user_id: receiver, message }),
    };
    const response = await fetch(endPoint, options);
    if (!response.ok) {
      console.log({ status: response.status });
      throw new Error("Network response was not OK");
    }
    const responseData = await response.json();
    // console.log("message sent", responseData);
    return { data: responseData, error: null };
  } catch (error) {
    return { data: null, error };
  }
};

// Middleware to handle authentication
io.use(async (socket, next) => {
  try {
    const token = socket.handshake.auth.token;
    const threadWith = socket.handshake.auth.threadWith;
    // Perform authentication logic (e.g., verify the token)
    const { user, error } = await validateUser(token);
    if (error) throw new Error(error);
    if (!user) {
      // Authentication failed, reject the connection
      return next(new Error({ message: "Authentication failed", code: 401 }));
    }
    const userIndex = findByUserId(user.userId);
    if (userIndex !== null) {
      userConnectionDb.splice(userIndex, 1);
    }
    userConnectionDb.push({
      userId: user.userId,
      socketId: socket.id,
      threadWith: threadWith || null,
      token,
    });
    return next();
  } catch (error) {
    console.log(error);
    return next(new Error({ message: "Server error", code: 500 }));
  }
});

io.on("connection", (socket) => {
  console.log("New client connected Total users:", userConnectionDb.length);

  socket.on("message", (message) => {
    console.log("Received message:", message);
  });

  socket.on("disconnect", () => {
    const userIndex = findConnectionIndexById(socket.id);
    if (userIndex !== null) {
      userConnectionDb.splice(userIndex, 1);
    }
    console.log("Client disconnected Total users:", userConnectionDb.length);
  });

  // Handling the client's custom event
  socket.on("sendMessageToUser", async (data, callback) => {
    const { userId, message } = data;
    let callbackData = {
      success: true,
      online: false,
      data: null,
    };
    const socketId = getSocketIdByUserId(userId);
    if (socketId) {
      const otherUserIndex = findByUserId(userId);
      const currentUserIndex = findConnectionIndexById(socket.id);
      if (otherUserIndex === null || currentUserIndex === null) {
        callbackData.success = false;
        callback(callbackData);
        return;
      }
      const currentUserId = userConnectionDb[currentUserIndex].userId;
      const senderToken = userConnectionDb[currentUserIndex].token;
      const threadWithId = userConnectionDb[otherUserIndex].threadWith;
      // console.log({ threadWithId, currentUserId });
      if (threadWithId === currentUserId) {
        const { data, error } = await sendMessageToServer(
          senderToken,
          userId,
          message
        );
        if (error) {
          console.log(error);
          return;
        }
        io.to(socketId).emit("customMessage", {
          sendBy: currentUserId,
          data: data,
        });
        callbackData.online = true;
        callbackData.success = true;
        callbackData.data = data;
      } else {
        // console.log("Use is not on same thread");
      }
    } else {
      // console.log(" no user online with this id ");
    }
    // Send acknowledgment back to the client
    callback(callbackData);
  });
});

const port = 8001;
server.listen(port, () => {
  console.log(`Server is running on port ${port}`);
});
```

I tried calling the API to trigger the function and execute the queue above, but it returned an error:

> ElephantIO\Exception\ServerConnectionFailureException: An error occurred while trying to establish a connection to the server in C:\xampp\htdocs\Project\tongle_latest\vendor\wisembly\elephant.io\src\Engine\SocketIO\Version1X.php:187

What can I do? Does anyone have any solution?
Resolving ElephantIO ServerConnectionFailureException: Error establishing connection to server
|php|laravel|websocket|
It sounds like you are on the right track using [`pl.DataFrame.join_asof`](https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.join_asof.html). To group by the symbol, the `by` parameter can be used.

```python
(
    fr
    .join_asof(
        events,
        left_on="Date",
        right_on="Earnings_Date",
        by="Symbol",
    )
)
```
```
shape: (5, 5)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”
β”‚ index ┆ Symbol ┆ Date       ┆ Earnings_Date ┆ Event β”‚
β”‚ ---   ┆ ---    ┆ ---        ┆ ---           ┆ ---   β”‚
β”‚ u32   ┆ str    ┆ date       ┆ date          ┆ i64   β”‚
β•žβ•β•β•β•β•β•β•β•ͺ════════β•ͺ════════════β•ͺ═══════════════β•ͺ═══════║
β”‚ 0     ┆ A      ┆ 2010-08-29 ┆ 2010-06-01    ┆ 1     β”‚
β”‚ 1     ┆ A      ┆ 2010-09-01 ┆ 2010-09-01    ┆ 4     β”‚
β”‚ 2     ┆ A      ┆ 2010-09-05 ┆ 2010-09-01    ┆ 4     β”‚
β”‚ 3     ┆ A      ┆ 2010-11-30 ┆ 2010-09-01    ┆ 4     β”‚
β”‚ 4     ┆ A      ┆ 2010-12-02 ┆ 2010-12-01    ┆ 7     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
```

Now, I understand that you'd like each event to be matched *at most once*. I don't believe this is possible with `join_asof` alone. However, we can set all event rows that are equal to the previous row to `null`. For this, a [`pl.when().then()`](https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.when.html) construct can be used.

```python
(
    fr
    .join_asof(
        events,
        left_on="Date",
        right_on="Earnings_Date",
        by="Symbol",
    )
    .with_columns(
        pl.when(
            pl.col("Earnings_Date", "Event").is_first_distinct()
        ).then(
            pl.col("Earnings_Date", "Event")
        ).over("Symbol")
    )
)
```
```
shape: (5, 5)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”
β”‚ index ┆ Symbol ┆ Date       ┆ Earnings_Date ┆ Event β”‚
β”‚ ---   ┆ ---    ┆ ---        ┆ ---           ┆ ---   β”‚
β”‚ u32   ┆ str    ┆ date       ┆ date          ┆ i64   β”‚
β•žβ•β•β•β•β•β•β•β•ͺ════════β•ͺ════════════β•ͺ═══════════════β•ͺ═══════║
β”‚ 0     ┆ A      ┆ 2010-08-29 ┆ 2010-06-01    ┆ 1     β”‚
β”‚ 1     ┆ A      ┆ 2010-09-01 ┆ 2010-09-01    ┆ 4     β”‚
β”‚ 2     ┆ A      ┆ 2010-09-05 ┆ null          ┆ null  β”‚
β”‚ 3     ┆ A      ┆ 2010-11-30 ┆ null          ┆ null  β”‚
β”‚ 4     ┆ A      ┆ 2010-12-02 ┆ 2010-12-01    ┆ 7     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
```
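If it helps to see the "match each event at most once" semantics outside of Polars, here is the same idea in plain Python: a simplified, hypothetical sketch over ISO date strings, not the actual `join_asof` implementation:

```python
from bisect import bisect_right

# Backward-looking as-of match: for each row date, pick the latest event date
# <= that date, then blank out repeats so every event is matched at most once.
dates = ["2010-08-29", "2010-09-01", "2010-09-05", "2010-11-30", "2010-12-02"]
events = {"2010-06-01": 1, "2010-09-01": 4, "2010-12-01": 7}
keys = sorted(events)

matched, used = [], set()
for d in dates:
    i = bisect_right(keys, d) - 1          # index of latest event date <= d
    k = keys[i] if i >= 0 else None
    if k is not None and k not in used:
        used.add(k)
        matched.append((d, k, events[k]))
    else:
        matched.append((d, None, None))    # repeat or no match -> nulls

for row in matched:
    print(row)
```

The `None` rows line up with the `null` rows in the second Polars output above.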
Troubleshoot .readStream function not working in kafka-spark streaming (pyspark in colab notebook)
I am doing a detailed write-up of purrr's `accumulate` function and, for the most part, I understand its mechanics when `.dir` is set to "forward"; however, when I set `.dir` to "backward" I get non-intuitive results.

Below is my understanding of `accumulate` when `.dir` is set to "forward": the `.x` arg is passed through to your body's function via the `...` (or could be a second named argument), and the function body's output is passed back to your function as an iterative input via a named argument (say `x`).

Below is an example where I use only one input of purrr's function (the iterative piece, ignoring `.x` as an input):

```{r}
library(tidyverse)

accumulate(
  .x = 20:25 # this gets passed through to the body's function via `...`
  ,.f = function(x, ...) {
    print(paste0("... is: ", ...)) # this will print the .x args sequentially
    print(paste0("x is: ", x)) # this will print the .init command then the function body's recursive input
    sum(x, 1) # this function will start with the previous output (or .init) and iteratively repeat (ignoring .x)
  }
  ,.init = 0 # the initial input to the iterative function
  ,.dir = "forward"
)
```

Running this validates the understanding above: the function starts with 0 (from the `.init` arg of 0), then sequentially increases by 1 and ignores the `.x` input (only using it in the print command).

The confusion arises when we keep the same function and change `.dir` to "backward":

```{r}
# comments are relative to what happens when .dir is set to forward
accumulate(
  .x = 20:25 # this gets passed through to the function via `...`
  ,.f = function(x, ...) {
    print(paste0("... is: ", ...)) # this will print the .x args sequentially
    print(paste0("x is: ", x)) # this will print the .init command then the recursive input
    sum(x, 1) # this function will start with the previous output (or .init) and iteratively repeat (ignoring .x)
  }
  ,.init = 0 # the initial input to the iterative function
  ,.dir = "backward"
)
```

From the above, I would expect the `.x` args to be ignored and only the function's previous output to flow through the function's body. Instead, the `.x` args pass through to the function body via the named `x` variable, and the `...` args are ignored (the opposite of above).

Thoughts on why the arguments switch? Is it important to understand, or, from a practitioner's view, can I just note that the input logic switches when `.dir` is set to backward?
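As a cross-check of the forward mechanics (a loose analogy, not purrr's implementation): Python's `itertools.accumulate` has the same "accumulator first, element second" shape, and ignoring the element reproduces the 0, 1, 2, ... sequence described above. It also shows that "backward" can be thought of as "forward over the reversed input":

```python
from itertools import accumulate

# acc is the previous output (or the initial value); x is the next element,
# which this function ignores -- mirroring sum(x, 1) ignoring `...` above.
fwd = list(accumulate(range(20, 26), lambda acc, x: acc + 1, initial=0))
print(fwd)  # [0, 1, 2, 3, 4, 5, 6]

# Emulate "backward" by accumulating over the reversed input; note that the
# accumulator still arrives as the first argument in this Python analogy,
# whereas purrr appears to swap the argument positions for .dir = "backward".
bwd = list(accumulate(reversed(range(20, 26)), lambda acc, x: acc + 1, initial=0))
print(bwd)  # [0, 1, 2, 3, 4, 5, 6]
```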
As you mentioned in your comment, checking whether the user has permission is the easiest route to go. Admins have the `manage_options` capability, but you can make it available for editors using `edit_others_posts`.

```php
public static function check_for_saas_push() {
    if ( ! isset( $_REQUEST['json_product_push'] ) || ( isset( $_REQUEST['json_product_push'] ) && 'true' !== $_REQUEST['json_product_push'] ) )
        return;

    error_reporting( E_ERROR );

    if ( current_user_can( 'manage_options' ) ) { // or use 'edit_others_posts'
        if ( ! empty( $_POST['product'] ) ) {
            // THE REST OF YOUR CODE HERE
        }
    }
    exit;
}
```

Would the below code be correct?

```php
public static function check_for_saas_push() {
    if ( ! isset( $_REQUEST['json_product_push'] ) || ( isset( $_REQUEST['json_product_push'] ) && 'true' !== $_REQUEST['json_product_push'] ) )
        return;

    error_reporting( E_ERROR );

    if ( current_user_can( 'manage_options' ) ) { // or use 'edit_others_posts'
        if ( ! empty( $_POST['product'] ) ) {
            // THE REST OF YOUR CODE HERE
            $product = stripslashes( $_POST['product'] );
            $product = json_decode( $product );

            $download_url = Sputnik::API_BASE . '/download/' . $product->post_name . '.zip';
            $thumb_url = $product->thumbnail_url;

            // Check if local product exists - if so, update it, if not, don't.
            $local = get_posts( array(
                'pagename' => $product->post_name,
                'post_type' => 'wpsc-product',
                'post_status' => 'publish',
                'numberposts' => 1
            ) );

            $product = (array) $product;
            unset( $product['guid'] );
            unset( $product['post_date_gmt'] );
            unset( $product['post_date'] );

            require_once(ABSPATH . 'wp-admin/includes/media.php');
            require_once(ABSPATH . 'wp-admin/includes/file.php');
            require_once(ABSPATH . 'wp-admin/includes/image.php');

            if ( ! empty( $local ) ) {
                $product['ID'] = $local[0]->ID;
                $new_id = wp_update_post( $product );
            } else {
                unset( $product['ID'] );
                // Doesn't exist, create it. Then, after created, add download URL and thumbnail.
                $new_id = wp_insert_post( $product );
            }

            update_post_meta( $new_id, '_download_url', $download_url );

            foreach ( $product['meta'] as $key => $val ) {
                if ( '_wpsc_product_metadata' == $key )
                    continue;
                if ( '_wpsc_currency' == $key )
                    continue;
                update_post_meta( $new_id, $key, $val[0] );
            }

            $thumb = media_sideload_image( $thumb_url, $new_id, 'Product Thumbnail' );
            if ( ! is_wp_error( $thumb ) ) {
                $thumbnail_id = get_posts( array( 'post_type' => 'attachment', 'post_parent' => $new_id ) );
                if ( ! empty( $thumbnail_id ) ) {
                    $thumbnail = set_post_thumbnail( $new_id, $thumbnail_id[0]->ID );
                    echo json_encode( array( 'set_thumbnail' => $thumbnail, 'post_id' => $new_id ) );
                    die;
                }
                die;
            }
            die;
        }
    }
    exit;
}
```
Appends to dataframes and numpy arrays are very expensive because each append must copy the entire data to a new memory location. Instead, you can try reading the file in chunks, processing the data, and appending back out. Here I've picked a chunk size of 100,000 but you can obviously change this. I don't know the column names of your CSV so I guessed at `'date_file'`. This should get you close:

    import pandas as pd

    date_first = '2008-11-01'
    date_last = '2008-11-10'

    df = pd.read_csv("data.csv", chunksize=100000)
    for chunk in df:
        chunk = chunk[(chunk['date_file'].str[:10] >= date_first) &
                      (chunk['date_file'].str[:10] <= date_last)]
        chunk.to_csv('output.csv', mode='a')

----------

**Update 2024**: Things have changed a lot since I answered this. The current approach would be to use `polars` which can load data lazily. You would want to use [`scan_csv`](https://docs.pola.rs/py-polars/html/reference/api/polars.scan_csv.html) to lazily load data that meets your criteria and then use [`sink_csv`](https://docs.pola.rs/py-polars/html/reference/api/polars.LazyFrame.sink_csv.html) for the output. So, something like:

    import polars as pl

    df = (
        pl.scan_csv("data.csv")
        .filter(
            pl.col("date_col").str.slice(0, length=10) >= date_first,
            pl.col("date_col").str.slice(0, length=10) <= date_last
        )
        .sink_csv("output.csv")
    )

That will automatically batch the data loading for you and stream it back out to a new file. Note, the [`parquet`](https://en.wikipedia.org/wiki/Apache_Parquet) file format is more compact and efficient for handling data these days, so it might be worth streaming it back out into that format - though, it's not human-readable.
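The same streaming idea can be shown with nothing but the standard library's `csv` module (assuming, as above, a column literally named `date_file` holding ISO-like timestamps):

```python
import csv
import io

def filter_rows(src, date_first, date_last, date_col="date_file"):
    # Stream rows one at a time; memory use stays constant regardless of file size.
    for row in csv.DictReader(src):
        if date_first <= row[date_col][:10] <= date_last:
            yield row

# Tiny in-memory stand-in for the real file handle.
data = "date_file,value\n2008-10-31 09:00,1\n2008-11-05 10:30,2\n2008-11-12 08:15,3\n"
kept = list(filter_rows(io.StringIO(data), "2008-11-01", "2008-11-10"))
print(kept)  # [{'date_file': '2008-11-05 10:30', 'value': '2'}]
```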
How can I get the converted value from Celsius to Fahrenheit to show in a cell in the workbook of my choice? Here is the code for the problem:

```vba
Option Explicit

Private Sub UserForm_Initialize()
    ComboBox1.AddItem "Inches to Centimeters"
    ComboBox1.AddItem "Feet to Meters"
    ComboBox1.AddItem "Celsius to Fahrenheit"
    ' Add more conversion options as needed
End Sub

Private Sub CommandButton1_Click()
    Dim inputValue As Double
    Dim conversionFactor As Double
    Dim convertedValue As Double
    Dim targetCell As Range

    inputValue = Val(TextBox1.Value)

    If IsNumeric(inputValue) Then
        Select Case ComboBox1.Value
            Case "Inches to Centimeters"
                conversionFactor = 2.54
            Case "Feet to Meters"
                conversionFactor = 0.3048
            Case "Celsius to Fahrenheit"
                convertedValue = (inputValue * 9 / 5) + 32
                MsgBox inputValue & " degrees Celsius converted to Fahrenheit is " & convertedValue
            ' Add more cases for other conversion options
        End Select

        convertedValue = inputValue * conversionFactor

        ' Prompt the user to select a cell to put the converted value
        ' Code Here
    Else
        MsgBox "Please enter a valid numeric value."
    End If
End Sub

Private Sub CommandButton2_Click()
    Dim inputValue As Double
    Dim conversionFactor As Double
    Dim convertedValue As Double
    Dim targetCell As Range

    inputValue = Val(TextBox1.Value)

    If IsNumeric(inputValue) Then
        Select Case ComboBox1.Value
            Case "Inches to Centimeters"
                conversionFactor = 1 / 2.54
            Case "Feet to Meters"
                conversionFactor = 1 / 0.3048
            Case "Celsius to Fahrenheit"
                convertedValue = (inputValue - 32) * 5 / 9
                MsgBox inputValue & " degrees Fahrenheit converted to Celsius is " & convertedValue
            ' Add more cases for other conversion options
        End Select

        convertedValue = inputValue * conversionFactor

        ' Prompt the user to select a cell to put the converted value
        ' Code Here
    Else
        MsgBox "Please enter a valid numeric value."
    End If
End Sub
```

I want to add a piece of code that uses an input box where I can select a cell in the workbook to display the converted value. I have placed comment blocks in my code marking where the code should go.
I am trying to emulate this script which summarizes statuses of pods that are not running by namespace and reason.

```sh
kubectl get po -A --no-headers | awk '
BEGIN {
    SUBSEP=" "
    format = "%-20s %20s %5s\n"
    printf format, "NAMESPACE", "STATUS", "COUNT"
}
!/Running/ {a[$1,$4]++}
END {
    for (i in a) {split(i,t); printf format, t[1],t[2],a[i]}
}
' | sort
```

This script produces output similar to this:

```sh
$ notrunning
NAMESPACE                          STATUS COUNT
namespace-01             InvalidImageName     2
namespace-02             InvalidImageName     1
namespace-02        Init:ImagePullBackOff     1
namespace-03             CrashLoopBackOff     2
namespace-03             InvalidImageName     9
namespace-04            Init:ErrImagePull     1
```

I can't find where kubectl is getting the status or reason. I'm trying code similar to this (leaving out some error checking for brevity). I am not getting the results I expect.

```go
type PodSummary struct {
    NotRunning int
    Summary    map[PodKey]int // Map of namespace+state to count
}

type PodKey struct {
    Namespace string
    Status    string
}

func getPodSummary(kubeconfig, cluster string) PodSummary {
    clientset, _ := getClientsetForContext(kubeconfig, cluster)
    pods, _ := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})

    summary := PodSummary{Summary: make(map[PodKey]int)}
    for _, pod := range pods.Items {
        if pod.Status.Phase != "Running" && pod.Status.Phase != "Succeeded" { // need to check "Completed" also?
            podNS := pod.Namespace
            summary.NotRunning++
            var pk PodKey
            for _, containerStatus := range pod.Status.ContainerStatuses {
                if containerStatus.State.Waiting != nil {
                    pk = PodKey{podNS, string(containerStatus.State.Waiting.Reason)}
                    break
                } else {
                    // cannot find it, use this instead.
                    pk = PodKey{podNS, string(pod.Status.Phase)}
                }
            }
            summary.Summary[pk]++
        }
    }
    return summary
}
```

I was expecting to get detailed reasons why the pods are failing. Instead I got results like "Pending" which isn't helpful or what I wanted.
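For reference, the aggregation the awk script performs is just a count over (namespace, reason) pairs. In Python terms (a toy model of what `kubectl get po -A` reports, not the client-go API):

```python
from collections import Counter

# Toy pod records standing in for the kubectl output columns $1 and $4.
pods = [
    {"namespace": "namespace-01", "status": "InvalidImageName"},
    {"namespace": "namespace-01", "status": "InvalidImageName"},
    {"namespace": "namespace-02", "status": "Init:ImagePullBackOff"},
    {"namespace": "namespace-03", "status": "Running"},
]

# The awk `a[$1,$4]++` for every non-Running pod, keyed by namespace and status.
summary = Counter(
    (p["namespace"], p["status"]) for p in pods if p["status"] != "Running"
)
for (ns, status), count in sorted(summary.items()):
    print(f"{ns:<20} {status:>25} {count:>5}")
```

The Go code above is trying to compute the same map; the difference is which field stands in for awk's `$4`.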
The first thing I noticed was the missing concatenation dots (`.`) next to the PHP variables; I didn't check the rest of the code.
I tried installing NIST NBIS and followed the procedure inside the REL folder, but I kept getting an error. I tried the step-by-step commands: [Steps](https://i.stack.imgur.com/Sua9z.png)

But I always get this error on my Raspberry Pi in step 2 (`cmake config`):

```
gcc: unrecognized command line option '-m32'
```

Do you guys know what the problem is? Thank you.
How to setup nist nbis in raspbian raspberry pi 4
|cmake|raspberry-pi4|fingerprint|pi|
To better learn React, TypeScript, and Context / Hooks, I'm making a simple Todo app. However, the code needed to make the context feels cumbersome. For example, if I want to change what a Todo has, I have to change it in three places (the `ITodo` interface, the default context value, and the default state value). If I want to pass down something new, I have to do that in three places (`TodoContext`, `TodoContext`'s default value, and `value=`). Is there a better way to not have to write so much code?

```
import React from 'react'

export interface ITodo {
  title: string,
  body?: string,
  id: number,
  completed: boolean
}

interface TodoContext {
  todos: ITodo[],
  setTodos: React.Dispatch<React.SetStateAction<ITodo[]>>
}

export const TodoContext = React.createContext<TodoContext>({
  todos: [{title: 'loading', body: 'loading', id: 0, completed: false}],
  setTodos: () => {}
})

export const TodoContextProvider: React.FC<{}> = (props) => {
  const [todos, setTodos] = React.useState<ITodo[]>([{title: 'loading', body: 'loading', id: 0, completed: false}])

  return (
    <TodoContext.Provider value={{todos, setTodos}}>
      {props.children}
    </TodoContext.Provider>
  )
}
```
Hi all, I have a project that will host a Grafana app (the whole app, not only dashboards/snapshots) inside my website in an iframe. I'm trying to set up SSO so my users wouldn't have to log in to my site and then into their Grafana account inside my website.

Grafana version and OS:
- Grafana 10.4.1 Enterprise on Windows 10

I followed these instructions:
- https://community.grafana.com/t/automatic-login-to-grafana-from-web-application/16801
- https://community.grafana.com/t/grafana-auto-login-from-angular-button-click/71813

I expected Grafana to auto-login the user and open their home route/dashboards. This is my custom.ini file:

```
[server]
protocol = http
http_addr = 127.0.0.1
http_port = 8080
domain = 127.0.0.1
enforce_domain = false

[security]
allow_embedding = true

[auth]
login_cookie_name = grafana_session
disable_login = false
login_maximum_inactive_lifetime_duration =
login_maximum_lifetime_duration =
token_rotation_interval_minutes = 10
disable_login_form = false
api_key_max_seconds_to_live = -1

[auth.anonymous]
enabled = true
org_name = Main Org.
org_role = Viewer

[auth.basic]
enabled = false

[auth.proxy]
enabled = true
header_name = X-WEBAUTH-USER
header_property = username
auto_sign_up = true
sync_ttl = 60
whitelist =
headers =
enable_login_token = false
```
In my case it was the shared-process that was using all of the available CPU. I removed all extensions, turned off sync and automatic upgrades, opened a single file in a clean directory. Nothing helped. VS Code was completely unusable. It just started doing this recently. I am at the latest SW version of VS Code (1.87.2). I used taskkill to kill the shared-process. The CPU recovered to normal and I haven't seen any problems in VS Code. Mind you, I only use VS Code as a file editor so maybe haven't noticed the impact of killing the shared-process.
It's the "Compilation specific hash". ### Origin of the hash The [output-template in angular-cli](https://github.com/angular/angular-cli/blob/2fc8076a4b72d77df3900a4e419e64bd8e5da9bc/packages/angular_devkit/build_angular/src/tools/webpack/utils/stats.ts#L421): ```javascript `Build at: ${w(new Date().toISOString())} - Hash: ${w(json.hash || '')} - Time: ${w('' + time)}ms`, ``` uses the JSON field `hash` from webpack's [Stats Data, Structure](https://webpack.js.org/api/stats/#structure): ```json { "version": "5.9.0", // Version of webpack used for the compilation "hash": "11593e3b3ac85436984a", // Compilation specific hash "time": 2469, // Compilation time in milliseconds ``` ### Purpose and usage of the hash You can expect it to be identical for multiple builds if the build artifact is identical and different if anything in the build artifact changed. You can use it for everything you've to identify a specific artifact, as long as you just care about the result and not from which actual build it originated. For example if you save your artifact to an artifact store, you can use the hash in the file name. That way you can easily find the matching artifact from your build logs, if you have to.
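As a concrete (made-up) example of the "hash in the file name" idea, one could read the hash out of webpack's stats output and use it to name the stored archive:

```python
import json

# A trimmed, hypothetical stats payload like the one shown above.
stats = json.loads('{"version": "5.9.0", "hash": "11593e3b3ac85436984a", "time": 2469}')

# Name the artifact after the compilation hash so the build-log line
# ("Hash: 11593e3b...") maps directly to a file in the artifact store.
artifact_name = f"app-{stats['hash']}.zip"
print(artifact_name)  # app-11593e3b3ac85436984a.zip
```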
Excel VBA: how to write a converted value to a user-selected cell
|excel|vba|
I would like to know how to load two models in memory and run inference with model1 or model2 as I want.

```cpp
int __stdcall BinaCpp_loadNN(const char* filename)
{
    // Loading the model each time causes overload.
    // The other option is to create a unique pointer which will be loaded once when first called.
    // Declare unique pointer:
    std::unique_ptr<cppflow::model> model;
    model = std::make_unique<cppflow::model>(filename);
    return model; // issue here
}
```
To check a field for a particular value, there is no need to attach a persistent listener; you can simply perform a [Query#get()][1] call. So assuming that you have two EditText objects:

    EditText previousPasswordEditText = findViewById(R.id.previous_password_edit_text);
    EditText newPasswordEditText = findViewById(R.id.new_password_edit_text);

To check the value of the `password` field that exists in the database against the value that is introduced by the user inside the `previousPasswordEditText`, and only then perform the update with the new password that was typed inside the `newPasswordEditText`, please use the following lines of code:

    DatabaseReference db = FirebaseDatabase.getInstance().getReference();
    DatabaseReference userRef = db.child("user").child("user22");
    userRef.get().addOnCompleteListener(new OnCompleteListener<DataSnapshot>() {
        @Override
        public void onComplete(@NonNull Task<DataSnapshot> task) {
            if (task.isSuccessful()) {
                DataSnapshot userSnapshot = task.getResult();
                String oldPassword = userSnapshot.child("password").getValue(String.class);
                String previousPassword = previousPasswordEditText.getText().toString();
                String newPassword = newPasswordEditText.getText().toString();
                if (oldPassword.equals(previousPassword)) {
                    Map<String, Object> updatePassword = new HashMap<>();
                    updatePassword.put("password", newPassword);
                    userSnapshot.getRef().updateChildren(updatePassword).addOnCompleteListener(new OnCompleteListener<Void>() {
                        @Override
                        public void onComplete(@NonNull Task<Void> updateTask) {
                            if (updateTask.isSuccessful()) {
                                Log.d("TAG", "Update successful!");
                            } else {
                                Log.d("TAG", "Failed with: " + updateTask.getException().getMessage());
                            }
                        }
                    });
                } else {
                    Log.d("TAG", "The oldPassword and the previousPassword don't match!");
                }
            }
        }
    });

  [1]: https://firebase.google.com/docs/reference/android/com/google/firebase/firestore/Query#get()
Hey, the other day I was setting up an Arch mirror with rsync:

```
[Unit]
Description=Sync repo
Requires=network-online.target

[Service]
ExecStart=/bin/bash /root/sync-repo.sh
User=root
Group=root
Type=oneshot
```

And when I tried to run it with systemd I got an error:

```
$ systemctl start sync-repo
Job for sync-repo.service failed because the control process exited with error code.
See "systemctl status sync-repo.service" and "journalctl -xeu sync-repo.service" for details.

$ journalctl -fu sync-repo
Mar 30 07:50:23 arch-mirror bash[32312]: rsync error: error in socket IO (code 10) at clientserver.c(138) [Receiver=3.2.3]
Mar 30 07:50:23 arch-mirror systemd[1]: sync-repo.service: Main process exited, code=exited, status=10/n/a
Mar 30 07:50:23 arch-mirror systemd[1]: sync-repo.service: Failed with result 'exit-code'.
Mar 30 07:50:23 arch-mirror systemd[1]: Failed to start Rsync backup.
Mar 30 07:56:16 arch-mirror systemd[1]: Starting Sync repo...
Mar 30 07:56:16 arch-mirror bash[34025]: rsync: [Receiver] failed to connect to mirror.23m.com (212.83.32.30): Permission denied (13)
Mar 30 07:56:16 arch-mirror bash[34025]: rsync error: error in socket IO (code 10) at clientserver.c(138) [Receiver=3.2.3]
Mar 30 07:56:16 arch-mirror systemd[1]: sync-repo.service: Main process exited, code=exited, status=10/n/a
Mar 30 07:56:16 arch-mirror systemd[1]: sync-repo.service: Failed with result 'exit-code'.
Mar 30 07:56:16 arch-mirror systemd[1]: Failed to start Sync repo.
```

However, when I run the bash script directly, the process completes successfully.

os-release:

```
NAME=AlmaLinux
Version=9.3
```
You could wrap the slow part in an `#await` that defers to the next event loop, which causes the contents to be rendered separately (though still in one chunk).

```svelte
{#await new Promise(r => setTimeout(r)) then}
  <!-- slow contents here -->
{/await}
```
[enter image description here](https://i.stack.imgur.com/MkbCI.png)

I'm a beginner in C. I tried to print "helloworld", but I got an error. I'm using Visual Studio Code on Mac, with the C/C++ and Code Runner extensions installed. I just followed a C lecture, but the error occurred. Does anyone know a solution to this error?
error in C "clang: error: linker command failed with exit code 1 (use -v to see invocation)"
|c|
My script adds fields to a point layer. This happens while iterating a polygon layer: for each feature I look for the closest point from a dataset, and this point gets written to a point layer (REL_LAWIS_profiles). Code:

```python
## D.4 Write LAWIS snowprofile layer
lawislayer = QgsVectorLayer('Point?crs = epsg:4326', 'lawisprofiles', 'memory')
lawislayer_path = str(outpath_REL + "lawis_layer.shp")
_writer = QgsVectorFileWriter.writeAsVectorFormat(lawislayer, lawislayer_path, 'utf-8', driverName='ESRI Shapefile')
REL_LAWIS_profiles = iface.addVectorLayer(lawislayer_path, "REL_LAWIS_profiles", "ogr")

## D.5 LAWIS snowprofile layer write attribute fields
lawisprovider = REL_LAWIS_profiles.dataProvider()
lawisprovider.addAttributes([QgsField("REL_ID", QVariant.String),  # QVariant.String
                             QgsField("ID", QVariant.Int),
                             QgsField("NAME", QVariant.String),
                             QgsField("DATE", QVariant.String),
                             QgsField("ALTIDUDE", QVariant.Int),
                             QgsField("ASPECT", QVariant.String),
                             QgsField("SLOPE", QVariant.Int),
                             QgsField("SD", QVariant.Int),
                             QgsField("ECTN1", QVariant.Int),
                             QgsField("ECTN2", QVariant.Int),
                             QgsField("ECTN3", QVariant.Int),
                             QgsField("ECTN4", QVariant.Int),
                             QgsField("COMMENTS", QVariant.String),
                             QgsField("PDF", QVariant.String)])
REL_LAWIS_profiles.updateFields()

## get layer for data collection
lawis_Pts = QgsProject.instance().mapLayersByName('REL_LAWIS_profiles')[0]

## look for closest point and get data for fields....
```

In a second step, features are added and values get assigned to the fields:

```python
## GET FIELD ID FROM lawis_pts
rel_id_idx = feat.fields().lookupField('REL_ID')  # feat because inside a for loop of another layer
id_idx = lawis_Pts.fields().lookupField('ID')
name_idx = lawis_Pts.fields().lookupField('NAME')
date_idx = lawis_Pts.fields().lookupField('DATE')
l_alti_idx = lawis_Pts.fields().lookupField('ALTIDUDE')
l_aspect_idx = lawis_Pts.fields().lookupField('ASPECT')
l_slo_idx = lawis_Pts.fields().lookupField('SLOPE')
l_sd_idx = lawis_Pts.fields().lookupField('SD')
l_ectn_idx1 = lawis_Pts.fields().lookupField('ECTN1')
l_ectn_idx2 = lawis_Pts.fields().lookupField('ECTN2')
l_ectn_idx3 = lawis_Pts.fields().lookupField('ECTN3')
l_ectn_idx4 = lawis_Pts.fields().lookupField('ECTN4')
com_idx = lawis_Pts.fields().lookupField('COMMENTS')
pdf_idx = lawis_Pts.fields().lookupField('PDF')

## ADD FEATURES TO lawis_Pts.
lawis_Pts.startEditing()
lawisfeat = QgsFeature()
lawisfeat.setGeometry(QgsGeometry.fromPointXY(QgsPointXY(lawisprofile_long, lawisprofile_lat)))
lawisprovider.addFeatures([lawisfeat])
lawis_Pts.commitChanges()

## CHANGE VALUES OF SELECTED FEATURE
lawis_Pts.startEditing()
for lfeat in selection:
    lawis_Pts.changeAttributeValue(lfeat.id(), rel_id_idx, REL_LAWIS_ID)
    lawis_Pts.changeAttributeValue(lfeat.id(), id_idx, LAWIS_id)
    lawis_Pts.changeAttributeValue(lfeat.id(), name_idx, LAWIS_NAME)
    lawis_Pts.changeAttributeValue(lfeat.id(), date_idx, LAWIS_DATE)
    lawis_Pts.changeAttributeValue(lfeat.id(), l_alti_idx, LAWIS_ALTIDUDE)
    lawis_Pts.changeAttributeValue(lfeat.id(), l_aspect_idx, LAWIS_ASPECT)
    lawis_Pts.changeAttributeValue(lfeat.id(), l_slo_idx, LAWIS_SLOPE)
    lawis_Pts.changeAttributeValue(lfeat.id(), l_sd_idx, LAWIS_SD)
    lawis_Pts.changeAttributeValue(lfeat.id(), l_ectn_idx1, LAWIS_ECTN1)
    lawis_Pts.changeAttributeValue(lfeat.id(), l_ectn_idx2, LAWIS_ECTN2)
    lawis_Pts.changeAttributeValue(lfeat.id(), l_ectn_idx3, LAWIS_ECTN3)
    lawis_Pts.changeAttributeValue(lfeat.id(), l_ectn_idx4, LAWIS_ECTN4)
    lawis_Pts.changeAttributeValue(lfeat.id(), com_idx, LAWIS_COMMENTS)
    lawis_Pts.changeAttributeValue(lfeat.id(), pdf_idx, LAWIS_PDFlink)
lawis_Pts.commitChanges()
```

In the attribute table the layer has 14 fields, but the values are not written to the fields. I checked the values, but there was no issue there. Then I checked whether all fields exist, and field no. 12 does not exist (at least for Python). With this:

```
fields = lawis_Pts.fields()
for field in fields:
    print(field.name())
```

I checked right after adding the fields whether they all get added. But there are only 13 fields (REL_ID, ID, NAME, DATE, ALTIDUDE, ASPECT, SLOPE, SD, ECTN1, ECTN2, ECTN3, COMMENTS, PDF). So I found out the problem is ECTN4. I also checked the index of ECTN4 with

```
print(l_ectn_idx4)
```

which gave me -1, which means it does not exist. But if I remove the layer which is added by the script and then add the layer manually and look for the field, it is there, also using code. I assume there has to be a problem with how I add the layer, but I just can't find the reason for this behavior. I'm thankful for any ideas!
I have come across some strange behaviour when using `UNWIND` in `CALL` subqueries. The following query returns no records:

```
WITH [] as a, [1] as b, [1,2] as c
CALL {
  WITH a
  UNWIND a as row
  RETURN row as A
}
CALL {
  WITH b
  UNWIND b as row
  RETURN row as B
}
CALL {
  WITH c
  UNWIND c as row
  RETURN row as C
}
RETURN A, B, C
```

Result:

```
(no changes, no records)
```

I assume it's because `UNWIND`-ing an empty list [reduces][1] the number of rows to zero. However, when I change the `RETURN` clause of the first subquery to `collect(row)`, the query suddenly returns two records (because of list `c` with 2 entries):

```
WITH [] as a, [1] as b, [1,2] as c
CALL {
  WITH a
  UNWIND a as row
  RETURN collect(row) as A
}
CALL {
  WITH b
  UNWIND b as row
  RETURN row as B
}
CALL {
  WITH c
  UNWIND c as row
  RETURN row as C
}
RETURN A, B, C
```

Result:

```
╒═══╀═══╀═══╕
β”‚A  β”‚B  β”‚C  β”‚
β•žβ•β•β•β•ͺ═══β•ͺ═══║
β”‚[] β”‚1  β”‚1  β”‚
β”œβ”€β”€β”€β”Όβ”€β”€β”€β”Όβ”€β”€β”€β”€
β”‚[] β”‚1  β”‚2  β”‚
β””β”€β”€β”€β”΄β”€β”€β”€β”΄β”€β”€β”€β”˜
```

Why does `collect()` in the subquery alter the result of the query?

[1]: https://neo4j.com/docs/cypher-manual/current/clauses/unwind/#unwind-using-unwind-with-an-empty-list
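My current understanding of the row counts can be modelled with two hypothetical Python helpers (this is just an analogy of how rows multiply, not real Cypher semantics): `UNWIND` emits one output row per (input row, list element) pair, so an empty list yields zero rows, while `collect()` is an aggregation that returns exactly one row even over zero inputs.

```python
def call_unwind(rows, values):
    # Models CALL { UNWIND list RETURN row }:
    # one output row per (incoming row, list element) pair.
    return [row + (v,) for row in rows for v in values]

def call_unwind_collect(rows, values):
    # Models CALL { UNWIND list RETURN collect(row) }:
    # aggregation yields exactly one output row per incoming row,
    # carrying the whole (possibly empty) list.
    return [row + (list(values),) for row in rows]

start = [()]  # the single implicit row every query starts with
print(call_unwind(start, []))          # zero rows: downstream sees nothing
print(call_unwind_collect(start, []))  # one row carrying an empty list
print(call_unwind(call_unwind_collect(start, []), [1, 2]))  # two rows survive
```

This would explain why the `collect()` variant restores the "driving" row for the later subqueries.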
Neo4j CALL subquery with UNWIND returns 0 records
|neo4j|cypher|
With iOS 17 the same thing has happened to me, but the widgets start working after rebooting the iPhone.
I'm developing a car racing game in Unity using Dreamteck Splines from the Asset Store for track generation. Currently, the car automatically follows the spline's rotation and position based on speed, but I want to implement player control where swiping on the phone allows the player to rotate the car manually, overriding the spline's rotation temporarily and making the car move in the direction it faces. How can I achieve this while still maintaining automatic spline following when the player isn't swiping? Removing the spline override entirely isn't an option, as I still want the car to follow the track's rotation and position when the track curves and the touch input is stationary. Any suggestions or solutions would be greatly appreciated. I attempted to disable spline control whenever touch movement occurs, enabling manual rotation, and implementing `transform.Translate` functionality when touch movement ceases. Additionally, upon touch stabilization, the car automatically realigns with the center of the spline. But I want it to continue from where I last left it.
I'll drop the code below:

```
public class playerController : MonoBehaviour
{
    [SerializeField] private SplineFollower splineFollower;

    [Header("Touch Controls: ")]
    [SerializeField] private bool touchDetected = false;
    private Touch touch;
    [SerializeField] private bool isTouchStationery;
    [SerializeField] private float localTouchStationeryTime = 0f;
    [SerializeField] private float timeToReachStationery = 0.5f;

    [Header("Car Speed Controls: ")]
    [SerializeField] private float currentCarSpeed = 0;
    [SerializeField] private float minimumCarSpeed = 0;
    [SerializeField] private float maximumCarSpeed = 50;
    [SerializeField] private float carAccelerationRate = 5;
    [SerializeField] private float carDecelerationRate = 10;

    [Header("Car Rotation Controls: ")]
    [SerializeField][Range(0f, 20f)] private float rotationSpeed = 5f;

    [Header("Touch Readonly Variables")]
    [SerializeField] private Vector2 deltaPosition;

    [Header("Physics Components: ")]
    [SerializeField] private Rigidbody playerRigidBody;

    void Update()
    {
        if (Input.touchCount > 0)
        {
            touch = Input.GetTouch(0);
            switch (touch.phase)
            {
                case TouchPhase.Began:
                    localTouchStationeryTime = 0f;
                    touchDetected = true;
                    break;
                case TouchPhase.Moved:
                    splineFollower.follow = false;
                    localTouchStationeryTime = 0f;
                    isTouchStationery = false;
                    deltaPosition = touch.deltaPosition;
                    //splineFollower.motion.offset += new Vector2(deltaPosition.x * Time.deltaTime, 0);
                    //splineFollower.motion.rotationOffset += new Vector3(0, deltaPosition.x * Time.deltaTime, 0);
                    transform.Rotate(Vector3.up * deltaPosition.x * rotationSpeed * Time.deltaTime);
                    break;
                case TouchPhase.Stationary:
                    localTouchStationeryTime += Time.deltaTime;
                    if (localTouchStationeryTime > timeToReachStationery)
                    {
                        isTouchStationery = true;
                        splineFollower.follow = true;
                    }
                    break;
                case TouchPhase.Ended:
                    localTouchStationeryTime = 0f;
                    touchDetected = false;
                    isTouchStationery = false;
                    splineFollower.follow = true;
                    break;
                case TouchPhase.Canceled:
                    localTouchStationeryTime = 0f;
                    touchDetected = false;
                    isTouchStationery = false;
                    splineFollower.follow = true;
                    break;
                default:
                    break;
            }
        }

        if (touchDetected)
        {
            currentCarSpeed += carAccelerationRate * Time.deltaTime;
        }
        else
        {
            currentCarSpeed -= carDecelerationRate * Time.deltaTime;
        }

        currentCarSpeed = Mathf.Clamp(currentCarSpeed, minimumCarSpeed, maximumCarSpeed);

        if (!isTouchStationery)
        {
            transform.Translate(transform.forward * currentCarSpeed * Time.deltaTime);
        }

        splineFollower.followSpeed = currentCarSpeed;
    }
}
```
How to Override Spline Rotation for Player-Controlled Car in Racing Game?
|c#|unity-game-engine|
The problem I am facing: when I run this command,

```
from qiskit import execute
```

I get the error "cannot import execute from qiskit". I have tried asking every known source for the solution, but to no avail. Can anyone help me with this issue?
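For reference, my understanding is that the top-level `execute` function was removed in Qiskit 1.0 (please check the Qiskit release notes if unsure), so on recent versions the import itself fails. A defensive sketch that probes for the symbol and degrades gracefully either way:

```python
# Probe whether the top-level `execute` symbol exists in the installed Qiskit.
# On Qiskit >= 1.0 the import raises; ImportError also covers
# ModuleNotFoundError when qiskit isn't installed at all.
try:
    from qiskit import execute
except ImportError:
    execute = None  # fall back to the transpile() + backend.run() workflow

print("execute available:", execute is not None)
```

If `execute` turns out to be unavailable, the replacement workflow is to transpile the circuit and call the backend's `run` method directly.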
I am not able to import execute from qiskit
|python|command|qiskit|
Understanding accumulate function when .dir is set to "backwards"
|r|purrr|
I have read a lot about calculating raw (standard) means and estimated marginal means (EMMs). The former are data-based (descriptive statistics), the latter are model-based. I am trying to learn how to calculate both of them. In my example both results are equal. The variables Message and Relevance are factors with two levels each. What did I do wrong? Here is my code:

```
affect_data1 <- structure(list(
  Message = c(
    "Happy", "Happy", "Happy", "Happy", "Happy",
    "Dull", "Dull", "Dull", "Dull", "Dull",
    "Happy", "Happy", "Happy", "Happy", "Happy",
    "Dull", "Dull", "Dull", "Dull", "Dull"
  ),
  Relevance = c(
    "Low", "Low", "Low", "Low", "Low",
    "Low", "Low", "Low", "Low", "Low",
    "High", "High", "High", "High", "High",
    "High", "High", "High", "High", "High"
  ),
  PositiveAffect = c(
    10, 12, 8, 11, 16, 15, 7, 8, 12, 9,
    33, 44, 33, 35, 38, 2, 4.56, 5, 9.5, 8.5
  ),
  Combined = structure(c(
    1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L,
    3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L
  ), levels = c("HL", "DL", "HH", "DH"), class = "factor",
  contrasts = structure(c(
    1, -1, 0, 0, 0, 0, 1, -1, -0.5, -0.5, 0.5, 0.5
  ), dim = 4:3, dimnames = list(
    c("HL", "DL", "HH", "DH"), c("c1", "c2", "")
  )))
), row.names = c(NA, -20L), class = c("tbl_df", "tbl", "data.frame"))

library(emmeans)
library(dplyr)  # for group_by()/summarize() below

# Fit a linear model to the data including interactions
modelar <- lm(PositiveAffect ~ Message * Relevance, data = affect_data1)
emmeans_valueses <- emmeans(modelar, specs = ~ Message * Relevance)

# Display the estimated marginal means
summary(emmeans_valueses)

 Message Relevance emmean   SE df lower.CL upper.CL
 Dull    High        5.91 1.58 16     2.55     9.27
 Happy   High       36.60 1.58 16    33.24    39.96
 Dull    Low        10.20 1.58 16     6.84    13.56
 Happy   Low        11.40 1.58 16     8.04    14.76

# Now calculating just the raw means for Message and Relevance together:
combined_means <- affect_data1 %>%
  group_by(Message, Relevance) %>%
  summarize(Mean_Positive_Affect = mean(PositiveAffect), .groups = "keep")

  Message Relevance Mean_Positive_Affect
  <chr>   <chr>                    <dbl>
1 Dull    High                      5.91
2 Dull    Low                      10.2
3 Happy   High                     36.6
4 Happy   Low                      11.4
```

Both methods give the same results. Is that OK? I used the emmeans package, but if other ways exist, feel free to show them, please. I would be grateful for an explanation regarding this matter, thank you.
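One relevant point (my understanding, worth verifying against the emmeans documentation): in a saturated factorial model like `Message * Relevance`, each cell gets its own parameter, and the least-squares fit for a cell is simply that cell's sample mean, so the cell-level EMMs reproduce the raw means exactly. A quick pure-Python check against the values above, with the data copied from the question:

```python
# Cell means from the question's data; in a saturated two-way model the
# least-squares fitted value per cell equals the cell's sample mean, which
# is why emmeans() on such a model matches the descriptive means.
data = {
    ("Dull", "High"): [2, 4.56, 5, 9.5, 8.5],
    ("Happy", "High"): [33, 44, 33, 35, 38],
    ("Dull", "Low"): [15, 7, 8, 12, 9],
    ("Happy", "Low"): [10, 12, 8, 11, 16],
}
cell_means = {cell: sum(v) / len(v) for cell, v in data.items()}
print(cell_means[("Dull", "High")])  # about 5.912, matching the emmean output
```

Raw means and EMMs typically start to differ for *marginal* means (averaging over one factor) when the design is unbalanced or covariates are involved.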
Can raw means and estimated marginal means be the same? And when?
|r|model|mean|interaction|emmeans|
I found a solution. It is probably a dirty solution, but it works.

The problem was to terminate my activity just after the user manually grants the MANAGE_EXTERNAL_STORAGE permission, knowing that:

- even after the user grants the permission, my activity does not have the right (1)
- moreover, if I end my activity with finish() and start the app again, the new activity does not have the right either. I found no explanation for that; it looks like the new activity keeps some context of the previous one.

My solution is to kill my own pid. I had implemented in my app the ability to launch bash commands, so I kill the pid of my activity after launching the new activity:

```
static String HaraKiri = "sleep 2;PID=$(ps -ef | grep 'eu.eduphone.install' | grep -v 'grep' | grep -v 'eu.eduphone.install.' | awk '{ print $2 }');echo \"PID=$PID\";kill -15 \"$PID\"";

...

private void Restart() {
    Intent intent = new Intent();
    String name = getActivity().getPackageName();
    Log.d("Thierry", "Restartn Name = " + name);
    intent.setComponent(new ComponentName(name, name + ".MainActivity"));
    startActivity(intent);
    ShellExec(HaraKiri);
}
```

That works; the new activity can read the USB dongle.

**(1) About the reason why we must restart the activity to get the right after the user grants it:**

In another discussion, https://github.com/termux/termux-app/issues71#issuecomment-1869222653, https://github.com/agnostic-apollo says that:

- Unreliable/removable volumes like USB OTG devices that are only available on the /mnt/media_rw paths with their own filesystem (vfat/exfat) are assigned the root (0) owner and external_storage (1077) group.
- If an app has been granted the MANAGE_EXTERNAL_STORAGE permission, then the external_storage (1077) group is added to the list of groups assigned to the app process when it is forked from zygote, allowing it to access unreliable/removable volumes with the external_storage (1077) group.

My running activity is not in group 1077 because it was forked before this group was assigned to the app.
I want to create a new Polars dataframe from NumPy arrays, and I want to set the column names when creating the dataframe (as I do with pandas):

```
df = pl.DataFrame(noisy_data.tolist(), columns=[f'col_{i}' for i in range(num_columns)])
```

But Polars does not like `columns`:

```
TypeError: DataFrame.__init__() got an unexpected keyword argument 'columns'
```

In the Polars DataFrame documentation I cannot see any parameter for defining the column names:

```
class polars.DataFrame(
    data: FrameInitTypes | None = None,
    schema: SchemaDefinition | None = None,
    *,
    schema_overrides: SchemaDict | None = None,
    strict: bool = True,
    orient: Orientation | None = None,
    infer_schema_length: int | None = 100,
    nan_to_null: bool = False,
)
```

I have seen that I can add the names after creating the dataframe. Is that the only option for Polars?

```
new_column_names = [f'col_{i}' for i in range(num_columns)]
df = df.with_columns(new_column_names)
```
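If I read the signature above correctly, the `schema` parameter accepts a plain list of names, so `pl.DataFrame(noisy_data, schema=[f'col_{i}' for i in range(num_columns)])` should replace the old `columns=` keyword; alternatively, a dict of name-to-column mappings carries the names itself. A pure-Python sketch of building such a dict (no Polars needed to show the shape):

```python
# Build a {column_name: values} mapping from row-oriented data; passing a
# dict like this to pl.DataFrame() lets the keys act as column names.
noisy_data = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]  # stand-in for the NumPy rows
num_columns = 3
names = [f"col_{i}" for i in range(num_columns)]
columns = {name: [row[i] for row in noisy_data] for i, name in enumerate(names)}
print(columns["col_1"])  # [2.0, 5.0]
```

Both routes assign the names at construction time instead of renaming afterwards.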
How to create a Polars dataframe giving the column names from a list
|python-3.x|python-polars|