|sabre|sabredav|
null
Another possible solution, which first creates a list of `1` or `-1` from each value of column `Integer_Column` and then uses [`explode`][1]:

```python
(df.assign(Integer_Column = df['Integer_Column']
           .map(lambda x: [np.sign(x)]*abs(x)))
   .explode('Integer_Column'))
```

----------

The following solution seems to be a little bit more efficient, but not much, because of [`explode`][1], as @mozway pertinently argues in a comment below.

```python
signs = np.sign(df['Integer_Column'])
lengths = abs(df['Integer_Column'])

(df.assign(Integer_Column = [np.full(abs_len, sign)
                             for sign, abs_len in zip(signs, lengths)])
   .explode('Integer_Column'))
```

----------

Output:

```text
  String_Column String_Column_2 Integer_Column
0             A               X              1
1             B               Y             -1
1             B               Y             -1
2             C               Z              1
2             C               Z              1
2             C               Z              1
```

[1]: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html
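A further variant avoids `explode` altogether by repeating row labels with `Index.repeat`; a sketch (the sample frame here is an assumption shaped like the output above):

```python
import numpy as np
import pandas as pd

# Assumed sample frame matching the shape of the output shown above
df = pd.DataFrame({
    "String_Column": ["A", "B", "C"],
    "String_Column_2": ["X", "Y", "Z"],
    "Integer_Column": [1, -2, 3],
})

# Repeat each row |Integer_Column| times, then replace the value by its sign
out = (df.loc[df.index.repeat(df["Integer_Column"].abs())]
         .assign(Integer_Column=lambda d: np.sign(d["Integer_Column"])))
```

Since this never materializes per-row lists, it sidesteps the object-dtype intermediate that `explode` works on.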
null
I want to know which approach is better in which scenario:

1) React custom hooks for HTTP requests, where we keep states for loading, error and data, and then inside a functional component we just use that hook to get the data and then render it to the UI and/or save it to redux. For example:

```javascript
const useFetch = (query) => {
  const [status, setStatus] = useState('idle');
  const [data, setData] = useState([]);

  useEffect(() => {
    if (!query) return;

    const fetchData = async () => {
      setStatus('fetching');
      const response = await fetch(
        `https://hn.algolia.com/api/v1/search?query=${query}`
      );
      const data = await response.json();
      setData(data.hits);
      setStatus('fetched');
    };

    fetchData();
  }, [query]);

  return { status, data };
};
```

2) Or should we use redux or redux toolkit's middleware to do HTTP or any async work, and then dispatch an action in that middleware to update the redux store? For example:
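Since the second snippet is cut off above, here is a dependency-free sketch of what option 2 amounts to: the async work lives in a thunk-style function that a store executes, dispatching plain actions before and after the request. All names here are illustrative; real code would typically use Redux Toolkit's `createAsyncThunk` instead of this hand-rolled store.

```javascript
// Minimal stand-in for a store with thunk support (illustrative, not Redux's API)
function createStore(reducer) {
  let state = reducer(undefined, { type: "@@init" });
  const store = {
    getState: () => state,
    dispatch: (action) =>
      typeof action === "function"
        ? action(store.dispatch) // thunk: hand it dispatch and let it run
        : void (state = reducer(state, action)),
  };
  return store;
}

const reducer = (state = { status: "idle", data: [] }, action) => {
  switch (action.type) {
    case "fetch/pending":
      return { ...state, status: "fetching" };
    case "fetch/fulfilled":
      return { status: "fetched", data: action.payload };
    default:
      return state;
  }
};

// The component never sees the request; it only dispatches and selects state.
const fetchHits = (query) => async (dispatch) => {
  dispatch({ type: "fetch/pending" });
  const hits = [{ title: `result for ${query}` }]; // stand-in for the fetch() call
  dispatch({ type: "fetch/fulfilled", payload: hits });
};

const store = createStore(reducer);
```

The trade-off in a nutshell: the hook keeps request state local to one component tree, while the middleware approach centralizes it so any component (and devtools) can observe it.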
|javascript|reactjs|redux|redux-toolkit|react-custom-hooks|
Currently, I have MongoDB installed on the C drive, and every time I use my backend to store something in MongoDB, it is automatically stored in the same folder as MongoDB, which is C:/. But my next project will have to store a huge amount of video and image data, and if I keep saving it on C:/, it will be full in no time. How can I configure Mongo and Express so that my data will be stored on E:/ (external memory)? I don't want to install Mongo on E:/, I just want to store the data on E:/, to save space on my C drive.
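The data directory is a `mongod` setting rather than anything in Express: point `storage.dbPath` at the other disk. A sketch of the relevant fragment of `mongod.cfg` (the `E:\mongo_data` path is an assumption; the folder must exist and the MongoDB service needs a restart afterwards):

```yaml
# mongod.cfg — only the storage section shown
storage:
  dbPath: E:\mongo_data
```

Equivalently, `mongod --dbpath "E:\mongo_data"` starts the server against that directory for a one-off run. The Express/driver connection string stays exactly the same.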
How to configure MongoDB to store data on another disk
|mongodb|express|
null
You can use your same code with only one change: you mentioned the format as "WEBP" in capital letters. Just change it to "webp" (lowercase) as shown here.

Your code:

```python
from PIL import Image
import glob, os

for infile in glob.glob("*.jpg"):
    file, ext = os.path.splitext(infile)
    print(file)
    im = Image.open(infile).convert("RGB")
    im.save(file + ".webp", "webp")
```

The formatting reference is given below, please check it.

[![References][1]][1]

Note: broken or corrupted images in these formats (.png, .jpg, .jpeg) cannot be used; they will throw an error, so handle them with try and except.

Also, I checked that the above code works on both Windows and Ubuntu (Linux).

Tested code:

```python
from PIL import Image
import glob, os

for infile in glob.glob("images/*.jpg"):
    file, ext = os.path.splitext(infile)
    print(file)
    im = Image.open(infile).convert("RGB")
    im.save(file + ".webp", "webp")
```

[![output][2]][2]

[1]: https://i.stack.imgur.com/VGV9J.png
[2]: https://i.stack.imgur.com/vDPXg.png
I am trying to delete a column with the name "1" in pandas but I am getting a KeyError:

```text
Traceback (most recent call last):
  File "/usr/lib64/python3.11/tkinter/__init__.py", line 1967, in __call__
    return self.func(*args)
           ^^^^^^^^^^^^^^^^
  File "/home/tamil/RnD/PreProcessing-1.0.1/app/app.py", line 86, in <lambda>
    del_column_button = tk.Button(preprocessing_frame,text="Delete The Column",command=lambda:open_delete_column_window())
                                                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tamil/RnD/PreProcessing-1.0.1/app/app.py", line 113, in open_delete_column_window
    df = df.drop(column_name,axis = 1)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tamil/.local/lib/python3.11/site-packages/pandas/core/frame.py", line 5568, in drop
    return super().drop(
           ^^^^^^^^^^^^^
  File "/home/tamil/.local/lib/python3.11/site-packages/pandas/core/generic.py", line 4782, in drop
    obj = obj._drop_axis(labels, axis, level=level, errors=errors)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tamil/.local/lib/python3.11/site-packages/pandas/core/generic.py", line 4824, in _drop_axis
    new_axis = axis.drop(labels, errors=errors)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tamil/.local/lib/python3.11/site-packages/pandas/core/indexes/base.py", line 7069, in drop
    raise KeyError(f"{labels[mask].tolist()} not found in axis")
KeyError: "['1'] not found in axis"
```

I am using pandas and Python 3.11.
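A likely cause, sketched here as an assumption: the column label is the *integer* 1, while a Tkinter entry widget hands back the *string* "1" — different labels as far as pandas is concerned:

```python
import pandas as pd

# A frame whose column label is the int 1, not the string "1" (assumed scenario)
df = pd.DataFrame({1: [10, 20], "a": [1, 2]})

column_name = "1"                 # what a Tkinter Entry widget would return
# df.drop(column_name, axis=1)    # -> KeyError: "['1'] not found in axis"

# One defensive fix: fall back to the int form when the string looks numeric
label = int(column_name) if column_name.lstrip("-").isdigit() else column_name
df = df.drop(label, axis=1)
```

Printing `df.columns` (and each label's `type`) before dropping is the quickest way to confirm which case you are in.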
{"Voters":[{"Id":13302,"DisplayName":"marc_s"},{"Id":466862,"DisplayName":"Mark Rotteveel"},{"Id":16217248,"DisplayName":"CPlus"}]}
{"Voters":[{"Id":4541695,"DisplayName":"DVT"},{"Id":22180364,"DisplayName":"Jan"},{"Id":16217248,"DisplayName":"CPlus"}]}
Playing around with this on my computer gives the following results. ```text test tests::v_0_bench ... bench: 1,157 ns/iter (+/- 55) test tests::v_1_bench ... bench: 776 ns/iter (+/- 41) test tests::v_2_bench ... bench: 1,741 ns/iter (+/- 61) test tests::v_3_bench ... bench: 424 ns/iter (+/- 18) test tests::v_4_bench ... bench: 443 ns/iter (+/- 41) ``` The buffer has been removed from the benchmark and `black_box()` is used in order to pretend the parameter is unpredictable and the result is used. The sequence of bytes has been made longer in order to let the algorithms actually work, and the invocations are repeated many times in order to introduce bigger differences in the results. Comparing `v_0` and `v_1` shows that a substantial part of the time is spent in checking the utf8 string. `v_2` is probably disappointing because of its `pow()` operation. `v_3` is clearly faster, but it does not check anything; the result could be meaningless because the bytes could contain anything but digits, and the resulting integer may also overflow if the sequence is too long! `v_4` is similar to `v_3` but considers a possible leading `-` sign (because an `i32` is expected, not an `u32`); this first check does not seem to be too detrimental to performance. If we are absolutely certain that the sequence of bytes is correct, then we could lean towards the fastest version (`v_3` or `v_4`), but in any other situation we should [definitely prefer](https://www.reddit.com/r/Jokes/comments/5g2ay4/interviewer_i_heard_you_were_extremely_quick_at/) `v_0`. 
```rust
#![feature(test)] // cargo bench

extern crate test; // not in Cargo.toml

#[inline(always)]
fn v_0(buf: &[u8]) -> i32 {
    std::str::from_utf8(buf).unwrap().parse().unwrap()
}

#[inline(always)]
fn v_1(buf: &[u8]) -> i32 {
    unsafe { std::str::from_utf8_unchecked(buf) }
        .parse()
        .unwrap()
}

#[inline(always)]
fn v_2(buf: &[u8]) -> i32 {
    buf.iter()
        .rev()
        .enumerate()
        .map(|(idx, val)| (val - b'0') as i32 * 10_i32.pow(idx as u32))
        .sum()
}

#[inline(always)]
fn v_3(buf: &[u8]) -> i32 {
    buf.iter().fold(0, |acc, d| acc * 10 + (d - b'0') as i32)
}

#[inline(always)]
fn v_4(buf: &[u8]) -> i32 {
    let mut it = buf.iter();
    let (neg, init) = if let Some(b) = it.next() {
        if *b == b'-' {
            (true, 0)
        } else {
            (false, (b - b'0') as i32)
        }
    } else {
        (false, 0)
    };
    let num = it.fold(init, |acc, b| acc * 10 + (b - b'0') as i32);
    if neg {
        -num
    } else {
        num
    }
}

fn make_buf() -> Vec<u8> {
    b"1234567890".to_vec()
}

#[inline(always)]
fn run(
    mut f: impl FnMut(&[u8]) -> i32,
    buf: &[u8],
) {
    for _ in 0..100 {
        test::black_box(f(test::black_box(buf)));
    }
}

#[bench]
fn v_0_bench(b: &mut test::Bencher) {
    let buf = make_buf();
    b.iter(|| run(v_0, &buf))
}

#[bench]
fn v_1_bench(b: &mut test::Bencher) {
    let buf = make_buf();
    b.iter(|| run(v_1, &buf))
}

#[bench]
fn v_2_bench(b: &mut test::Bencher) {
    let buf = make_buf();
    b.iter(|| run(v_2, &buf))
}

#[bench]
fn v_3_bench(b: &mut test::Bencher) {
    let buf = make_buf();
    b.iter(|| run(v_3, &buf))
}

#[bench]
fn v_4_bench(b: &mut test::Bencher) {
    let buf = make_buf();
    b.iter(|| run(v_4, &buf))
}

fn main() {
    let buf = make_buf();
    println!("v_0 {}", v_0(&buf));
    println!("v_1 {}", v_1(&buf));
    println!("v_2 {}", v_2(&buf));
    println!("v_3 {}", v_3(&buf));
    println!("v_4 {}", v_4(&buf));
}

/*
v_0 1234567890
v_1 1234567890
v_2 1234567890
v_3 1234567890
v_4 1234567890
*/
```
{"Voters":[{"Id":14732669,"DisplayName":"ray"},{"Id":1940850,"DisplayName":"karel"},{"Id":466862,"DisplayName":"Mark Rotteveel"}],"SiteSpecificCloseReasonIds":[18]}
|pdf|latex|pandoc|quarto|
```python import random def foo(row, col, d): N = row * col inds = set(random.sample(range(N), d)) arr = [1 if i in inds else 0 for i in range(N)] return [arr[i:(i + col)] for i in range(0, N, col)] foo(4, 5, 7) # [[1, 0, 0, 0, 0], [0, 0, 1, 0, 1], [0, 1, 0, 0, 0], [0, 1, 1, 1, 0]] ```
NextJS Docker build fails: fetch failed ECONNREFUSED
I am trying to run

```python
from ydata_profiling import ProfileReport

profile = ProfileReport(merged_data)
profile.to_notebook_iframe()
```

in a Jupyter notebook, but I am getting an error:

    AttributeError: module 'numba' has no attribute 'generated_jit'

I am running Jupyter notebook in a Docker container with the requirements listed below:

```text
numpy==1.24.3
pandas==1.4.1
scikit-learn==1.4.1.post1
pyyaml==6.0
dvc==3.48.4
mlflow==2.11.1
seaborn==0.11.2
matplotlib==3.5.1
boto3==1.18.60
jupyter==1.0.0
pandoc==2.3
ydata-profiling==4.7.0
numba==0.59.1
```

I am using WSL Ubuntu in Visual Studio Code. I have tried to build the Docker image several times now with different versions of the libraries.

EDIT: Adding the traceback:

```text
AttributeError                            Traceback (most recent call last)
Cell In[3], line 1
----> 1 from ydata_profiling import ProfileReport
      3 profile = ProfileReport(merged_data)
      4 profile.to_notebook_iframe()

File /usr/local/lib/python3.11/site-packages/ydata_profiling/__init__.py:14
     10 warnings.simplefilter("ignore", category=NumbaDeprecationWarning)
     12 import importlib.util  # isort:skip # noqa
---> 14 from ydata_profiling.compare_reports import compare  # isort:skip # noqa
     15 from ydata_profiling.controller import pandas_decorator  # isort:skip # noqa
     16 from ydata_profiling.profile_report import ProfileReport  # isort:skip # noqa

File /usr/local/lib/python3.11/site-packages/ydata_profiling/compare_reports.py:12
     10 from ydata_profiling.model import BaseDescription
     11 from ydata_profiling.model.alerts import Alert
---> 12 from ydata_profiling.profile_report import ProfileReport
     15 def _should_wrap(v1: Any, v2: Any) -> bool:
     16     if isinstance(v1, (list, dict)):

File /usr/local/lib/python3.11/site-packages/ydata_profiling/profile_report.py:25
     23 from tqdm.auto import tqdm
     24 from typeguard import typechecked
---> 25 from visions import VisionsTypeset
     27 from ydata_profiling.config import Config, Settings, SparkSettings
     28 from ydata_profiling.expectations_report import ExpectationsReport

File /usr/local/lib/python3.11/site-packages/visions/__init__.py:4
      1 """Core functionality"""
      3 from visions import types, typesets, utils
----> 4 from visions.backends import *
      5 from visions.declarative import create_type
      6 from visions.functional import (
      7     cast_to_detected,
      8     cast_to_inferred,
      9     detect_type,
     10     infer_type,
     11 )

File /usr/local/lib/python3.11/site-packages/visions/backends/__init__.py:9
      6 try:
      7     import pandas as pd
----> 9     import visions.backends.pandas
     10     from visions.backends.pandas.test_utils import pandas_version
     12     if pandas_version[0] < 1:

File /usr/local/lib/python3.11/site-packages/visions/backends/pandas/__init__.py:2
      1 import visions.backends.pandas.traversal
----> 2 import visions.backends.pandas.types

File /usr/local/lib/python3.11/site-packages/visions/backends/pandas/types/__init__.py:3
      1 import visions.backends.pandas.types.boolean
      2 import visions.backends.pandas.types.categorical
----> 3 import visions.backends.pandas.types.complex
      4 import visions.backends.pandas.types.count
      5 import visions.backends.pandas.types.date

File /usr/local/lib/python3.11/site-packages/visions/backends/pandas/types/complex.py:7
      5 from visions.backends.pandas.series_utils import series_not_empty, series_not_sparse
      6 from visions.backends.pandas.types.float import string_is_float
----> 7 from visions.backends.shared.parallelization_engines import pandas_apply
      8 from visions.types.complex import Complex
      9 from visions.types.string import String

File /usr/local/lib/python3.11/site-packages/visions/backends/shared/__init__.py:1
----> 1 from . import nan_handling, parallelization_engines, utilities

File /usr/local/lib/python3.11/site-packages/visions/backends/shared/nan_handling.py:34
     30 # TODO: There are optimizations here, just have to define precisely the desired missing ruleset in the
     31 # generated jit
     32 if has_numba:
---> 34     @nb.generated_jit(nopython=True)
     35     def is_missing(x):
     36         """
     37         Return True if the value is missing, False otherwise.
     38         """
     39         if isinstance(x, nb.types.Float):

AttributeError: module 'numba' has no attribute 'generated_jit'
```
I want to create a Live Chat web widget - component, that can be embedded on ecommerce website and allow business owner to talk to users/customers (like Intercom). In my architecture design, I figured out that I need to store some kind of secret/token, that identifies conversation. System should remember conversation when user returns to ecommerce after some time. However I don't know where to put such a secret token (my web widget is loaded via iframe). Should I use localStorage? Or something else?
Where to store secret token for an embeddable web widget?
|web|architecture|local-storage|
{"Voters":[{"Id":6574038,"DisplayName":"jay.sf"},{"Id":22180364,"DisplayName":"Jan"},{"Id":466862,"DisplayName":"Mark Rotteveel"}],"SiteSpecificCloseReasonIds":[]}
I'm working on some code for a JavaScript implementation of Conway's Game of Life cellular automata for a personal project, and I've reached the point of encoding the rules. I am applying the rules to each cell, then storing the new version in a copy of the grid. Then, when I'm finished calculating each cell's next state, I set the first grid's state to the second one's, empty the second grid, and start over. Here's the code I used for the rules:

```lang-js
//10x10 grid
let ecells = [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]];
let cells = empty_cells;
let new_cells = cells;
let paused = true;

function Cell(x, y) {
  return cells[y][x];
}

function Nsum(i, j) {
  if (i >= 1 && j >= 1) {
    return Cell(i - 1, j) + Cell(i + 1, j) + Cell(i, j - 1) +
           Cell(i - 1, j - 1) + Cell(i + 1, j - 1) + Cell(i, j + 1) +
           Cell(i - 1, j + 1) + Cell(i + 1, j + 1);
  }
}

//One can manually change the state of the cells in the "cells" grid,
//which works correctly. Then, one can run the CA by changing the "paused"
//value to false.
function simulation() {
  for (i = 0; i < cells[0].length; i++) {
    for (j = 0; j < cells.length; j++) {
      if (Cell(i, j)) {
        ctx.fillRect(20*i - 0.5, 20*j, 20, 20);
        if (!pause) {
          if (Nsum(i, j) == 2 || Nsum(i, j) == 3) ncells[j][i] = 1;
          else ncells[j][i] = 0;
        }
      } else {
        ctx.clearRect(20*i - 0.5, 20*j, 20, 20);
        if (!pause) {
          if (Nsum(i, j) == 3) ncells[j][i] = 1;
          else ncells[j][i] = 0;
        }
      }
    }
  }
  if (!pause) cells = ncells;
  ncells = ecells;
  requestAnimationFrame(simulation);
}
simulation();
```

The rule logic is inside the nested for loop; Nsum is the function that calculates the neighborhood sum of the current cell. I say ncells[j][i] instead of ncells[i][j] because in a 2D array you address the row first. 
I didn't try much, but I can't imagine a solution. Help!
null
|r|dataframe|mutate|lemmatization|
I am receiving a JSON string of the form

```json
{
  "people": [
    { "Freddie": { "surname": "Mercury", "city": "London" } },
    { "David": { "surname": "Bowie", "city": "Berlin" } }
  ]
}
```

with the ultimate goal of recovering them as Proto messages of the kind

```
message Person {
  optional string name = 1;
  optional string surname = 2;
  optional string city = 3;
}

message Msg {
  repeated Person people = 1;
}
```

What is the best practice - i.e. short of custom ad hoc logic for manipulating the keys - to convert the above JSON string to a string such as

```
{
  "people": [
    { "name": "Freddie", "surname": "Mercury", "city": "London" },
    { "name": "David", "surname": "Bowie", "city": "Berlin" }
  ]
}
```

so that the correspondence between the JSON and the Proto is one-to-one, and therefore out-of-the-box deserializers could be used?
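For scale, the key-to-field reshaping itself is only a few lines in any language; a Python sketch of the transformation described above (the choice of Python and the variable names are mine, and the result is what a standard JSON-to-proto deserializer could then consume):

```python
import json

# Raw string mirroring the example above
raw = """
{"people": [
  {"Freddie": {"surname": "Mercury", "city": "London"}},
  {"David":   {"surname": "Bowie",   "city": "Berlin"}}
]}
"""

data = json.loads(raw)
# Lift each single-key mapping {name: fields} into {"name": name, **fields}
data["people"] = [
    {"name": name, **fields}
    for entry in data["people"]
    for name, fields in entry.items()
]
flattened = json.dumps(data)
```

Any answer that avoids this pre-pass would presumably have to hook into the deserializer itself, which is what the question is really after.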
Here is one possible option using [`cKDTree.query`][1] from [tag:scipy]:

```py
import numpy as np
import pandas as pd
from scipy.spatial import cKDTree

def knearest(gdf, **kwargs):
    notna = gdf["PPM_P"].notnull()
    arr_geom1 = np.c_[
        gdf.loc[notna, "geometry"].x,
        gdf.loc[notna, "geometry"].y,
    ]
    arr_geom2 = np.c_[
        gdf.loc[~notna, "geometry"].x,
        gdf.loc[~notna, "geometry"].y,
    ]
    dist, idx = cKDTree(arr_geom1).query(arr_geom2, **kwargs)
    k = kwargs.get("k")
    _ser = pd.Series(
        gdf.loc[notna, "PPM_P"].to_numpy()[idx].tolist(),
        index=(~notna)[lambda s: s].index,
    )
    gdf.loc[~notna, "PPM_P"] = _ser[~notna].map(np.mean)
    return gdf

N = 2  # feel free to make it 5, or whatever..

out = knearest(gdf.to_crs(3662), k=range(1, N + 1))
```

Output (*with `N=2`*):

***NB**: Each red point (an FID having a null PPM_P) is associated with the N nearest green points.*

[![enter image description here][2]][2]

Final GeoDataFrame (*with intermediates*):

```py
# I filled some random FID with a PPM_P value to make the input meaningful
    FID  PPM_P (OP)  PPM_P (INTER)      PPM_P                       geometry
0     0   34.919571            NaN  34.919571  POINT (842390.581 539861.877)
1     1         NaN      37.480218  37.480218  POINT (842399.476 539861.532)
2     2         NaN      35.567003  35.567003  POINT (842408.370 539861.187)
3     3         NaN      35.567003  35.567003  POINT (842420.229 539860.726)
4     4   36.214436            NaN  36.214436  POINT (842429.124 539860.381)
5     5         NaN      38.127651  38.127651  POINT (842438.018 539860.036)
6     6         NaN      40.431946  40.431946  POINT (842446.913 539859.691)
7     7   40.823028            NaN  40.823028  POINT (842458.913 539862.868)
8     8         NaN      37.871299  37.871299  POINT (842378.298 539851.425)
9     9   40.823028            NaN  40.823028  POINT (842390.158 539850.965)
10   10   40.040865            NaN  40.040865  POINT (842399.052 539850.620)
11   11   36.214436            NaN  36.214436  POINT (842407.947 539850.275)
12   12   34.919571            NaN  34.919571  POINT (842419.947 539853.452)
13   13         NaN      38.127651  38.127651  POINT (842428.841 539853.107)
14   14   40.040865            NaN  40.040865  POINT (842437.736 539852.761)
15   15         NaN      40.431946  40.431946  POINT (842449.595 539852.301)
16   16         NaN      40.431946  40.431946  POINT (842458.489 539851.956)
17   17         NaN      40.431946  40.431946  POINT (842467.384 539851.611)
18   18         NaN      40.431946  40.431946  POINT (842476.278 539851.266)
19   19         NaN      37.871299  37.871299  POINT (842368.981 539840.859)
```

[1]: https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.query.html
[2]: https://i.stack.imgur.com/KefCq.png
I get **Security Issues** (CVE-2019-20444, CVE-2019-20445) with threat level 9 for all the versions of the jars flink-rpc-akka-loader, flink-rpc-akka while scanning. Anyone faced this issue? Please share your resolution. Thanks. [flink-rpc-akka-loader_Vulnerable][1] [1]: https://i.stack.imgur.com/pPzQJ.png
1. Open Visual Studio Code.
2. Go to the Extensions view by clicking on the square icon in the sidebar or pressing Ctrl+Shift+X.
3. In the search bar, type "HTML" and look for an extension that provides HTML language support. One popular extension is simply called "HTML" and is provided by Visual Studio Code.
4. Click on the extension and then click the "Install" button.
I have the following

```yaml
cassandra:
  image: cassandra:latest
  ports:
    - 9042:9042
  volumes:
    - ./cassandra/image:/var/lib/cassandra
  environment:
    - CASSANDRA_AUTHENTICATOR=AllowAllAuthenticator
    - CASSANDRA_AUTHORIZER=AllowAllAuthorizer
```

The cassandra instance networking looks like this...

```json
"Networks": {
  "cbusha-infra_default": {
    "IPAMConfig": null,
    "Links": null,
    "Aliases": [
      "cbusha-infra-cassandra-1",
      "cassandra",
      "572f0770b41e"
    ],
    "MacAddress": "02:42:ac:1a:00:04",
    "NetworkID": "b44e49f0f195651a259b7b859fcadda128d359db18de4ab0a4e8b3efa4ed0e35",
    "EndpointID": "6d5fb1b98d2c427a760030a4804db29798893517b48616409575babe0f0f9ae8",
    "Gateway": "172.26.0.1",
    "IPAddress": "172.26.0.4",
    "IPPrefixLen": 16,
    "IPv6Gateway": "",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "DriverOpts": null,
    "DNSNames": [
      "cbusha-infra-cassandra-1",
      "cassandra",
      "572f0770b41e"
    ]
  }
}
```

I confirm I can connect locally using the IJ connection manager

[![enter image description here][1]][1]

I am now trying to connect, so I have the following config for my Spring app

```yaml
spring:
  cassandra:
    contact-points: cassandra
    port: 9042
    keyspace-name: cbusha
    local-datacenter: datacenter1
    schema-action: CREATE_IF_NOT_EXISTS
    connect-timeout-millis: 30000 # 30 seconds
    read-timeout-millis: 30000 # 30 seconds
```

and I even use a function to make sure it is up first (I also added logic to test the keyspace is available as well)... 
```java
@SpringBootApplication
public class BackendApplication {
    private static final Logger log = LoggerFactory.getLogger(BackendApplication.class);

    public static void main(String[] args) {
        waitForCassandra();
        SpringApplication sa = new SpringApplication(BackendApplication.class);
        sa.addBootstrapRegistryInitializer(new MyBootstrapInitializer());
        sa.run(args);
    }

    private static void waitForCassandra() {
        CqlSessionBuilder builder = CqlSession.builder();
        AtomicInteger attempts = new AtomicInteger();
        try (ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor()) {
            executor.scheduleAtFixedRate(() -> {
                try (CqlSession session = builder.build()) {
                    ResultSet rs = session.execute("SELECT keyspace_name FROM system_schema.keyspaces WHERE keyspace_name = 'cbusha';");
                    if (rs.one() != null) {
                        executor.shutdown();
                    } else {
                        if (attempts.incrementAndGet() >= 12) { // 12 attempts * 10 seconds sleep = 2 minutes
                            log.error("Keyspace cbusha does not exist - exiting after 2 minutes");
                            System.exit(1);
                        }
                        log.debug("Keyspace cbusha does not exist - sleeping");
                    }
                } catch (Exception e) {
                    if (attempts.incrementAndGet() >= 12) { // 12 attempts * 10 seconds sleep = 2 minutes
                        log.error("Cassandra is unavailable - exiting after 2 minutes");
                        System.exit(1);
                    }
                    log.debug("Cassandra is unavailable - sleeping");
                }
            }, 0, 10, TimeUnit.SECONDS);
        }
        log.info("Cassandra is up - executing command");
    }
}
```

But when I try to start on docker I see the connection is accessible

```
2024-03-24 13:33:45 17:33:45.055 [main] INFO com.cbusha.be.BackendApplication -- Cassandra is up - executing command
```

However, further down, when it tries to create the channel, I get the errors in [this gist][2]. I see it is using the proper profile and the docker DNS is resolving here `endPoint=cassandra/172.26.0.4:9042`. If I restart the container a little later, it works. Why is this, and is there a way to confirm the channel will work before starting, similar to the connection? 
[1]: https://i.stack.imgur.com/u6qph.png [2]: https://gist.github.com/jrgleason/1c82e9c2f823dc76628dc1f529330969
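For the "confirm before starting" part, one common pattern is to let compose gate the app on a CQL-level healthcheck instead of probing in `main`. A sketch (the `backend` service name is an assumption; `cqlsh` ships inside the official cassandra image):

```yaml
cassandra:
  image: cassandra:latest
  healthcheck:
    test: ["CMD-SHELL", "cqlsh -e 'DESCRIBE KEYSPACES' || exit 1"]
    interval: 15s
    timeout: 10s
    retries: 10
backend:
  depends_on:
    cassandra:
      condition: service_healthy
```

This only proves a CQL session can be opened, not that a specific keyspace exists, so a keyspace check in the app may still be worth keeping.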
I have a table with a string column that is formatted as JSON. I want to query my table by a JSON field. This is achievable with SQL:

```
SELECT * FROM [db].[dbo].[table] WHERE JSON_VALUE(ColName, '$.jsonkey') = 'value'
```

Is it possible with GraphQL and Hot Chocolate? I tried

```
public IQueryable<Class> GetById([ScopedService] AppDbContext context, string id)
{
    return context.Classes.AsQueryable().Where(p => JObject.Parse(p.JsonCol)["id"].ToString() == id);
}
```

and got an error:

```
"message": "The LINQ expression 'DbSet<Platform>()\r\n    .Where(p => JObject.Parse(p.JsonCol).get_Item(\"id\").ToString() == __id_0)' could not be translated.
```

Any help would be appreciated.
The locally run `ansible-lint` `pre-commit` hook behaves differently than when run during CI/CD. I use the following:

```yaml
---
repos:
  - repo: https://github.com/ansible/ansible-lint
    rev: v24.2.1
    hooks:
      - id: ansible-lint
```

That is because `/etc/ansible/ansible.cfg` is available locally, which also configures e.g. the collection path where private collections are installed. During the `ansible-lint` run a `requirements.yml` is picked up such that dependencies would be installed. Since the collections are private, I would need to hardcode an access token, which seems wrong to me. I tried using

```yaml
---
collections:
  - name: "git+https://<token name>:${token}@<private git repo>"
    type: git
    version: "*"
```

and setting `export token=<token with access>`, but this did not work. I also tried to use a `.ansible-lint` config with

```yaml
---
mock_modules:
  - <namespace>.<name>.<role>
```

or

```yaml
mock_roles:
  - <namespace>.<name>.<role>
```

but I am not sure if it helped.

Related questions:

- How do I make `ansible-lint` `pre-commit` behave the same locally and during CI/CD?
- How to ignore locally available config?
- How to specify authentication in `requirements.yml`, e.g. via an environment variable?
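On the token question, one workaround sketch (the domain, file name, and variable names here are assumptions): keep `requirements.yml` credential-free and inject the token at the git layer with `url.<base>.insteadOf`, which any git clone performed by `ansible-galaxy` under `ansible-lint` then picks up:

```shell
# CI_TOKEN would come from a CI secret; the value here is only for illustration.
export CI_TOKEN="example-token"

# Write the rewrite rule into a throwaway config file instead of ~/.gitconfig:
git config --file ./ci.gitconfig \
  url."https://oauth2:${CI_TOKEN}@git.example.com/".insteadOf "https://git.example.com/"

# Point git at it for the lint run, e.g.:
#   GIT_CONFIG_GLOBAL="$PWD/ci.gitconfig" pre-commit run ansible-lint --all-files
```

Because the token never appears in `requirements.yml`, the same file works locally (where credentials come from the usual git setup) and in CI.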
ansible-lint during pre-commit in CI/CD
|git|ansible|cicd|pre-commit.com|ansible-lint|
null
I have a task to submit on greedy algorithms. I wrote this code, but I'm not sure what the time complexity of the algorithm is. The problem is called "Rectangle Covering". This is the code:

```
MinGreedyCutReg(R[])
    cut[] = null
    while R != null
        endIndex = 0
        for i = 0 to R.length-1
            if(endIndex < R[i].r)
                endIndex = R[i].r
        maxCount = 0
        maxIndex = 0
        for i = 0 to endIndex
            updateMax = 0
            for j = 0 to R.length - 1
                if(R[j].l <= i and R[j].r >= i)
                    updateMax++
            if(updateMax > maxCount):
                maxCount = updateMax
                maxIndex = i
        cut.add(maxIndex)
        for i = 0 to R.length - 1
            if(R[i].l <= maxIndex and R[i].r >= maxIndex)
                R.remove(i)
    return cut
```

I am trying to find the time complexity of the algorithm. I think it is O(n * m) because of the nested loop. My friend tells me it is O(n^2) due to this nested loop. So basically the question is: what is the running time of the nested loops, and why?
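One way to sanity-check a complexity guess is to instrument a direct translation and count the innermost comparisons. A Python sketch (the semantics are my reading of the pseudocode above; rectangles are `(l, r)` tuples): with `n` rectangles and coordinate range `m = endIndex + 1`, the nested loops perform `n * m` comparisons per `while` iteration.

```python
def min_greedy_cut(intervals):
    """Direct translation of MinGreedyCutReg, counting innermost comparisons."""
    cuts, ops = [], 0
    intervals = list(intervals)
    while intervals:
        end = max(r for _, r in intervals)      # first pass: find endIndex
        best_count, best_i = 0, 0
        for i in range(end + 1):                # m = endIndex + 1 coordinates
            covering = 0
            for (l, r) in intervals:            # innermost loop: n comparisons
                ops += 1
                if l <= i <= r:
                    covering += 1
            if covering > best_count:
                best_count, best_i = covering, i
        cuts.append(best_i)
        # last pass: remove every rectangle covered by the chosen cut
        intervals = [(l, r) for (l, r) in intervals if not (l <= best_i <= r)]
    return cuts, ops

cuts, ops = min_greedy_cut([(0, 2), (1, 3), (5, 6)])
```

Here the first pass scans 3 rectangles over coordinates 0..6 (21 comparisons) and the second scans the remaining one (7 more), so both the rectangle count and the coordinate range drive the cost, consistent with O(n * m) per iteration rather than O(n^2).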
Time complexity of Rectangle Covering algorithm
So, as per [matias-quaranta][1]'s comment, this is a duplicate of [cosmos-db-trigger-is-not-being-run-when-inserting-new-document][2]. Currently Cosmos DB does not work how a sane person would normally think: you need an SDK to invoke a pre or post trigger. Not even the REST API can accept the trigger as a param. Just the SDK. Glad I didn't spend more than the last 3 days trying to figure this thing out. Thank you [matias-quaranta][1]

[1]: https://stackoverflow.com/users/5641598/matias-quaranta
[2]: https://stackoverflow.com/questions/60969216/cosmos-db-trigger-is-not-being-run-when-inserting-new-document
There seems to be a bug with SwiftUI's [`TabView`](https://developer.apple.com/documentation/swiftui/tabview) when used with [`.tabViewStyle(.page)`](https://developer.apple.com/documentation/swiftui/tabviewstyle/page) and any animated `.transition()` that moves the view horizontally (e.g. [`.move(edge: .leading/.trailing))`](https://developer.apple.com/documentation/swiftui/anytransition/move(edge:)), [`.slide`](https://developer.apple.com/documentation/swiftui/anytransition/slide), [`.offset()`](https://developer.apple.com/documentation/swiftui/anytransition/offset(x:y:)), etc.) in landscape orientation. When the tab view transitions in, the content appears off-center and the animation goes back and forth before it stabilizes. Did anyone else experience this, and is there any known workaround?

[![TabViewTransitionBug][1]][1]

```swift
import SwiftUI

struct ContentView: View {
    @State private var showTabView = false

    var body: some View {
        VStack {
            Button("Toggle TabView") {
                showTabView.toggle()
            }
            Spacer()
            if showTabView {
                TabView {
                    Text("Page One")
                    Text("Page Two")
                }
                .tabViewStyle(.page)
                .transition(.slide)
            }
        }
        .animation(.default, value: showTabView)
        .padding()
    }
}

#Preview {
    ContentView()
}

@main
struct TabViewTransitionBugApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
```

Tested on Xcode 15.3 (15E204a), iOS 17.3.1 iPhone, iOS 17.4 Simulator.

*Bug report `FB13687638` filed with Apple.*

## Update

It seems to be related to safe areas, as it doesn't happen on Touch ID devices.

[![TabViewTransitionBugNoSafeAreas][2]][2]

## Update

As suggested in the answer below, adding `.ignoresSafeArea(edges: .horizontal)` on the TabView, changing the animation to `easeInOut` and removing the `.padding()` from the VStack fixes the initial transition, but swiping between tabs is now off-center.

[![TabViewTransitionBugStill][3]][3]

[1]: https://i.stack.imgur.com/eGJds.gif
[2]: https://i.stack.imgur.com/WWWN0.gif
[3]: https://i.stack.imgur.com/C6Wo1.gif
A quick and dirty solution is to move the methods and the variable from the subclasses to `Injector`, like so:

```
@dataclass(frozen=True, slots=True)
class Injector:
    outer_diameter_injector: float
    side_wall_thickness_injector: float
    number_input_tangential_holes: float
    diameter_input_tangential_holes: float
    length_input_tangential_holes: float
    relative_length_twisting_chamber: float
    diameter_injector_nozzle: float
    relative_length_injector_nozzle: float
    angle_nozzle_axis: float
    mass_flow_rate: float
    viscosity: float
    injector_type: str
    cross_sectional_area_one_passage_channel: float  # From ScrewInjector

    @property
    def diameter_twisting_chamber_injector(self) -> float:
        """Returns the diameter of the swirl chamber of the centrifugal injector"""
        return self.outer_diameter_injector - 2 * self.side_wall_thickness_injector

    @property
    def relative_length_tangential_hole(self) -> float:
        """Returns the ratio of the length of a tangential inlet hole to its diameter"""
        return self.length_input_tangential_holes / self.diameter_input_tangential_holes

    @property
    def length_twisting_chamber(self) -> float:
        """Returns the length of the swirl chamber of the centrifugal injector"""
        return self.relative_length_twisting_chamber * self.diameter_twisting_chamber_injector

    @property
    def radius_twisting_chamber_injector(self) -> float:
        """Returns the radius of the swirl chamber of the centrifugal injector"""
        return self.diameter_twisting_chamber_injector / 2

    @property
    def radius_input_tangential_holes(self) -> float:
        """Returns the radius of the tangential inlet holes"""
        return self.diameter_input_tangential_holes / 2

    @property
    def radius_tangential_inlet(self) -> float:
        """Returns the radius at which the axis of a tangential inlet hole lies from the injector axis"""
        return self.radius_twisting_chamber_injector - self.radius_input_tangential_holes

    @property
    def length_injector_nozzle(self) -> float:
        """Returns the length of the injector nozzle"""
        return self.relative_length_injector_nozzle * self.diameter_injector_nozzle

    @property
    def radius_injector_nozzle(self) -> float:
        """Returns the radius of the injector nozzle"""
        return self.diameter_injector_nozzle / 2

    @property
    def reynolds_number(self) -> float:
        """Returns the Reynolds number"""
        return (4 * self.mass_flow_rate) / (math.pi * self.viscosity *
                                            self.diameter_input_tangential_holes *
                                            math.sqrt(self.number_input_tangential_holes))

    @property
    def coefficient_friction(self) -> float:
        """Returns the friction coefficient"""
        return 10 ** ((25.8 / (math.log(self.reynolds_number, 10))**2.58) - 2)

    @property
    def equivalent_geometric_characteristic_injector(self) -> float:
        """Returns the equivalent geometric characteristic"""
        if self.injector_type == ConstructiveTypes.SCREW_INJECTOR.value:
            geometric_characteristics = ScrewInjector.geometric_characteristics_screw_injector
        elif self.injector_type == ConstructiveTypes.CENTRIFUGAL_INJECTOR.value:
            geometric_characteristics = CentrifugalInjector.geometric_characteristics_centrifugal_injector
        return geometric_characteristics / (1 + self.coefficient_friction / 2 *
                                            self.radius_tangential_inlet *
                                            (self.radius_tangential_inlet +
                                             self.diameter_input_tangential_holes -
                                             self.radius_injector_nozzle))

    # From ScrewInjector
    @property
    def geometric_characteristics_screw_injector(self) -> float:
        """Returns the geometric characteristic of the screw injector"""
        return (math.pi * self.radius_tangential_inlet * self.radius_injector_nozzle) / \
               (self.number_input_tangential_holes * self.cross_sectional_area_one_passage_channel)

    # From CentrifugalInjector
    @property
    def geometric_characteristics_centrifugal_injector(self) -> float:
        """Returns the geometric characteristic of the centrifugal injector"""
        if self.angle_nozzle_axis == AngularValues.RIGHT_ANGLE.value:
            return (self.radius_tangential_inlet * self.radius_injector_nozzle) / \
                   (self.number_input_tangential_holes * self.radius_input_tangential_holes**2)
        else:
            return (self.radius_tangential_inlet * self.radius_injector_nozzle) / \
                   (self.number_input_tangential_holes *
                    self.radius_input_tangential_holes**2) * self.angle_nozzle_axis
```

If there will be more subclasses, a more involved rewrite may be necessary.
Two ways:

    Orders::max('id');

With soft-deleted (trashed) records included:

    Orders::withTrashed()->max('id');
|reactjs|node.js|docker|next.js|docker-compose|
I want to establish a one-way TLS authentication system between an App Service and a Function App. Inside both the App Service and the Function App I've uploaded the private and public certificates from the "Certificates" blade. Now, whenever I make a request to the Function App API, I get a 403. Which certificate (private/public) should I send in the request header (`X-ARR-ClientCert`)? Should I do this step manually from code? If yes, then what's the purpose of uploading certificates from the "Certificates" blade, and how do I load them from Node.js?

This is how I'm invoking the Function App API from the App Service:

    const response = await axios.get(
      "https://test-az-fn.azurewebsites.net/api/HttpTrigger1"
    );

I've set the client certificate mode to "Required" inside the Function App, as shown below, and added "WEBSITE_LOAD_CERTIFICATES" to the application settings of the App Service.

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/UAMwu.png
Mutual TLS authentication between an App Service and a Function App
|azure|azure-functions|azure-web-app-service|
I've been working on a Python Discord bot and wanted to containerize it, which has worked pretty well. But while testing one of the features (bot -> open API) over HTTPS using the http3 package, I'm getting the following error:

```
ssl.SSLError: Cannot create a client socket with a PROTOCOL_TLS_SERVER context (_ssl.c:811)
```

I've read various articles and tutorials online, but they either seem to half-answer my question or relate to other applications altogether, such as configuring Nginx, which I think is just muddying the water a little. So far I've encountered people mentioning creating and moving some certs, and one answer saying to include `--network host` in the `Dockerfile`, but it doesn't seem like there is any issue with the network connectivity itself.

I was tempted to just change the request URL to use HTTP instead, as there are no credentials or sensitive data being transmitted, but I would feel a lot more comfortable knowing it's using HTTPS.

My Dockerfile is as below (note: I added the `RUN apt-get update ...` block after my investigations, hoping that would generate a certificate and the error would magically clear up, but that's not the case).

```lang-docker
FROM python:3.10-bullseye

COPY requirements.txt /app/
COPY ./bot/ /app

RUN apt-get update \
    && apt-get install openssl \
    && apt-get install ca-certificates

RUN update-ca-certificates

WORKDIR /app

RUN pip install -r requirements.txt

COPY . .

CMD ["python3", "-u", "v1.py"]
```

I tried a little bit of basic diagnostics through the container, like checking the directories for certs and trying to curl an HTTPS URL.
Looks like there's an already accepted answer. But @user23919330 was onto something. The most obvious idea would be to convert this:

    for (size_t i = 0; i < SIZE; i++, a_ptr++, b_ptr++, c_ptr++, data++) {
        *a_ptr = data->a;
        *b_ptr = data->b;
        *c_ptr = data->c;
    }

To this:

    for (size_t i = 0; i < SIZE; i++) {
        a_ptr[i] = data[i].a;
    }
    for (size_t i = 0; i < SIZE; i++) {
        b_ptr[i] = data[i].b;
    }
    for (size_t i = 0; i < SIZE; i++) {
        c_ptr[i] = data[i].c;
    }

That is, let each read and write go through sequential addresses. Take advantage of caching. Trying this out now....

**Update**. I modified OP's code such that it would run multiple times and print the timings (in milliseconds):

```
int testrun() {
    DWORD start, end;

    Struct* data = malloc(sizeof(Struct) * SIZE);
    for (size_t i = 0; i < SIZE; i++) {
        data[i].a = i;
        data[i].b = i;
        data[i].c = i;
    }

    start = GetTickCount();
    for (int i = 0; i < 500000; i++) {
        do_work(data);
    }
    end = GetTickCount();
    printf("%u\n", end - start); // print the number of milliseconds

    free(data);
}

int main() {
    for (int j = 0; j < 10; j++) {
        testrun();
    }
}
```

Compiled for release and got these run times:

    5344
    5953
    5438
    5718
    5282
    4531
    4578
    4609
    4532
    4500

Or just about 5 seconds on average per test run.

Now let's see what happens when I change `do_work` to be this:

```
void do_work(Struct* data) {
    int32_t* a = malloc(sizeof(int32_t) * SIZE),
           *b = malloc(sizeof(int32_t) * SIZE),
           *c = malloc(sizeof(int32_t) * SIZE);

    int32_t* a_ptr = a, *b_ptr = b, *c_ptr = c;

    for (size_t i = 0; i < SIZE; i++) {
        a_ptr[i] = data[i].a;
    }
    for (size_t i = 0; i < SIZE; i++) {
        b_ptr[i] = data[i].b;
    }
    for (size_t i = 0; i < SIZE; i++) {
        c_ptr[i] = data[i].c;
    }

    free(a);
    free(b);
    free(c);
}
```

Recompiled, ran, and got these times:

    2406
    2344
    2328
    2266
    2344
    2359
    2266
    2328
    2359
    2266

Or 2.3 seconds on average. I say there's a case to be made for taking advantage of sequential addresses.
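If you want to play with the layout change outside of C first, the same array-of-structs vs. struct-of-arrays idea can be sketched with NumPy structured arrays (illustrative only, not the benchmark above — extracting each field into its own contiguous copy mirrors the three separate loops):

```python
import numpy as np

SIZE = 8

# Array-of-structs: fields a, b, c interleaved in memory, like the C Struct.
aos = np.zeros(SIZE, dtype=[("a", np.int32), ("b", np.int32), ("c", np.int32)])
aos["a"] = np.arange(SIZE)
aos["b"] = np.arange(SIZE)
aos["c"] = np.arange(SIZE)

# A field view strides over whole 12-byte records...
print(aos["a"].strides)  # -> (12,)

# ...while an explicit per-field copy gives three contiguous arrays,
# mirroring the three separate loops above.
a = np.ascontiguousarray(aos["a"])
b = np.ascontiguousarray(aos["b"])
c = np.ascontiguousarray(aos["c"])
print(a.strides)  # -> (4,)
```

The stride difference (12 bytes per step vs. 4) is exactly what the cache-friendliness argument is about.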
I'm trying to set up a scheduled task with ECS Fargate. The task was dockerized and will be run through AWS ECS with Fargate. Unfortunately, the service I want to run needs to access an API of a partner where the IP needs to be whitelisted. I see that for each execution of the task with Fargate, a new ENI with a different IP is assigned. How is it possible to assign a static IP to an AWS ECS Fargate task?
How can I have an ECS Fargate scheduled job access an API with an IP whitelist policy?
I made a client-server side app using React.js and Expres.js. I also used a local database made on mysql workbench. I want to deploy my app to app-engine on google cloud, for this purpose I exported my db and now it is on the cloud. The problem arises when I want ot deploy the app. The client sode shows, but whenever I try to do any function the app crashes and I get an error CONNECTION REFUSED. So far I have tried: - using the app engine template from the documentation - using keys - using solutions from gemini, copilot and chatgpt This is my app.yaml ``` runtime: nodejs18 env: standard instance_class: F2 handlers: # Catch all handler to index.html - url: /.* script: auto secure: always redirect_http_response_code: 301 automatic_scaling: target_cpu_utilization: 0.65 min_instances: 1 max_instances: 10 env_variables: INSTANCE_CONNECTION_NAME: "hidden" DB_PRIVATE_IP: , INSTANCE_HOST: "hidden" DB_PORT: "3306" DB_USER: "root" DB_PASS: "Ljubljana2023" DB_NAME: "social_media" vpc_access_connector: name: projects/hidden ``` ``` // import userRoutes from "./routes/users.js"; // import postRoutes from "./routes/posts.js"; // import likeRoutes from "./routes/likes.js"; // import commentRoutes from "./routes/comments.js"; // import relationshipRoutes from "./routes/relationships.js"; // import authRoutes from "./routes/auth.js"; // import cookieParser from "cookie-parser"; // import cors from "cors"; // import multer from "multer"; // import Express from "express"; // const app = Express(); // //middlewares // app.use((req, res, next) => { // res.header("Access-Control-Allow-Credentials", true); // res.header("Access-Control-Allow-Origin", "*"); // res.header("Access-Control-Allow-Methods", "GET, POST, OPTIONS"); // res.header("Access-Control-Allow-Headers", "Content-Type"); // next(); // }); // app.use(Express.json()); // app.use( // cors({ // origin: "http://localhost:3000", // }) // ); // app.use(cookieParser()); // const storage = multer.diskStorage({ // destination: 
function (req, file, cb) { // cb(null, "../client/public/upload"); // }, // filename: function (req, file, cb) { // cb(null, Date.now() + file.originalname); // }, // }); // const upload = multer({ storage: storage }); // app.post("/api/upload", upload.single("file"), (req, res) => { // const file = req.file; // console.log("Received file:", file); // res.status(200).json(file.filename); // }); // app.use("/api/users", userRoutes); // app.use("/api/posts", postRoutes); // app.use("/api/likes", likeRoutes); // app.use("/api/comments", commentRoutes); // app.use("/api/relationships", relationshipRoutes); // app.use("/api/auth", authRoutes); // app.listen(8800, () => { // console.log("API working!!"); // }); // /*const {Sequelize} = require('sequelize'); // // Create a new Sequelize instance. // const sequelize = new Sequelize('social_media', 'root', 'password', { // host: '/cloudsql/third-current-418819:us-central1:systems3db', // dialect: 'mysql', // }); // // Test the connection to the database. 
// sequelize
//   .authenticate()
//   .then(() => {
//     console.log('Connection to the database has been established successfully.');
//   })
//   .catch(err => {
//     console.error('Unable to connect to the database:', err);
//   });
// */

import userRoutes from "./routes/users.js";
import postRoutes from "./routes/posts.js";
import likeRoutes from "./routes/likes.js";
import commentRoutes from "./routes/comments.js";
import relationshipRoutes from "./routes/relationships.js";
import authRoutes from "./routes/auth.js";
import cookieParser from "cookie-parser";
import cors from "cors";
import multer from "multer";
import Express from "express";
import mysql from "mysql2/promise";

const app = Express();

// Get the environment variables
const {
  INSTANCE_HOST,
  DB_PORT,
  DB_USER,
  DB_PASS,
  DB_NAME,
  INSTANCE_CONNECTION_NAME,
} = process.env;

// Create a connection pool
const pool = mysql.createPool({
  host: INSTANCE_HOST,
  user: DB_USER,
  database: DB_NAME,
  password: DB_PASS,
  port: DB_PORT,
  socketPath: `/cloudsql/${INSTANCE_CONNECTION_NAME}`,
  waitForConnections: true,
  connectionLimit: 10,
  queueLimit: 0,
});

// Now you can use the pool to query your database
app.post("/users", async (req, res) => {
  try {
    const [rows, fields] = await pool.execute("SELECT * FROM users");
    res.json(rows);
  } catch (err) {
    res.status(500).json(err);
  }
});
// etc.

//middlewares
app.use((req, res, next) => {
  res.header("Access-Control-Allow-Credentials", true);
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
  res.header("Access-Control-Allow-Headers", "Content-Type");
  next();
});
app.use(Express.json());
app.use(
  cors({
    origin: "http://localhost:3000",
  })
);
app.use(cookieParser());

const storage = multer.diskStorage({
  destination: function (req, file, cb) {
    cb(null, "../public/upload");
  },
  filename: function (req, file, cb) {
    cb(null, Date.now() + file.originalname);
  },
});

const upload = multer({ storage: storage });

app.post("/api/upload", upload.single("file"), (req, res) => {
  const file = req.file;
  console.log("Received file:", file);
  res.status(200).json(file.filename);
});

app.use("/api/users", userRoutes);
app.use("/api/posts", postRoutes);
app.use("/api/likes", likeRoutes);
app.use("/api/comments", commentRoutes);
app.use("/api/relationships", relationshipRoutes);
app.use("/api/auth", authRoutes);

const PORT = process.env.PORT || 8800;
app.listen(PORT, () => {
  console.log(`API working P:${PORT}!!`);
});
```
Deployment through App Engine with a Cloud SQL database: server code can't connect (connection refused)
I am trying to delete a column with the name "1" in pandas.
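A minimal reproduction of what I mean (hypothetical data — note the column label may be the string "1" or the integer 1, and the drop has to match the exact label):

```python
import pandas as pd

# Hypothetical frame: one column labeled with the string "1",
# another with the integer 1 -- these are different keys.
df = pd.DataFrame({"a": [10, 20], "1": [1, 2], 1: [3, 4]})

# Dropping by the string label removes only that column;
# the integer-labeled column is untouched.
df = df.drop(columns=["1"])
print(df.columns.tolist())  # -> ['a', 1]
```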
|python|pandas|dataframe|
Use this:

    select jsonb_object_keys(json_stuff) from table;

(Or just `json_object_keys` if you're using plain JSON.)

The PostgreSQL JSON documentation is quite good. [Take a look](http://www.postgresql.org/docs/9.5/static/functions-json.html).

As stated in the documentation, the function only gets the outermost keys. So if the data is a nested JSON structure, the function will not return any of the deeper keys.
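To make the "outermost keys only" point concrete outside the database, here is the same idea expressed in Python (an analogy to illustrate the behavior, not the SQL function itself):

```python
import json

doc = json.loads('{"a": {"x": 1, "y": 2}, "b": 3}')

# jsonb_object_keys behaves like listing only the top-level keys;
# the nested "x" and "y" are never returned.
print(sorted(doc.keys()))  # -> ['a', 'b']
```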
I was getting the exact same error while pushing my code to GitHub. Then I went to the GitHub discussions.

Solution - A slow or inconsistent internet connection may cause this problem. Make sure you have a steady and fast internet connection, or it'll give you the errors below:

> error: RPC failed; HTTP 400 curl 92 HTTP/2 stream 5 was not closed cleanly:
> CANCEL (err 8)
> send-pack: unexpected disconnect while reading sideband packet
> fatal: the remote end hung up unexpectedly
> Everything up-to-date

If this method doesn't solve your problem, refer to this GitHub discussion page - [RPC failed error discussion page][1]

[1]: https://github.com/orgs/Homebrew/discussions/4069
Solidity: getStudentNotifications retrieves notifications for other MetaMask accounts instead of only one MetaMask account
I have a table that I created using the angular-slickgrid AngularGridInstance. In it, I can add new columns, reorder the columns, and shrink and expand the columns.

If I expand the columns so much that they go beyond the screen, a horizontal scroll bar appears, which is fine. However, when I shrink columns excessively, a gap appears on the right side, which is undesirable. How can I ensure that the last column covers any gap created by shrinking columns?

I tried using forceFitColumns = true. But if I use that, it causes problems when I expand the columns: other columns get shrunk and no scroll bar is displayed. So I was wondering: is there any way to force-fit columns and have a horizontal scroll bar at the same time?
Is it possible to have forceFitColumns and a horizontal scroll bar at the same time?
|angular|slickgrid|angular-slickgrid|
I'm trying to Implement a K-means algorithm, with semi-random choosing of the initial centroids. I'm using Python as a way to process the data using numpy to choose initial centers and stable API in order to implement the iterative part of K-means in C. However, when I am entering relatively large datasets, I get Segmentation Error (core dumped), so far I tried to manage memory better and free all the Global array before go back to python, also I tried to free all local array before end of the function. this is the code in python: ``` def Kmeans(K, iter , eps ,file_name_1, file_name_2): compound_df = get_compound_df(file_name_1,file_name_2) N , d = int(compound_df.shape[0]) , int(compound_df.shape[1]) data = np.array(pd.DataFrame.to_numpy(compound_df),dtype=float) assert int(iter) < 1000 and int(iter) > 1 and iter.isdigit() , "Invalid maximum iteration!" assert 1 < int(K) and int(K) < N , "Invalid number of clusters!" PP_centers = k_means_PP(compound_df,int(K)) actual_centroids = [] for center_ind in PP_centers: actual_centroids.append(data[center_ind]) actual_centroids = np.array(actual_centroids,dtype=float) data = (data.ravel()).tolist() actual_centroids = (actual_centroids.ravel()).tolist() print(PP_centers) print(f.fit(int(K),int(N),int(d),int(iter),float(eps),actual_centroids,data)) ``` this is the code in C, that manages the ```PyObject``` creation, this is the python object being returned to the ```Kmeans``` function ``` PyObject* convertCArrayToDoubleList(double* arr){ int i, j; PyObject* K_centroid_list = PyList_New(K); if(!K_centroid_list) return NULL; for(i=0;i<K;++i){ PyObject* current_center = PyList_New(d); if(!K_centroid_list){ Py_DECREF(K_centroid_list); return NULL; } for(j=0;j<d;++j){ PyObject* num = PyFloat_FromDouble(arr[i*d+j]); if(!num){ Py_DECREF(K_centroid_list); Py_DECREF(current_center); return NULL; } PyList_SET_ITEM(current_center,j,num); } PyList_SET_ITEM(K_centroid_list,i,current_center); } return K_centroid_list; } ``` I ran valgrind 
on some samples; there were some memory leaks, but I couldn't identify them. I also tried various freeing and Py_DECREF combinations in an attempt to reduce the leakage, but to no avail.

Thanks for helping!
Bit of an R shiny novice so bear with me. I am trying to convert some R code I've written into an R shiny app so others can use it more readily. The code utilizes a package called IPDfromKM. The main function of issue is getpoints(), which in R will generate a plot and the user will need to click the max X and max Y coordinates, followed by clicking through the entire KM curve, which extracts the coordinates into a data frame. However, I cannot get this to work in my R shiny app. There is a [link ](https://biostatistics.mdanderson.org/shinyapps/IPDfromKM/)to the working R shiny app from the creator This is the getpoints() code: ``` getpoints <- function(f,x1,x2,y1,y2){ ## if bitmap if (typeof(f)=="character") { lfname <- tolower(f) if ((strsplit(lfname,".jpeg")[[1]]==lfname) && (strsplit(lfname,".tiff")[[1]]==lfname) && (strsplit(lfname,".bmp")[[1]]==lfname) && (strsplit(lfname,".png")[[1]]==lfname) && (strsplit(lfname,".jpg")[[1]]==lfname)) {stop ("This function can only process bitmap images in JPEG, PNG,BMP, or TIFF formate.")} img <- readbitmap::read.bitmap(f) } else if (typeof(f)=="double") { img <- f } else { stop ("Please double check the format of the image file.") } ## function to read the bitmap and points on x-axis and y-axis axispoints <- function(img){ op <- par(mar = c(0, 0, 0, 0)) on.exit(par(op)) plot.new() rasterImage(img, 0, 0, 1, 1) message("You need to define the points on the x and y axis according to your input x1,x2,y1,y2. \n") message("Click in the order of left x-axis point (x1), right x-axis point(x2), lower y-axis point(y1), and upper y-axis point(y2). 
\n") x1 <- as.data.frame(locator(n = 1,type = 'p',pch = 4,col = 'blue',lwd = 2)) x2 <- as.data.frame(locator(n = 1,type = 'p',pch = 4,col = 'blue',lwd = 2)) y1 <- as.data.frame(locator(n = 1,type = 'p',pch = 3,col = 'red',lwd = 2)) y2 <- as.data.frame(locator(n = 1,type = 'p',pch = 3,col = 'red',lwd = 2)) ap <- rbind(x1,x2,y1,y2) return(ap) } ## function to calibrate the points to the appropriate coordinates calibrate <- function(ap,data,x1,x2,y1,y2){ x <- ap$x[c(1,2)] y <- ap$y[c(3,4)] cx <- lm(formula = c(x1,x2) ~ c(x))$coeff cy <- lm(formula = c(y1,y2) ~ c(y))$coeff data$x <- data$x*cx[2]+cx[1] data$y <- data$y*cy[2]+cy[1] return(as.data.frame(data)) } ## take the points ap <- axispoints(img) message("Mouse click on the K-M curve to take the points of interest. The points will only be labled when you finish all the clicks.") takepoints <- locator(n=512,type='p',pch=1,col='orange4',lwd=1.2,cex=1.2) df <- calibrate(ap,takepoints,x1,x2,y1,y2) par() return(df) } ``` I'm a bit at a loss in how to execute this in my main panel. I've tried using plotOutput(), imageOutput(), and calling variations of the below functions, but nothing pops up or seems to work like it does in R studio. Will I need to split out the components of the function into individual steps? ``` createPoints<-reactive({ #Read File file <- input$file1 ext <- tools::file_ext(file$datapath) req(file) validate(need(ext == "png", "Please upload a .png file")) ##should run the function that generates a plot for clicking coordinates and stores them in a data frame points<-getpoints(file,x1=0, x2=input$Xmax,y1=0, y2=100) return(points) }) ``` ``` output$getPointsPlot<-renderPlot( createPoints() ) ```
|reactjs|google-app-engine|deployment|mysql-workbench|
In GLSL I iterate over a float buffer, that contains the coordinates and a couple properties of elements to render. I was curious about shaders (don't have much experience with them) and wanted to obsessively optimise it. When looking at how it gets compiled (I'm using WebGL+Spector.js), I notice that in the loop where I access the array it clamps the accesses to the size of the buffer. I understand this is heavily machine dependent, and is done to ensure no out of bounds accesses, but is there not a way to avoid these checks or guarantee to the compiler out of bounds accesses aren't possible (eg. adding a condition to the loop)? I'm mostly curious about the (small) performance impact these operations have (6 `clamp`s, and int + float casts per iteration!) Any other potential optimisation tips are welcome, I'm very new to this and super interested in it. I thought about maybe passing the data as an array of `vec3`s instead to reduce array accesses to 2/element. Not sure if it would improve it though ! ### Original code: ```c // maxRelevantIndex is a uniform, elementSize is a const = 6, elements is of size 60 for (int i = 0; i < maxRelevantIndex; i += elementSize) { float x1 = elements[i]; float y1 = elements[i + 1]; float x2 = elements[i + 2]; float y2 = elements[i + 3]; float br = elements[i + 4]; float color = elements[i + 5]; ... } ``` ### Decompiled: ```glsl for (int _uAV = 0; (_uAV < _uU); (_uAV += 6)) { float _uk = _uV[int(clamp(float(_uAV), 0.0, 59.0))]; float _ul = _uV[int(clamp(float((_uAV + 1)), 0.0, 59.0))]; float _um = _uV[int(clamp(float((_uAV + 2)), 0.0, 59.0))]; float _un = _uV[int(clamp(float((_uAV + 3)), 0.0, 59.0))]; float _uAW = _uV[int(clamp(float((_uAV + 4)), 0.0, 59.0))]; float _uAC = _uV[int(clamp(float((_uAV + 5)), 0.0, 59.0))]; ... } ```
GLSL clamps indices on array access
|glsl|glsles|
I have a JSON like

    [
      {
        "url": "https://drive.google.com/file/d/1tO-qVknlH0PLK9CblQsyd568ZiptdKff/view?usp=share_link",
        "title": "&#8211; Flexibility"
      },
      {
        "url": "https://drive.google.com/open?id=11_sR8X13lmPcvlT3POfMW3044f3wZdra",
        "title": "&#8211; Pronouns"
      }
    ]

I got it using `curl -Lfs "https://motivatedsisters.com/2019/07/08/arabic-review-sr-rahat-basit/" | rg -o '<li>.*?href="(.*?)".*?</a> (.*?)<\/li>' -r '{"url": "$1", "title": "$2"}' | jq -s '.'`.

I have a command on my machine named `unescape_html`, a Python script to unescape the HTML (replace &#8211; with the appropriate character). How can I apply this function to each of the titles using `jq`? For example, I want to run:

    unescape_html "&#8211; Flexibility"
    unescape_html "&#8211; Pronouns"

The expected output is:

    [
      {
        "url": "https://drive.google.com/file/d/1tO-qVknlH0PLK9CblQsyd568ZiptdKff/view?usp=share_link",
        "title": "– Flexibility"
      },
      {
        "url": "https://drive.google.com/open?id=11_sR8X13lmPcvlT3POfMW3044f3wZdra",
        "title": "– Pronouns"
      }
    ]

**Update 1:** If `jq` doesn't have that feature, then I am also fine with the command being applied in `rg`. I mean, can I run `unescape_html` on `$2` in the line:

    rg -o '<li>.*?href="(.*?)".*?</a> (.*?)<\/li>' -r '{"url": "$1", "title": "$2"}'

Any other bash approach to solve this problem is also fine. The point is, I need to run `unescape_html` on `title`, so that I get the expected output.

**Update 2:** The following command:

    curl -Lfs "https://motivatedsisters.com/2019/07/08/arabic-review-sr-rahat-basit/" | rg -o '<li>.*?href="(.*?)".*?</a> (.*?)<\/li>' -r '{"url": "$1", "title": "$2"}' | jq -s '.' | jq 'map(.title |= @sh "unescape_html \(.)")'

gives:

    [
      {
        "url": "https://drive.google.com/file/d/1tO-qVknlH0PLK9CblQsyd568ZiptdKff/view?usp=share_link",
        "title": "unescape_html '&#8211; Flexibility'"
      },
      {
        "url": "https://drive.google.com/open?id=11_sR8X13lmPcvlT3POfMW3044f3wZdra",
        "title": "unescape_html '&#8211; Pronouns'"
      }
    ]

It's just not evaluating the commands.
The following command works:

    curl -Lfs "https://motivatedsisters.com/2019/07/08/arabic-review-sr-rahat-basit/" | rg -o '<li>.*?href="(.*?)".*?</a> (.*?)<\/li>' -r '{"url": "$1", "title": "$2"}' | jq -s '.' | jq 'map(.title |= sub("&#8211;"; "–"))'

But it only works for `&#8211;`. It will not work for other reserved characters.
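For reference, `unescape_html` is essentially a thin wrapper around Python's `html.unescape` (a sketch of my script, shown here so the expected transformation is unambiguous):

```python
import html

def unescape_html(s: str) -> str:
    # Replaces HTML entities such as &#8211; (en dash) with the actual character,
    # and handles the other named/numeric entities too.
    return html.unescape(s)

print(unescape_html("&#8211; Flexibility"))  # -> – Flexibility
```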
How to run the getpoints() function from the IPDfromKM package in an R Shiny app, when in R it pops up a plot that uses clicks to capture coordinates?
|r|shiny|survival-analysis|shinyapps|
Heres a much simpler version of the accepted answer, without using combine ``` class SearchManager: NSObject, ObservableObject { @Published var completions: [MKLocalSearchCompletion] = [] private let completer = MKLocalSearchCompleter() override init() { super.init() completer.resultTypes = .address completer.delegate = self } func search(for term: String) { completer.queryFragment = term } } extension SearchManager: MKLocalSearchCompleterDelegate { func completerDidUpdateResults(_ completer: MKLocalSearchCompleter) { completions = completer.results } func completer(_ completer: MKLocalSearchCompleter, didFailWithError error: Error) { print("Error: \(error.localizedDescription)") } } ``` and heres the SwiftUI view ``` struct Test: View { @StateObject var searchManager = SearchManager() @State private var searchTerm: String = "" @State private var searchTimer: Timer? @FocusState var isFocused: Bool @State private var addresses: [CNPostalAddress] = [] var body: some View { List { Section { Button("Start typing your street address and you will see a list of possible matches."){ isFocused = false} } Section { TextField("Address", text: $searchTerm) .focused($isFocused, equals: true) if isFocused && !searchTerm.isEmpty { ForEach(searchManager.completions, id: \.self) { completion in Button { isFocused = false getMapItem(from: completion) } label: { VStack(alignment: .leading) { Text(completion.title) Text(completion.subtitle) .font(.system(.caption)) } } } } } .onChange(of: searchTerm) { newTerm in startSearchTimer() } Section{ ForEach(addresses, id: \.self){ add in Text(add.street) Text(add.city) Text(add.state) Text(add.postalCode) Text(add.country) } } } } private func getMapItem(from localSearchCompletion: MKLocalSearchCompletion) { let searchRequest = MKLocalSearch.Request(completion: localSearchCompletion) let search = MKLocalSearch(request: searchRequest) search.start { (response, error) in guard error == nil, let mapItems = response?.mapItems else { return } var 
postalAddresses: [CNPostalAddress] = [] for item in mapItems { let clPlacemark = CLPlacemark(placemark: item.placemark) guard let address = clPlacemark.postalAddress else { return } postalAddresses.append(address) } self.addresses = postalAddresses } } private func startSearchTimer() { searchTimer?.invalidate() // Invalidate previous timer if exists searchTimer = Timer.scheduledTimer(withTimeInterval: 0.25, repeats: false) { _ in searchManager.search(for: searchTerm) } } } ```
I am receiving a json string of the form ```json { "people": [ { "Freddie": { "surname": "Mercury", "city": "London" } }, { "David": { "surname": "Bowie", "city": "Berlin" } } ] } ``` with the ultimate goal of recovering them as Proto messages of the kind ``` message People { optional string name = 1; optional string surname = 2; optional string city = 3; } message Msg { repeated People folks = 1; } ``` What is the best practice - i.e. short of custom adhoc logic of manipulating the keys - to convert the above json string to a string sich as ``` { "people": [ { "name": "Freddie", "surname": "Mercury", "city": "London" }, { "name": "David", "surname": "Bowie", "city": "Berlin" } ] } ``` so that the correspondence between the json and the Proto is one-to-one, and therefore out-of-the-box deserializers could be used?
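For concreteness, the kind of ad-hoc key manipulation I'm trying to avoid would look roughly like this (Python sketch; `name` is the Proto field I want the wrapper key hoisted into):

```python
import json

raw = """
{"people": [
  {"Freddie": {"surname": "Mercury", "city": "London"}},
  {"David": {"surname": "Bowie", "city": "Berlin"}}
]}
"""

doc = json.loads(raw)
# Hoist each entry's single wrapper key into an explicit "name" field,
# flattening {"Freddie": {...}} into {"name": "Freddie", ...}.
doc["people"] = [
    {"name": name, **fields}
    for entry in doc["people"]
    for name, fields in entry.items()
]
print(json.dumps(doc["people"][0]))
# -> {"name": "Freddie", "surname": "Mercury", "city": "London"}
```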
Custom rewriter for json
|json|protocol-buffers|
A **[with expression][1]** produces a ***copy of its operand*** with the specified properties and fields modified. Starting with C# 10, the LHS operand of a **with** expression can be a **record**, **struct**, or **anonymous** type.

**Records**:

    public record Person(string FirstName, string LastName);

    public static class Program
    {
        public static void Main()
        {
            Person person = new("Nancy", "Davolio");
            Console.WriteLine(person);
            // output: Person { FirstName = Nancy, LastName = Davolio }

            var personCreatedUsingWithExpression = person with { FirstName = "Dave" };
            Console.WriteLine(personCreatedUsingWithExpression);
            // output: Person { FirstName = Dave, LastName = Davolio }
        }
    }

**Records containing reference-type members**: Only the reference to a member instance is copied when an operand is copied. Both the copy and the original operand have access to the same reference-type instance.

    public class ExampleWithReferenceType
    {
        public record TaggedNumber(int Number, List<string> Tags)
        {
            public string PrintTags() => string.Join(", ", Tags);
        }

        public static void Main()
        {
            var original = new TaggedNumber(1, new List<string> { "A", "B" });

            var copy = original with { Number = 2 };
            Console.WriteLine($"Tags of {nameof(copy)}: {copy.PrintTags()}");
            // output: Tags of copy: A, B

            original.Tags.Add("C");
            Console.WriteLine($"Tags of {nameof(copy)}: {copy.PrintTags()}");
            // output: Tags of copy: A, B, C
        }
    }

**Structs**: Since C# also has `record struct` types, it is easy to remember that the `with` expression works with `struct`s as well.

    public readonly struct Coords
    {
        public Coords(double x, double y)
        {
            X = x;
            Y = y;
        }

        public double X { get; init; }
        public double Y { get; init; }

        public override string ToString() => $"({X}, {Y})";
    }

    public static void Main()
    {
        var p1 = new Coords(0, 0);
        Console.WriteLine(p1);  // output: (0, 0)

        var p2 = p1 with { X = 3 };
        Console.WriteLine(p2);  // output: (3, 0)

        var p3 = p1 with { X = 1, Y = 4 };
        Console.WriteLine(p3);  // output: (1, 4)
    }

**Anonymous Types**:

    var apple = new { Item = "apples", Price = 1.35 };
    var onSale = apple with { Price = 0.79 };

[1]: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/operators/with-expression
We are running a build pipeline on Azure DevOps. The npm install task fails with the following error:

```
npm ERR! command failed
npm ERR! command C:\Windows\system32\cmd.exe /d /s /c node index.js --exec install
npm ERR! Installing Cypress (version: 4.2.0)
npm ERR!
npm ERR! [11:02:24] Downloading Cypress [started]
npm ERR! [11:02:24] Downloading Cypress [failed]
npm ERR! The Cypress App could not be downloaded.
npm ERR!
npm ERR! Does your workplace require a proxy to be used to access the Internet? If so, you must configure the HTTP_PROXY environment variable before downloading Cypress. Read more: https://on.cypress.io/proxy-configuration
npm ERR!
npm ERR! Otherwise, please check network connectivity and try again:
npm ERR!
npm ERR! ----------
npm ERR!
npm ERR! URL: https://download.cypress.io/desktop/4.2.0?platform=win32&arch=x64
npm ERR! Error: certificate has expired
npm ERR!
npm ERR! ----------
npm ERR!
npm ERR! Platform: win32 (10.0.20348)
npm ERR! Cypress Version: 4.2.0
```

Regarding the proxy issue, I don't believe it actually exists, because I tried downloading on the agent that's running the build and everything works as expected. Does anyone have experience with the second error about the expired certificate? What certificate is the error about?

Node version: 16.20.1
Cypress version: 4.2.0

Edit: I tried updating the Node and Cypress versions but it did not help.

Edit 2: The build works fine locally.
I have a class with variadic template parameters and I am trying to write a member function that unpacks the parameters into a tuple, so that I can use a structured binding in order to access the individual parameters.

```cpp
#include <string>
#include <tuple>
#include <vector>
#include <map>
#include <memory>

struct TypeErasedParameter
{
private:
    const std::string m_strId;

protected:
    TypeErasedParameter(std::string id) : m_strId(id) {}

public:
    const std::string& GetId() { return m_strId; }

    template <typename T>
    T& GetValue();
};

template <typename T>
struct Parameter : public TypeErasedParameter
{
    T parameterValue;

    Parameter(const std::string id, T value) : TypeErasedParameter(id), parameterValue(value) {}
};

template <typename T>
T& TypeErasedParameter::GetValue()
{
    return dynamic_cast<Parameter<T>*>(this)->parameterValue;
}

// ParameterPack is basically a container that stores a bunch of TypeErasedParameters
struct ParameterPack
{
    std::map<std::string, std::unique_ptr<TypeErasedParameter>> m_mapParamsById;

    template<typename ParameterType>
    ParameterType& GetParameter(const std::string& strParamName)
    {
        return m_mapParamsById.at(strParamName)->GetValue<ParameterType>();
    }
};

template <typename ...Args>
struct ParameterUnpacker
{
public:
    std::vector<std::string> m_vecIds;

    ParameterUnpacker(std::vector<std::string> vecIds) : m_vecIds(vecIds) {}

    const std::vector<std::string>& GetIds() { return m_vecIds; }

    template<size_t... Is>
    std::tuple<Args& ...> UnpackParametersHelper(ParameterPack& parameters, std::index_sequence<Is...>)
    {
        const auto& vecIds = GetIds();
        // TODO: Return a tuple of the parameters by matching the Ids with the Is order of the parameters
        return std::make_tuple<Args& ...>((parameters.GetParameter<typename std::tuple_element<Is, std::tuple<Args...>>::type>(vecIds.at(Is))) ...);
    }

    std::tuple<Args& ...> UnpackParameters(ParameterPack& parameters)
    {
        // parameters contains as many Ids and types as Args...
        return UnpackParametersHelper(parameters, std::index_sequence_for<Args...>{});
    }
};

int main(int argc, char* argv[])
{
    ParameterPack paramPack;
    paramPack.m_mapParamsById["doubleParam"] = std::make_unique<Parameter<double>>("doubleParam", 2.245);
    paramPack.m_mapParamsById["intParam"] = std::make_unique<Parameter<int>>("intParam", 5);

    ParameterUnpacker<double, int> unpacker({ "doubleParam", "intParam" });
    auto [doubleparamout, intparamout] = unpacker.UnpackParameters(paramPack);

    return 0;
}
```

Edit: The error I get when trying to compile is the following:

```
could not convert 'std::make_tuple(_Elements&& ...) [with _Elements = {double&, int&}]((* &(& parameters)->ParameterPack::GetParameter<int>((* &(& vecIds)->std::vector<std::__cxx11::basic_string<char> >::at(1)))))' from 'tuple<double, int>' to 'tuple<double&, int&>'
return std::make_tuple< Args& ...>((parameters.GetParameter<typename std::tuple_element<Is, std::tuple<Args...>>::type>(vecIds.at(Is))) ...);
```

Thanks to @n. m. could be an AI, I added a more concise and usable example with the error message, see http://coliru.stacked-crooked.com/a/e832f7f9c1994d7e

Thanks a lot in advance, let me know if you need any further information!
No programming is required. Create a shortcut to your program on the desktop. Right-click it, Properties, Layout tab. Adjust the Width property of the screen buffer and window size.

But you can do it programmatically too; those Windows API functions you found are wrapped by the Console class as well. You first have to make sure that the console buffer is large enough, then you can set the window size to a size that's no larger than the buffer. For example:

    static void Main(string[] args)
    {
        Console.BufferWidth = 100;
        Console.SetWindowSize(Console.BufferWidth, 25);
        Console.ReadLine();
    }
**_Originally asked on Swift Forums: https://forums.swift.org/t/using-bindable-with-a-observable-type/70993_**

I'm using SwiftUI environments in my app to hold a preferences object, which is an @Observable object. But I want to be able to inject different instances of the preferences object for previews vs the production code, so I've abstracted my production object into a `Preferences` protocol and updated my environment key's type to:

```swift
protocol Preferences { }

@Observable final class MyPreferencesObject: Preferences { }

@Observable final class MyPreviewsObject: Preferences { }

// Environment key
private struct PreferencesKey: EnvironmentKey {
    static let defaultValue: Preferences & Observable = MyPreferencesObject()
}

extension EnvironmentValues {
    var preferences: Preferences & Observable {
        get { self[PreferencesKey.self] }
        set { self[PreferencesKey.self] = newValue }
    }
}
```

The compiler is happy with this until I go to use `@Bindable` in my code, where the compiler explodes with a generic error, e.g.:

```swift
@Environment(\.preferences) private var preferences

// ... code
@Bindable var preferences = preferences
```

If I change the environment object back to a conforming type, e.g.:

```swift
@Observable final class MyPreferencesObject { }

private struct PreferencesKey: EnvironmentKey {
    static let defaultValue: MyPreferencesObject = MyPreferencesObject()
}

extension EnvironmentValues {
    var preferences: MyPreferencesObject {
        get { self[PreferencesKey.self] }
        set { self[PreferencesKey.self] = newValue }
    }
}
```

then `@Bindable` is happy again and things compile.

Specifically, the compiler errors with:

> Failed to produce diagnostic for expression; please submit a bug report (Swift.org - Contributing)

on the parent function the `@Bindable` is inside of, and with:

> Command SwiftCompile failed with a nonzero exit code

in the app target.

Is this a known issue/limitation? Or am I missing something here?
- Loading data from cells into an array to improve efficiency

_Microsoft documentation:_

> [Range.ClearContents method (Excel)](https://learn.microsoft.com/en-us/office/vba/api/excel.range.clearcontents?WT.mc_id=M365-MVP-33461&f1url=%3FappId%3DDev11IDEF1%26l%3Den-US%26k%3Dk(vbaxl10.chm144095)%3Bk(TargetFrameworkMoniker-Office.Version%3Dv16)%26rd%3Dtrue)
> [Range.Offset property (Excel)](https://learn.microsoft.com/en-us/office/vba/api/excel.range.offset?WT.mc_id=M365-MVP-33461&f1url=%3FappId%3DDev11IDEF1%26l%3Den-US%26k%3Dk(vbaxl10.chm144169)%3Bk(TargetFrameworkMoniker-Office.Version%3Dv16)%26rd%3Dtrue)
> [ReDim statement](https://learn.microsoft.com/en-us/office/vba/language/reference/user-interface-help/redim-statement?WT.mc_id=M365-MVP-33461&f1url=%3FappId%3DDev11IDEF1%26l%3Den-US%26k%3Dk(vblr6.chm1008999)%3Bk(TargetFrameworkMoniker-Office.Version%3Dv16)%26rd%3Dtrue)

```vb
Option Explicit

Sub Demo()
    Dim i As Long, j As Long
    Dim arrData, rngData As Range
    Dim arrRSum, iR As Long, iNum As Long
    Const FIRSTCOL = "G"
    Const FIRSTROW = 5
    Const LASTROW = 36
    Const MAXHR = 10 ' modify as needed
    ' get the table
    Set rngData = Range(Range("F1").End(xlToRight), FIRSTCOL & LASTROW)
    With rngData
        ' clear the table
        .Resize(.Rows.Count - FIRSTROW + 1).Offset(FIRSTROW - 1).ClearContents
        ' load data into an array
        arrData = .Value
    End With
    ReDim arrRSum(1 To UBound(arrData))
    ' loop through columns
    For j = LBound(arrData, 2) To UBound(arrData, 2)
        iR = 0
        iNum = arrData(2, j)
        If iNum > 0 Then
            ' populate the table
            For i = FIRSTROW To UBound(arrData)
                If arrRSum(i) < MAXHR Then
                    arrData(i, j) = 1
                    iR = iR + 1
                    arrRSum(i) = arrRSum(i) + 1
                    If iR = iNum Then Exit For
                End If
            Next
        End If
    Next
    ' write output to sheet
    rngData.Value = arrData
End Sub
```

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/974xx.png
[from here][1]

> If you try to add a standard HTML tag inside the React Three Fiber
> Canvas element, then you will get an error similar to Uncaught Error:
> R3F: Div is not part of the THREE namespace! Did you forget to extend?
>
> The error happens because the React Three Fiber reconciler renders
> into a Three.js scene, and not the HTML document. So, a div, span, h1
> or any other HTML tag will mean nothing to the Three.js renderer.
> Instead, you need to render it to the HTML document, and for that you
> use the React-DOM reconciler. This is as simple as putting your HTML
> tag outside the Canvas element.

[1]: https://sbcode.net/react-three-fiber/use-gltf/
With input:

```python
import re

text = "Of the hundreds of thousands of high school wrestlers, only a small percentage know what it’s like to win a state title. {{Elided}} is part of that percentage. The Richmond junior joined that group by winning… Premium Content is available to subscribers only. Please login here to access content or go here to purchase a subscription."

paywall_keywords = ["login", "subscription", "purchase a subscription", "subscribers"]
```

Form the pattern for the filter:

```python
patt = re.compile('|'.join(['.*' + k for k in paywall_keywords]))
# '.*login|.*subscription|.*purchase a subscription|.*subscribers'
```

Split the text into sentences:

```python
phrases = text.split(sep='.')
# ['Of the hundreds of thousands of high school wrestlers, only a small percentage know what it’s like to win a state title',
#  ' {{Elided}} is part of that percentage',
#  ' The Richmond junior joined that group by winning… Premium Content is available to subscribers only',
#  ' Please login here to access content or go here to purchase a subscription',
#  '']
```

*nltk indeed helps splitting by "..." too*

Find the hits:

```python
found = list(filter(patt.match, phrases))
# [' The Richmond junior joined that group by winning… Premium Content is available to subscribers only',
#  ' Please login here to access content or go here to purchase a subscription']
```

Eliminate those and re-form the text:

```python
'.'.join([p for p in phrases if p not in found])
# 'Of the hundreds of thousands of high school wrestlers, only a small percentage know what it’s like to win a state title. {{Elided}} is part of that percentage.'
```

References:

- [Regular Expressions: Search in list](https://stackoverflow.com/q/3640359/12846804)
- [Appending the same string to a list of strings in Python](https://stackoverflow.com/q/2050637/12846804)
- [How to concatenate (join) items in a list to a single string](https://stackoverflow.com/q/12453580/12846804)
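One caveat worth noting: the keywords above are plain words, but if a keyword ever contained a regex metacharacter (`.`, `+`, `?`, …), the joined pattern would misbehave. Escaping each keyword with `re.escape` keeps the match literal; a small sketch of the same pattern-building step:

```python
import re

paywall_keywords = ["login", "subscription", "purchase a subscription", "subscribers"]

# re.escape makes each keyword safe to embed in a regex alternation,
# so characters like '.' or '+' inside a keyword are matched literally
patt = re.compile('|'.join('.*' + re.escape(k) for k in paywall_keywords))

print(patt.match(" Please login here to access content") is not None)  # True
```

For the plain-word keywords used here the compiled pattern is identical to the original one, so the rest of the approach is unchanged.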
I work on a WebForms project (.NET Framework) and I have this exception:

```
<!-- System.Web.HttpCompileException (0x80004005): c:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\root\a1f84433\8a878a8d\App_Web_genericusercontrol.ascx.9c4a4a40.c8huzmqq.0.cs(180): error CS0234: The type or namespace name 'Services' does not exist in the namespace 'Cnbp.Cbk' (are you missing an assembly reference?)
   at System.Web.Compilation.AssemblyBuilder.Compile()
   at System.Web.Compilation.BuildProvidersCompiler.PerformBuild()
   at System.Web.Compilation.BuildManager.CompileWebFile(VirtualPath virtualPath)
   at System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile, Boolean throwIfNotFound, Boolean ensureIsUpToDate)
   at System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile, Boolean throwIfNotFound, Boolean ensureIsUpToDate)
   at System.Web.Compilation.BuildManager.GetVPathBuildResult(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile, Boolean ensureIsUpToDate)
   at System.Web.UI.TemplateControl.LoadControl(VirtualPath virtualPath)
   at Cnbp.Cbk.FrontOffice.ContainerClient.Controls.SubGenericUserControl.GenerateBlocks(Boolean isReadOnly, Boolean editByBlock, LinkButton calcButton, ContributionBlockUserControl& blocCtrlContribution)
   at Cnbp.Cbk.FrontOffice.ContainerClient.Controls.SubContainerUserControl.BindControls() -->
```

The DLL in question exists and is correctly referenced in the project, but it is not detected during the live .ascx compilation.

FYI: the DLLs are in the GAC.

I have tried lots of solutions but none worked. Does anyone have a solution, please? Thanks in advance.
I'm doing a training exercise that uses two projects: project A holds the artifacts that will be used, and in project B I run a build using a deployment group. During the execution of the release pipeline, the following error appears:

> Failed in getBuild with error: Error: VS800075: The project with id
> 'vstfs:///Classification/TeamProject/0f8301ca-c8a9-4e51-bf35-73bd978f8d8d'
> does not exist, or you do not have permission to access it.
> at RestClient.\<anonymous\> (C:\azagent\A2\\_work\\_tasks\DownloadBuildArtifacts_a433f589-fce1-4460-9ee6-44a624aeb1fb\0.236.1\node_modules\typed-rest-client\RestClient.js:202:31)
> at Generator.next (\<anonymous\>)
> at fulfilled (C:\azagent\A2\\_work\\_tasks\DownloadBuildArtifacts_a433f589-fce1-4460-9ee6-44a624aeb1fb\0.236.1\node_modules\typed-rest-client\RestClient.js:6:58)
> at process.processTicksAndRejections (node:internal/process/task_queues:95:5) { statusCode: 400,
> result: [Object], responseHeaders: [Object] }
> Error: VS800075: The project with id
> 'vstfs:///Classification/TeamProject/0f8301ca-c8a9-4e51-bf35-73bd978f8d8d'
> does not exist, or you do not have permission to access it.

Following some suggestions from Microsoft forums, I tried adding the only user I have to the Build Administrators group of project A and, to be safe, of project B as well, without success. I also tried recreating the project and the deployment group, without success.

I used this reference link: https://developercommunity.visualstudio.com/t/error-vs800075-when-downloading-artifact-from-anot/1146258
I'm trying to create a program that plays a **perfectly optimal chess endgame**. This is my currently best [code](https://pastebin.com/zkcbgANy):

```python
import chess

def simplify_fen_string(fen):
    parts = fen.split(' ')
    simplified_fen = ' '.join(parts[:4])  # Keep only the position information
    return simplified_fen

def evaluate_position(board):
    #print(f"Position: {board.fen()}")
    if board.is_checkmate():
        ### print(f"Position: {board.fen()}, return -1000")
        return -1000  # The side to move is mated
    elif board.is_stalemate() or board.is_insufficient_material() or board.can_claim_draw():
        ### print(f"Position: {board.fen()}, return 0")
        return 0  # Draw
    else:
        #print(f"Position: {board.fen()}, return None")
        return None  # The game continues

def create_AR_entry(result, children, last_move):
    return {"result": result, "children": children, "last_move": last_move, "best_child": None}

def update_best_case(best_case):
    if best_case == 0:
        return best_case
    if best_case > 0:
        return best_case - 1
    else:
        return best_case + 1

def update_AR_for_mate_in_k(board, AR, simplified_initial_fen, max_k=1000):
    evaluated_list = []
    #print(f"")
    for k in range(1, max_k + 1):
        print(f"K = {k}")
        changed = False
        for _t in range(2):  # Make sure the update runs twice for each k
            print(f"_t = {_t}")
            for fen in list(AR.keys()):
                #print(f"Fen = {fen}, looking for {simplified_initial_fen}, same = {fen == simplified_initial_fen}")
                board.set_fen(fen)
                if AR[fen]['result'] is not None:
                    if fen == simplified_initial_fen:
                        print(f"Finally we found a mate! {AR[fen]['result']}")
                        return
                    continue  # If we already have an evaluation, skip
                # Get the default values for the best and worst case
                best_case = float("-inf")
                #worst_case = float("inf")
                nones_present = False
                best_child = None
                for move in board.legal_moves:
                    #print(f"Move = {move}")
                    board.push(move)
                    next_fen = simplify_fen_string(board.fen())
                    #AR[fen]['children'].append(next_fen)
                    if next_fen not in AR:
                        AR[next_fen] = create_AR_entry(evaluate_position(board), None, move)
                        evaluated_list.append(next_fen)
                        if ((len(evaluated_list)) % 100000 == 0):
                            print(f"Evaluated: {len(evaluated_list)}")
                    board.pop()
                    #for child in AR[fen]['children']:
                    next_eval = AR[next_fen]['result']
                    if next_eval is not None:
                        if (-next_eval > best_case):
                            best_case = max(best_case, -next_eval)
                            best_child = next_fen
                        #worst_case = min(worst_case, -next_eval)
                    else:
                        nones_present = True
                if nones_present:
                    if best_case > 0:
                        AR[fen]['result'] = update_best_case(best_case)
                        AR[fen]['best_child'] = best_child
                        changed = True
                else:
                    # Update the evaluation according to the best and worst case
                    #if worst_case == -1000:  # If all moves lead to mate, the player to move can be mated in k moves
                    #    AR[fen] = -1000 + k
                    #    changed = True
                    #elif best_case <= 0:  # If the best case is no better than a draw, it means a draw or a loss
                    #    AR[fen] = max(best_case, 0)  # Prevent setting a value below 0 if a draw is possible
                    #    changed = True
                    #elif best_case == 1000:  # If at least one move mates the opponent, the player to move can force mate in k moves
                    #    AR[fen] = 1000 - k
                    #    changed = True
                    AR[fen]['result'] = update_best_case(best_case)
                    AR[fen]['best_child'] = best_child
                    changed = True
                ### print(f"Position = {fen}, results = {best_case} {nones_present} => {AR[fen]['result']}")
                if (fen == "8/8/3R4/8/8/5K2/8/4k3 b - -" or fen == "8/8/3R4/8/8/5K2/8/5k2 w - -"):
                    print("^^^^^^^^")  # remove here
                #break
            #if not changed:
            #    break  # If nothing changed, end the loop
        #if not changed:
        #    break  # End the main loop if nothing changed in the last iteration

def print_draw_positions(AR):
    """
    Prints all drawn positions (value 0) recorded in the AR dictionary.
    """
    print("Drawn positions:")
    for fen, value in AR.items():
        if True or (value > 990 and value < 1000):
            print(f"FEN>: {fen}, Value: {value}", "\n", chess.Board(fen), "<\n")

def find_path_to_end(AR, fen):
    if AR[fen]['result'] is None:
        print(f"Unfortunately, there is no path that is known to be the best")
    fen_i = fen
    print(chess.Board(fen_i), "\n<")
    path = fen
    while AR[fen_i]['best_child'] is not None:
        fen_i = AR[fen_i]['best_child']
        print(chess.Board(fen_i), "\n<")
        path = path + ", " + fen_i
    print(f"Path is: {path}")

def main():
    initial_fen = "1k6/5P2/2K5/8/8/8/8/8 w - - 0 1"
    initial_fen_original = "8/8/8/8/3Q4/5K2/8/4k3 w - - 0 1"
    initial_fen_mate_in_one_aka_one_ply = "3r1k2/5r1p/5Q1K/2p3p1/1p4P1/8/8/8 w - - 2 56"
    initial_fen_mate_in_two_aka_three_plies = "r5k1/2r3p1/pb6/1p2P1N1/3PbB1P/3pP3/PP1K1P2/3R2R1 b - - 4 28"
    initial_fen_mated_in_two_plies = "r5k1/2r3p1/p7/bp2P1N1/3PbB1P/3pP3/PP1K1P2/3R2R1 w - - 5 29"
    mate_in_two_aka_three_plies_simple = "8/8/8/8/3R4/5K2/8/4k3 w - - 0 1"
    mated_in_one_aka_two_plies_simple = "8/8/3R4/8/8/5K2/8/4k3 b - - 1 1"
    mate_in_one_aka_one_ply_simple = "8/8/3R4/8/8/5K2/8/5k2 w - - 2 2"
    initial_fen = mate_in_two_aka_three_plies_simple
    initial_fen = "1k6/5P2/2K5/8/8/8/8/8 w - - 0 1"
    initial_fen = "1k6/8/2K5/8/8/8/8/8 w - - 0 1"
    initial_fen = "8/8/8/8/8/7N/1k5K/6B1 w - - 0 1"
    initial_fen = "7K/8/k1P5/7p/8/8/8/8 w - - 0 1"
    simplified_fen = simplify_fen_string(initial_fen)
    board = chess.Board(initial_fen)
    # Initialize AR with the starting position
    AR = {simplified_fen: {"result": None, "last_move": None, "children": None, "best_child": None}}
    update_AR_for_mate_in_k(board, AR, simplified_fen, max_k=58)  # Update AR
    #print_draw_positions(AR)
    print(f"AR for initial fen is = {AR[simplified_fen]}")
    find_path_to_end(AR, simplified_fen)

main()
```

However, for the initial FEN `8/8/8/4k3/2K4R/8/8/8 w - - 0 1` it doesn't give the optimal result shown here: https://lichess.org/analysis/8/8/8/4k3/2K4R/8/8/8_w_-_-_0_1?color=white. Rather, it gives 27 plies, [like this](https://pastebin.com/hZ6AaBZe), while the lichess link above gives 1000-977 == 23 plies.

Finding the bug will be highly appreciated.
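For anyone reading the scoring convention above: a result of -1000 means the side to move is mated, and `update_best_case` shifts a mate score one ply closer to zero each time it is propagated up to a parent position, which is where figures like 1000-977 == 23 plies come from. A minimal illustration (pure Python, no board needed):

```python
def update_best_case(best_case):
    # Same helper as in the code above: move a mate score one ply toward zero
    if best_case == 0:
        return best_case
    if best_case > 0:
        return best_case - 1
    else:
        return best_case + 1

# A child scored -1000 (side to move is mated) is negated for the parent,
# then shifted by one ply: the parent can deliver mate in 1 ply.
parent = update_best_case(-(-1000))      # 999
# One level higher, the side to move is mated in 2 plies.
grandparent = update_best_case(-parent)  # -998
print(parent, grandparent)
```

So a reported score of 977 from the initial position corresponds to a forced mate in 1000 - 977 = 23 plies under this convention.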
I am a beginner in C# and C++, and when I try to build the FiveM launcher from the [FiveM launcher open-source code](https://github.com/citizenfx/fivem), I get this error after running `fxd build`:

```
"C:\Users\Utilisateur\Desktop\dev\fivem\launcher\fivem\code\build\five\CitizenMP.sln" (default target) (1) ->
"C:\Users\Utilisateur\Desktop\dev\fivem\launcher\fivem\code\build\five\CitiMono.csproj.metaproj" (default target) (76) ->
"C:\Users\Utilisateur\Desktop\dev\fivem\launcher\fivem\code\build\five\CitiMono.csproj" (default target) (105) ->
(CoreCompile target) ->
  C:\Users\Utilisateur\Desktop\dev\fivem\launcher\fivem\code\client\clrcore\InternalManager.cs(620,13): error CS0103: The name 'EnhancedStackTrace' does not exist in the current context [C:\Users\Utilisateur\Desktop\dev\fivem\launcher\fivem\code\build\five\CitiMono.csproj]
```

This is the part of *InternalManager.cs* (lines 575 to 634) where I get the error for ***EnhancedStackTrace***:

```cs
[SecuritySafeCritical]
private void PrintError(string where, Exception what)
{
    ScriptHost.SubmitBoundaryEnd(null, 0);

    var stackTrace = new StackTrace(what, true);

#if IS_FXSERVER
    var stackFrames = stackTrace.GetFrames();
#else
    IEnumerable<StackFrame> stackFrames;

    // HACK: workaround to iterate inner traces ourselves.
    // TODO: remove this once we've updated libraries
    var fieldCapturedTraces = typeof(StackTrace).GetField("captured_traces", BindingFlags.NonPublic | BindingFlags.Instance);

    if (fieldCapturedTraces != null)
    {
        var captured_traces = (StackTrace[])fieldCapturedTraces.GetValue(stackTrace);

        // client's mscorlib is missing this piece of code, copied from https://github.com/mono/mono/blob/ef848cfa83ea16b8afbd5b933968b1838df19505/mcs/class/corlib/System.Diagnostics/StackTrace.cs#L181
        var accum = new List<StackFrame>();

        foreach (var t in captured_traces ?? Array.Empty<StackTrace>())
        {
            for (int i = 0; i < t.FrameCount; i++)
                accum.Add(t.GetFrame(i));
        }

        accum.AddRange(stackTrace.GetFrames());
        stackFrames = accum;
    }
    else
        stackFrames = stackTrace.GetFrames();
#endif

    var frames = stackFrames
        .Select(a => new { Frame = a, Method = a.GetMethod(), Type = a.GetMethod()?.DeclaringType })
        .Where(a => a.Method != null && (!a.Type.Assembly.GetName().Name.Contains("mscorlib") && !a.Type.Assembly.GetName().Name.Contains("CitizenFX.Core") && !a.Type.Assembly.GetName().Name.StartsWith("System")))
        .Select(a => new
        {
            name = EnhancedStackTrace.GetMethodDisplayString(a.Method).ToString(),
            sourcefile = a.Frame.GetFileName() ?? "",
            line = a.Frame.GetFileLineNumber(),
            file = $"@{m_resourceName}/{a.Type?.Assembly.GetName().Name ?? "UNK"}.dll"
        });

    var serializedFrames = MsgPackSerializer.Serialize(frames);
    var formattedStackTrace = FormatStackTrace(serializedFrames);

    if (formattedStackTrace != null)
    {
        Debug.WriteLine($"^1SCRIPT ERROR in {where}: {what.GetType().FullName}: {what.Message}^7");
        Debug.WriteLine("{0}", formattedStackTrace);
    }
}
```

**EnhancedStackTrace** is also used in the file *MonoComponentHost.cpp*, in this part of the code (lines 267 to 327):
```cpp
static void InitMono()
{
    // initializes the mono runtime
    fx::mono::MonoComponentHostShared::Initialize();

    mono_thread_attach(mono_get_root_domain());
    g_rootDomain = mono_get_root_domain();

    mono_add_internal_call("CitizenFX.Core.GameInterface::PrintLog", reinterpret_cast<void*>(GI_PrintLogCall));
    mono_add_internal_call("CitizenFX.Core.GameInterface::fwFree", reinterpret_cast<void*>(fwFree));
#ifndef IS_FXSERVER
    mono_add_internal_call("CitizenFX.Core.GameInterface::TickInDomain", reinterpret_cast<void*>(GI_TickInDomain));
#endif
    mono_add_internal_call("CitizenFX.Core.GameInterface::GetMemoryUsage", reinterpret_cast<void*>(GI_GetMemoryUsage));
    mono_add_internal_call("CitizenFX.Core.GameInterface::WalkStackBoundary", reinterpret_cast<void*>(GI_WalkStackBoundary));
    mono_add_internal_call("CitizenFX.Core.GameInterface::SnapshotStackBoundary", reinterpret_cast<void*>(GI_SnapshotStackBoundary));

    std::string platformPath = MakeRelativeNarrowPath("citizen/clr2/lib/mono/4.5/CitizenFX.Core.dll");

    auto scriptManagerAssembly = mono_domain_assembly_open(g_rootDomain, platformPath.c_str());

    if (!scriptManagerAssembly)
    {
        FatalError("Could not load CitizenFX.Core.dll.\n");
    }

    auto scriptManagerImage = mono_assembly_get_image(scriptManagerAssembly);

    bool methodSearchSuccess = true;
    MonoMethodDesc* description;

#define method_search(name, method) description = mono_method_desc_new(name, 1); \
            method = mono_method_desc_search_in_image(description, scriptManagerImage); \
            mono_method_desc_free(description); \
            methodSearchSuccess = methodSearchSuccess && method != NULL

    MonoMethod* rtInitMethod;
    method_search("CitizenFX.Core.RuntimeManager:Initialize", rtInitMethod);
    method_search("CitizenFX.Core.RuntimeManager:GetImplementedClasses", g_getImplementsMethod);
    method_search("CitizenFX.Core.RuntimeManager:CreateObjectInstance", g_createObjectMethod);
    method_search("System.Diagnostics.EnhancedStackTrace:GetMethodDisplayString", g_getMethodDisplayStringMethod);

#if !defined(IS_FXSERVER) || defined(_WIN32)
    method_search("CitizenFX.Core.InternalManager:TickGlobal", g_tickMethod);
#endif

    if (!methodSearchSuccess)
    {
        FatalError("Couldn't find one or more CitizenFX.Core methods.\n");
    }

    MonoObject* exc = nullptr;
    mono_runtime_invoke(rtInitMethod, nullptr, nullptr, &exc);

    if (exc)
    {
        fx::mono::MonoComponentHostShared::PrintException(exc);
        return;
    }
}
```

I've tried to fix this by asking people from the cfx.re community, but that wasn't really helpful. I also searched the internet, and the only thing I found was Microsoft's error guide for error CS0103, which didn't help either, and other posts about this error weren't helpful for my case. So now I'm asking you for help, please, as I don't see where else I can find an answer. And yes, I've followed the whole process from [the documentation](https://github.com/citizenfx/fivem/blob/master/docs/building.md).
Why am I getting this error? error CS0103: The name 'EnhancedStackTrace' does not exist in the current context
|c#|c++|build|compiler-errors|fivem|
null
I am trying to make a to-do list. My app takes data successfully, but when I want to delete an item from the list, it shows `TypeError: passData.map is not a function`. I am just learning React and I cannot figure out where the problem is.

N.B.: My code is split between two components.

```
import React, { useState } from "react";
import { OutputData } from "./OutputData";

export const Form = () => {
  const [data, setData] = useState("");
  const [val, setVal] = useState([]);

  const changed = (event) => {
    setData(event.target.value);
  };

  const clicked = () => {
    setVal((oldVal) => {
      return [...oldVal, data];
    });
    setData(" ");
  };

  const del = (id) => {
    setVal((newVal) => {
      return newVal.filter((val, index) => {
        return id !== index;
      });
    });
    setVal("");
  };

  return (
    <div>
      <div>
        <h1>To Do List</h1>
        <input type="text" onChange={changed} />
        <button onClick={clicked}>
          <span>+</span>
        </button>
      </div>
      <div>
        <OutputData passData={val} del={del}></OutputData>
      </div>
    </div>
  );
};
```

My second component is here:

```
import React from 'react'

export const OutputData = ({ passData, index, del }) => {
  return (
    <div>
      {passData.map((passData, index) => {
        return (
          <div className='lidiv' key={index}>
            <li>{passData}</li>
            <button onClick={() => del(index)}>Delete</button>
          </div>
        )
      })}
    </div>
  )
}
```

I want to know why this happens and how to solve this problem.
|talend|
I would consider [this previous answer][1] that you can combine with a mask to hide a portion of the circle:

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-css -->

    svg {
      width: 200px;
      transform: rotate(90deg); /* control the rotation here */
    }

<!-- language: lang-html -->

    <svg viewBox="-3 -3 106 106">
      <defs>
        <mask id="m">
          <rect x="-3" y="-2" width="100%" height="100%" fill="white"/>
          <!-- update the 120 below to increase/decrease the visible part-->
          <circle cx="50" cy="50" r="50" stroke-dasharray="120, 1000" fill="transparent" stroke="black" stroke-width="8"/>
        </mask>
      </defs>
      <!--
        The circumference of the circle is 2*PI*R ~ 314.16
        if we want N dashes we use d = 314.16/N
        For N = 20 we have d = 15.71
        For a gap of 5 we will have "10.71,5" (d - gap, gap)
      -->
      <circle cx="50" cy="50" r="50" stroke-dasharray="10.71, 5" fill="transparent" stroke="black" stroke-width="5" mask="url(#m)"/>
    </svg>

<!-- end snippet -->

A pure CSS solution, but currently with low support due to `conic-gradient()`:

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-css -->

    .box {
      --d: 4deg;  /* distance between dashes */
      --n: 30;    /* number of dashes */
      --c: #000;  /* color of dashes */
      --b: 2px;   /* control the thickness of border */
      --m: 60deg; /* the masked part */
      --r: 0deg;  /* rotation */

      width: 180px;
      display: inline-block;
      aspect-ratio: 1;
      border-radius: 50%;
      transform: rotate(var(--r));
      background: repeating-conic-gradient(
          var(--c) 0 calc(360deg/var(--n) - var(--d)),
          transparent 0 calc(360deg/var(--n)));
      mask:
        conic-gradient(#000 var(--m), #0000 0) intersect,
        radial-gradient(farthest-side, #0000 calc(100% - var(--b) - 1px), #000 calc(100% - var(--b)) calc(100% - 1px), #0000);
    }

    body {
      background: linear-gradient(to right, yellow, pink);
    }

<!-- language: lang-html -->

    <div class="box"></div>
    <div class="box" style="--n:20;--b:5px;width:150px;--c:blue;--m:20deg"></div>
    <div class="box" style="--n:8;--d:20deg;--b:10px;width:130px;--c:red;--m:180deg"></div>
    <div class="box" style="--n:18;--d:12deg;--b:8px;width:100px;--c:green;--r:90deg"></div>
    <div class="box" style="--n:10;--d:20deg;--b:3px;width:100px;--c:purple;--r:120deg;--m:120deg"></div>

<!-- end snippet -->

[1]: https://stackoverflow.com/a/60586691/8620333