|database|devops|keycloak|customization|keycloak-rest-api|
null
> Can a base-class (constructor) detect if a derived constructor threw an exception?

No. The base class constructor is executed before the derived class constructor, so by the time you throw the exception, the base class subobject is already constructed.

Assuming you want to stay with inheritance, you can turn the logic around to avoid the warning:

```
#include <iostream>

struct Base {
    void close() { closed = true; }
    void open() { closed = false; }
    ~Base() {
        if (!closed)
            std::cout << "Warning: I should be closed!" << std::endl;
    }
private:
    bool closed = true;
};

struct Derived : public Base {
    Derived() : Base() {
        throw 0;
        Base::open();  // only reached when construction succeeds
    }
};

int main() {
    try {
        Derived derived;
    } catch (...) {}
}
```

[Live Demo](https://godbolt.org/z/xhsbqjPqd)

`Base` initializes `closed = true`, and when constructing the `Derived` does not throw, it calls `open()`. If you can give up on inheritance, make `Base` a member of `Derived` to be in control of the order of initialization.
Add the following ProGuard rules:

```
-keep class io.grpc.internal.DnsNameResolverProvider { *; }
-dontwarn io.grpc.internal.DnsNameResolverProvider
-dontwarn io.grpc.internal.PickFirstLoadBalancerProvider
```
Graham Lea's answer was great. Here I'll add a couple of improvements Jackson has made in recent years.

- `ALLOW_TRAILING_DECIMAL_POINT_FOR_NUMBERS` *(since 2.14)*: numbers may have a trailing decimal point.
- `ALLOW_LEADING_PLUS_SIGN_FOR_NUMBERS` *(since 2.14)*: numbers may begin with an explicit plus sign.
I use an IActionResult to return a List&lt;T&gt;. If I use an anonymous type, the result is OK; if I use a List&lt;User&gt;, the elements of the result are empty.

With an anonymous type it works:

```
public IActionResult GetData()
{
    var erg1 = _database.table.Select(x => new { x.Id, x.firstname, x.lastname });
    return Ok(erg1);
}
```

I get a list with all elements:

```
[
  {
    "id": "732fdbd7-c878-45e8-8c43-5f795a697f6d",
    "firstname": "Uwe",
    "lastname": "Gabbert"
  },
  {
    "id": "5288f9ea-25a2-4ffc-a711-7c0b2cf49c38",
    "firstname": "User",
    "lastname": "Test"
  }
]
```

But if I use a separate class for the elements, I get an empty list:

```
public IActionResult GetData()
{
    var erg2 = _database.table.Select(x => new User(x.Id, x.firstname, x.lastname));
    return Ok(erg2);
}
```

erg2 contains 2 items of User, but the return of GetData is:

```
[
  {},
  {}
]
```

Here is the User class:

```
public class User
{
    public string Id = "";
    public string Firstname = "";
    public string Lastname = "";

    public User(string id, string firstname, string lastname)
    {
        Id = id;
        Firstname = firstname;
        Lastname = lastname;
    }
}
```
```
override public func tintColorDidChange() {
    super.tintColorDidChange()
    if traitCollection.userInterfaceStyle == .dark {
        ...
    } else {
        ...
    }
}
```
I want to read a file (object) from an S3 bucket in Mule 4. I have read/write access only to this bucket, so when I enter the access key & secret in the global config of the AWS S3 connector and do 'Test connection', I get an Access Denied error. In WinSCP we can access it using the remote directory option from the Advanced tab: there we give the bucket name and then we can access this specific bucket. I tried providing "https://s3.amazonaws.com/bucket-name" in Custom Service Endpoint under the Advanced tab of the S3 global config, but it throws an error like the one below:

*The request signature we calculated does not match the signature you provided. Check your key and signing method. (Service: S3, Status Code: 403, Request ID: ZESE26MHAX5EP13G, Extended Request ID: jIji2hGErWw+oiaFsXOhHS4DFB5v44e9qSbkF+6ifqq8xTqn1Hde2hPRkiG9bW8vkSoXyg98bQ0=)*

Not sure how to achieve this in Mule using the S3 connector. Kindly help.

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/a5w4B.png
Well the error is quite clear, you try to access `http://127.0.0.1:8000/`, but there is no `path` in your `urls.py` that matches this. You can visit <code>http://127.0.0.1:8000/<b>members/</b></code>, or <code>http://127.0.0.1:8000/<b>admin/</b></code> given these point to views and are not just includes, but you can not point to the root directly, since you did not define such path.
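If you do want the root URL to resolve, you can add a root `path` yourself. A hypothetical sketch (the view and app names here are assumptions, not taken from your project):

```
# urls.py
from django.contrib import admin
from django.urls import include, path
from members import views  # assumed: a views module in your members app

urlpatterns = [
    path("", views.index, name="index"),  # makes http://127.0.0.1:8000/ resolve
    path("members/", include("members.urls")),
    path("admin/", admin.site.urls),
]
```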
You don't have to read the entire file in at once, as `read_csv()` has an argument for this. If the file contains a header, you would just need to modify your code to:

```
df1 <- read_csv(
  "sample-data.csv",
  col_select = c("D", "B")
)
```

If the file does not in fact contain a header, then you would use the column indices in `col_select` instead, and you would also want to set `col_names = FALSE`:

```
df1 <- read_csv(
  "sample-data.csv",
  col_select = c(4, 2),
  col_names = FALSE
)
```
A return type changed from `const Bar& getBar()` to `const Bar getBar()`. If I use `const auto& bar = getBar();` and the return type changes, I have a reference to a temporary object. If I use `const auto bar = getBar();`, I always make a copy of `Bar`. What is the best practice for this problem?

```
#include <iostream>
#include <memory>

class Bar {
public:
    Bar() = default;
    Bar(const Bar&) = default;
    Bar(Bar&&) = delete;
    Bar& operator=(const Bar&) = delete;
    Bar& operator=(const Bar&&) = delete;

    void setValue(int i) { m_i = i; }
    int value() const { return m_i; }

private:
    int m_i = 0;
};

class Foo {
public:
    Foo(Bar& bar) : m_bar(bar) {}
    Foo(const Foo&) = delete;
    Foo(Foo&&) = delete;
    Foo& operator=(const Foo&) = delete;
    Foo& operator=(const Foo&&) = delete;

    const Bar& giveMeBar() { return m_bar; }

private:
    Bar& m_bar;
};

int main() {
    auto bar = std::make_unique<Bar>();
    auto foo = std::make_unique<Foo>(*bar.get());

    const auto& barFromFoo = foo->giveMeBar();
    bar->setValue(2);

    std::cout << "original bar: " << bar->value() << std::endl;
    std::cout << "bar from foo: " << barFromFoo.value() << std::endl;
}
```
Handling Type Changes with auto in C++
|c++|reference|return-type|auto|
null
Re: Postgres, it depends: `select 1::numeric(4,2);` gives `1.00`, whereas `select 1::numeric;` gives `1`. This is due to [Postgres Numeric](https://www.postgresql.org/docs/current/datatype-numeric.html#DATATYPE-NUMERIC-DECIMAL):

>(The SQL standard requires a default scale of 0, i.e., coercion to integer precision. We find this a bit useless. If you're concerned about portability, always specify the precision and scale explicitly.)

In the DuckDB case, [DuckDB Numeric](https://duckdb.org/docs/sql/data_types/numeric):

>The default WIDTH and SCALE is DECIMAL(18, 3), if none are specified

So in the DuckDB CLI: `select 1::numeric;` gives `1.000`, whereas `select 1::numeric(4,0);` gives `1`.

Declare your field with a `0` scale, though if you are doing that, just make the field an integer.
I've been trying to do something pretty simple, but with no success. I have a tensor, say `X`, of shape `(None, 128)` containing some scores; in other words, each batch element has 128 scores. Now I apply `Y = tf.math.top_k(X, k=a).indices`, where `a` indicates the top `a` scores. For simplicity, let `a = 95`; then the shape of tensor `Y` will be `(None, 95)`. Up to here it is fine.

Now my original `data` tensor is of shape `(None, 3969, 128)`. I wanted to do some operation on the data having the top_k scores, so I extracted it using:

```
ti = tf.reshape(Y, [Y.shape[-1], -1])      # Here ti is of shape (95, None)
fs = tf.gather(data, ti[:, 0], axis=-1)    # Here fs is of shape (None, 3969, 95)
```

and then did my operation, say `Z = fs * 0.7  # Here Z is of shape (None, 3969, 95)`. This was also fine.

Now I want to create a new tensor `F` of shape `(None, 3969, 128)` containing all the unchanged data (data whose scores do not fall in the top_k) and the modified data (data whose scores fall in the top_k and have been modified in `Z`), but the order of the data should be the same as in the original, i.e., modified data should still be in its original position. **Here is where I am stuck.**

I am relatively new to TensorFlow, so apologies if I'm missing anything simple or being unclear. I have been stuck with it for a few days now. Thanks!
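For reference, the gather step described above can also be written without the reshape workaround by letting `tf.gather` use each batch element's own indices via `batch_dims`. A small sketch with toy sizes standing in for `(None, 3969, 128)` and `k = 95` (the names are illustrative, not from the original code):

```python
import tensorflow as tf

# Toy sizes standing in for (None, 3969, 128) and k = 95
batch, n, c, k = 2, 5, 8, 3
X = tf.random.uniform((batch, c))        # scores, shape (batch, 8)
data = tf.random.uniform((batch, n, c))  # the data tensor, shape (batch, 5, 8)

Y = tf.math.top_k(X, k=k).indices        # shape (batch, k)

# batch_dims=1 makes gather use each batch element's own top-k indices,
# instead of reusing row 0's indices for every element
fs = tf.gather(data, Y, axis=-1, batch_dims=1)  # shape (batch, n, k)
print(fs.shape)  # (2, 5, 3)
```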
You are getting the errors because you have not indented after your if-statements; the engine expects to run everything **one indentation in** from the if-statement if it evaluates to `true`. Hence, solving your errors for each if-statement would simply be, e.g.:

```yaml
- ${{ if contains(parameters.host, '_T1_') }}:
  - task: Maven@3  # <-- One indentation in from if-statement
    # Task continues here
```

When they are currently (in your example) on the same indentation, the engine does not understand what you want to run when the if-statement evaluates to `true`, hence it is telling you it expected at least one key-value pair to be found after the if-statement, one indentation in.
I'm using AWS S3 and MongoDB in a full-stack React application. When I add a new recipe from the front-end it works normally, but once in a while it changes the `rndImageName` which I create on the back-end. Images upload to S3 fine. Is there a better way to do this?

```
multer.js

const randomImageName = (bytes = 32) => crypto.randomBytes(bytes).toString('hex')

let rndImageName = randomImageName();

awsRouter.post('/recipeImages', upload.single('file'), async (req, res) => {
    const buffer = await sharp(req.file.buffer)
        .resize({ height: 1920, width: 1080, fit: 'contain' })
        .toBuffer()

    const params = {
        Bucket: bucketName,
        Key: 'recipeImages/' + rndImageName,
        Body: buffer,
        ContentType: req.file.mimetype
    }

    const command = new PutObjectCommand(params)
    await s3.send(command)
})

recipes.js

const { rndImageName } = require('./multer')

recipesRouter.post('/', async (request, response, next) => {
    const body = request.body
    const decodedToken = jwt.verify(getTokenFrom(request), process.env.SECRET)
    if (!decodedToken.id) {
        return response.status(401).json({ error: 'token invalid' })
    }
    const user = await User.findById(decodedToken.id)

    const recipe = new Recipe({
        id: body.id,
        recipe_name: body.recipe_name,
        ingredients: body.ingredients,
        instructions: body.instructions,
        speed: body.speed,
        category: body.category,
        main_ingredient: body.main_ingredient,
        diet: body.diet,
        likes: body.likes || 0,
        comments: body.comments || [],
        created: new Date(),
        imageName: rndImageName,
        user: user._id
    })

    const savedRecipe = await recipe.save()
    user.recipes = user.recipes.concat(savedRecipe._id)
    await user.save()
})
```

I want to insert the `rndImageName` (which I set up as the S3 file name) into the MongoDB objects so I can create a signed URL for a specific recipe. Now it seems to reuse the same `randomImageName` again, also changing the previously added recipe's `imageName`.
Adding new object to mongoDB changes the previously added object
|javascript|reactjs|mongodb|amazon-web-services|amazon-s3|
null
I have a couple of DLL library projects created with Visual Studio. One of them uses C# and the other uses C++/CLI. The library written in C++/CLI provides a C interface to the C# library so it can be called from Dart through FFI. I would like to create a Flutter plugin using FFI to wrap these libraries into a Dart interface and then use them inside a Flutter app. I have written all the Dart FFI code, and some CMake code using [ExternalProject_Add](https://cmake.org/cmake/help/latest/module/ExternalProject.html#command:externalproject_add) calling `msbuild`. I also found [include_external_msproject](https://cmake.org/cmake/help/latest/command/include_external_msproject.html), but I could not get it to work. I have managed to get it working, but I'm not happy with the build system setup: the VS solution is hardcoded to build in Release mode in the CMake script. I would like to do a debug build of the VS project when building the debug version of the Flutter application. When I print the value of `CMAKE_BUILD_TYPE`, it is empty, and I'm not sure how to proceed from here to detect the build type of the Flutter app. I need to know the build type in the CMake script because the Debug configuration outputs are put in a `Debug` folder and the Release configuration outputs in a `Release` folder, and I need to know which folder to use when setting the `my_plugin_bundled_libraries` variable for Flutter. Is this the right approach to integrate an external VS project? If not, how should I integrate the external project so that it will build correctly in both release and debug builds?
Dynamic Gradient in Vega Via Signals
I have a light Python app which should perform a very simple task, but it keeps crashing due to OOM.

## What the app should do

1. Load data from `.parquet` into a dataframe
2. Calculate an indicator using the `stockstats` package
3. Merge the freshly calculated data into the original dataframe -> **here it crashes**
4. Store the dataframe as `.parquet`

### Where it crashes

```python
df = pd.merge(df, st, on=['datetime'])
```

### Using

- Python `3.10`
- `pandas~=2.1.4`
- `stockstats~=0.4.1`
- Kubernetes `1.28.2-do.0` (running in Digital Ocean)

Here is the strange thing: the dataframe is very small (`df.size` is `208446`, file size is `1.00337 MB`, memory usage is `1.85537 MB`). Measured:

```python
import os

file_stats = os.stat(filename)
file_size = file_stats.st_size / (1024 * 1024)  # 1.00337 MB

df_mem_usage = dataframe.memory_usage(deep=True)
df_mem_usage_print = round(df_mem_usage.sum() / (1024 * 1024), 6)  # 1.85537 MB

df_size = dataframe.size  # 208446
```

### Deployment info

The app is deployed into Kubernetes using Helm with the following resources set:

```yaml
resources:
  limits:
    cpu: 1000m
    memory: 6000Mi
  requests:
    cpu: 1000m
    memory: 1000Mi
```

<s>I am using nodes with 4 vCPU + 8 GB memory and the node is not under performance pressure.</s> I have created a dedicated node pool with **8 vCPU + 16 GB** nodes, but same issue.

```
kubectl top node test-pool
NAME              CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
test-pool-j8t3y   38m          0%     2377Mi          17%
```

Pod info:

```
kubectl describe pod xxx
...
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Sun, 24 Mar 2024 16:08:56 +0000
      Finished:     Sun, 24 Mar 2024 16:09:06 +0000
...
```

Here is the CPU and memory consumption from Grafana. I am aware that very short memory or CPU spikes will be hard to see, but from a long-term perspective the app does not consume a lot of RAM. On the other hand, from my experience we are using the same `pandas` operations on containers with less RAM and much, much bigger dataframes with no problems.
[![Grafana stats][1]][1]

How should I fix this? What else should I debug in order to prevent OOM?

### Data and code example

Original dataframe (named `df`):

```python
             datetime   open   high    low  close        volume
0 2023-11-14 11:15:00  2.185  2.187  2.171  2.187  19897.847314
1 2023-11-14 11:20:00  2.186  2.191  2.183  2.184   8884.634728
2 2023-11-14 11:25:00  2.184  2.185  2.171  2.176  12106.153954
3 2023-11-14 11:30:00  2.176  2.176  2.158  2.171  22904.354082
4 2023-11-14 11:35:00  2.171  2.173  2.167  2.171   1691.211455
```

New dataframe (named `st`). Note: if `trend_orientation = 1` => `st_lower = NaN`; if `-1` => `st_upper = NaN`

```python
             datetime  supertrend_ub  supertrend_lb  trend_orientation  st_trend_segment
0 2023-11-14 11:15:00        0.21495            NaN                 -1                 1
1 2023-11-14 11:20:00        0.21495            NaN                -10                 1
2 2023-11-14 11:25:00        0.21495            NaN                -11                 1
3 2023-11-14 11:30:00        0.21495            NaN                -12                 1
4 2023-11-14 11:35:00        0.21495            NaN                -13                 1
```

Code example:

```python
import pandas as pd
import multiprocessing
import numpy as np
import stockstats


def add_supertrend(market):
    try:
        # Read data from file
        df = pd.read_parquet(market, engine="fastparquet")
        # Extract date column
        date_column = df['datetime']
        # Convert to stockstats object
        st_a = stockstats.wrap(df.copy())
        # Generate supertrend
        st_a = st_a[['supertrend', 'supertrend_ub', 'supertrend_lb']]
        # Add back datetime column
        st_a.insert(0, "datetime", date_column)
        # Add trend orientation using conditional columns
        conditions = [
            st_a['supertrend_ub'] == st_a['supertrend'],
            st_a['supertrend_lb'] == st_a['supertrend']
        ]
        values = [-1, 1]
        st_a['trend_orientation'] = np.select(conditions, values)
        # Remove not required supertrend values
        st_a.loc[st_a['trend_orientation'] < 0, 'st_lower'] = np.NaN
        st_a.loc[st_a['trend_orientation'] > 0, 'st_upper'] = np.NaN
        # Unwrap back to dataframe
        st = stockstats.unwrap(st_a)
        # Ensure correct data types are used
        st = st.astype({
            'supertrend': 'float32',
            'supertrend_ub': 'float32',
            'supertrend_lb': 'float32',
            'trend_orientation': 'int8'
        })
        # Add trend segments
        st_to = st[['trend_orientation']]
        st['st_trend_segment'] = st_to.ne(st_to.shift()).cumsum()
        # Remove trend value
        st.drop(columns=['supertrend'], inplace=True)
        # Merge ST with DF
        df = pd.merge(df, st, on=['datetime'])
        # Write back to parquet
        df.to_parquet(market, compression=None)
    except Exception as e:
        # Using proper logger in real code
        print(e)
        pass


def main():
    # Using fixed market as example, in real code market is fetched
    market = "BTCUSDT"
    # Using multiprocessing to free up memory after each iteration
    p = multiprocessing.Process(target=add_supertrend, args=(market,))
    p.start()
    p.join()


if __name__ == "__main__":
    main()
```

Dockerfile

```
FROM python:3.10

ENV PYTHONFAULTHANDLER=1 \
    PYTHONHASHSEED=random \
    PYTHONUNBUFFERED=1 \
    PYTHONPATH=.

RUN ["apt-get", "update"]

# Get dependencies
COPY requirements.txt .
RUN pip3 install -r requirements.txt

# Copy main app
ADD . .

CMD ["python", "main.py"]
```

---

### Lukasz Tracewski's suggestion

> Use [Node-pressure Eviction][2] in order to test whether the pod can even allocate enough memory on the nodes.

I have done the following:

- created a new node pool: `8 vCPU + 16 GB RAM`
- ensured that only my pod (and some system ones) will be deployed on this node (using tolerations and affinity)
- run a stress test with no OOM or other errors

```yaml
...
image: "polinux/stress"
command: ["stress"]
args: ["--vm", "1", "--vm-bytes", "5G", "--vm-hang", "1"]
...
```

```
kubectl top node test-pool-j8t3y
NAME              CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
test-pool-j8t3y   694m         8%     7557Mi          54%
```

Node description:

```
Namespace    Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------    ----                                  ------------  ----------  ---------------  -------------  ---
kube-system  cilium-24qxl                          300m (3%)     0 (0%)      300Mi (2%)       0 (0%)         43m
kube-system  cpc-bridge-proxy-csvvg                100m (1%)     0 (0%)      75Mi (0%)        0 (0%)         43m
kube-system  csi-do-node-tzbbh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         43m
kube-system  disable-systemd-upgrade-timer-mqjsk   0 (0%)        0 (0%)      0 (0%)           0 (0%)         43m
kube-system  do-node-agent-dv2z2                   102m (1%)     0 (0%)      80Mi (0%)        300Mi (2%)     43m
kube-system  konnectivity-agent-wq5p2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         43m
kube-system  kube-proxy-gvfrv                      0 (0%)        0 (0%)      125Mi (0%)       0 (0%)         43m
scanners     data-gw-enrich-d5cff4c95-bkjkc        100m (1%)    1 (12%)      1000Mi (7%)      6000Mi (43%)   2m33s
```

The pod did not crash due to OOM, so it is very likely that the issue is inside the code somewhere.

### Detailed memory monitoring

I have inserted memory measurements at multiple points. I am measuring both the dataframe size and memory usage using `psutil`.

```python
import psutil

total = round(psutil.virtual_memory().total / 1000 / 1000, 4)
used = round(psutil.virtual_memory().used / 1000 / 1000, 4)
pct = round(used / total * 100, 1)
logger.info(f"[Current memory usage is: {used} / {total} MB ({pct} %)]")
```

Memory usage:

- prior to reading data from file - RAM: `938.1929 MB`
- after df loaded - df_mem_usage: `1.947708 MB` - RAM: `954.1181 MB`
- after ST generated - df_mem_usage of ST df: `1.147757 MB` - RAM: `944.9226 MB`
- line before df merge - df_mem_usage: `945.4223 MB`

### Not using `multiprocessing`

In order to "reset" memory every iteration, I am using `multiprocessing`. However, I wanted to be sure that this does not cause troubles, so I removed it and called `add_supertrend` directly. But it ended up in OOM as well, so I do not think this is the problem.
### Real data

As suggested by Lukasz Tracewski, I am sharing the real data which causes the OOM crash. Since it is in `parquet` format, I cannot use services like pastebin, so I am using GDrive instead. I will use this folder to share any other material related to this question/issue.

- [GDrive folder][3]

### Upgrade pandas to `2.2.1`

Sometimes a plain package upgrade might help, so I decided to try upgrading pandas to `2.2.1` and also `fastparquet` to `2024.2.0` (*newer pandas requires newer fastparquet*). `pyarrow` was also updated, to `15.0.0`. It seemed to work during the first few iterations, but then crashed with OOM again.

### Using Dask

I remembered that when I used to solve complex operations on dataframes, I used dask. So I tried it in this case as well. Without success: OOM again. Using `dask` `2024.3.1`.

```python
import dask.dataframe as dd

# mem usage 986.452 MB
ddf1 = dd.from_pandas(df)
# mem usage 1015.37 MB
ddf2 = dd.from_pandas(st)
# mem usage 1019.50 MB
df_dask = dd.merge(ddf1, ddf2, on='datetime')
# mem usage 1021.56 MB
df = df_dask.compute()  # <- here it crashes ¯\_(ツ)_/¯
```

### Duplicated datetimes

While investigating the data with dask, I noticed that there are duplicate records in the `datetime` column. This is definitely wrong; datetime has to be unique. I think this might cause the issue and I will investigate it further.
```python
df.tail(10)
             datetime   open   high     low   close         volume
0 2024-02-26 02:55:00  0.234  0.238  0.2312  0.2347  103225.029408
0 2024-02-26 02:55:00  0.234  0.238  0.2312  0.2347  103225.029408
0 2024-02-26 02:55:00  0.234  0.238  0.2312  0.2347  103225.029408
0 2024-02-26 02:55:00  0.234  0.238  0.2312  0.2347  103225.029408
0 2024-02-26 02:55:00  0.234  0.238  0.2312  0.2347  103225.029408
0 2024-02-26 02:55:00  0.234  0.238  0.2312  0.2347  103225.029408
0 2024-02-26 02:55:00  0.234  0.238  0.2312  0.2347  103225.029408
0 2024-02-26 02:55:00  0.234  0.238  0.2312  0.2347  103225.029408
0 2024-02-26 02:55:00  0.234  0.238  0.2312  0.2347  103225.029408
0 2024-02-26 02:55:00  0.234  0.238  0.2312  0.2347  103225.029408
```

I have implemented a fix which removes duplicate records in the other component that prepares the data. The fix looks like this, and I will monitor whether it helps or not.

```python
# Append gathered data to df and write to file
df = pd.concat([df, fresh_data])
# Drop duplicates
df = df.drop_duplicates(subset=["datetime"])
```

[1]: https://i.stack.imgur.com/QerkZ.png
[2]: https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/
[3]: https://drive.google.com/drive/folders/1nCbG4SUoniZGhwULEtnvs4OtUmXP6BXG?usp=sharing
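The duplicated datetimes would explain the OOM on `pd.merge`: an inner join on a key that occurs *m* times on the left and *n* times on the right produces *m* × *n* rows for that key, so duplicated keys blow up quadratically. A minimal sketch (toy data mimicking the duplicated rows above):

```python
import pandas as pd

# Each side repeats the same datetime 10 times, like the df.tail(10) output above
left = pd.DataFrame({"datetime": ["2024-02-26 02:55:00"] * 10,
                     "close": [0.2347] * 10})
right = pd.DataFrame({"datetime": ["2024-02-26 02:55:00"] * 10,
                      "trend_orientation": [-1] * 10})

merged = pd.merge(left, right, on=["datetime"])
print(len(merged))  # 100 rows from 10 x 10 duplicates, not 10
```

As a guard, `pd.merge(..., validate="one_to_one")` can turn such duplicated keys into an immediate `MergeError` instead of a silent row explosion.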
If the software that you use to open the content allows opening anything other than a file, then yes. It may be a named or unnamed pipe, shared memory, an OLE IStream interface, etc. Otherwise it may be complicated, but [still possible][1]. In the worst case you can use ordinary temporary files, but do not forget to use the flags ``FILE_FLAG_DELETE_ON_CLOSE`` and ``FILE_ATTRIBUTE_TEMPORARY`` to avoid polluting the file system.

[1]: https://learn.microsoft.com/en-us/windows/win32/projfs/projected-file-system
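For illustration, the same "temporary file that disappears when its handle closes" pattern is available from Python's standard library on any platform; a minimal sketch (not the Win32 API itself, just the analogous behaviour):

```python
import os
import tempfile

# delete=True is roughly analogous to FILE_FLAG_DELETE_ON_CLOSE:
# the file is removed as soon as the handle is closed.
with tempfile.NamedTemporaryFile(delete=True) as tmp:
    tmp.write(b"content handed to the other program")
    tmp.flush()
    path = tmp.name
    assert os.path.exists(path)

# The handle is closed here, so the file is gone.
assert not os.path.exists(path)
```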
null
{"OriginalQuestionIds":[142132],"Voters":[{"Id":2756409,"DisplayName":"TylerH"},{"Id":1974224,"DisplayName":"Cristik"},{"Id":874188,"DisplayName":"tripleee"}]}
Never mind, it turns out I wasn't creating the S3 object ARN correctly. With:

```yaml
Resource: !Sub
  - arn:aws:s3:::${OtaImageBucket}/*
  - OtaImageBucket: !ImportValue OTAFirmwareS3BucketArn
```

my target ARN had `arn:aws:s3:::` twice, since the imported value is already an ARN. Fixed that and it worked.

P.S. The reason it worked on my dev env was that the bucket allowed public access (probably from some PoC).
I want to do k-means segmentation with Python. My code works on JPG images, but when I try it with a GeoTIFF, it only makes the image black and white. How can I solve this problem? Here is my code:

```
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn.cluster import KMeans
from PIL import Image
Image.MAX_IMAGE_PIXELS = None

from google.colab import drive
drive.mount('/content/drive')

# File path: where the file is located in your Google Drive
# Example: '/content/drive/My Drive/dosya_yolu/orfototo-tae7.tif'
file_path = '/content/drive/MyDrive/Colab Notebooks/ortofoto.tif'

# Read the TIFF file
image = mpimg.imread(file_path)

# Show the file
plt.imshow(image)
plt.show()

X = image.reshape(-1, 4)
kmeans = KMeans(n_clusters=2, n_init=10).fit(X)
segmented_img = kmeans.cluster_centers_[kmeans.labels_]
segmented_img = segmented_img.reshape(image.shape)
plt.imshow(segmented_img / 255)
```
Segmentation with Geotiff image
|python|machine-learning|image-segmentation|
null
I've tried using $\partial$ in a ggplot title but unfortunately it did not render correctly. In fact, when I try this example (https://uliniemann.com/blog/2022-02-21-math-annotations-in-ggplot2-with-latex2exp/), a lot of symbols are wrong. What could be the reason for this?

```
# Copied example from link
library(ggplot2)
library(dplyr, warn.conflicts = FALSE)
library(latex2exp)

df <- tibble(x = rep(1:3, times = 3), y = rep(1:3, each = 3)) %>%
  mutate(z = c(
    r"($\alpha \beta \gamma$)",
    r"($\Alpha \Beta \Gamma$)",
    r"($\chi \psi \omega$)",
    r"($x^n + y^n = z^n$)",
    r"($\int_0^1 x^2 + y^2 \ dx$)",
    r"($\sum_{i=1}^{\infty} \, \frac{1}{n^s} = \prod_{p} \frac{1}{1 - p^{-s}}$)",
    r"($\left[ \frac{N} { \left( \frac{L}{p} \right) - (m+n) } \right]$)",
    r"($S = \{z \in \bf{C}\, |\, |z|<1 \} \, \textrm{and} \, S_2=\partial{S}$)",
    r"($\frac{1+\frac{a}{b}}{1+\frac{1}{1+\frac{1}{a}}}$)"
  ))

ggplot(df) +
  geom_text(
    aes(
      x = x,
      y = y,
      label = TeX(z, output = "character")
    ),
    parse = TRUE,
    size = 12/.pt
  ) +
  scale_x_continuous(expand = c(0.25, 0)) +
  scale_y_reverse(expand = c(0.25, 0))
```

**Expected outcome:**

[![enter image description here][1]][1]

**My outcome:**

[![enter image description here][2]][2]

**My info:**

```
> sessionInfo()
R version 4.2.2 (2022-10-31 ucrt)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19045)

Matrix products: default

locale:
[1] LC_COLLATE=English_United Kingdom.utf8  LC_CTYPE=English_United Kingdom.utf8
[3] LC_MONETARY=English_United Kingdom.utf8 LC_NUMERIC=C
[5] LC_TIME=English_United Kingdom.utf8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] dplyr_1.1.3     latex2exp_0.9.6 cowplot_1.1.1   ggplot2_3.5.0

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.11        pillar_1.9.0       compiler_4.2.2     RColorBrewer_1.1-3
 [5] R.methodsS3_1.8.2  viridis_0.6.4      R.utils_2.12.2     base64enc_0.1-3
 [9] bitops_1.0-7       tools_4.2.2        ciftiTools_0.12.2  digest_0.6.33
[13] evaluate_0.22      lifecycle_1.0.3    tibble_3.2.1       gtable_0.3.4
[17] viridisLite_0.4.2  pkgconfig_2.0.3    rlang_1.1.1        RNifti_1.5.0
[21] cli_3.6.1          oro.nifti_0.11.4   rstudioapi_0.15.0  gifti_0.8.0
[25] yaml_2.3.7         xfun_0.40          fastmap_1.1.1      gridExtra_2.3
[29] stringr_1.5.0      withr_2.5.1        knitr_1.44         xml2_1.3.5
[33] generics_0.1.3     vctrs_0.6.4        grid_4.2.2         tidyselect_1.2.0
[37] glue_1.6.2         R6_2.5.1           fansi_1.0.5        rmarkdown_2.25
[41] sessioninfo_1.2.2  farver_2.1.1       magrittr_2.0.3     htmltools_0.5.6.1
[45] splines_4.2.2      scales_1.3.0       abind_1.4-5        colorspace_2.1-0
[49] labeling_0.4.3     utf8_1.2.3         stringi_1.7.12     munsell_0.5.0
[53] R.oo_1.25.0
```

[1]: https://i.stack.imgur.com/erEqZ.png
[2]: https://i.stack.imgur.com/BDEdP.png
We have set up Azure App Insights (APIM) for API log tracking. In the logs we store the API URL, API request payload, API response message, and API response error code. Now I need to create an API which will read the error log messages in APIM and return the API error message in its response. From the API we will run a Kusto query which fetches all the APIM logs, and those logs should then be returned by the API in the following shape:

```
{
  "Id": "1",
  "ApiUrl": "https://pd.myweb/user/{resource_id}?userId=SAM",
  "ApiErrormessage": "User SAM does not have access to the resource 1100"
}
```

I am expecting to create a GET API in .NET Core using C# which will return the response as below:

GET: fetch/logs

**Response**:

```
{
  "Id": "1",
  "ApiUrl": "https://pd.myweb/user/{resource_id}?userId=SAM",
  "ApiErrormessage": "User SAM does not have access to the resource 1100"
}
```
How to read the API error message from Azure APIM using a Kusto query run from C# code in a REST API
|c#|azure|.net-core|azure-api-management|
null
The error message indicates that the Azure Resource Provider for the AKS (Azure Kubernetes Service) service, specifically for the version '2022-01-02-preview', `is not registered` in your Azure subscription for the 'uksouth' location.

Register it:

```
az provider register --namespace Microsoft.ContainerService
```

![enter image description here](https://i.imgur.com/54wCFfr.png)

and verify the same using:

```
az provider show --namespace Microsoft.ContainerService --query "registrationState"
```

![enter image description here](https://i.imgur.com/roXpoPy.png)

The specific API version '2022-01-02-preview' might not be available in your region. While upgrading the AzureRM provider can often resolve these types of issues by aligning with the supported API versions, it's also possible that the error is due to a regional limitation or an incorrect API version. To check the available API versions for the `Microsoft.ContainerService` resource provider, you can use:

```
az provider show --namespace Microsoft.ContainerService --query "resourceTypes[?resourceType=='managedClusters'].apiVersions[]" -o tsv
```

![enter image description here](https://i.imgur.com/06h0pij.png)

If a different API version is needed, you might have to adjust your Terraform configuration. Upgrading to the latest version of AzureRM may resolve the API issues you are encountering, but it could also introduce breaking changes. Review the [release notes](https://github.com/hashicorp/terraform-provider-azurerm/releases) of the AzureRM provider to understand any new features, fixes, and breaking changes introduced since version 2.99, and test the upgrade in a non-production environment before implementing it in your production environment.
If you want, you can also pin the provider version to avoid unexpected changes. This practice ties your configurations to a specific version of the provider, reducing the likelihood of unexpected behavior changes. For example:

```
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.95.0" # Replace with the desired version
    }
  }
}
```

![enter image description here](https://i.imgur.com/w8e0dOs.png)

Refer to the [azurerm 3.95 release notes](https://github.com/hashicorp/terraform-provider-azurerm/releases).
null
I've connected my Power BI for a direct flow of data with pgAdmin, which gets the data directly from AWS. But I can't view the tables I selected in the table view section; it shows "this table cannot be shown because it is not in import mode". I checked the settings and everything looks fine, but I don't know how to proceed further.

[![enter image description here](https://i.stack.imgur.com/SkXYa.png)](https://i.stack.imgur.com/SkXYa.png)

I tried connecting to pgAdmin again and am still getting the same issue.
this table cannot be shown because it is not in import mode
|postgresql|powerbi|powerquery|pgadmin|
I think this solution can help you:

1. Create an `__init__.py` file in the `src` module.
2. Modify your import instructions in `messaging.py` like this:

       from src.auth import auth

3. Use the `auth` functionality.
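To see why the `__init__.py` matters, here is a small runnable sketch. The `src`/`auth`/`messaging` names are taken from the question; the package is created in a temporary directory purely for demonstration, and the `auth()` body is a made-up stand-in:

```python
import os
import sys
import tempfile

# Recreate the layout the answer assumes: a project root containing
# src/__init__.py (which marks src as a package) and src/auth.py.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "src"))
open(os.path.join(root, "src", "__init__.py"), "w").close()
with open(os.path.join(root, "src", "auth.py"), "w") as f:
    f.write("def auth():\n    return 'authenticated'\n")

# With the project root on sys.path, messaging.py can now import:
sys.path.insert(0, root)
from src.auth import auth

print(auth())  # -> authenticated
```

Without the empty `__init__.py`, older Python versions (or tools that disable implicit namespace packages) would fail to resolve `src.auth` at all.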
I set up Mailtrain to send emails using AWS SES. However, the emails bounce with the following reason: "The security token included in the request is invalid."

[Image 3](https://i.stack.imgur.com/QRJWn.png)

Additionally, emails sometimes get sent successfully, but the majority of the time I get the error.
This helped me (Node v20, using Docker):

    RUN rm -rf server/node_modules/sharp && npm --prefix server install sharp
You can try OpenPubkey to SSH without SSH keys. The following links have more information:

- https://www.docker.com/blog/how-to-use-openpubkey-to-ssh-without-ssh-keys/
- https://www.bastionzero.com/blog/why-you-should-use-bastionzero-to-secure-access-to-your-k8s-clusters?utm_content=248533620&utm_medium=social&utm_source=linkedin&hss_channel=lcp-71618213
> server doesn't seem to be responding

Step 0:
--------

One server-side SLOC, `stream = ZMQStream( socket )`, calls a function that is not MCVE-documented and must (and does) fail to execute to yield any result: `"ZMQStream" in dir()` confirms this with **`False`**.

**Remedy:** repair the MCVE, and also confirm the environment with `print( zmq.zmq_version() )` + the `"ZMQStream" in dir()` check.

----------

Step 1:
-------

Always prevent infinite deadlocks, unless a due reason exists not to do so, by setting **`<aSocket>.setsockopt( zmq.LINGER, 0 )`** prior to doing the respective `.bind()` or `.connect()`. Applications hung forever and un-released (yes, you read it correctly, infinitely blocked) resources are not welcome in distributed computing systems.

----------

Step 2:
-------

Avoid the blind distributed-mutual-deadlock the **`REQ/REP`** pattern is always prone to run into. It will happen; one just never knows when. You may read heaps of details about this on Stack Overflow.

And the remedy? Where possible, avoid the blocking forms of `.recv()` (fair `.poll()`-s are way smarter, design-wise and resources-wise), or use additional sender-side signalisation before "throwing" either side into an infinitely-blocking `.recv()`. Beware, though: a network delivery failure, or any other reason for a silent message drop, may cause the soft-signalling to flag a send which never resulted in a receive, and the two sides mutually deadlock. The hard-wired behaviour moves both the REQ and the REP side into waiting for the other to send a message, which the counterparty will never send, as it is itself also waiting to `.recv()` the still-not-received one from the (still listening) opposite side.

----------

Last, but not least:
--------------------

The ZeroMQ Zen-of-Zero also has a Zero-Warranty: messages are either fully delivered (error-free) or not delivered at all. The `REQ/REP` mutual deadlocks are best resolvable if one never falls into them (ref. `LINGER` and `poll()` above).
I wrote a Kotlin extension function for that:

    fun View.isViewFullyVisible(): Boolean {
        val localVisibleRect = Rect()
        getLocalVisibleRect(localVisibleRect)
        return localVisibleRect.top == 0 && localVisibleRect.bottom == height
    }
Build, rebuild, and clean operations were being skipped. Unloading and reloading didn't help, and neither did restarting Visual Studio. Once I removed the project from the solution and added it back, it is no longer skipped. To remove it, in Solution Explorer, right-click the project > Remove > OK. To add it back, in Solution Explorer, right-click the solution > Add > Existing Project and select your project. (Once you've added your projects back to the solution, any non-removed projects with references to the removed projects will have lost those references. So you'll need to add back references to the projects that were removed and then readded.)
I'm trying to implement RAG with the Mistral 7B LLM in Google Colab, but when I try to query I get an error. Here's my code:

```
index=VectorStoreIndex.from_documents(documents,service_context=service_context)
query_engine = index.as_query_engine()
response=query_engine.query("my question")
```

The last line gives me this error:

```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-22-8e2dbdba5aa9> in <cell line: 1>()
----> 1 response=query_engine.query("my question")

40 frames
/usr/local/lib/python3.10/dist-packages/llama_index/core/prompts/base.py in format(***failed resolving arguments***)
    194
    195         mapped_all_kwargs = self._map_all_vars(all_kwargs)
--> 196         prompt = self.template.format(**mapped_all_kwargs)
    197
    198         if self.output_parser is not None:

KeyError: 'query'
```

The LlamaIndex docs say this is the correct usage pattern, so I don't know where the problem is.
KeyError: 'query' when calling query from query_engine
|artificial-intelligence|large-language-model|llama-index|retrieval-augmented-generation|mistral-7b|
The problem is caused by Spark version 3.4.2. Upgrading Spark from 3.4.2 to 3.5.1 solved the problem.

Spark upgrade procedure
-----------------------

Remove the old Spark version:

    sudo rm -rf /opt/spark/

Download the [Apache Spark 3.5.1 tgz archive][1] into `Downloads/` and unpack it:

    tar xvf spark-3.5.1-bin-hadoop3.tgz

Move the unpacked directory to the install location:

    sudo mv spark-3.5.1-bin-hadoop3 /opt/spark

In case you did not have Spark on your machine before, add the environment variables:

    vim ~/.bashrc

Paste this:

    # Spark
    export SPARK_HOME=/opt/spark
    export PATH=$PATH:$SPARK_HOME/bin

Reload the file:

    source ~/.bashrc

Check that the installed version is 3.5.1:

    pyspark --version

[1]: https://www.apache.org/dyn/closer.lua/spark/spark-3.5.1/spark-3.5.1-bin-hadoop3.tgz
I have this document structure:

    {
        "name": "ab",
        "grades": [
            { "grade": "A", "score": 1 },
            { "grade": "A", "score": 12 },
            { "grade": "A", "score": 7 }
        ],
        "borough": "Manhattan2"
    }

The assignment is to write a query that finds the restaurants where all grades have a score greater than 5. The proposed solution is the following:

    db.restaurants.find({
        "grades": {
            "$not": {
                "$elemMatch": { "score": { "$lte": 5 } }
            }
        }
    })

I have trouble understanding the proposed solution. So far I have only used `$elemMatch` to match at least one element of an array, or elements in an array's inner objects (`grades.score`), but how the heck is `$not` "somehow making" `$elemMatch` check **all** `grades.score` values in this object?

I do understand the general idea ("don't look at scores less than or equal to 5, and what remains is what we need"), but I cannot comprehend what this code snippet returns:

    "$not": {
        "$elemMatch": { "score": { "$lte": 5 } }
    }

If asked what this query does before running and testing it, I would say that it finds the first `score` that is greater than 5 and takes that document, but that is wrong, and I cannot figure out why. I see that the order of fields and keywords plays some role, but I don't see the connection.
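To make my confusion concrete, here is a small Python sketch of the two readings. These are plain Python stand-ins for the Mongo operators, not actual MongoDB code:

```python
doc = {"name": "ab",
       "grades": [{"grade": "A", "score": 1},
                  {"grade": "A", "score": 12},
                  {"grade": "A", "score": 7}]}

def elem_match(grades, threshold=5):
    # $elemMatch alone: the document matches if ANY element
    # satisfies score <= threshold.
    return any(g["score"] <= threshold for g in grades)

def not_elem_match(grades, threshold=5):
    # $not + $elemMatch: the document matches if NO element
    # satisfies score <= threshold, i.e. every score > threshold.
    return not elem_match(grades, threshold)

print(elem_match(doc["grades"]))      # True  (score 1 <= 5 exists)
print(not_elem_match(doc["grades"]))  # False (so this document is excluded)
```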
You can do this in tidyverse without any additional packages if you are prepared to use a little maths: ``` r make_pie <- function(x, y, size, groups, n, rownum, aspect = 1) { angles <- c(0, 2*pi * cumsum(n)/sum(n)) do.call("rbind", Map(function(a1, a2, g) { xvals <- c(0, sin(seq(a1, a2, len = 30)) * size, 0) * aspect + x yvals <- c(0, cos(seq(a1, a2, len = 30)) * size, 0) + y data.frame(x = xvals, y = yvals, group = g, rownum = rownum) }, head(angles, -1), tail(angles, -1), groups)) } pies <- d %>% mutate(r = row_number(), y = as.numeric(factor(cat))) %>% rowwise() %>% group_map(~ with(.x, make_pie(50, y, 0.4, aspect = 15, c("val", "neg"), c(val, total - val), r))) %>% bind_rows() d %>% mutate(neg = total - val) %>% select(-total) %>% pivot_longer(-1) %>% ggplot(aes(value, cat, fill = name)) + geom_col() + geom_polygon(aes(x = x, y = y, fill = group), data = pies) + scale_fill_manual(values = c("#FDE725FF", "#440154FF"), guide = 'none') + theme_minimal() ``` [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/oZz6F.jpg
I am trying to make a private YouTube channel. When I follow instructions I've found in tutorials like [this](https://castos.com/make-a-youtube-channel-private/), the channel disappears completely. When I search the YouTube help center directly, I don't see any hit for how to make a channel private, just how to "hide" the channel.

Is it possible to create a private channel and then invite specific users to that channel, as described in this tutorial: https://castos.com/make-a-youtube-channel-private/ ?

I do not want to just make videos private; I'd like to make the whole channel private, as it's a concept in development and is not ready to be public.
Making a YouTube Channel Private
|youtube|
I found the problem: in the previous code I was passing `availableOn` as an argument, which is why it didn't work. Here is the correct code:

    @script
    <script>
        $wire.on('appointment-modal', async (event) => {
            let data = await event.availableOn;
            console.log(data);

            const modalBody = document.getElementById('modal-body');

            data.forEach(time => {
                console.log(time.day)
                let timeElement = document.createElement('p');
                timeElement.textContent = time.start_time + ' - ' + time.end_time;
                modalBody.appendChild(timeElement);
            });

            const myModal = new bootstrap.Modal('#showTimes');
            myModal.show();
        });
    </script>
    @endscript
`response.json()` returns a promise; you should wait until it is fulfilled. To do that you can use `Promise.all` with an array of two elements: the `statusCode` and the `response.json()` call:

    return fetch(url)
      .then(response => {
        const statusCode = response.status;
        const data = response.json();
        return Promise.all([statusCode, data]);
      })
      .then(([res, data]) => {
        console.log(res, data);
      })
      .catch(error => {
        console.error(error);
        return { name: "network error", description: "" };
      });

**Edit:** you can create a function that processes the response:

    function processResponse(response) {
      const statusCode = response.status;
      const data = response.json();
      return Promise.all([statusCode, data]).then(res => ({
        statusCode: res[0],
        data: res[1]
      }));
    }

and use it in the `then()`:

    return fetch(url)
      .then(processResponse)
      .then(res => {
        const { statusCode, data } = res;
        console.log("statusCode", statusCode);
        console.log("data", data);
      })
      .catch(error => {
        console.error(error);
        return { name: "network error", description: "" };
      });
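Here is a minimal runnable sketch of the same idea, using a stand-in object instead of a real `fetch` response (the `fakeResponse` below is only for illustration; a real `Response` has the same `status` property and `json()` method):

```javascript
// Pair the status code with the parsed body, as in the answer above.
// Promise.all resolves non-promise values (the status number) as-is.
function processResponse(response) {
  const statusCode = response.status;
  const data = response.json(); // a promise
  return Promise.all([statusCode, data]).then(([code, body]) => ({
    statusCode: code,
    data: body
  }));
}

// Stand-in for a fetch() response (illustration only).
const fakeResponse = {
  status: 200,
  json: () => Promise.resolve({ name: "ok", description: "" })
};

processResponse(fakeResponse).then(({ statusCode, data }) => {
  console.log(statusCode, data.name); // 200 "ok"
});
```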
The [handle class](https://www.mathworks.com/help/matlab/handle-classes.html) and its copy-by-reference behavior is the natural way to implement linkage in Matlab.

It is, however, possible to implement a linked list in Matlab without OOP. And an abstract list which does *not* splice an existing array in the middle to insert a new element -- as complained about in [this comment](https://stackoverflow.com/questions/1413860/matlab-linked-list#comment23877880_1422443). (Although I do have to use a Matlab data type somehow, and adding a new element to an existing Matlab array requires memory allocation somewhere.)

The reason this is possible is that we can model linkage in ways other than pointer/reference. The reason is *not* [closure](https://en.wikipedia.org/wiki/Closure_(computer_programming)) with [nested functions](https://www.mathworks.com/help/matlab/matlab_prog/nested-functions.html). I will nevertheless use closure to encapsulate a few *persistent* variables. At the end, I will include an example to show that closure alone confers no linkage, and so [this answer](https://stackoverflow.com/a/1421186/3181104) as written is incorrect.

At the end of the day, a linked list in Matlab is only an academic exercise. Matlab, aside from the aforementioned handle class and classes inheriting from it (called subclasses in Matlab), is purely copy-by-value. Matlab will optimize and automate how copying works under the hood. It will avoid deep copy whenever it can. That is probably the better take-away for the OP's question. The absence of reference in its core functionality is also why a linked list is not obvious to make in Matlab.
-------------

##### Example Matlab linked list:

```lang-matlab
function headNode = makeLinkedList(value)
% value is the value of the initial node
% for simplicity, we will require an initial node; and won't implement insert before head node
% for the purpose of this example, we accommodate only double as value
% we will also limit max list size to 2^31-1 as opposed to the usual 2^48 in Matlab vectors

    m_id2ind=containers.Map('KeyType','int32','ValueType','int32'); % pre R2022b, faster to split than to array value
    m_idNext=containers.Map('KeyType','int32','ValueType','int32');

    %if exist('value','var') && ~isempty(value)
        m_data=value; % stores value for all nodes
        m_id2ind(1)=1;
        m_idNext(1)=0; % 0 denotes no next node
        m_id=1; % id of head node
        m_endId=1;
    %else
    %    m_data=double.empty;
    %    % not implemented
    %end

    headNode = struct('value',value,... % note: this field is for convenience; but is access only
                      'get',@getValue,...
                      'set',@set,...
                      'next',@next,...
                      'head',struct.empty,...
                      'push_back',@addEnd,...
                      'addAfter',@addAfter,...
                      'deleteAt',@deleteAt,...
                      'nodeById',@makeNode,...
                      'id',m_id);

    function value=getValue(node)
        if isempty(node)
            warning("Node is empty.")
            value=double.empty;
        else
            value=id2val(node.id);
        end
    end

    function set(node,val)
        if isempty(node)
            warning("Node is empty.")
        else
            m_data(m_id2ind(node.id))=val;
        end
    end

    function nextNode=next(node)
        if m_idNext(node.id)==0
            warning('There is no next node.')
            nextNode=struct.empty;
        else
            nextNode=makeNode(m_idNext(node.id));
        end
    end

    function node=makeNode(id)
        if isKey(m_id2ind,id)
            node=struct('value',id2val(id),... % note: this field is for convenience; but is access only
                        'get',@getValue,...
                        'set',@set,...
                        'next',@next,...
                        'head',headNode,...
                        'push_back',@addEnd,...
                        'addAfter',@addAfter,...
                        'deleteAt',@deleteAt,...
                        'nodeById',@makeNode,...
'id',id); else warning('No such node!') node=struct.empty; end end function temp=id2val(id) temp=m_data(m_id2ind(id)); end function addEnd(value) addAfter(value,m_endId); end function addAfter(value,id) m_data(end+1)=value; temp=numel(m_data);% new id will be new list length if (id==m_endId) m_idNext(temp)=0; else m_idNext(temp)=temp+1; end m_id2ind(temp)=temp; m_idNext(id)=temp; m_endId=temp; end function deleteAt(id) %delete to free memory does not make sense with the chosen data type. But can work if m_data had been a cell array by setting a particular element empty end end ``` With the above .m file, the following runs: ```lang-matlab >> clear all % remember to clear all before creating a new list >> headNode = makeLinkedList(1); >> headNode.push_back(2); >> headNode.push_back(3); >> node2=headNode.next(headNode); >> node2.get(node2) ans = 2 >> node3=node2.next(node2); >> node3.get(node3) ans = 3 >> node4=node3.next(node3); Warning: There is no next node. > In makeLinkedList/next (line 52) >> nodeNot4=node3.head.next(node3.head); >> node2.set(node2,222) >> nodeNot4.get(nodeNot4) ans = 222 ``` `.next()`, `.get()`, `.set()` etc in the above can take any valid node `struct` as input -- not limited to itself. Similarly, `.push_back()`, `.insertAfter()`, `.head` etc can be done from any node. But that node needs to be passed in manually because a non-OOP [`struct`](https://www.mathworks.com/help/matlab/ref/struct.html) in Matlab cannot reference itself implicitly and automatically. It does not have a `this` pointer or `self` reference. In the above example, nodes are given unique IDs, a dictionary is used to map ID to data (index) and to map ID to next ID. (With pre-R2022 `containers.Map()`, it's more efficient to have 2 dictionaries even though we have the same key and same value type across the two.) So when inserting new node, we simply need to update the relevant next ID. 
(Double) array was chosen to store the node values (which are doubles) and that is the data type Matlab is designed to work with and be efficient at. As long as no new allocation is required to append an element, insertion is constant time. Matlab automates the management of memory allocation. Since we are not doing array operations on the underlying array, Matlab is unlikely to take extra step to make copies of new contiguous arrays every time there is a resize. [Cell array](https://www.mathworks.com/help/matlab/ref/cell.html) may incur less re-allocation but with some trade-offs. Since [dictionary](https://www.mathworks.com/help/matlab/ref/dictionary.html) is used, I am not sure if this solution qualifies as purely [functional](https://en.wikipedia.org/wiki/Functional_programming). ------------ ##### re: closure vs linkage In short, closure does not confer linkage. Matlab's nested functions have access to variables in parent functions directly -- as long as they are not shadowed by local variables of the same names. But there is no variable passing. And thus there is no pass-by-reference. And thus we can't model linkage with this non-existent referencing. I did take advantage of closure above to make a few variables persistent and shared, since scope (called [workspace](https://www.mathworks.com/help/matlab/matlab_prog/base-and-function-workspaces.html) in Matlab) being referred to means all variables in the scope will persist. That said, Matlab also has a [persistent](https://www.mathworks.com/help/matlab/ref/persistent.html) specifier. Closure is not the only way. To showcase this distinction, the example below will not work because every time there is passing of `previousNode`, `nextNode`, they are passed-by-value. There is no way to access the original `struct` across function boundaries. And thus, even with nested function and closure, there is no linkage! 
```lang-matlab
function newNode = SOtest01(value,previousNode,nextNode)
    if ~exist('previousNode','var') || isempty(previousNode)
        i_prev=m_prev();
    else
        i_prev=previousNode;
    end
    if ~exist('nextNode','var') || isempty(nextNode)
        i_next=m_next();
    else
        i_next=nextNode;
    end

    newNode=struct('value',m_value(),...
                   'prev',i_prev,...
                   'next',i_next);

    function out=m_value
        out=value;
    end
    function out=m_prev
        out=previousNode;
    end
    function out=m_next
        out=nextNode;
    end
end
```

```lang-matlab
>> newNode=SOtest01(1,[],[]);
>> newNode2=SOtest01(2,newNode,[]);
>> newNode2.prev.value=2;
>> newNode.value

ans =

     1
```

But we tried to set the value of node 2's prev node to 2!
I have an ASP.NET Core app on .NET 6. The user is asked to log in against Azure AD (Entra). After this is done, the web app needs to make a call to an API. The endpoint it hits has a scope requirement. This is a scope the web app registration has permission for in Azure.

How do I get hold of a token containing that scope, so I can use it to authorize in the API?

I've looked at `_tokenAcquisition.AcquireTokenForUser`, but even if I have all the AzureAd configuration in place, including a client secret, it says `no account or login hint was passed to AcquireTokenSilent`.

Reading the documentation confuses me, and it's hard to find a concrete example of what I'm after. I've tried different solutions in code, but none seem to work. Any help appreciated, and let me know if more info is needed, thanks!
AcquireTokenForUser in ASP.NET Core Web app
|asp.net-core|authentication|.net-6.0|azure-ad-msal|msal|
What I imagine happens: while you enter your notes you already type `n`, `o`, `t`, `e` or `s` => you think you are at the beginning of the sequence when the window is closed, but you might not be. Thus, when you type the first letter `n` (again) when starting the sequence, it resets instead and ignores the rest as well, missing the entire next sequence.

----

I think there are a couple of points towards solving this:

- In `ProcessCommands`, along with `showNotes = false;`, also reset `keySequenceIndex = 0;`

- In `else if (Input.anyKeyDown)`, instead of only resetting `keySequenceIndex = 0;`, rather check if this is maybe the beginning of the sequence - and start over directly. Otherwise, as said, in case you are already in the middle of the sequence and start with `n` again, it will miss it and thereby miss the entire next sequence.

- And finally, within `Update`, simply ignore the entire input as long as the notes window is opened:

      void Update()
      {
          if (showNotes) return;

          ...
      }

  in order to not handle any content you type in the notes as input in the first place.
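The second bullet (on a mismatch, check whether the pressed key itself starts the sequence) can be sketched language-agnostically. This Python version only models the index logic, not Unity input; the sequence "notes" is assumed from the question:

```python
SEQUENCE = "notes"

def advance(index, key):
    """Return the next sequence index after receiving one key press."""
    if key == SEQUENCE[index]:
        return index + 1                      # matched: move forward
    # mismatch: the pressed key may itself restart the sequence,
    # instead of blindly resetting to 0
    return 1 if key == SEQUENCE[0] else 0

def detects(keys):
    i = 0
    for k in keys:
        i = advance(i, k)
        if i == len(SEQUENCE):
            return True
    return False

print(detects("notes"))   # True
print(detects("nnotes"))  # True  (a naive reset-to-0 would miss this one)
print(detects("xnote"))   # False
```

The `"nnotes"` case is exactly the failure mode described above: after the stray `n`, a naive reset throws away the fact that `n` is also the start of a fresh sequence.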
The problem is that whenever I try to check out while not logged in, it goes to the link "http://localhost:3000/login?redirect=shipping" to log in, and should then go to the shipping page where the address etc. has to be entered. But it redirects to http://localhost:3000/login/shipping when it should redirect to http://localhost:3000/shipping. The code is:

```
import React, { useEffect, useState } from "react";
import { useDispatch, useSelector } from "react-redux";
import { Link, useLocation, useNavigate } from "react-router-dom";
import { login } from "../../redux/action/userAction";

const Login = () => {
  const [email, setEmail] = useState("");
  const [password, setPassword] = useState("");

  const location = useLocation();
  const navigate = useNavigate();

  const redirect = location.search ? location.search.split("=")[1] : "/";
  console.log(`Login Redirect ${redirect}`); // here it takes "shipping" from the redirect variable

  const dispatch = useDispatch();

  const userLogin = useSelector((state) => state.userLogin);
  const { loading, error, userInfo } = userLogin;

  // dispatch(login({ email, password }));

  useEffect(() => {
    if (userInfo) {
      navigate(redirect);
    }
  }, [userInfo, redirect]);

  const sumbitHandler = (e) => {
    e.preventDefault();
    dispatch(login(email, password));
  };

  return (
    <div className="flex flex-col justify-center mx-6 my-6">
      <div className="hidden"></div>
      <div className="flex flex-col gap-10">
        {error && (
          <h1 className="text-center bg-red-500 text-red-600 text-sm py-4 w-full">
            {error}
          </h1>
        )}
        <div className="flex flex-col gap-6">
          <h1 className="text-3xl font-medium">Log In to Exclusive</h1>
          <p className="text-sm">Enter your details below</p>
        </div>
        <div className="flex flex-col gap-4">
          <form onSubmit={sumbitHandler} autoComplete="off">
            <div className="flex flex-col gap-6">
              <div>
                <input
                  type="email"
                  placeholder="Enter Email"
                  value={email}
                  onChange={(e) => setEmail(e.target.value)}
                  className="text-sm border-b-[1px] border-black/[60%] w-full px-1 py-3"
                  autoComplete="off"
                />
              </div>
              <div>
                <input
                  type="password"
                  placeholder="Password"
                  value={password}
                  onChange={(e) => setPassword(e.target.value)}
                  className="text-sm border-b-[1px] border-black/[60%] w-full px-1 py-3"
                  autoComplete="off"
                />
              </div>
              <div className="my-">
                <button
                  className="text-base text-white bg-black py-4 w-full rounded-full"
                  type="submit"
                >
                  Login
                </button>
              </div>
            </div>
          </form>
          <div>
            <h1 className="text-center text-xs text-black/[60%]">
              Don't have an account?{" "}
              <span className="hover:underline">
                <Link
                  to={redirect ? `/signup?redirect=${redirect}` : "/signup"}
                >
                  SingUp
                </Link>
              </span>
            </h1>
          </div>
        </div>
      </div>
    </div>
  );
};

export default Login;
```

I tried to do manual navigation, like giving a direct path to /shipping, but it didn't work.
How to integrate Visual Studio solution build into flutter build system
|flutter|windows|visual-studio|cmake|msbuild|
I need help, please. I have a dialog screen in MFC. The screen has an "Add" button which displays a new member (up to 8) each time it is clicked. I want to add a scrollbar once Member 6 has been added. How can I implement this? The code is C++ using MFC. Kindly advise. Thank you.

![DialogBox[1]][1]

[1]: https://i.stack.imgur.com/siGni.png
Look in Tools > Options > Font
The input shown is malformed so `read_xml` will give an error. Since the question indicates it works there must have been a transcription error in moving the XML to the question. We have added a close div tag before the 4th opening div tag in the Note at the end. Since the XML uses a namespace, first strip that using `xml_ns_strip` to avoid problems. Then form the appropriate xpath expression producing the needed nodes and convert those to dcf format (which is a name:value format where each field is on a separate line and a blank line separates records -- see `?read.dcf` for details) in variable `dcf`. Read that using `read.dcf`, convert the resulting character matrix to data frame and fix up the div entries. library(dplyr) library (xml2) doc <- read_xml(Lines) # see Note at end for Lines nodes <- doc %>% xml_ns_strip() %>% xml_find_all('//div | //head[@rend="time"] | //hi[@rend="italic"]') dcf <- case_when( xml_name(nodes) == "div" ~ "\ndiv:", xml_name(nodes) == "hi" ~ paste0("time:", xml_text(nodes)), TRUE ~ paste0("content:", xml_text(nodes)) ) dcf %>% textConnection() %>% read.dcf() %>% as.data.frame() %>% mutate(div = row_number()) giving div time content 1 1 TIME_1 CONTENT1 2 2 TIME_2 <NA> 3 3 TIME_3 CONTENT3 4 4 <NA> CONTENT4
I'm using `df.compare` to diff two CSVs which have the same index/row names. `df.compare` does the diff as expected, but reports the row index numbers (0, 2, 5, ...) wherever it finds a diff between the CSVs. What I'm looking for: instead of the row numbers where it finds the diff, I need `df.compare` to show me the row labels.

    diff_out_csv = old.compare(latest,align_axis=1).rename(columns={'self':'old','other':'latest'})

Current output:

        NUMBER1            NUMBER2           NUMBER3
            old    latest      old   latest      old  latest
    0  -14.1685  -14.0132  -1.2583  -1.2611      NaN     NaN
    2   -9.7875  -12.2739  -0.3532  -0.3541     86.0   100.0
    3   -0.0365   -0.0071  -0.0099  -0.0039      6.0     2.0
    4   -1.9459   -1.5258  -0.5402  -0.0492     73.0   131.0

Requirement:

            NUMBER1            NUMBER2           NUMBER3
                old    latest      old   latest      old  latest
    JACK   -14.1685  -14.0132  -1.2583  -1.2611      NaN     NaN
    JASON   -9.7875  -12.2739  -0.3532  -0.3541     86.0   100.0
    JACOB   -0.0365   -0.0071  -0.0099  -0.0039      6.0     2.0
    JIMMY   -1.9459   -1.5258  -0.5402  -0.0492     73.0   131.0

I was able to replace the column names using `df.compare(...).rename(columns={})`, but how do I replace 0, 2, 3, 4 with the text names?
Understanding $not in combination with $elemMatch in MongoDB
|mongodb|mongodb-query|
Here is the complete content of my .pro file (I have specified adding the network module):

```
QT += quick virtualkeyboard network

# You can make your code fail to compile if it uses deprecated APIs.
# In order to do so, uncomment the following line.
#DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000    # disables all the APIs deprecated before Qt 6.0.0

CONFIG += c++14

SOURCES += \
        main.cpp

RESOURCES += qml.qrc \
    resources/icon.qrc

# Additional import path used to resolve QML modules in Qt Creator's code model
QML_IMPORT_PATH =

# Additional import path used to resolve QML modules just for Qt Quick Designer
QML_DESIGNER_IMPORT_PATH =

# Default rules for deployment.
qnx: target.path = /tmp/$${TARGET}/bin
else: unix:!android: target.path = /opt/$${TARGET}/bin
!isEmpty(target.path): INSTALLS += target
```

However, when I use the QNetwork module in my QML file like this:

```
import QtNetwork 2.15
```

I get the following issue:

```
QML module not found (QNetwork)

Import paths:E:/Qt/5.15.2/msvc2019/qml

For qmake projects, use the QML lMPORT PATH variable to add import paths.
For Qbs projects, declare and set a qmllmportPaths property in your product to add import paths.
For qmlproject projects, use the importPaths property to add import paths.
For CMake projects, make sure QML lMPORT PATH variable is in CMakeCache.txt.
For qmlRegister.. calls, make sure that you define the Module URl as a string literal.
```

I checked the folder '**E:\Qt\5.15.2\msvc2019**', and indeed, I did not find any files related to the QNetwork module. What is going on? How should I use this module? I have tried various solutions found online, but many of them are aimed at the Visual Studio IDE. In reality, I am using **Windows 10 22H2 with Qt 5.15, MSVC2019 32-bit** compiler, and my Integrated Development Environment (IDE) is **Qt Creator**.
module "QtNetwork" is not installed
|qt5|c++14|qnetworkaccessmanager|
This can be done using `do.call` which allows for variable args to be passed through to a function. This also allows for the expansion into allowing multiple parameters to be passed through and parsed (e.g. `use_x` and `use_y`) ``` run_benchmarking <- function (parameter_combinations, run_names) { benchmark_list <- alist( do.call(run_function, list(eval(parse(text = parameter_combinations[1])))), do.call(run_function, list(eval(parse(text = parameter_combinations[2])))) ) names(benchmark_list) = run_names # currently setup with only two values allowed microbenchmark::microbenchmark( list = benchmark_list, times = 1000 ) } ``` This can then be called with: ``` run_benchmarking(c("use_x = T", "use_x = F"), c("With X", "Without X")) # Unit: microseconds # expr min lq mean median uq max neval # With X 15.0 15.5 17.7892 15.8 16.4 138.5 1000 # Without X 14.1 14.7 16.5617 14.9 15.4 185.4 1000 ```
I'm running the following query in QuestDB: ```sql SELECT timestamp, first(price) AS open, last(price) AS close, min(price), max(price), sum(amount) AS volume FROM trades WHERE symbol = 'BTC-USD' AND timestamp > dateadd('d', -1, now()) SAMPLE BY 15m ALIGN TO CALENDAR; ``` How can I tell that this query uses all available cores on the server?
I have a C# game emulator which uses `TcpListener` and originally had a TCP-based client. A new client was introduced which is HTML5 (WebSocket) based. I wanted to support this without modifying too much of the existing server code, still allowing `TcpListener` and `TcpClient` to work with WebSocket clients connecting.

Here is what I have done, but I feel like I'm missing something, as I am not getting the usual order of packets, therefore the handshake never completes.

1. Implement the protocol upgrade mechanism:

        public static byte[] GetHandshakeUpgradeData(string data)
        {
            const string eol = "\r\n"; // HTTP/1.1 defines the sequence CR LF as the end-of-line marker

            var response = Encoding.UTF8.GetBytes("HTTP/1.1 101 Switching Protocols" + eol
                + "Connection: Upgrade" + eol
                + "Upgrade: websocket" + eol
                + "Sec-WebSocket-Accept: " + Convert.ToBase64String(
                    System.Security.Cryptography.SHA1.Create().ComputeHash(
                        Encoding.UTF8.GetBytes(
                            new Regex("Sec-WebSocket-Key: (.*)").Match(data).Groups[1].Value.Trim() + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
                        )
                    )
                ) + eol
                + eol);

            return response;
        }

    This is then used like so:

        private async Task OnReceivedAsync(int bytesReceived)
        {
            var data = new byte[bytesReceived];

            Buffer.BlockCopy(_buffer, 0, data, 0, bytesReceived);

            var stringData = Encoding.UTF8.GetString(data);

            if (stringData.Length >= 3 && Regex.IsMatch(stringData, "^GET"))
            {
                await _networkClient.WriteToStreamAsync(WebSocketHelpers.GetHandshakeUpgradeData(stringData), false);
                return;
            }

2.
Encode all messages after switching protocol response public static byte[] EncodeMessage(byte[] message) { byte[] response; var bytesRaw = message; var frame = new byte[10]; var indexStartRawData = -1; var length = bytesRaw.Length; frame[0] = 129; if (length <= 125) { frame[1] = (byte)length; indexStartRawData = 2; } else if (length >= 126 && length <= 65535) { frame[1] = 126; frame[2] = (byte)((length >> 8) & 255); frame[3] = (byte)(length & 255); indexStartRawData = 4; } else { frame[1] = 127; frame[2] = (byte)((length >> 56) & 255); frame[3] = (byte)((length >> 48) & 255); frame[4] = (byte)((length >> 40) & 255); frame[5] = (byte)((length >> 32) & 255); frame[6] = (byte)((length >> 24) & 255); frame[7] = (byte)((length >> 16) & 255); frame[8] = (byte)((length >> 8) & 255); frame[9] = (byte)(length & 255); indexStartRawData = 10; } response = new byte[indexStartRawData + length]; int i, reponseIdx = 0; // Add the frame bytes to the reponse for (i = 0; i < indexStartRawData; i++) { response[reponseIdx] = frame[i]; reponseIdx++; } // Add the data bytes to the response for (i = 0; i < length; i++) { response[reponseIdx] = bytesRaw[i]; reponseIdx++; } return response; } Used here: public async Task WriteToStreamAsync(byte[] data, bool encode = true) { if (encode) { data = WebSocketHelpers.EncodeMessage(data); } 3. Decoding all messages public static byte[] DecodeMessage(byte[] bytes) { var secondByte = bytes[1]; var dataLength = secondByte & 127; var indexFirstMask = dataLength switch { 126 => 4, 127 => 10, _ => 2 }; var keys = bytes.Skip(indexFirstMask).Take(4); var indexFirstDataByte = indexFirstMask + 4; var decoded = new byte[bytes.Length - indexFirstDataByte]; for (int i = indexFirstDataByte, j = 0; i < bytes.Length; i++, j++) { decoded[j] = (byte)(bytes[i] ^ keys.ElementAt(j % 4)); } return decoded; }
For now, the best solution seems to be using macros:

MyPage.html

```
{% import 'header.html' as header %}
{% extends 'base.html' %}
{% block header %}
    {{ header.header("test1", "My Page") }}
{% endblock %}
```

base.html

```
<!DOCTYPE html>
<html>
    {% block header %}{% endblock %}
    <body>
        <div class="container">
            <h1 class="text-center my-4">{% block h1 %}{% endblock %}</h1>
            {% include 'menu.html' %}
            {% block content %}{% endblock %}
        </div>
    </body>
    {% include 'footer.html' %}
    {% block temp1 %}{% endblock %}
</html>
```

header.html

```
{% macro header(value, title) %}
<head>
...
<title>{{title}}</title>
...
</head>
test1={{ value }}
{% endmacro %}
```
I am creating an application in React using the router. Below is my code.

main.jsx

```jsx
import React from 'react'
import ReactDOM from 'react-dom/client'
import App from './App.jsx'
import './index.css'
import { Router } from 'react-router-dom'

ReactDOM.createRoot(document.getElementById('root')).render(
  <React.StrictMode>
    <Router>
      <App />
    </Router>
  </React.StrictMode>
)
```

app.jsx

```jsx
import { Route } from "react-router-dom"
import Home from "./Pages/Home"

function App() {
  return (
    <>
      <Route path="/" exact component={Home} />
    </>
  )
}

export default App
```

home.jsx

```jsx
import Header from "../Components/Header/Header";

function Home() {
  return <><Header /></>
}

export default Home;
```

dependencies:

```json
"dependencies": {
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "react-router-dom": "^6.22.3"
},
"devDependencies": {
    "@types/react": "^18.2.64",
    "@types/react-dom": "^18.2.21",
    "@vitejs/plugin-react": "^4.2.1",
    "eslint": "^8.57.0",
    "eslint-plugin-react": "^7.34.0",
    "eslint-plugin-react-hooks": "^4.6.0",
    "eslint-plugin-react-refresh": "^0.4.5",
    "vite": "^5.1.6"
}
```

And I am getting this error:

> Uncaught TypeError: Cannot read property 'pathname' of undefined at new Router

Why am I getting this error?
|javascript|reactjs|react-router-dom|
Suppose two different processes simultaneously call `os.makedirs(path, exist_ok=True)`. Is it possible that one will raise a spurious exception due to a race condition?

My fear is the call might do something like this under the hood:

```
if not dir_exists(d):
    try_make_dir_and_raise_if_exists(d)
```

I carefully read the [documentation][1], but I see no clear assertion of race-condition safety. Some other [answers][2] on this site suggest the call is safe, but provide no citations.

[1]: https://docs.python.org/3/library/os.html#os.makedirs
[2]: https://stackoverflow.com/a/57394882/543913
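Not a proof of safety, but one way to probe this empirically is to hammer the call concurrently and check that no call ever raises. A sketch (the path names and counts are arbitrary):

```python
import os
import shutil
import tempfile
import threading

def probe(iterations: int = 100, workers: int = 8) -> list:
    """Call os.makedirs(..., exist_ok=True) from many threads; collect any errors."""
    base = tempfile.mkdtemp()
    errors = []

    def hammer(path):
        try:
            os.makedirs(path, exist_ok=True)
        except OSError as exc:
            errors.append(exc)

    try:
        for _ in range(iterations):
            # Start from a clean slate so every round actually races on creation.
            shutil.rmtree(os.path.join(base, "a"), ignore_errors=True)
            target = os.path.join(base, "a", "b", "c")
            threads = [threading.Thread(target=hammer, args=(target,))
                       for _ in range(workers)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
    finally:
        shutil.rmtree(base, ignore_errors=True)
    return errors
```

On current CPython this comes back empty: the stdlib implementation catches `FileExistsError` for both the intermediate and the final path components when `exist_ok=True`, so a directory created concurrently by another process does not make the call fail. An empirical probe cannot rule out races on every platform, though, which is why the lack of an explicit guarantee in the docs is a fair complaint.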
Is os.makedirs(path, exist_ok=True) susceptible to race-conditions?
|python|
I made a simple REST API in .NET 8. It must connect to SQL Express. If I run it with https it works fine and connects to the DB. If I run it with Docker or Docker Compose, it works but does not connect to the DB.

Here is my Dockerfile:

```
FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./Euro_WeatherForecast.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Euro_WeatherForecast.dll"]
```

And my docker-compose.yaml:

```
version: '3.4'

services:
  euro_weatherforecast:
    image: ${DOCKER_REGISTRY-}euroweatherforecast
    build:
      context: .
      dockerfile: Euro_WeatherForecast/Dockerfile
```

My connection string:

```
"ConnectionStrings": {
    "AppDbContext": "server=sqlserver;port=8080;Data Source=TUF1\\SQLEXPRESS;Initial Catalog=Euronext_WeatherForecast;Integrated Security=True;Encrypt=False"
```

The container runs in Docker, and the SQL Server and SQL Server Browser services are running. The firewall is deactivated.
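As an aside on the setup above: inside a container, `Data Source=TUF1\SQLEXPRESS` refers to a host name the container usually cannot resolve. A common pattern (an assumption about this setup, not a verified fix) is to point the connection string at `host.docker.internal` and, on Linux engines, map that name to the host gateway in the compose file:

```yaml
version: '3.4'

services:
  euro_weatherforecast:
    image: ${DOCKER_REGISTRY-}euroweatherforecast
    build:
      context: .
      dockerfile: Euro_WeatherForecast/Dockerfile
    extra_hosts:
      # Lets the container reach services listening on the Docker host.
      - "host.docker.internal:host-gateway"
```

Note also that `Integrated Security=True` (Windows authentication) generally cannot be used from a Linux container; SQL authentication with an explicit user and password is the usual alternative there.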
Not redirecting to the required link
|reactjs|react-hooks|react-router|react-navigation|