```
FlutterCarousel(
  options: CarouselOptions(
    physics: const NeverScrollableScrollPhysics(),
    controller: _carouselController,
    onPageChanged: (index, reason) {
      currentView = index + 1;
      // setState is called to update the current page with respect to the current view
      setState(() {});
    },
    height: 50.0,
    indicatorMargin: 10.0,
    showIndicator: true,
    slideIndicator: CircularWaveSlideIndicator(),
    viewportFraction: 0.9,
  ),
  items: swipeList.map((i) {
    return const Text('');
  }).toList(),
),
```

[The above code outputs this kind of carousel slider](https://i.stack.imgur.com/CY9IY.jpg), but I would like to change the look of the carousel slider to the one below: [desired carousel style](https://i.stack.imgur.com/JcIOz.jpg)
Using `track $index` is not always a good practice (cf. https://github.com/angular/angular/issues/53800). It's usually better to track by a "stable" value. The problem is that when indexes shift, the DOM can no longer tell exactly which element changed (the details depend on how you bind your values to the DOM, but that's the idea). Usually the easiest way is to add a unique identifier to your object and track by that. For example, you can maintain an index variable or generate a random value with any lib:

```
createFilmFormGroup(): FormGroup {
  this.index++;
  return this.fb.group({
    id: [this.index],
    title: [''],
    releaseDate: [''],
    url: [''],
  });
}
```

And in your HTML you can track with a dedicated function:

```
getId(film: any) {
  return film.controls.id.value;
}
```

```
@for (film of films.controls; track getId(film)) {
```
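The difference between index keys and stable-id keys can be seen outside Angular too. A minimal standalone sketch (plain TypeScript, not Angular; the names are made up for illustration):

```typescript
interface Film { id: number; title: string }

let nextId = 0;
// mirrors the createFilmFormGroup idea above: stamp each object with an id
const createFilm = (title: string): Film => ({ id: ++nextId, title });

const films: Film[] = [createFilm("Alien"), createFilm("Heat")];
const reordered: Film[] = [films[1], films[0]]; // user reorders the list

// index-based keys are identical before and after the reorder, so the DOM
// cannot tell the items moved; id-based keys follow the objects around
const byIndex = reordered.map((_f, i) => i); // [0, 1], identity lost
const byId = reordered.map(f => f.id);       // [2, 1], identity preserved
console.log(byId);
```

Tracking by `getId(film)` gives the renderer the second kind of key, so reorders become moves instead of rewrites.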
This code:

```python
index_suffix = "data"
index_name = f"vector_{index_suffix}"
keyword_index_name = f"keyword_{index_suffix}"

print(f"Setup with indices: {index_name} and {keyword_index_name} ")

hybrid_db = Neo4jVector.from_documents(
    docs,
    embeddings,
    url=url,
    username=username,
    password=password,
    search_type="hybrid",
    pre_delete_collection=True,
    index_name=index_name,
    keyword_index_name=keyword_index_name,
)

print(f"\nLoaded hybrid_db {hybrid_db.search_type} with indices: {hybrid_db.index_name} and {hybrid_db.keyword_index_name} ")
print(f"Embedded {index_suffix}\nsize of docs: {len(docs)}\n")
```

prints the following, which is not what I expect, since I have set both `index_name` and `keyword_index_name`:

```bash
Setup with indices: vector_data and keyword_data

Loaded hybrid_db hybrid with indices: vector and keyword
Embedded data
size of docs: 543
```

*System Info*

```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.12.2 (main, Feb 7 2024, 21:49:26) [GCC 10.2.1 20210110]

Package Information
-------------------
> langchain_core: 0.1.26
> langchain: 0.1.9
> langchain_community: 0.0.24
```
LangChain's Neo4jVector.from_documents doesn't set index_name when it is provided
I am trying to solve this layout puzzle but am stuck on how to make it elegant, clean and timeless. Given:

- a horizontal line of 1 pixel height stretching inside the container it's in
- a vertically as well as horizontally centered box over this line
- a left-aligned text box
- and a right-aligned text box

What I have tried is painstakingly incrementing the percentages until I reached some kind of a middle... warning, the following code is very graphical and ugly!

CSS

```
author { color: grey }
box {
  float: left;
  background: blue;
  margin: 0 0 0 46.4%;
  ... /* bad coding, feel embarrassed showing this */
}
time { color: grey }
```

HTML (flexible and can be changed)

```
<author></author>
<box><img src=""/></box>
<time></time>
```

I first thought this might be solved with flexbox, using `justify-content: space-between`; however, I cannot figure out how to make the line appear there. So I am open to any suggestions, whether it's the good old positioning/float or flexbox. Maybe it would be nice to solve it both ways and see which one is the most elegant?

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/CoMi6.png
If you want to share tokens between applications, ensure they access the same data protection keys by configuring shared storage for keys (for example, a common file system path) using `.PersistKeysToFileSystem()`. Both applications must also use `.SetApplicationName("Whatever")` to align the data protection context. If issues persist, you can consider centralized authentication solutions such as OAuth2/OpenID Connect or a shared database for token validation, though these approaches have their own complexities. Key configuration discrepancies, such as cryptographic algorithms, should be checked to ensure they match across applications.
Ubuntu 20.04, Python 3.8.

I'm using a Python file (not written by me) with a U-Net and custom loss functions. The code was written for tensorflow==2.13.0, but my GPU cluster only has tensorflow==2.2.0 (or lower). The available code isn't compatible with this version, specifically the `if` statement in `update_state`. Can somebody help me rewrite this? I'm not experienced with tf.

```python
class Distance(tf.keras.metrics.Metric):
    def __init__(self, name='DistanceMetric', distance='cm', sigma=2.5, data_size=None,
                 validation_size=None, points=None, point=None, percentile=None):
        super(Distance, self).__init__(name=name)
        self.counter = tf.Variable(initial_value=0, dtype=tf.int32)
        self.distance = distance
        self.sigma = sigma
        self.percentile = percentile
        if percentile is not None and point is not None:
            assert (type(percentile) == float)
            self.percentile_idx = tf.Variable(tf.cast(tf.round(percentile * validation_size), dtype=tf.int32))
        else:
            self.percentile_idx = None
        self.point = point
        self.points = points
        self.cache = tf.Variable(initial_value=tf.zeros([validation_size, points]),
                                 shape=[validation_size, points])
        self.val_size = validation_size

    def update_state(self, y_true, y_pred, sample_weight=None):
        n, h, w, p = tf.shape(y_pred)[0], tf.shape(y_pred)[1], tf.shape(y_pred)[2], tf.shape(y_pred)[3]
        y_true = normal_distribution(self.sigma, y_true[:, :, 0], y_true[:, :, 1], h=h, w=w, n=n, p=p)
        if self.distance == 'cm':
            x1, y1 = cm(y_true, h=h, w=w, n=n, p=p)
            x2, y2 = cm(y_pred, h=h, w=w, n=n, p=p)
            d = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            d = d[:, :, 0]
        elif self.distance == 'argmax':
            d = (tf.cast(tf.reduce_sum(((argmax_2d(y_true) - argmax_2d(y_pred)) ** 2), axis=1),
                         dtype=tf.float32)) ** 0.5
        temp = tf.minimum(self.counter + n, self.val_size)
        if self.counter <= self.val_size:
            self.cache[self.counter:temp, :].assign(d[0:(temp - self.counter), :])
        self.counter.assign(self.counter + n)

    def result(self):
        if self.percentile_idx is not None:
            temp = tf.sort(self.cache[:self.val_size, self.point], axis=0, direction='ASCENDING')
            return temp[self.percentile_idx]
        elif self.point is not None:
            return tf.reduce_mean(self.cache[:, self.point], axis=0)
        else:
            return tf.reduce_mean(self.cache, axis=None)

    def reset_states(self):
        self.cache.assign(tf.zeros_like(self.cache))
        self.counter.assign(0)
        if self.percentile is not None and self.point is not None:
            self.percentile_idx.assign(tf.cast(self.val_size * self.percentile, dtype=tf.int32))
```

This is the error I get:

```
....
/trinity/home/r084755/DRF_AI/distal-radius-fractures-x-pa-and-lateral-to-clinic/Code files/LandmarkDetection.py:144 update_state
    if tf.math.less_equal(self.counter, self.val_size): # Updated from self.counter <= self.val_size:
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:778 __bool__
    self._disallow_bool_casting()
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:545 _disallow_bool_casting
    "using a `tf.Tensor` as a Python `bool`")
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:532 _disallow_when_autograph_enabled
    " decorating it directly with @tf.function.".format(task))

OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function.
```

I tried creating a function, but then I run into new issues...
```python
@tf.function
def myfunc(counter, val_size, cache):
    temp = tf.minimum(counter + n, val_size - 1)
    if counter <= val_size:
        return cache[counter:temp, :].assign(d[0:(temp - counter), :])
    return cache

self.cache = myfunc(self.counter, self.val_size, self.cache)
self.counter.assign(self.counter + n)
```

```
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:571 train_function *
    outputs = self.distribute_strategy.run(
/trinity/home/r084755/DRF_AI/distal-radius-fractures-x-pa-and-lateral-to-clinic/Code files/LandmarkDetection.py:158 myfunc *
    return cache[counter:temp, :].assign(d[0:(temp-counter), :])
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:1160 assign **
    raise ValueError("Sliced assignment is only supported for variables")

ValueError: Sliced assignment is only supported for variables
```
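For what it's worth, the usual graph-mode replacement for a Python `if` on a tensor is `tf.cond(pred, true_fn, false_fn)`, where both branches are zero-argument callables returning compatible values. A pure-Python stand-in showing the call shape (not actual TensorFlow, so it runs anywhere):

```python
# Stand-in with the same shape as tf.cond: pred is evaluated, then exactly
# one of the two zero-argument callables is invoked and its result returned.
def cond(pred, true_fn, false_fn):
    return true_fn() if pred else false_fn()

counter, val_size, n = 3, 10, 2

# analogue of: "if self.counter <= self.val_size: advance the counter"
new_counter = cond(
    counter <= val_size,
    lambda: counter + n,   # true branch: advance
    lambda: counter,       # false branch: leave unchanged
)
print(new_counter)  # → 5
```

In real TensorFlow 2.2 the slice assignment would go inside `true_fn` (sliced assignment only works on a `tf.Variable`, as the second traceback says), and both branches must return tensors of the same structure.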
I am looking for an approach to store the current scroll position using Paging 3 when the user exits the application, and restore it when the user re-launches the app. I tried implementing `lazyListState.firstVisibleItemIndex`, but it only works well offline when no data changes. When using a RemoteMediator, the data initially loads at the stored scroll position, but as soon as the API call finishes, the list suddenly animates to a random position. Here is my implementation:

```
@Composable
fun ArticleScreen(viewModel: ArticleViewModel = hiltViewModel()) {
    // fetch scroll position from shared prefs
    val scrollIndex = remember { viewModel.getScrollIndex }
    val lazyArticleItems = viewModel.articlesFlows.collectAsLazyPagingItems()

    // set sharedpref scroll position
    val lazyListState = rememberLazyListState(
        initialFirstVisibleItemIndex = scrollIndex
    )

    // collect and store scroll position into sharedprefs
    LaunchedEffect(key1 = lazyListState, block = {
        snapshotFlow { lazyListState.firstVisibleItemIndex }.debounce(1000L)
            .collectLatest { index ->
                viewModel.setScrollIndex(index)
                Log.d("GoogleIO", "Snapshot position: $index")
            }
    })

    Box(modifier = Modifier.fillMaxSize()) {
        if (lazyArticleItems.itemCount < 1 && lazyArticleItems.loadState.refresh is LoadState.Loading) {
            CircularProgressIndicator(Modifier.align(Alignment.Center))
        } else {
            ArticleList(lazyListState, lazyArticleItems)
        }
    }
}
```
I'm writing Java code in which I run `Spark batches` using `Dataproc Serverless`. I initially used the `Dataproc Client Library` for Java and, so far, it works just fine. However, looking at the `Dataproc Serverless` documentation (https://cloud.google.com/dataproc-serverless/docs), there is not a single mention of interacting with it using the Java client; it mentions REST, RPC and the Cloud SDK. I started wondering if I'm doing something wrong. Is using the `Dataproc Client Library` for `Dataproc Serverless` related actions actually not recommended or somehow limited?
Interacting with Dataproc Serverless using Dataproc Client Library
|java|google-cloud-platform|google-cloud-dataproc|google-cloud-dataproc-serverless|
I am trying to read in an OCEL from an event log sqlite file, which I build from multiple csv files in Python. Here's the code for the creation of the sqlite file:

```
# Define the path to the folder in Google Drive
drive_path = '/content/drive/MyDrive/Test2/'

# Update the file search patterns
eventTypeTableFilenames = [fn for fn in glob.glob(drive_path + 'event_*.csv')
                           if not fn.endswith("event_map_type.csv") and not fn.endswith("event_object.csv")]
objectTypeTableFilenames = [fn for fn in glob.glob(drive_path + 'object_*.csv')
                            if not fn.endswith("object_map_type.csv") and not fn.endswith("object_object.csv")]

# Define the TABLES dictionary
TABLES = dict()

# Read CSV files into the TABLES dictionary
for table_name in ["event", "event_map_type", "event_object", "object", "object_object", "object_map_type"]:
    TABLES[table_name] = pd.read_csv(drive_path + table_name + ".csv", sep=",")

# Read additional event type tables
for fn in eventTypeTableFilenames:
    table_name = fn.split(".")[0].split("/")[-1]
    table = pd.read_csv(fn, sep=",")
    TABLES[table_name] = table

# Read additional object type tables
for fn in objectTypeTableFilenames:
    table_name = fn.split(".")[0].split("/")[-1]
    table = pd.read_csv(fn, sep=",")
    TABLES[table_name] = table

sql_path = "Eventlog.sqlite"
if os.path.exists(sql_path):
    os.remove(sql_path)
conn = sqlite3.connect(sql_path)
for tn, df in TABLES.items():
    df.to_sql(tn, conn, index=False)
conn.close()

# Connect to your SQLite database
conn = sqlite3.connect("Eventlog.sqlite")
cursor = conn.cursor()

# Create a new table without duplicates
cursor.execute('''CREATE TABLE IF NOT EXISTS object_object_no_duplicates AS
                  SELECT DISTINCT * FROM object_object''')

# Drop the original table
cursor.execute('DROP TABLE IF EXISTS object_object')

# Rename the new table to the original table name
cursor.execute('ALTER TABLE object_object_no_duplicates RENAME TO object_object')

# Commit changes and close the connection
conn.commit()
conn.close()

# Connect to your SQLite database
conn = sqlite3.connect("Eventlog.sqlite")
cursor = conn.cursor()

# Create a new table without duplicates
cursor.execute('''CREATE TABLE IF NOT EXISTS event_object_no_duplicates AS
                  SELECT DISTINCT * FROM event_object''')

# Drop the original table
cursor.execute('DROP TABLE IF EXISTS event_object')

# Rename the new table to the original table name
cursor.execute('ALTER TABLE event_object_no_duplicates RENAME TO event_object')

# Commit changes and close the connection
conn.commit()
conn.close()
```

I am using this line of code to read in the file afterwards:

```
sql_path = "Eventlog.sqlite"
ocel = pm4py.read_ocel2_sqlite(sql_path)
```

After executing the code, I received an error mentioning the name of one of my tables, as it included a "-". After removing the "-" from the csv file's table name, I built the sqlite file once more, but I still receive the same error. I also checked the table name in the database after building it from the csv file, which is fine, but it still throws the same error. I am using Colab and tried refreshing the notebook, but nothing helped. The command to read in the sqlite file is stuck on the table name that contains the "-".
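The two dedup blocks above differ only in the table name, so they could be folded into one helper. A sketch using only the standard library (quoting the identifier also sidesteps problems with characters like `-` in table names, at least on the SQL side):

```python
import sqlite3

def drop_duplicate_rows(conn, table):
    """Rebuild `table` keeping only distinct rows: create-distinct, drop,
    rename, mirroring the sequence used in the question for any table name."""
    cur = conn.cursor()
    tmp = f"{table}_no_duplicates"
    cur.execute(f'CREATE TABLE "{tmp}" AS SELECT DISTINCT * FROM "{table}"')
    cur.execute(f'DROP TABLE "{table}"')
    cur.execute(f'ALTER TABLE "{tmp}" RENAME TO "{table}"')
    conn.commit()

# quick demo on an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event_object (a INTEGER, b INTEGER)")
conn.executemany("INSERT INTO event_object VALUES (?, ?)", [(1, 2), (1, 2), (3, 4)])
drop_duplicate_rows(conn, "event_object")
print(conn.execute("SELECT COUNT(*) FROM event_object").fetchone()[0])  # → 2
```

This doesn't address the pm4py reader error itself; it only shows the dedup step in a reusable, quoted form.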
|graph|neo4j|langchain|graphdb|
When I run the command `yarn start` to launch the development server, the project builds and I get no error, but in Logcat in Android Studio I get the following error and I can't find the source of the problem.

```json
"react-native": "0.67.5",
"react-native-device-info": "^10.10.0",
```

[enter image description here](https://i.stack.imgur.com/arq3T.png)

```
Unhandled SoftException java.lang.RuntimeException: Catalyst Instance has already disappeared: requested by DeviceInfo at com.facebook.react.bridge.ReactContextBaseJavaModule.getReactApplicationContextIfActiveOrWarn(ReactContextBaseJavaModule.java:66) at com.facebook.react.modules.deviceinfo.DeviceInfoModule.invalidate(DeviceInfoModule.java:114) at com.facebook.react.bridge.ModuleHolder.destroy(ModuleHolder.java:110) at com.facebook.react.bridge.NativeModuleRegistry.notifyJSInstanceDestroy(NativeModuleRegistry.java:108) at com.facebook.react.bridge.CatalystInstanceImpl$1.run(CatalystInstanceImpl.java:368) at android.os.Handler.handleCallback(Handler.java:942) at android.os.Handler.dispatchMessage(Handler.java:99) at com.facebook.react.bridge.queue.MessageQueueThreadHandler.dispatchMessage(MessageQueueThreadHandler.java:27) at android.os.Looper.loopOnce(Looper.java:201) at android.os.Looper.loop(Looper.java:288) at com.facebook.react.bridge.queue.MessageQueueThreadImpl$4.run(MessageQueueThreadImpl.java:226)
```

I'm expecting the home page of my app to be displayed, but I get a blank page; nothing is displayed.
java.lang.RuntimeException: Catalyst Instance has already disappeared: requested by DeviceInfo
|react-native|
According to the [bash manual](https://www.gnu.org/software/bash/manual/bash.html#index-complete_002dfilename-_0028M_002d_002f_0029), you should already have ...

- `complete-filename` bound to `M-/` (either <kbd>alt</kbd><kbd>/</kbd> together, or first <kbd>esc</kbd> then <kbd>/</kbd>).
- `possible-filename-completions` bound to `C-x /` (first <kbd>ctrl</kbd><kbd>x</kbd> together, then <kbd>/</kbd>)

What do they do? Assume your cursor is at the end of `cmd fil` and you have files `file1` and `file2` in your working directory. The former completes the command to `cmd file`. When pressed again, it shows the list of possible completions: `file1 file2`. The latter shows the list directly, without completing anything.

### How to bind to a single key?

See [bash's `bind` command](https://www.gnu.org/software/bash/manual/bash.html#index-bind) and `man readline` in general. In general, it is pretty easy: to bind the key <kbd>a</kbd> to filename completion, use `bind a:complete-filename`. For the backtick <kbd>\`</kbd> you would just do ``bind '`:complete-filename'``. However, if your operating system's keyboard settings have dead keys (*press <kbd>\`</kbd> once, nothing seems to happen, press <kbd>e</kbd>, you get `è`*), then you have to press <kbd>\`</kbd> twice, or your terminal and bash won't even see that you pressed that key.
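Note that `bind` only affects the current shell session. To make such a binding permanent, the usual place is `~/.inputrc`, read by every readline-enabled program at startup (a sketch; the chosen key is just an example):

```
# ~/.inputrc — make a single key run filename completion
"`": complete-filename
# or bind the listing variant instead:
# "`": possible-filename-completions
```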
I was actually asking myself this exact same question. So I compared both methods (public assets folder vs. cloud-based service (Cloudinary)). I expected the loading time and resource values to be way smaller for the cloud-based solution (network tab console), however, it's exactly the same as when my assets are rendered from the public folder. So for this reason, and knowing that I won't need to host hundreds of images but around 20-30, I will stick with the public folder. When images are being uploaded by users though, the images will be stored in an S3 bucket and rendered through a link the backend will send me. Hope this will help some people.
How to save & restore the scroll position of a LazyColumn in Compose using a Paging 3 RemoteMediator?
|android-jetpack-compose|android-room|kotlin-coroutines|android-paging-3|
**My goal**

Ping an ESP32 device which is on a local network without internet and without HTTPS.

**Reason**

I have a Node.js server with a PWA hosted on render.com. This server communicates with my ESP32, which is on a local network. The ESP32 connects to the server via websockets and sends data to it. The user opens the PWA, which is HTTPS by default. I want to check if the user is on their own local network by pinging my ESP32. If the ping succeeds, I want to redirect the user to the local webpage (because of network speed).

**The Problem**

The main thing preventing me from doing this simply and elegantly is the *mixed content policy*. My PWA site just can't access any HTTP endpoint because it is served over HTTPS (even if the endpoint is on a local network). My ESP32 already has a continuous HTTPS connection to my server, and there are not enough resources to host an HTTP webserver concurrently.

**What I have tried**

**Try 1.** I'm using `helmet` on Node.js to further secure my site. I added the following config to helmet in order to let my PWA ping my ESP32 via mDNS:

```
app.use(helmet({
  contentSecurityPolicy: {
    directives: {
      defaultSrc: ["'self'"],
      connectSrc: ["'self'", "https://esp.local/", "http://esp.local/"],
    }
  },
  hsts: { maxAge: 0 },
  crossOriginEmbedderPolicy: false,
  crossOriginResourcePolicy: { policy: "cross-origin" }
}));
```

This doesn't work because of mixed content. The browser (Chrome) refuses to connect.

**Try 2.** I noticed that when I try to access my HTTP ESP32 over HTTPS, the browser (Chrome) spits out a **net::ERR_CONNECTION_REFUSED** message to the console, indicating that my ESP32 is in fact on the local network but refused the HTTPS connection. After that, I powered off my ESP32 and repeated the fetch. If my ESP32 is not alive, the error is different: **net::ERR_CONNECTION_TIMED_OUT** or **net::ERR_NAME_NOT_RESOLVED**.
I thought this was good news, because I could just capture the **net::ERR_CONNECTION_REFUSED** error and redirect my user to the local web app. Here is my approach:

```
async function isLocalNetwork() {
  const resp = await fetch('https://esp.local/ping').catch((err) => console.log(err));
  console.log(resp);
  return resp.ok;
}

// Redirect to local ESP32 frontend if the client is in the local network
(async () => {
  if (await isLocalNetwork()) {
    //window.location.href = 'http://esp.local/'; // Redirect works regardless of ssl
    alert("Can redirect!");
  } else {
    alert("Can't redirect!");
  }
})();
```

The problem is that neither the response nor the error contains these browser errors. I then tried with try/catch blocks:

```
async function isLocalNetwork() {
  try {
    const resp = await fetch('https://esp.local/ping');
    return resp.ok;
  } catch (err) {
    console.error(err); // Log the error object for debugging
    if (err.code === "ECONNREFUSED") {
      console.log("Connection refused"); // Handle connection refused error
    } else if (err.code === "ENOTFOUND") {
      console.log("Name not resolved"); // Handle name not resolved error
    } else {
      console.log("Other network error"); // Handle other network errors
    }
    return false; // Return false indicating connection failure
  }
}
```

This also does not contain the **net::** errors I see on the console. By the way, Chrome prints this to the console:

```
main.js:1 GET https://esp.local/ping net::ERR_CONNECTION_REFUSED
```

I have also tried a global window.onerror listener, but it does not capture the fetch errors either. In my final desperation I wanted to read the raw string message from the console. I thought I would parse it and, if I found the error string, just redirect. But JS can't read from the console, for security reasons...
**Try 3.** I found a network discovery script on GitHub that looked promising: https://github.com/samyk/webscan However, even if it worked it wouldn't be enough, because it finds IP addresses, and the ESP32 can be configured to use DHCP, so its IP can change.

**Summary**

I'm not asking for much. I just want to ping my device on the local network. I don't want to transmit data and don't want to do anything heavy. I just want to know whether the site user and my device are on the same local network. Is this a big ask? Maybe I need to do this the other way around: check the user's public IP on the Node.js server and compare the connected ESP32's public IP to it. If they match, redirect the user? I hope somebody can help me find a clever way of doing this.
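The fallback idea from the summary, comparing public IPs on the server, can be sketched in a few lines. Everything here is an assumption about how the addresses are obtained (e.g. the ESP32's public IP recorded when it opens its websocket, the user's IP taken from the HTTP request):

```javascript
// Hypothetical server-side check: if the browser and the ESP32 sit behind the
// same NAT, they share a public IP, which suggests (but does not prove) that
// they are on the same LAN.
function sameNetwork(userIp, espIp) {
  return Boolean(userIp) && userIp === espIp;
}

console.log(sameNetwork("203.0.113.7", "203.0.113.7")); // → true
console.log(sameNetwork("203.0.113.7", "198.51.100.2")); // → false
```

One caveat worth keeping in mind: under carrier-grade NAT, two unrelated households can share a public IP, so this heuristic can give false positives.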
Resolve mDNS in local network with javascript from https origin
|javascript|redirect|mixed-content|mdns|local-network|
|java|project-loom|virtual-threads|
I have this recursion code to pick dates in a rangepicker:

```
recurse(
  () => cy.get('.mantine-DatePicker-yearsListCell').invoke('text'),
  (n) => {
    if (!n.includes(year)) {
      cy.get('.mantine-DatePicker-calendarHeaderControlIcon').first().click();
    } else if (!n.includes(year) < 0) {
      cy.get('.mantine-DatePicker-calendarHeaderControlIcon').last().click();
    }
    cy.get('.mantine-DatePicker-yearsListCell').contains(year).click();
  },
  {
    limit: 12,
  }
);
```

The issue is that it works when I add 2021-07-01 - 2023-05-01 as the range, but does not work if I add 2019-07-01 - 2023-05-01. With the range 2019-07-01 - 2023-05-01, when trying to add the second date it still clicks `first()` instead of `last()`, but it should click `last()`, as 2023 comes after 2019. Basically, if it does not find the year when clicking on first, it should click on last. Any help please? There must be something I am not understanding.
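One thing that may help with debugging: the `else if (!n.includes(year) < 0)` branch compares a boolean with 0, so it can never behave as intended. Extracting the direction decision into a plain function makes the logic testable on its own; a hypothetical sketch (function and names are mine, not from the Mantine or cypress-recurse APIs):

```javascript
// Decide which header control to click: if the target year is already
// visible, click it; otherwise page backwards when the target is before the
// visible range, forwards when it is after.
function direction(visibleYears, targetYear) {
  if (visibleYears.includes(targetYear)) return "click";
  return targetYear < Math.min(...visibleYears) ? "prev" : "next";
}

console.log(direction([2020, 2021, 2022], 2019)); // → "prev"
console.log(direction([2020, 2021, 2022], 2025)); // → "next"
```

Inside the recursion, `"prev"` would map to `.first().click()` and `"next"` to `.last().click()`, so picking 2023 after 2019 pages forward instead of repeating `first()`.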
Date issue with rangepicker
|typescript|cypress|
I am using WSO2 IS V6.1 and I can create a client application using the DCR endpoint if I run the following cURL command. ``` curl -k -X POST -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "Content-Type: application/json" -d '{"client_name": "application1","grant_types": ["password"],"ext_param_client_id":"provided_client_id0002","ext_param_client_secret":"provided_client_secret0002" }' "https://localhost:9443/api/identity/oauth2/dcr/v1.1/register" ``` But, when I try to update the client application using the following cURL command as shown in the documentation[1], ``` curl -k -X PUT -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "Content-Type: application/json" -d '{ "redirect_uris":["https://client.example.org/callback"],"client_name":"application1","grant_types": ["password"] }' "https://localhost:9443/api/identity/oauth2/dcr/v1.1/register" ``` I can see the following warning message in the `wso2carbon.log` file and when I check the SP using the management console, it has not been updated. 
``` [2024-02-13 11:14:53,129] [b1c1ef57-007d-4668-a261-65ef33cb3b14] WARN {org.apache.cxf.jaxrs.impl.WebApplicationExceptionMapper} - javax.ws.rs.ClientErrorException: HTTP 405 Method Not Allowed at org.apache.cxf.jaxrs.utils.SpecExceptions.toHttpException(SpecExceptions.java:117) at org.apache.cxf.jaxrs.utils.ExceptionUtils.toHttpException(ExceptionUtils.java:168) at org.apache.cxf.jaxrs.utils.JAXRSUtils.findTargetMethod(JAXRSUtils.java:673) at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.processRequest(JAXRSInInterceptor.java:182) at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.handleMessage(JAXRSInInterceptor.java:79) at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307) at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:265) at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:225) at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:304) at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPut(AbstractHTTPServlet.java:234) at javax.servlet.http.HttpServlet.service(HttpServlet.java:558) at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:279) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:209) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:481) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:130) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:119) at org.wso2.carbon.identity.context.rewrite.valve.OrganizationContextRewriteValve.invoke(OrganizationContextRewriteValve.java:115) at org.wso2.carbon.tomcat.ext.valves.SameSiteCookieValve.invoke(SameSiteCookieValve.java:38) at org.wso2.carbon.identity.cors.valve.CORSValve.invoke(CORSValve.java:83) at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:154) at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:142) at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:114) at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49) at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:75) at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:152) at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:673) at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:63) at org.wso2.carbon.tomcat.ext.valves.RequestEncodingValve.invoke(RequestEncodingValve.java:49) at 
org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:137) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:389) at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:926) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1791) at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.base/java.lang.Thread.run(Thread.java:829) ``` It seems the PUT method is not implemented for updating a client application. Is there another way to update the client application using the DCR endpoint? If so, what is the cURL command for that? 1. https://is.docs.wso2.com/en/latest/guides/access-delegation/oauth-dynamic-client-registration/#update-an-oauth-application
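One detail worth checking before anything else: RFC 7592 (the management-API companion to DCR) addresses updates to the *registration client URI*, which includes the client_id, rather than to the collection endpoint. Whether WSO2 IS exposes exactly this path should be verified against its DCR API definition; a sketch of how the target URL would be built (the client_id value is a placeholder from the create call above):

```shell
client_id="provided_client_id0002"  # placeholder; use the id returned by the POST
url="https://localhost:9443/api/identity/oauth2/dcr/v1.1/register/${client_id}"
echo "$url"
```

The PUT from the question would then be issued against `$url` instead of the bare `.../register`, which would also explain the 405: the collection endpoint simply has no PUT handler.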
Updating Client Application using DCR endpoint not working in WSO2 IS V6?
|curl|wso2|wso2-identity-server|
When using the `semantic-release` plugin `@semantic-release/git` on a protected branch (master is protected, with maintainers allowed to merge and no one allowed to push and merge), I get:

```
remote: GitLab: You are not allowed to push code to protected branches on this project
```

I also tried with "allowed to force push" enabled, but it changes nothing. What's the solution?
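For reference, a common workaround (an assumption on my side, not something stated in the question) is to let the release job push via a project access token with write access to the protected branch, since the default CI job token cannot push. A sketch of the relevant `.gitlab-ci.yml` fragment, where `GL_TOKEN` is a hypothetical CI/CD variable holding that token:

```yaml
release:
  stage: deploy
  script:
    # rewrite the push URL so git authenticates with the project access token
    - git remote set-url origin "https://oauth2:${GL_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git"
    - npx semantic-release
  only:
    - master
```

The token's owner (or the token itself, as a bot member) must be in the "allowed to push" list of the protected branch for this to work.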
@semantic-release/git and protected branches
`createtoken("")` is not working in my Laravel project. I deleted the vendor folder and ran `composer install`, and I am getting:

```
Problem 1
    - lcobucci/jwt is locked to version 4.3.0 and an update of this package was not requested.
    - lcobucci/jwt 4.3.0 requires ext-sodium * -> it is missing from your system. Install or enable PHP's sodium extension.
Problem 2
    - lcobucci/jwt 4.3.0 requires ext-sodium * -> it is missing from your system. Install or enable PHP's sodium extension.
    - kreait/firebase-tokens 4.3.0 requires lcobucci/jwt ^4.3.0|^5.0 -> satisfiable by lcobucci/jwt[4.3.0].
    - kreait/firebase-tokens is locked to version 4.3.0 and an update of this package was not requested.
```

Then I ran `sudo pecl install libsodium`, but I am getting:

```
janammaharjan@Janams-MacBook-Air laravel-uat % sudo pecl install libsodium
WARNING: channel "pecl.php.net" has updated its protocols, use "pecl channel-update pecl.php.net" to update
pecl/libsodium requires PHP (version >= 7.0.0, version <= 8.0.99), installed version is 8.2.4
No valid packages found
install failed
```

I have PHP version 8.2.4. I tried installing XAMPP with PHP 8.0.28, but it is not helping. Can anyone help me with this?
You are trying to retrieve a subscribed SKU by `skuId` (the id of the service SKU). Check the properties in the [doc][1]:

|Property|Description|
|-|-|
|id|The unique identifier for the subscribed sku object|
|skuId|The unique identifier (GUID) for the service SKU|

The call `await graphServiceClient.SubscribedSkus["id"].GetAsync();` requires `id`, not `skuId`. A limitation of subscribedSkus is that it doesn't support filtering by `skuId`, so you can't use

```
var result = await graphClient.SubscribedSkus.GetAsync((requestConfiguration) =>
{
    requestConfiguration.QueryParameters.Filter = $"skuId eq {idSku}";
});
```

The only way is to retrieve all subscribed SKUs and filter on the client:

```
var response = await graphServiceClient.SubscribedSkus.GetAsync();
var subscribedSku = response.Value.FirstOrDefault(x => x.SkuId == idSku);
```

Or, if you have `id` (not `skuId`), you can retrieve the specific subscribed SKU:

```
var subscribedSku = await graphServiceClient.SubscribedSkus[id].GetAsync();
```

[1]: https://learn.microsoft.com/en-us/graph/api/resources/subscribedsku?view=graph-rest-1.0#properties
This is probably very basic (sorry), but I'm a beginner in coding and haven't been able to find instructions online. I use R and have a dataframe like this: ```` id<-c(1:4) happy<-c(1,0,1,0) sad<-c(0,0,0,1) angry<-c(0,1,0,0) excited<-c(1,0,1,1) emot<-data.frame(id, happy, sad, angry, excited) emot id happy sad angry excited 1 1 0 0 1 2 0 0 1 0 3 1 0 0 1 4 0 1 0 1 ```` id signifies a person and the other variables signify whether the person mentioned a certain emotion (1) or not (0). I'd like to convert the dataframe to obtain this result: ```` source target count happy sad 0 happy angry 0 happy excited 2 sad angry 0 sad excited 1 angry excited 0 ```` I really tried with the table function, but to no avail. Thank you in advance!
Converting a dataframe into a long format contingency table for network analysis purposes
|r|dataframe|count|data-conversion|
This code works but it can probably be improved:

```python
import subprocess

ollama_server = subprocess.Popen(["ollama", "serve"])

from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOllama(model="mistral")
prompt = ChatPromptTemplate.from_template("compute {topic}")
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"topic": "1+1"}))

ollama_server.terminate()
```
In a function, I sometimes need to select a variable only if it exists. For this, the function `dplyr::any_of()` is perfect, but it only works with standard evaluation, taking a character vector as input. I'm looking for an alternative that would work as a replacement in the following example that feels very hacky: ``` r library(tidyverse) library(rlang) f = function(data, x1, x2, gp){ gpname = as_label(enquo(gp)) data %>% select(x1={{x1}}, x2={{x2}}, gp=any_of(gpname)) %>% names() } iris %>% f(Sepal.Length,Sepal.Width,Species) #> [1] "x1" "x2" "gp" iris %>% f(Sepal.Length,Sepal.Width) #> [1] "x1" "x2" ``` <sup>Created on 2024-03-15 with [reprex v2.1.0](https://reprex.tidyverse.org)</sup> The function should run with or without `Species`, as in my reprex, but it would make sense that it throws an error if querying an unknown column (unlike in my reprex)
I am trying to implement login with Google in an ASP.NET Core Web API using Identity external providers. Everything goes well, but when the URL redirects to the `external-auth-callback` function, `info` is null.

This is the `external-auth-callback` function:

```csharp
public async Task<MessageViewModel> ExternalLoginCallback([FromQuery] string returnUrl)
{
    var info = await _signInManager.GetExternalLoginInfoAsync();
    if (info != null)
    {
        var signInResult = await _signInManager.ExternalLoginSignInAsync(info.LoginProvider, info.ProviderKey, isPersistent: false, bypassTwoFactor: true);
        return new MessageViewModel()
        {
            IsSuccess = signInResult.Succeeded,
            Message = "User with this account is already in table",
        };
    }
    else
    {
        var email = info.Principal.FindFirstValue(ClaimTypes.Email);
        var user = await _userManager.FindByEmailAsync(email);
        if (user == null)
        {
            user = new Users
            {
                UserName = info.Principal.FindFirstValue(ClaimTypes.Email),
                Email = info.Principal.FindFirstValue(ClaimTypes.Email),
                FirstName = info.Principal.FindFirstValue(ClaimTypes.GivenName),
                LastName = info.Principal.FindFirstValue(ClaimTypes.Surname),
            };
            await _userManager.CreateAsync(user);
        }
        await _userManager.AddLoginAsync(user, info);
        await _signInManager.SignInAsync(user, isPersistent: false);
        return new MessageViewModel()
        {
            IsSuccess = true,
            Message = "User with this account is created successfully"
        };
    }
}
```

Before that, I return the redirect URL through this method:

```csharp
public async Task<LoginProviderViewModel> ExternalLogin(string provider, string returnUrl)
{
    var redirectUrl = $"https://localhost:7008/api/account/external-auth-callback?returnUrl={returnUrl}";
    var properties = _signInManager.ConfigureExternalAuthenticationProperties(provider, redirectUrl);
    properties.AllowRefresh = true;
    return new LoginProviderViewModel()
    {
        Provider = provider,
        Properties = properties,
    };
}
```

From my frontend, when I click the "login with Google" button, the `ExternalLogin` method is called first and returns the redirect URL. After that, the frontend shows the Google sign-in page, and when I select a user it goes to the `external-auth-callback` function, which is meant to save the user who tries to log in with Google:

```javascript
const handleExternalLogin = async () => {
  const url = `${externalLogin}?provider=Google&returnUrl=/admin-dashboard`;
  const response = await getData(url);
  if (response.isSuccessfull) {
    const redirectUrl = response.data.properties.items[".redirect"];
    const loginProvider = response.data.properties.items["LoginProvider"];
    const googleAuthorizationUrl =
      `https://accounts.google.com/o/oauth2/v2/auth` +
      `?client_id=xxxxx.apps.googleusercontent.com` +
      `&redirect_uri=${redirectUrl}` +
      `&response_type=code` +
      `&scope=openid%20profile%20email` +
      `&state=${loginProvider}`;
    window.location.href = googleAuthorizationUrl;
  }
};
```

Please help me out, as I have been stuck at this point for 2 days. I have tried all the possible checks, but I don't understand why `info` is null in the `external-auth-callback` function.
``` fig, ax = plt.subplots() gdf.plot(column="vsu", scheme="EqualInterval", legend=True, ax=ax) leg1 = ax.get_legend() gdf_2.plot(column = "gk", ax=ax, legend=True) ax.add_artist(leg1) plt.show() ```
Yes! You can set up an Action that writes whatever value the user inputs to a property of an object. This could be an existing object or one that is created when the user takes the action - whatever makes sense for your workflow.

As a general rule, think of Workshop as a place for users to interact with the ontology - both reading and writing - so if you want to store things they do in Workshop, you'll need to write to the ontology.

If you then want to use it in a transform downstream, you'll need to materialise the dataset that backs this object type, which essentially captures all the ontology edits in a dataset. You can set this up in Ontology Manager.
Hello, I was facing the same issue. I created a template named `model.mustache` to overwrite the default one.

The main problem is that the `#isEnum` condition always returns false, while `^isEnum` works correctly; so I found a different way to check for enums, using `#allowableValues`. The solution consists of creating two generation blocks: one for enums and a second one for models.

Here is what to do, step by step:

1. Create a new file in your project called `model.mustache`.
2. Add the following content to the file:

```mustache
package {{package}};

{{#imports}}import {{import}};
{{/imports}}
import io.swagger.annotations.*;
import com.google.gson.annotations.SerializedName;

{{#models}}
{{#model}}
{{#allowableValues}}
@ApiModel(description = "")
public enum {{classname}} {
  {{#allowableValues}}{{#values}} {{.}}, {{/values}}{{/allowableValues}}
}
{{/allowableValues}}
{{/model}}
{{/models}}

{{#models}}
{{#model}}
{{^isEnum}}
{{#description}}/** {{.}} **/{{/description}}
@ApiModel(description = "{{{description}}}")
public class {{classname}} {{#parent}}extends {{{.}}}{{/parent}} {
  {{#vars}}{{#isEnum}}
  public enum {{datatypeWithEnum}} {
    {{#allowableValues}}{{#values}} {{.}}, {{/values}}{{/allowableValues}}
  };
  @SerializedName("{{baseName}}")
  private {{{datatypeWithEnum}}} {{name}} = {{{defaultValue}}};{{/isEnum}}{{^isEnum}}
  @SerializedName("{{baseName}}")
  private {{{dataType}}} {{name}} = {{{defaultValue}}};{{/isEnum}}{{/vars}}

  {{#vars}}
  /**{{#description}}
   * {{{.}}}{{/description}}{{#minimum}}
   * minimum: {{.}}{{/minimum}}{{#maximum}}
   * maximum: {{.}}{{/maximum}}
   **/
  @ApiModelProperty({{#required}}required = {{required}}, {{/required}}value = "{{{description}}}")
  public {{{datatypeWithEnum}}} {{getter}}() {
    return {{name}};
  }
  public void {{setter}}({{{datatypeWithEnum}}} {{name}}) {
    this.{{name}} = {{name}};
  }
  {{/vars}}

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (o == null || getClass() != o.getClass()) {
      return false;
    }
    {{classname}} {{classVarName}} = ({{classname}}) o;{{#hasVars}}
    return {{#vars}}(this.{{name}} == null ? {{classVarName}}.{{name}} == null : this.{{name}}.equals({{classVarName}}.{{name}})){{^-last}} &&
        {{/-last}}{{#-last}};{{/-last}}{{/vars}}{{/hasVars}}{{^hasVars}}
    return true;{{/hasVars}}
  }

  @Override
  public int hashCode() {
    int result = 17;
    {{#vars}}
    result = 31 * result + (this.{{name}} == null ? 0 : this.{{name}}.hashCode());
    {{/vars}}
    return result;
  }

  @Override
  public String toString() {
    StringBuilder sb = new StringBuilder();
    sb.append("class {{classname}} {\n");
    {{#parent}}sb.append("  " + super.toString()).append("\n");{{/parent}}
    {{#vars}}sb.append("  {{name}}: ").append({{name}}).append("\n");
    {{/vars}}sb.append("}\n");
    return sb.toString();
  }
}
{{/isEnum}}
{{/model}}
{{/models}}
```

3. Now, in your generator configuration, you must add the template folder path with `templateDir = "path/to/the/folder/where/you/created/model.mustache"`.

This is an example of the config I have in my `build.gradle`:

```groovy
var generatedSourcesPath = "$buildDir" // /generated/sources/openapi
var apiDescriptionFolder = "$rootDir/app/openapi"
var apiRootName = "com.deepdrimz.openapi"

ext {
    sName = 'njangui'
}

openApiGenerate {
    var serviceName = findProperty("sName")
    generatorName = "android"
    templateDir = "${apiDescriptionFolder}/templates"
    inputSpec = "${apiDescriptionFolder}/${serviceName}.yaml"
    outputDir = "${generatedSourcesPath}"
    modelPackage = "${apiRootName}.${serviceName}"
    configOptions = [
        dateLibrary: "java8",
        library: "httpclient",
        serializationLibrary: "gson",
        serializableModel: "true"
    ]
}
```
I have a simple Spring Boot application for an API. I wanted to add Swagger to it for documentation, but I get a NullPointerException from the main class:

```java
package com.local.parking.test;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.RequestMethod;

@SpringBootApplication
@CrossOrigin(origins = "http://localhost:4200", allowedHeaders = "*",
        methods = { RequestMethod.GET, RequestMethod.POST, RequestMethod.PUT, RequestMethod.DELETE },
        allowCredentials = "true")
//@ComponentScan(basePackages = {"com.local.parking.test.config"})
public class ParkingApplication {

    public static void main(String[] args) {
        System.out.println("Parking");
        SpringApplication.run(ParkingApplication.class, args);
    }
}
```

The error points at the line `SpringApplication.run(ParkingApplication.class, args);`:

```
at com.local.parking.test.ParkingApplication.main(ParkingApplication.java:16) ~[classes/:na]
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:578) ~[na:na]
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) ~[spring-boot-devtools-2.7.1.jar:2.7.1]
Caused by: java.lang.NullPointerException: Cannot invoke "org.springframework.web.servlet.mvc.condition.PatternsRequestCondition.getPatterns()" because "this.condition" is null
```

Swagger config:

```java
package com.local.parking.test.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                // .apis(RequestHandlerSelectors.basePackage("com.example.controller")) // Set your base package here
                .apis(RequestHandlerSelectors.basePackage("com.local.parking.test.parking"))
                .paths(PathSelectors.any())
                .build()
                .apiInfo(apiInfo());
    }

    private ApiInfo apiInfo() {
        return new ApiInfoBuilder()
                .title("Parking Management API")
                .description("API for managing parking spaces and bookings")
                .version("1.0")
                .build();
    }
}
```

POM file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.1</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.local</groupId>
    <artifactId>parkingtest</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>parking</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>17</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-rest</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-groovy-templates</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.flywaydb</groupId>
            <artifactId>flyway-core</artifactId>
            <version>8.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>3.0.1</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-configuration-processor</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- Swagger 2 -->
        <dependency>
            <groupId>io.springfox</groupId>
            <artifactId>springfox-swagger2</artifactId>
            <version>3.0.0</version>
        </dependency>
        <dependency>
            <groupId>io.springfox</groupId>
            <artifactId>springfox-swagger-ui</artifactId>
            <version>3.0.0</version>
        </dependency>
        <!-- Connection pooling dependency -->
        <dependency>
            <groupId>com.zaxxer</groupId>
            <artifactId>HikariCP</artifactId>
            <version>5.0.1</version>
        </dependency>
        <!-- for flyway -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-jdbc</artifactId>
        </dependency>
    </dependencies>
    <build>
        <resources>
            <resource>
                <directory>src/main/resources</directory>
                <includes>
                    <include>**/*.xml</include>
                </includes>
            </resource>
        </resources>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <version>2.5.4</version>
                <configuration>
                    <excludes>
                        <exclude>
                            <groupId>org.project-lombok</groupId>
                            <artifactId>lombok</artifactId>
                        </exclude>
                    </excludes>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-clean-plugin</artifactId>
                <version>3.1.0</version>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>3.2.0</version>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
            </plugin>
        </plugins>
    </build>
</project>
```

What causes the NullPointerException, and how can I fix it?
Spring boot application with swagger 2 gets null on run()
|java|spring-boot|config|swagger-2.0|
I'm trying to test the Antlr4 (Oracle) PLSQL grammar, but really struggling. I might have confused having followed 2 or 3 different guides. I performed the following steps, starting with installation, on Ubuntu 22.04. Install packages, but not sure I ended up using these: ``` sudo apt-get update sudo apt-get -y install python3-antlr4 sudo apt install python3-pip pip3 install antlr-plsql pip3 install antlr4-python3-runtime==4.9.3 ``` I then switched to another which seemed simpler and more self-contained: ``` mkdir lib cd lib curl -O https://www.antlr.org/download/antlr-4.9.3-complete.jar cd .. mkdir grammars cd grammars wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/PlSqlLexer.g4 wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/PlSqlParser.g4 cd .. wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/Python3/PlSqlLexerBase.py wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/Python3/PlSqlParserBase.py ``` This gives me: ``` ├── grammars │   ├── PlSqlLexer.g4 │   └── PlSqlParser.g4 ├── lib │   └── antlr-4.9.3-complete.jar ├── PlSqlLexerBase.py ├── PlSqlParserBase.py ``` Generate the Python 3 modules from the grammars: ``` java -jar ./lib/antlr-4.9.3-complete.jar -Dlanguage=Python3 grammars/*g4 ``` The result is very long error message: ``` error(50): /home/me/try2/grammars/PlSqlLexer.g4:1:0: syntax error: '{"payload": {"allShortcutsEnabled":false,"fileTree": {"sql/plsql": {"items": [ {"name":"CSharp","path":"sql/plsql/CSharp","contentType":"directory"}, {"name":"Cpp","path":"sql/plsql/Cpp","contentType":"directory"}, {"name":"Dart","path":"sql/plsql/Dart","contentType":"directory"}, . . . 
"grammars-v4", "showInvalidCitationWarning":false, "citationHelpUrl":"https://docs.github.com/github/creating-cloning-and-archiving-repositories/creating-a-repository-on-github/about-citation-files", "actionsOnboardingTip":null}, "truncated":false,"viewable":true,"workflowRedirectUrl":null,"symbols":{"timed_out":false,"not_analyzed":true,"symbols":[]}}, "copilotInfo":null, "copilotAccessAllowed":false, "csrf_tokens":{"/antlr/grammars-v4/branches":{"post":"B-JVTx4hyskRLh-IIK4rWkAi5MsB-02SSXgjgq9nYNzDI1ZfW5iKO2EvjBCRVSAFlOtcSNnBT47ACuB9TmN_KA"},"/repos/preferences":{"post":"GKOXPKaenoTNYzUHdAFAZKMu0-z4ENu8r5KmHmhUiOKs4HBOuNY9PVTbSAMl7rKPVfBwRCJRpTY72nxBH5PotA"}}}, "title":"grammars-v4/sql/plsql/PlSqlParser.g4 at master · antlr/grammars-v4"}' came as a complete surprise to me ``` Tried this as well but got the same result: ``` java -Xmx500M -cp "./lib/antlr-4.9.3-complete.jar:$CLASSPATH" org.antlr.v4.Tool -Dlanguage=Python3 grammars/*g4 ```
Timing constraints are used by the timing analyzer tool, not by simulation. A back-annotated simulation works like hardware, using setup and hold times to create the waves/simulator output. RTL simulation uses the Verilog event-scheduler simulation model. You will not see any effect of timing constraints in simulation.

The Vivado timing analyzer does not know about the behavior of your logic. It knows delays through Xilinx primitives like slices, LUTs, wires, buffers, and clock-to-out delays of registers and RAM outputs (not a comprehensive list, but you get the idea).

Delays from one domain to another don't make a lot of sense from an analysis point of view. The timing analyzer does not know you put a nice CDC-handling FIFO in that path with two clocks and two constraints. By default, the timing analyzer times every path against every other path in all clock domains that have a constraint. If you have paths that cross domains, the tool will show those paths as failing. The point of this is to tell the user: "Hey, you have crossed clock domains." If you have taken care to design logic that handles the CDC (such as a CDC-handling FIFO from Cummings), then the reported path can be ignored, because you have handled the crossing.

The native purpose of static timing analysis makes sense within the same clock domain only. The normal use of the timing analyzer is to add up delays along all the paths from register to register in a clock domain (done for all clock domains). If a path's delay is shorter than the clock period, the path passes; otherwise it fails.

**What to do**

A few options:

1) Right-click on the failing path(s) in the timing analyzer tool and do 'Set False Path'; save the false path to the same constraints file that the clock constraints are in. Run the tools again and the path will not be flagged as an error. More paths may pop up; do the same with those if you are confident that the corresponding RTL logic handles the CDC.

2) Create a max-delay constraint. 
This maintains some relationship between the 2 domains, so that the logic for both sides of the fifo ends up near to the other in the design. Recommendations here [async-fifos-how-can-we-effectively-make-and-constrain-them][1] and here [What constraints are needed][2] 3) A more general approach is to set the clock groups as asynchronous [How when why to set clock groups asynchronous][3]. If this link is not enough, then search on 'set clock groups asynchronous' for more. This Xilinx page recommends setting all clock groups asynchronous [set clock groups async][4]. Here is the section in context ``` set_clock_groups -asynchronous -group {xilinx_jtag_tck} set_clock_groups -asynchronous -group {clkA PLL1_c0 PLL1_c1 } set_clock_groups -asynchronous -group {clkB PLL2_c0 PLL2_c1 } set_clock_groups -asynchronous -group {dsp_clk PLL3_c0 PLL3_c1 PLL3_c2} In general, it is recommended to add all clocks to the group. Exceptions are: 1) DDR and GXB designs, where there are a large number of clocks added to the design that the user does not know about. 95% of these clocks have paths only to domains that are relevant or are cut in the .xdc files that get created, so they do not need to be taken into consideration. 2) Virtual clocks are recommended for all I/O interfaces (except for source-synch outputs). Virtual clocks are not added here because the only paths they connect to are I/O's that are explicitly designated and they tend to only have real paths. If they are added, it will not cause any issues. Refer to (UG903) Vivado Design Suite User Guide: Using Constraints for more information. ``` Xilinx refers to the process of setting up constraints for a new design as 'baselining' There is explanation here [Baselining-the-Design][5] **Alternate solution** Use the Xilinx IP catalog to generate a CDC FIFO, the posted issue will not occur because Xilinx's IP contains false paths or max_delay paths where needed. IP Catalog also generates a simulation model to use for RTL simulations. 
I am not necessarily recommending this; it's just something else you could do. Designing your own versus using Xilinx IP has its own trade-offs.

[1]: https://www.theeeview.com/async-fifos-how-can-we-effectively-make-and-constrain-them/
[2]: https://www.reddit.com/r/FPGA/comments/lf9zwh/what_constraints_do_we_need_to_synthesis/
[3]: https://support.xilinx.com/s/question/0D52E00006iHvXnSAK/how-to-know-when-to-set-clocks-asynchronous-and-how-to-do-so?language=en_US
[4]: https://support.xilinx.com/s/article/44651?language=en_US
[5]: https://docs.xilinx.com/r/en-US/ug949-vivado-design-methodology/Baselining-the-Design
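As a sketch, the three options above might look like this in an XDC file (the clock names `clkA`/`clkB` and the 5.0 ns bound are hypothetical placeholders, not values from the question):

```tcl
# Option 1: declare the CDC path a false path. Only safe if the RTL
# already handles the crossing (dual-clock FIFO, 2-FF synchronizer, etc.)
set_false_path -from [get_clocks clkA] -to [get_clocks clkB]

# Option 2: bound the crossing delay instead of ignoring it entirely,
# keeping both sides of the FIFO physically close
set_max_delay -datapath_only -from [get_clocks clkA] -to [get_clocks clkB] 5.0

# Option 3: declare whole clock groups asynchronous to each other
set_clock_groups -asynchronous -group {clkA} -group {clkB}
```

Which one fits depends on how the crossing is handled in RTL; option 3 is the broadest and silences all paths between the two groups.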
I just found the answer. I don't know exactly why, but if you replace `'\n'` with `"\n"` it works:

    str_replace('\n', "\n", $n->mensagem);

Doing this, the message was printed like:

> Lembre-me, contrato 156051
>
> mensagem: aaaaaa
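This is presumably because of PHP's quoting rules: in single quotes `'\n'` is a literal backslash followed by `n` (two characters), while in double quotes `"\n"` is an actual newline (one character). A quick illustration:

```php
<?php
$single = '\n';  // two characters: \ and n
$double = "\n";  // one character: a newline

var_dump(strlen($single)); // int(2)
var_dump(strlen($double)); // int(1)

// So the replacement turns the literal "\n" sequences stored in the
// message into real line breaks:
$mensagem = 'Lembre-me, contrato 156051\n\nmensagem: aaaaaa';
echo str_replace('\n', "\n", $mensagem);
```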
The question and answers have custom defined div wrappers around them. The closing div </div> is on the next line producing a blank line. I want to remove that line. Post Link: <http://fgstudy.com/uncategorized/a/> I have already tried the white-space CSS code as followed: div#q1, div#a1, div#a2 { white-space: normal; } P { white-space: normal;} <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-css --> body { margin: 2% 2%; } #q1 { margin: 10px 10px; border: 1px; box-shadow: 0px 2px 2px 2px #DCDCDC; text-align: left; padding: 5px 20px; font-size: 18px; color: #ffffff; background-color: #317eac; } #a1 { margin: 10px 10px; border: 1px; padding: 10px; box-shadow: 0px 2px 2px 2px #DCDCDC; text-align: left; font-size: 16px; padding: 8px 18px; } #a2 { margin: 10px 10px; border: 1px; box-shadow: 0px 2px 2px 2px #DCDCDC; text-align: left; padding: 5px 18px; font-size: 16px; } #a1:hover { color: #ffffff; background-color: #73AD21; } #a2:hover { color: #ffffff; background-color: #C71C22; } <!-- language: lang-html --> <hr> <div id="q1"> <p>Which of the following the highest hydration energy</p> </div> <div id="a1"> <p>Mg<sup>++</sup></p> </div> <div id="a2"> <p>Li<sup>+</sup></p> </div> <div id="a2"> <p>Na<sup>+</sup></p> </div> <div id="a2"> <p>K<sup>+</sup></p> </div> <hr> <div id="q1"> <p>Which the correct statement</p> </div> <div id="a1"> <p>Na<sup>+</sup>is smaller than Na atom</p> </div> <div id="a2"> <p>Cl<sup>-</sup>is smaller than Cl atom</p> </div> <div id="a2"> <p>Cl<sup>-</sup>(Ion) and Cl (atom) are equal in size</p> </div> <div id="a2"> <p>Na<sup>+</sup>is larger than Na atom</p> </div> <!-- end snippet --> I want that the white space after text disappears.
How do I remove white space using CSS?
I took a quick look at the provided code. You have declared many local variables inside the `itemBuilder` of the `ListView`:

```dart
itemBuilder: (BuildContext context, int index) {
  // Today's date
  DateTime currentDate = DateTime.now();
  DateTime startOfWeek =
      currentDate.subtract(Duration(days: currentDate.weekday + 1));
  DateTime itemDate = startOfWeek.add(Duration(days: index));
  String formattedDate = DateFormat('d').format(itemDate);
  String formattedDay = DateFormat('E').format(itemDate);
  bool isToday = formattedDate == DateFormat('d').format(currentDate) &&
      formattedDay == DateFormat('E').format(currentDate);

  int selectedCardIndex = -1;
  bool isSelected = selectedCardIndex == index;

  return InkWell();
}
```

**All of the above variables are declared locally, and every ListView item depends on them.** For example, if toggling the selection worked, it would toggle on all items at once. Put more simply: if a card is white and the user clicks it, then all of the items would turn black, because they all depend on the same local variable `isSelected`.

**First fix**

Make a customized card widget that has its own selection flag, so that `onTap` only reverses the selected flag for the current card.

By the way, I think this line doesn't do anything:

```dart
onTap: () {
  selectedCardIndex > 0 ? isSelected : false;
}
```

Even if you reversed `isSelected` in the above `onTap()`, it would cause the builder to run again and again, re-declaring the local variables

```dart
int selectedCardIndex = -1;
bool isSelected = selectedCardIndex == index;
```

so the card would become unselected again.

Hope this helps you.
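As a concrete illustration of the first fix, here is a minimal hedged sketch (widget and field names are made up): the selected index lives on the `State` object instead of inside `itemBuilder`, so it survives rebuilds and only the tapped card toggles:

```dart
import 'package:flutter/material.dart';

class CardList extends StatefulWidget {
  const CardList({super.key});

  @override
  State<CardList> createState() => _CardListState();
}

class _CardListState extends State<CardList> {
  // -1 means "nothing selected"; kept on the State, not inside itemBuilder,
  // so it is NOT reset every time the list rebuilds
  int _selectedIndex = -1;

  @override
  Widget build(BuildContext context) {
    return ListView.builder(
      itemCount: 7,
      itemBuilder: (context, index) {
        final bool isSelected = _selectedIndex == index;
        return InkWell(
          onTap: () => setState(() {
            // tapping the selected card deselects it; otherwise select it
            _selectedIndex = isSelected ? -1 : index;
          }),
          child: Card(
            color: isSelected ? Colors.black : Colors.white,
            child: Text('$index'),
          ),
        );
      },
    );
  }
}
```

If multiple cards should be selectable at once, a `Set<int>` of selected indices works the same way.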
The documentation of freezed is telling that we should just use the sealed classes made available in dart 3. However: it does not explain how to use it, especially in combination with fromJson. So I'm a little stumped on how to do it with sealed classes instead of union types. Given the following union; ```dart import 'package:freezed_annotation/freezed_annotation.dart'; part 'example.freezed.dart'; part 'example.g.dart'; @freezed sealed class Example with _$Example { const factory Example.text({ required String text, }) = _ExampleText; const factory Example.nameLocation({ required String name, required String location, }) = _ExampleNameLocation; factory Example.fromJson( Map<String, dynamic> json, ) => _$ExampleFromJson(json); } void main() { print(Example.fromJson({'runtimeType': 'text', 'text': 'Hello'})); print(Example.fromJson({ 'runtimeType': 'nameLocation', 'name': 'John', 'location': 'Amsterdam' })); } ``` I tried something like this - but that doesn't work. (The non-abstract class '_$ExampleTextImpl' is missing implementations for these members: _$Example.toJson) ```dart import 'package:freezed_annotation/freezed_annotation.dart'; part 'example.freezed.dart'; part 'example.g.dart'; @freezed sealed class Example with _$Example { const factory Example() = _Example; factory Example.fromJson( Map<String, dynamic> json, ) => _$ExampleFromJson(json); } @freezed class ExampleText extends Example with _$ExampleText { const factory ExampleText({required String text}) = _ExampleText; } @freezed class ExampleNameLocation extends Example with _$ExampleNameLocation { const factory ExampleNameLocation({ required String name, required String location, }) = _ExampleNameLocation; } void main() { print(Example.fromJson({'runtimeType': 'ExampleText', 'text': 'Hello'})); print(Example.fromJson({ 'runtimeType': 'ExampleNameLocation', 'name': 'John', 'location': 'Amsterdam' })); } ```
Using dart Freezed with sealed classes and fromJson
|flutter|dart|freezed|flutter-freezed|json-serializable|
**GENERIC RESPONSE:** I wish the docs would point out that `auth_request` fires a GET request; I spent six hours trying a boatload of GPTs in vain.

If anyone is using a POST in the `auth_request` route, make sure you change it to GET. A token in the header such as `Authorization: Bearer xyz-etc` works just fine in a GET request.
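For reference, a minimal sketch of such a setup in an nginx config (the `/auth` path and upstream names are hypothetical): nginx issues a GET subrequest to the auth location, forwarding the original headers but not the request body, so the auth service must accept GET.

```nginx
location /protected/ {
    # nginx fires a GET subrequest to /auth before proxying;
    # 2xx lets the request through, 401/403 rejects it
    auth_request /auth;
    proxy_pass http://backend;
}

location = /auth {
    internal;
    # the original Authorization header is forwarded; the body is not,
    # so the auth endpoint must be a GET handler
    proxy_pass http://auth_service/verify;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```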
You have `Hour` pinned. Unpin it and it should restore back to showing the full `DateTimeOffset` value. Expand `dt`, and you should see `Hour` is pinned at the top of the list it displays. Click the pin icon and it will unpin it: [![pin icon][1]][1] [1]: https://i.stack.imgur.com/DK8gX.png
I had this issue today, and thanks to [Ahmed][1]'s comment I was able to solve it.

In my case, I followed this guide: https://medium.com/frontendweb/how-to-deploy-a-nextjs-app-to-github-pages-1de4f6ed762e

There is a step where you configure the required YAML file for the build, and all I did was comment out the lines of the step that ran the `next export` command. (I know that deleting them may be better, but just in case :))

```yaml
- name: Static HTML export with Next.js
  run: ${{ steps.detect-package-manager.outputs.runner }} next export
```

Also, this was my next.config.js file:

```javascript
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export',
  reactStrictMode: true,
}

module.exports = nextConfig
```

[1]: https://stackoverflow.com/users/18079514/ahmed-abdelbaset
Problem installing ext-sodium for lcobucci/jwt 4.3.0 and kreait/firebase-tokens 4.3.0 in a Laravel project
|php|laravel|xampp|libsodium|
I got the following error when I upgraded my project to Micronaut 4.3.1:

```
ERROR io.micronaut.runtime.Micronaut - Error starting Micronaut server: Bean definition [org.hibernate.SessionFactory] could not be loaded: Error instantiating bean of type [io.micronaut.configuration.hibernate.jpa.conf.sessionfactory.configure.internal.ValidatorFactoryConfigurer]

Message: Multiple possible bean candidates found: [DefaultConstraintValidatorFactory, DefaultInternalConstraintValidatorFactory]
Path Taken: SessionFactoryPerDataSourceFactory.buildHibernateSessionFactoryBuilder(SessionFactoryBuilder sessionFactoryBuilder) --> new SessionFactoryPerDataSourceFactory(Environment environment,[List configures],StandardServiceRegistryBuilderCreator serviceRegistryBuilderSupplier,List standardServiceRegistryBuilderConfigurers,JpaConfiguration jpaConfiguration,ApplicationContext applicationContext,Integrator integrator) --> new ValidatorFactoryConfigurer([ValidatorFactory validatorFactory])
io.micronaut.context.exceptions.BeanInstantiationException: Bean definition [org.hibernate.SessionFactory] could not be loaded: Error instantiating bean of type [io.micronaut.configuration.hibernate.jpa.conf.sessionfactory.configure.internal.ValidatorFactoryConfigurer]
```

It seems like a Hibernate configuration problem. Has anyone encountered this? I am using Micronaut Data Hibernate JPA, and I was expecting the application to run normally as it used to with version 4.2.X.
Error starting Micronaut server: Bean definition [hibernate.SessionFactory] could not be loaded: Error instantiating bean [ValidatorFactoryConfigurer]
|hibernate|micronaut|sessionfactory|micronaut-data|
To share tokens between applications, ensure they access the same data protection keys by configuring shared storage (for example, a common file system path) for keys using `PersistKeysToFileSystem()`. Both applications must also call `SetApplicationName("Whatever")` to align the data protection context. If issues persist, you can consider centralized authentication solutions like OAuth2/OpenID Connect or a shared database for token validation, though these approaches have their own complexities. Key configuration discrepancies, such as cryptographic algorithms, should be checked to ensure they match across applications.
I am developing a Web Application with a node/JS frontend and a Python backend. Development is done in two DevContainer environments, with one DevContainer definition for each sub-project. So far, I open two instances of `vscode`, one for each sub-project. Each of the DevContainer setups mounts the overall project folder and points the work folder to the right sub-folder. Also, I use a common `docker-compose.yml` to place all containers into the same docker network and to add some more services, i.e., a database. ```text my_project/ docker-compose.yml backend/ .devcontainers/ devcontainer.json docker-compose.yml Dockerfile frontend/ .devcontainers/ devcontainer.json docker-compose.yml Dockerfile ``` A `devcontainer.json` looks like this: ```json { "name": "Python & Poetry", "dockerComposeFile": [ "../../docker-compose.yml", "./docker-compose.yml" ], "service": "demo", "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}", ... ``` Now, I came across the workspace concept in `vscode`. I want to add a workspace definition in the root of the overall project and enjoy the advantage of only having one `vscode` window. This would be such a `my_project/code-workspace` file: ```json { "folders": [ { "name": "ROOT", "path": "." }, { "name": "backend", "path": "./backend" }, { "name": "frontend", "path": "./frontend" }, ] } ``` However, I can't get vscode to open the DevContainers of the sub-projects. I succeeded in placing a `.devcontainer/` folder into the root folder of the overall project, which opens one DevContainer and connects to it for both sub-projects, but that is not what I want. What is wrong? Is the setup I want possible with the internal architecture of VSCode managing the different settings, areas, agent connections, etc.?
Per the [glTexSubImage2D reference](https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glTexSubImage2D.xhtml): > `GL_INVALID_OPERATION` is generated if the combination of internalFormat of the previously specified texture array, format and type is not valid. In your code, the `glTexImage2D` call specifies an `internalFormat` of `GL_RGBA`. Per the [glTexImage2D reference](https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glTexImage2D.xhtml) (Table 1), the `GL_RGBA` internal format supports only the `GL_RGBA` format with several types (`GL_UNSIGNED_BYTE`, `GL_UNSIGNED_SHORT_4_4_4_4`, `GL_UNSIGNED_SHORT_5_5_5_1`). `GL_RED`, which you use in `glTexSubImage2D`, is not one of the supported formats, so the operation results in `GL_INVALID_OPERATION`. For your usage `GL_RED` is indeed a valid format for `glTexSubImage2D`, because FreeType renders an 8-bit grayscale bitmap [by default](https://freetype.org/freetype2/docs/reference/ft2-glyph_retrieval.html#ft_load_render), but you should initialize the texture with compatible formats (internal format `GL_R8`, format `GL_RED`, type `GL_UNSIGNED_BYTE`) and use your fragment shader to extract the *red* component and produce blendable output.
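As a sketch, the compatible setup could look like this (the texture handle, offsets, and the `glyph` variable are illustrative placeholders, not taken from your code; a current GL ES 3.0 context is assumed):

```c
/* Allocate a single-channel texture matching FreeType's 8-bit
 * grayscale bitmaps. */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);     /* glyph rows are tightly packed, not 4-byte aligned */
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8,      /* internal format: one 8-bit channel */
             atlas_width, atlas_height, 0,
             GL_RED, GL_UNSIGNED_BYTE, NULL);

/* Later, upload a glyph with the matching format/type pair: */
glTexSubImage2D(GL_TEXTURE_2D, 0, x_offset, y_offset,
                glyph->bitmap.width, glyph->bitmap.rows,
                GL_RED, GL_UNSIGNED_BYTE, glyph->bitmap.buffer);
```

In the fragment shader you can then sample only the red channel, e.g. something like `vec4(1.0, 1.0, 1.0, texture(u_glyphs, uv).r)` (sampler name hypothetical), so the single channel drives the alpha used for blending.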
In this declaration int *p1 = arr; the array designator is implicitly converted to a pointer to its first element. It is equivalent to the following declaration int *p1 = &arr[0]; An expression like `arr[i]`, where `i` is some integer, is evaluated as `*( arr + i )`. That is, the expression `arr[0]` is evaluated as `*( arr + 0 )`, which is the same as `*( arr )` or `*arr`. And the expression `arr[2]` is evaluated as `*( arr + 2 )`. Applying the address-of operator to these expressions, you will get that the value of `p1` is equal to `arr + 0`, or `arr`, and the value of `p2` is equal to the value of the expression `arr + 2`. So the difference `p2 - p1` is the same as `( arr + 2 ) - arr`, which is equal to `2`. Note that pointer subtraction counts elements, not bytes. Pay attention to the fact that the type of the expression is `ptrdiff_t`. So you should write printf("%td",p2-p1); instead of printf("%d",p2-p1);
I was able to get the Python files generated eventually. I found another set of grammar files at https://github.com/antlr/grammars-v4/tree/master/sql/plsql. From there I downloaded the raw files as follows (note the `raw.githubusercontent.com` host: fetching the `github.com/.../blob/...` URLs downloads GitHub's HTML page wrapper instead of the file itself, which is what produced the original syntax error): ``` wget https://raw.githubusercontent.com/antlr/grammars-v4/master/sql/plsql/Python3/PlSqlLexerBase.py wget https://raw.githubusercontent.com/antlr/grammars-v4/master/sql/plsql/Python3/PlSqlParserBase.py wget https://raw.githubusercontent.com/antlr/grammars-v4/master/sql/plsql/PlSqlLexer.g4 wget https://raw.githubusercontent.com/antlr/grammars-v4/master/sql/plsql/PlSqlParser.g4 mv PlSql*g4 grammars ``` When I then run: ``` java -jar ./lib/antlr-4.9.3-complete.jar -Dlanguage=Python3 grammars/*g4 ``` I get the following generated in `grammars`: ``` grammars-v4-master PlSqlLexer.interp PlSqlParserBase.py PlSqlParserListener.py __pycache__ master.zip PlSqlLexer.py PlSqlParser.g4 PlSqlParser.py runPLSQL.py PlSqlLexer.g4 PlSqlLexer.tokens PlSqlParser.interp PlSqlParser.tokens ```
I'm trying to create a structure in AWS with ECS where an internet-facing ALB manages traffic to the front-end containers, and front-end requests go through an internal ALB that forwards to the containers running the API. However, requests to the API are dying with a 504 error. The internet-facing ALB is configured with listeners that match the host header and forward to each front end; this part works normally. I did the same for the internal ALB, forwarding to the API. The API containers pass the health check, so the service is OK. Diagram: [Cluster Architecture with ALB](https://i.stack.imgur.com/tEoMk.png) General structure: - VPC - 2 subnets (public and private) - SG for the internet-facing ALB (with internet access) - SG for the internal ALB (inbound rule for VPC traffic only) - SG on the front ends opening the port for the internet-facing ALB - SG on the API opening the ports for the internal ALB - Target groups for each ECS service If I'm missing any relevant information, let me know. Thank you in advance! When I created everything behind the same internet-facing ALB, it worked normally.
The previous solutions recently failed to utilize the cache for my pipeline: it would acknowledge that the cache existed but then not consider it during the build. It's mentioned in the comments, but to resolve this I had to add `DOCKER_BUILDKIT: 1` as a variable and `--build-arg BUILDKIT_INLINE_CACHE=1` to each `arguments` block using `--cache-from=`. The second time you run your pipeline, it should start using caching again. ```yaml variables: DOCKER_BUILDKIT: 1 steps: - task: Docker@2 displayName: "Pull image" inputs: command: pull containerRegistry: "$(ContainerRegistryName)" arguments: $(ACR_ADDRESS)/$(REPOSITORY):latest - task: Docker@2 displayName: build inputs: containerRegistry: '$(ContainerRegistryName)' repository: '$(REPOSITORY)' command: 'build' Dockerfile: './dockerfile' buildContext: '$(BUILDCONTEXT)' arguments: | --cache-from=$(ACR_ADDRESS)/$(REPOSITORY):latest --build-arg BUILDKIT_INLINE_CACHE=1 tags: | $(Build.BuildNumber) latest - task: Docker@2 displayName: "push" inputs: command: push containerRegistry: "$(ContainerRegistryName)" repository: $(REPOSITORY) tags: | $(Build.BuildNumber) latest ```
I am referring to your question in the comment. *TL;DR:* There is no difference whether you replace a single property of a post or the complete post. SwiftUI provides a debug method that shows you which data causes your view to update: `Self._printChanges()` So let's write a quick demo: import SwiftUI struct Post: Codable, Identifiable { var id: Int var title: String var message: String } @Observable class PostManager { var posts: [Post] = [Post(id: 1, title: "1", message: "1")] func updatePostMessage(postId: Int, message: String) { // Switch these methods to check the output of which view is re-rendered posts.updatePostMessage(with: Post(id: postId, title: "1", message: message)) posts.updatePostTitle(with: Post(id: postId, title: "1", message: message)) // posts.updateCompletePost(with: Post(id: postId, title: "1", message: message)) } } // This code could go to a different file to answer your initial question extension [Post] { mutating func updatePostMessage(with newPost: Post) { if let index = self.firstIndex(where: { $0.id == newPost.id }) { self[index].message = newPost.message } } mutating func updateCompletePost(with newPost: Post) { if let index = self.firstIndex(where: { $0.id == newPost.id }) { self[index] = newPost } } mutating func updatePostTitle(with newPost: Post) { if let index = self.firstIndex(where: { $0.id == newPost.id }) { self[index].title = newPost.title } } } struct TitleView: View { let title: String var body: some View { let _ = Self._printChanges() Text(title) } } struct MessageView: View { let message: String var body: some View { let _ = Self._printChanges() Text(message) } } struct ContentView: View { @State var postManager = PostManager() var body: some View { let _ = Self._printChanges() VStack { ForEach(postManager.posts) { post in TitleView(title: post.title) MessageView(message: post.message) } Button { postManager.updatePostMessage(postId: 1, message: "Message") } label: { Text("Update") } } } } We are going to ignore the 
initial log when the view gets rendered for the first time. When we use the method that only changes the message of the post, the console shows: ContentView: @dependencies changed. MessageView: @self changed. If you switch to the update method in the extension above that replaces the complete post, the console shows the same: ContentView: @dependencies changed. MessageView: @self changed. Only if you explicitly set a different title does it also update the `TitleView`: ContentView: @dependencies changed. TitleView: @self changed. MessageView: @self changed.
AWS: Error 504 in communication between ALB internet-facing and ALB internal
|amazon-web-services|amazon-ecs|amazon-elb|amazon-vpc|aws-application-load-balancer|
Import `ScrollView` and `FlatList` from `'react-native-gesture-handler'` instead of `'react-native'`, and it will work: import { ScrollView, FlatList } from 'react-native-gesture-handler';
I'm trying to test the ANTLR4 (Oracle) PL/SQL grammar, but really struggling. I might have confused things by having followed 2 or 3 different guides. I performed the following steps, starting with installation, on Ubuntu 22.04. Install packages (though I'm not sure I ended up using these): ``` sudo apt-get update sudo apt-get -y install python3-antlr4 sudo apt install python3-pip pip3 install antlr-plsql pip3 install antlr4-python3-runtime==4.9.3 ``` I then switched to another guide, which seemed simpler and more self-contained: ``` mkdir lib cd lib curl -O https://www.antlr.org/download/antlr-4.9.3-complete.jar cd .. mkdir grammars cd grammars wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/PlSqlLexer.g4 wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/PlSqlParser.g4 cd .. wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/Python3/PlSqlLexerBase.py wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/Python3/PlSqlParserBase.py ``` This gives me: ``` ├── grammars │   ├── PlSqlLexer.g4 │   └── PlSqlParser.g4 ├── lib │   └── antlr-4.9.3-complete.jar ├── PlSqlLexerBase.py ├── PlSqlParserBase.py ``` Generate the Python 3 modules from the grammars: ``` java -jar ./lib/antlr-4.9.3-complete.jar -Dlanguage=Python3 grammars/*g4 ``` The result is a very long error message: ``` error(50): /home/me/try2/grammars/PlSqlLexer.g4:1:0: syntax error: '{"payload": {"allShortcutsEnabled":false,"fileTree": {"sql/plsql": {"items": [ {"name":"CSharp","path":"sql/plsql/CSharp","contentType":"directory"}, {"name":"Cpp","path":"sql/plsql/Cpp","contentType":"directory"}, {"name":"Dart","path":"sql/plsql/Dart","contentType":"directory"}, . . . 
"grammars-v4", "showInvalidCitationWarning":false, "citationHelpUrl":"https://docs.github.com/github/creating-cloning-and-archiving-repositories/creating-a-repository-on-github/about-citation-files", "actionsOnboardingTip":null}, "truncated":false,"viewable":true,"workflowRedirectUrl":null,"symbols":{"timed_out":false,"not_analyzed":true,"symbols":[]}}, "copilotInfo":null, "copilotAccessAllowed":false, "csrf_tokens":{"/antlr/grammars-v4/branches":{"post":"B-JVTx4hyskRLh-IIK4rWkAi5MsB-02SSXgjgq9nYNzDI1ZfW5iKO2EvjBCRVSAFlOtcSNnBT47ACuB9TmN_KA"},"/repos/preferences":{"post":"GKOXPKaenoTNYzUHdAFAZKMu0-z4ENu8r5KmHmhUiOKs4HBOuNY9PVTbSAMl7rKPVfBwRCJRpTY72nxBH5PotA"}}}, "title":"grammars-v4/sql/plsql/PlSqlParser.g4 at master · antlr/grammars-v4"}' came as a complete surprise to me ``` Tried this as well but got the same result: ``` java -Xmx500M -cp "./lib/antlr-4.9.3-complete.jar:$CLASSPATH" org.antlr.v4.Tool -Dlanguage=Python3 grammars/*g4 ```
# Problem This seems to be a version incompatibility between `typing_extensions` and other packages. # Solution I tested it with Python 3.10 and some pinned versions of the following packages, and it should work for you as well: ``` conda create -n my_env python=3.10 conda activate my_env pip install openai==1.8.0 typing_extensions==4.7.1 trl ``` Those packages usually cause dependency errors, so this should fix your current problem and future ones. # Extra Ball There is a similar issue in the `openai` repository: https://github.com/openai/openai-python/issues/751
You don't have to read the entire file in at once, as the `read_csv()` function has a `col_select` argument. You would just need to modify your code to ``` df1 <- read_csv( "sample-data.csv", col_names=c("D","B"), col_select=c("D","B") ) ``` If the file does not in fact contain a header, then you would pass the column indices in `col_select` instead: ``` df1 <- read_csv( "sample-data.csv", col_names=c("D","B"), col_select=c(4,2) ) ```
The output of ```#check Nat.add``` seems to be garbled; the result looks like ```Nat.add (a✝a✝¹ : Nat) : Nat``` Version: 4.0.0-nightly-2023-07-06, commit c268d7e97bb0, Release How can I make sense of this output? Note: as of version 4.8 this change has been reverted, and the output looks, as expected, like ```Nat.add : Nat → Nat → Nat```
``` // test_input.cpp #include <complex> #include <iostream> int main( void ) { std::complex<double> a, b; std::cout << "Enter two complex numbers: "; std::cin >> a >> b; std::cout << "a * b = " << a * b << std::endl; std::cout << "a + b = " << a + b << std::endl; return 0; } ``` Compile and run: ``` g++ -std=c++14 -o test_input test_input.cpp ./test_input Enter two complex numbers: (1,2) (1,-2) a * b = (5,0) a + b = (2,0) ```
I have a C# game emulator which uses TcpListener and originally had a TCP-based client. A new client was introduced which is HTML5 (web socket) based. I wanted to support this without modifying too much of the existing server code, still allowing `TcpListener` and `TcpClient` to work with web socket clients connecting. Here is what I have done, but I feel like I'm missing something, as I am not getting the usual order of packets, therefore the handshake never completes. 1. Implement protocol upgrade mechanism public static byte[] GetHandshakeUpgradeData(string data) { const string eol = "\r\n"; // HTTP/1.1 defines the sequence CR LF as the end-of-line marker var response = Encoding.UTF8.GetBytes("HTTP/1.1 101 Switching Protocols" + eol + "Connection: Upgrade" + eol + "Upgrade: websocket" + eol + "Sec-WebSocket-Accept: " + Convert.ToBase64String( System.Security.Cryptography.SHA1.Create().ComputeHash( Encoding.UTF8.GetBytes( new Regex("Sec-WebSocket-Key: (.*)").Match(data).Groups[1].Value.Trim() + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11" ) ) ) + eol + eol); return response; } This is then used like so: private async Task OnReceivedAsync(int bytesReceived) { var data = new byte[bytesReceived]; Buffer.BlockCopy(_buffer, 0, data, 0, bytesReceived); var stringData = Encoding.UTF8.GetString(data); if (stringData.Length >= 3 && Regex.IsMatch(stringData, "^GET")) { await _networkClient.WriteToStreamAsync(WebSocketHelpers.GetHandshakeUpgradeData(stringData), false); return; } 2. 
Encode all messages after switching protocol response public static byte[] EncodeMessage(byte[] message) { byte[] response; var bytesRaw = message; var frame = new byte[10]; var indexStartRawData = -1; var length = bytesRaw.Length; frame[0] = 129; if (length <= 125) { frame[1] = (byte)length; indexStartRawData = 2; } else if (length >= 126 && length <= 65535) { frame[1] = 126; frame[2] = (byte)((length >> 8) & 255); frame[3] = (byte)(length & 255); indexStartRawData = 4; } else { frame[1] = 127; frame[2] = (byte)((length >> 56) & 255); frame[3] = (byte)((length >> 48) & 255); frame[4] = (byte)((length >> 40) & 255); frame[5] = (byte)((length >> 32) & 255); frame[6] = (byte)((length >> 24) & 255); frame[7] = (byte)((length >> 16) & 255); frame[8] = (byte)((length >> 8) & 255); frame[9] = (byte)(length & 255); indexStartRawData = 10; } response = new byte[indexStartRawData + length]; int i, reponseIdx = 0; // Add the frame bytes to the reponse for (i = 0; i < indexStartRawData; i++) { response[reponseIdx] = frame[i]; reponseIdx++; } // Add the data bytes to the response for (i = 0; i < length; i++) { response[reponseIdx] = bytesRaw[i]; reponseIdx++; } return response; } Used here: public async Task WriteToStreamAsync(byte[] data, bool encode = true) { if (encode) { data = WebSocketHelpers.EncodeMessage(data); } 3. 
Decoding all messages public static byte[] DecodeMessage(byte[] bytes) { var secondByte = bytes[1]; var dataLength = secondByte & 127; var indexFirstMask = dataLength switch { 126 => 4, 127 => 10, _ => 2 }; var keys = bytes.Skip(indexFirstMask).Take(4); var indexFirstDataByte = indexFirstMask + 4; var decoded = new byte[bytes.Length - indexFirstDataByte]; for (int i = indexFirstDataByte, j = 0; i < bytes.Length; i++, j++) { decoded[j] = (byte)(bytes[i] ^ keys.ElementAt(j % 4)); } return decoded; } Which is used here private async Task OnReceivedAsync(int bytesReceived) { var data = new byte[bytesReceived]; Buffer.BlockCopy(_buffer, 0, data, 0, bytesReceived); var stringData = Encoding.UTF8.GetString(data); if (stringData.Length >= 3 && Regex.IsMatch(stringData, "^GET")) { await _networkClient.WriteToStreamAsync(WebSocketHelpers.GetHandshakeUpgradeData(stringData), false); return; } var decodedData = WebSocketHelpers.DecodeMessage(data);
To add quotation marks to the start and end of every line with regular expressions in Xcode, choose the “Regular expression” search option and replace `^.*$` (i.e., `^` is the start of the line, `.*` is zero or more of any character, and `$` is the end of the line) with `"$0"` (i.e., a quotation mark, followed by capture group number zero, followed by a final quotation mark): [![enter image description here][1]][1] - - - You can also use multi-cursors. You can, for example, hold down the <kbd>⌥</kbd> key and click-drag with your mouse, and then hit <kbd>⌘</kbd>-<kbd>◀︎</kbd> to go to the start of the lines, <kbd>"</kbd> for the opening quotation mark, <kbd>⌘</kbd>-<kbd>▶︎</kbd> to go to the end of the lines, and <kbd>"</kbd> for the closing quotation mark. [![enter image description here][2]][2] And while you can hold <kbd>⌥</kbd> and click-drag to make a bunch of multi-cursors, you can also toggle individual ones on and off with <kbd>⇧</kbd>-<kbd>⌃</kbd>-clicks. [1]: https://i.stack.imgur.com/avDga.gif [2]: https://i.stack.imgur.com/uhL0h.gif
I've got this problem when calling a Firestore function from AngularFire. When I'm successfully authenticated, I fetch from Firestore the current user's document for additional data associated with the user. ```typescript // In App.Component ngOnInit(): void { this.afAuth.onAuthStateChanged((user) => { if (user) { // Some firestore call like // await this.firestore.collection<T>(collectionName).ref.where(fieldName, '==', fieldValue).get(); // --> FirebaseError: [code=permission-denied]: Missing or insufficient permissions. } }); } ``` I don't understand, because my Firestore rules are: ``` rules_version = '2'; service cloud.firestore { match /databases/{database}/documents { match /{document=**} { allow read, write: if true; } } } ``` so read and write are permitted to everyone. I've been facing this error since I enforced security as the Firebase console "App Check" page suggested. As far as I remember, only specific applications are accepted, and I've set up reCAPTCHA, so is this occurring because the reCAPTCHA configuration is missing in my app? Or maybe it's not related? https://firebase.google.com/docs/app-check/web/recaptcha-provider
How to configure a vscode workspace with multiple DevContainers?
|visual-studio-code|vscode-devcontainer|
null
I'm using the following XML shape to add a coloured border to any EditText of my app to indicate it is focused: <layer-list xmlns:android="http://schemas.android.com/apk/res/android"> <item> <shape android:shape="rectangle"> <corners android:radius="10dp" /> <solid android:color="#ffffff"/> <stroke android:width="3dp" android:color="#FFC300"/> </shape> </item> </layer-list> And, as you can see in the attached picture below, it's working fine: [![Bordered EditText][1]][1] The thing is that the stroke is drawn inside the EditText's bounds, and I'd like it to be drawn outside (or maybe 50% inner, 50% outer), so the focused EditText stands out from the others even more, but I don't know how to do it. What can I try next? [1]: https://i.stack.imgur.com/v5dl2.png
Which values are valid for the `path text[]` parameter of the `jsonb_set` function, and how are they interpreted? Numbers vs. strings vs. something else? ```sql jsonb_set(target jsonb, path text[], /* <-- ??? */ new_value jsonb, create_missing boolean default true) ``` Especially if the queried JSON is a complex object like `{"Id": 1, "Values": {"Name": "Doe", "Online": [10,20,30]}}` and not a simple array. ``` obj = '{"Id": 1, "Values": {"Name": "Doe", "Online": [10,20,30]}}' SELECT jsonb_set(obj, '{Values,0}', '99'); -> {"Id": 1, "Values": {"0": 99, "Name": "Doe", "Online": [10, 20, 30]}} SELECT jsonb_set(obj, '{Values,Online,0}', '99'); -> {"Id": 1, "Values": {"Name": "Doe", "Online": [99, 20, 30]}} ``` And yes, I read the documentation [here][1], but it only says _"Returns target with the item designated by path"_; how is **"path"** defined, and how does it work? [1]: https://www.postgresql.org/docs/current/functions-json.html
Explain parameter 'path' (text[]) of postgres function jsonb_set
|postgresql|jsonb|