I am trying to create a regular expression for the IP subnet 192.168.224.0/22. The valid IP range is 192.168.224.1 to 192.168.227.254. I want to use this in a Sentinel KQL query. Is the pattern below correct? `192\.168\.(22[4-7]|23[0-6])\.(25[0-4]|2[0-4][0-9]|[01]?[0-9][0-9]?)`
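One way to sanity-check a candidate pattern against the subnet definition is to test it against every host in the /22 and a few addresses outside it. A sketch using Python's `re` and `ipaddress` (the pattern is the one from the question, anchored here for full-string matching):

```python
import ipaddress
import re

# Pattern from the question, anchored for a full-string match.
pattern = re.compile(
    r"^192\.168\.(22[4-7]|23[0-6])\.(25[0-4]|2[0-4][0-9]|[01]?[0-9][0-9]?)$"
)

network = ipaddress.ip_network("192.168.224.0/22")

# Hosts inside the /22 that the pattern fails to match (false negatives).
missed = [str(ip) for ip in network.hosts() if not pattern.match(str(ip))]

# A spot check outside the /22 that the pattern wrongly accepts:
# the 23[0-6] alternative matches third octets 230-236, none of which
# belong to 192.168.224.0/22 (its third-octet range is 224-227).
false_positive = pattern.match("192.168.230.1") is not None

print(missed)          # the .255 hosts, excluded by the last-octet alternation
print(false_positive)  # True
```

The check suggests two issues: valid hosts whose last octet is 255 (e.g. 192.168.224.255) are excluded by `25[0-4]`, and the `23[0-6]` alternative accepts third octets outside the /22.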
I was really excited to start using the Ant Design library; it saves me a lot of time, but there are some flaws, or I have misunderstood something. I am using TreeSelect with async loading.

```
const onLoadData = () => {
  ... // Some api call which changes state "treeData"
}

<TreeSelect
  value={value}
  onChange={onChange}
  loadData={onLoadData}
  treeData={treeData}
/>
```

My problem is that when the API call fails (e.g. the API is down and returns 502), the tree node falls into a never-ending loading cycle until the API returns "child" data for the node. I figured out that the cause is that an expanded node needs some "child" nodes. So, following the docs, I took control over which nodes should be expanded via the treeExpandedKeys prop.

```
<TreeSelect
  value={value}
  onChange={onChange}
  loadData={onLoadData}
  treeData={treeData}
  treeExpandedKeys={treeExpandedKeys}
/>
```

With that, when the API call fails, I simply close the node through the treeExpandedKeys state. But this has one big problem: once a node is expanded, it cannot be closed, because the library doesn't expose any callback that returns the node's id when the user clicks the expand arrow. Has anybody encountered the same situation? Maybe I'm just missing something.
Ant Design TreeSelect async load
|reactjs|antd|
You can fill your `LineSeries` with a JavaScript function. Small example:

```qml
ChartView {
    id: chart
    anchors.fill: parent
    property var list1: [1, 2, 3, 4, 5, 6, 7]
    property var list2: [11, 12, 13, 14, 15, 16, 17]

    Component.onCompleted: {
        for (let i = 0; i < list1.length; ++i) {
            line.append(list1[i], list2[i]);
        }
    }

    LineSeries {
        id: line
        axisX: ValueAxis { min: 0; max: 7 }
        axisY: ValueAxis { min: 0; max: 17 }
    }
}
```
I am using the code below to read a file from an AWS S3 bucket. It throws the error "Not a gzipped file (b'\xef\xbb')" even though the filename ends with .gz and the content type is application/x-gzip. How can I handle this error when uncompressing the input file?

```python
job.file_bucket = 'upload-bucket'
job.file_path = 'filepath/filename.gz'

s3client = AwsS3Service.get_client(True)
file_object = s3client.Object(job.file_bucket, job.file_path)
job.file_object = file_object.get()['Body']
content_type = file_object.get()["ContentType"]
job.cFilePath = f"{job.file_bucket}/{job.file_path}"

if content_type == 'application/x-gzip' or (content_type == 'binary/octet-stream' and job.cFilePath.endswith('.gz')):
    with gzip.open(job.file_object, 'rb') as file:
        xml_data = file.read()
```
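For context on the error message: a gzip stream always starts with the magic bytes `0x1f 0x8b`, while `b'\xef\xbb'` is the start of a UTF-8 byte-order mark, which suggests the object is plain text despite its `.gz` name and content type. A hedged sketch of sniffing the header before deciding how to read the payload (the helper name and sample data are illustrative, not from the question's `AwsS3Service`):

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"    # every gzip member starts with these two bytes
UTF8_BOM = b"\xef\xbb\xbf"  # what the error message's b'\xef\xbb' points at

def read_maybe_gzipped(raw: bytes) -> bytes:
    """Return decompressed bytes if `raw` is gzip, else the bytes as-is."""
    if raw[:2] == GZIP_MAGIC:
        return gzip.decompress(raw)
    # Not gzip: strip a UTF-8 BOM if present and treat as plain data.
    return raw.removeprefix(UTF8_BOM)

# Plain-text payload that merely *claims* to be .gz
plain = UTF8_BOM + b"<xml>hello</xml>"
print(read_maybe_gzipped(plain))   # b'<xml>hello</xml>'

# Genuinely gzipped payload
packed = gzip.compress(b"<xml>hello</xml>")
print(read_maybe_gzipped(packed))  # b'<xml>hello</xml>'
```

With S3, this would mean reading the body bytes first and branching on the header rather than trusting the content type or the file extension.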
If I were to answer, I would try to be as efficient as possible and not create a "one-liner," as it likely doesn't do what you expect. I know you mention not looping, but for those who see your question and do not mind looping, I would do the following:

```delphi
uses
  System.SysUtils, System.RegularExpressions;

function RemoveExcessiveSpaces(const Input: string): string;
begin
  Result := TRegEx.Replace(Input, '\s+', ' ').Trim;
end;
```

A regex is looping, just not by you in your code, and without regex I would do:

```delphi
function RemoveExcessiveSpaces(const Input: string): string;
var
  i, StartIndex, EndIndex: Integer;
  sb: TStringBuilder;
begin
  StartIndex := 1;
  EndIndex := Length(Input);

  // Find the first non-space character
  while (StartIndex <= EndIndex) and (Input[StartIndex] = ' ') do
    Inc(StartIndex);

  // Find the last non-space character
  while (EndIndex >= StartIndex) and (Input[EndIndex] = ' ') do
    Dec(EndIndex);

  // Exit if the string is empty or all spaces
  if StartIndex > EndIndex then
    Exit('');

  sb := TStringBuilder.Create(EndIndex - StartIndex + 1);
  try
    for i := StartIndex to EndIndex do
    begin
      // Append current character if it's not a space or if it's a single space between words
      if (Input[i] <> ' ') or ((i > StartIndex) and (Input[i - 1] <> ' ')) then
        sb.Append(Input[i]);
    end;
    Result := sb.ToString;
  finally
    sb.Free;
  end;
end;
```
We have used msal-browser for Azure AD B2C login in a React application with Vite and RTK. We want to refresh the token once the main access token expires, but inside the React application we couldn't obtain a refresh token by calling any method of @azure/msal-browser. By calling the acquireTokenSilent method we got an access token, but no refresh token. We want the refresh token inside our React application code; we can see it in the API response, but, as mentioned, that API is called by @azure/msal-browser internally, so we don't receive the refresh token on the React side. [Reference API response screenshot](https://i.stack.imgur.com/SOjty.png)

Implemented code for acquiring the token silently:

```javascript
if (event.eventType === EventType.LOGIN_SUCCESS || event.eventType === EventType.ACQUIRE_TOKEN_SUCCESS) {
  if (event?.payload) {
    if (event?.payload?.idToken) {
      instance
        .acquireTokenSilent({ scopes: [AuthConfig.Policy], account: currentAccount })
        .then(async (response) => {
          // Application level logic here
        })
    }
  }
}
```

Any help would be appreciated. Thanks
This already exists in `stringi::stri_replace_all_regex`.

```r
> data[] <- stringi::stri_replace_all_regex(data, data_to_replace,
+                                           replacement,
+                                           vectorise_all=FALSE)
> data
[[1]]
[1] "B-A"

[[2]]
[1] "D-C"

[[3]]
[1] "E-F"

[[4]]
[1] "G-H"

[[5]]
[1] "I-J"
```
The difference in execution speed is likely due to running the code in debug mode, plus the overhead of the additional debugging and diagnostic tools running in Visual Studio. While I haven't measured the difference in performance, I can say anecdotally that I do notice that running automation code (or any code) while debugging in Visual Studio goes much slower. This is expected behavior. There probably isn't a lot you can do to speed things up when debugging through Visual Studio. Try closing out of all unnecessary diagnostics and debugging tools in Visual Studio. Otherwise, sit back and have a cup of coffee or tea. If the boss walks by, tell them you are running tests. And why not? [Developers do the same thing](https://xkcd.com/303/).
It has been two days of trying to add a payment method to the Google Cloud console, but I keep getting the error below. Does anybody know how to get past it?

> This action couldn’t be completed. Try again later. [OR_BACR2_34]
I am trying to add a payment method to the Google Cloud console and keep getting the error: This action couldn’t be completed. Try again later. [OR_BACR2_34]
|google-cloud-platform|google-developers-console|google-pay|
Remove the following: `app:itemRippleColor="@color/white"`
First, some observations:

1. The string size is exponential in n, so a(n) takes 10000 bits. The largest number type in C++ is uint64_t, with 64 bits.
2. Luckily, the restriction on k means we only have to consider the first 10^15 digits, which is 50 bits and thus fits in a uint64_t.
3. a(49) is the last digit string whose size is below 10^15.

Next, the trivial case: if k <= d(n), where d(n) counts the digits of the number n, we can just read off the answer.

Then, for n > 50 we can safely assume the wanted digit resides in the first branch of a(n-1), as a(50) and up no longer fit in 10^15. We must take care not to index into the *n* prefix, so:

    solution(k, n) | n > 50 = solution(k - d(n), n-1)

Once n <= 50, k might also reach the *second* branch of a(n-1). Thus:

    solution(k, n) | n <= 50 && k - d(n) <= a(n-1) = solution(k - d(n), n-1)
                   | otherwise                     = solution(k - a(n-1) - d(n), n-1)

All that remains to make this efficient is to precalculate a(n) up to 50. Implementing this in C++ is left as an exercise for the reader.

EDIT: The split at n > 50 is entirely to fit the limits of a uint64_t number. If you have arbitrary-precision numbers at your disposal (e.g. libgmp, or built in to the language as in Python or Haskell), this is just a special case of the last clause.
### TL;DR

```
from yfinance import download

# Prepare data similar to the original
symbol_df = (
    download(tickers="AAPL", period="7d", interval="1m")
    .rename_axis(index='Date')
    .reset_index()
)

# Calculate Relative Volume Ratio
volume = symbol_df.set_index('Date')['Volume']
dts = volume.index
cum_volume = volume.groupby(dts.date, sort=False).cumsum()
prev_mean = lambda days: (
    cum_volume
    .groupby(dts.time, sort=False)
    .rolling(days, closed='left')
    .mean()
    .reset_index(0, drop=True)    # drop the level with dts.time
)
rvr = cum_volume / prev_mean(5)

# Assign the output to the initial data
symbol_df = symbol_df.join(rvr.rename('Relative volume ratio'), on='Date')
```

### Explanation

Based on the provided description, you need to perform several transformations on the aggregated data. First, cumulatively sum the data for each day. Then run an n-day window over the data grouped by time of day to calculate the average. And at the end, divide the former by the latter.

Let's say you have the following test data, where `"Date"` is a column of type `datetime`:

```
from yfinance import download

symbol_df = (
    download(tickers="AAPL", period="7d", interval="1m")
    .rename_axis(index='Date')
    .reset_index()
)
```

To calculate the _Relative Volume Ratio_ values, we will use `"Volume"` as a separate sequence with date-time stamps `"Date"` as its index:

```
volume = symbol_df.set_index('Date')['Volume']
dts = volume.index    # date-time stamps for convenient grouping
```

Let's create a sequence of cumulative volumes for each day.
For this, we group `volume` by its date (the year, month and day values with no time) and apply `cumsum` to each group (use `sort=False` in hopes of speeding up calculations):

```
cum_volume = volume.groupby(dts.date, sort=False).cumsum()
```

To calculate the mean of cumulative volumes at the same time of day over the given number of previous days, we group `cum_volume` by its time (hours and minutes with no year, month, day values), and apply rolling calculations to each group to obtain averages over windows.

_Note that here we need the source data to be sorted by date-time stamps since only business days are taken into account and we can't use a non-fixed frequency of `"10B"` as a `window` value._

To calculate means for exactly the previous days, excluding the current one, we pass `closed='left'` (see [DataFrameGroupBy.rolling docs][1] for details):

```
prev_mean = lambda days: (
    cum_volume
    .groupby(dts.time, sort=False)
    .rolling(days, closed='left')
    .mean()
    .reset_index(0, drop=True)
)
```

Now the final touch with the window of 5 days:

```
rvr = cum_volume / prev_mean(5)
```

### Comparison

Compared to [Andrei Kesely's solution][2], this one wins in speed (on an Intel Core i3-2100, for example, processing the data offered there takes over 1 minute versus 300-400 ms with the code above). The calculation result is the same for timestamps after the first 10 days. But in the beginning, when there are fewer than 10 previous days, the calculation of the mean in rolling windows is made as if there are always 10 items (missing values are set to NaN). Whereas in the case of Kesely's solution, we obtain average values only for the _available_ cumulative volumes.

[1]: https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.rolling.html
[2]: https://stackoverflow.com/a/78235015/14909621
The size of a `ZStack` is determined by the size of its contents. Since you are scaling the image to fill, the full image is (probably) going to be larger than the screen (unless the aspect ratio of the image exactly matches the aspect ratio of the screen). This is the size being adopted by the `ZStack`. The `.frame` modifier being applied to the button is then making it occupy the same size as the image. So this is why it is going off-screen. I would suggest applying the `Image` as background to the `ZStack` instead: ```swift ZStack { NavigationLink(destination: OverviewView()) { OverviewButton() } .frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .topTrailing) .padding() } .background { Image("yose") .resizable() .scaledToFill() } ```
I'm trying to scrape from Linkedin hrefs to better filter the results. However, for some reason the code will only return results 1-7 only. Results 8 or more will not return, even if explicitly stated. I have included sleep timers to help the website complete the load, but still only 1-7. Is there a bug, a security measure or something wrong with the code? ``` from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time from icecream import ic PATH = "C:\Program Files (x86)\chromedriver.exe" service = Service(executable_path=PATH) driver = webdriver.Chrome(service=service) user = "*********" pswd = "***********" driver.get("https://www.linkedin.com/jobs") driver.implicitly_wait(10) userLogin = driver.find_element('id', "session_key") pswdLogin = driver.find_element('id', "session_password") userLogin.send_keys(user) pswdLogin.send_keys(pswd) pswdLogin.send_keys(Keys.RETURN) driver.implicitly_wait(600) searchKey = driver.find_element(By.XPATH, '//input[contains(@id, "jobs-search-box-keyword-id-ember")]') searchLocation = driver.find_element(By.XPATH, '//input[contains(@id, "jobs-search-box-location-id-ember")]') searchKey.send_keys("python developer") searchLocation.send_keys("27560") time.sleep(0.25) searchLocation.send_keys(Keys.RETURN) WebDriverWait(driver, 3) time.sleep(2) jobs = [] for i in range(1, 16): ic(i) the_url = driver.find_element(by=By.XPATH, value='/html/body/div[5]/div[3]/div[4]/div/div/main/div/div[2]/div[1]/div/ul/li[' + str( i) + ']/div/div/div[1]/div[2]/div[1]/a') print(the_url.get_attribute('href')) ```
We have a use case where we need to issue one additional warning (in a pop-up) to customers right before they finalize their order, but **after** all checkout field validations (form fields filled in, payment method fields validated, etc.).

Essentially, we want the PLACE ORDER button to behave exactly as it already does (not clickable or working if anything fails validation, or if there are any errors in form fields or payment method fields). Then, when it is clickable and ready to place the order, we want to trigger our custom code (the pop-up) when the PLACE ORDER button is clicked, instead of actually processing the order. We will then move or copy the default place-order functionality into our pop-up, so it runs after the customer has read the warning.

I am currently arguing with my developer, who claims this is not possible because we use a Stripe payment plugin; she says the Stripe plugin does not return the same validation errors or hooks as regular WooCommerce, so she can't check that we've passed Stripe validation on her new custom PLACE ORDER button.

What I don't understand, and where we are having the disconnect, is this: how does the default WooCommerce PLACE ORDER button know not to function if there is an error in the Stripe fields? Shouldn't we be able to literally copy the PLACE ORDER code line by line and use it again (since it already works correctly, functioning or not depending on Stripe or other form field errors), then intercept the last step, before the actual processing of payment and completion of the order, to instead trigger our pop-up?

There has to be a way to either duplicate the existing code and functionality and alter it to our needs, or use a built-in hook/filter of some sort that validates ALL checkout fields (including those from any payment method plugin), showing our button or letting it function if all validation passes, and hiding the button or displaying an error if anything fails validation.
Thanks for any help on this!
This answer should work with a single page push-state app, or a multi-page app, or a combination of the two. *(Corrected to fix the `History.length` bug addressed in Mesqualito’s comment.)* ### How it works ### We can easily listen for new entries to the history stack. We know that for each new entry, the [specification](http://w3c.github.io/html/browsers.html#updating-the-session-history-with-the-new-page) requires the browser to: 1. “Remove all the entries in the browsing context’s session history after the current entry” 2. “Append a new entry at the end” At the moment of entry, therefore: > new entry position = position last shown + 1 The solution then is: 1. Stamp each history entry with its own position in the stack 2. Keep track in the session store of the position last shown 3. Discover the direction of travel by comparing the two ### Example code ### <pre><code>'use strict'; function reorient() { // After travelling in the history stack const positionLastShown = Number( // If none, then zero sessionStorage.getItem( 'positionLastShown' )); let position = history.state; // Absolute position in stack if( position === null ) { // Meaning a new entry on the stack position = positionLastShown + 1; // Top of stack // (1) Stamp the entry with its own position in the stack history.replaceState( position, /*no title*/'' ); } // (2) Keep track of the last position shown sessionStorage.setItem( 'positionLastShown', String(position) ); // (3) Discover the direction of travel by comparing the two const direction = Math.sign( position - positionLastShown ); console.log( 'Position ' + position ); console.log( 'Travel direction is ' + direction ); } // One of backward (-1), reload (0) and forward (1) addEventListener( 'pageshow', reorient ); addEventListener( 'popstate', reorient ); // Travel in same page</code></pre> ### Test pages ### Here are [test pages][test-pages] that show the code running. 
Included is a [local-storage variant][test-pages-local] which replaces `sessionStorage` in the code with `localStorage`, but is otherwise the same. ### Limitations ### When I first wrote this example code, it worked in all the browsers I tested. Nowadays it fails under Firefox for cross-page navigation (as opposed to in-page) unless you switch to local storage (see test pages above). This solution ignores the history entries of pages foreign to the app’s domain of origin, as though the user had never visited them. It calculates travel direction only in relation to the last domain page shown, regardless of any foreign page that the user visited in between. If you expect the user to push foreign entries onto the history stack (see Atomosk’s comment), then you might need a workaround. [test-pages]: http://reluk.ca/project/Web/test/travel_direction/ [test-pages-local]: http://reluk.ca/project/Web/test/travel_direction/local_storage/
I am creating an MLM (multi-level marketing) system in PHP with a MySQL database. I want to fetch the child user IDs based on a parent ID. I have found a solution at https://stackoverflow.com/questions/45444391/how-to-count-members-in-15-level-deep-for-each-level-in-php but am getting some errors.

I have created a class:

```
<?php
Class Team extends Database
{
    private $dbConnection;

    function __construct($db)
    {
        $this->dbConnection = $db;
    }

    public function getDownline($id, $depth=5)
    {
        $stack = array($id);
        for($i=1; $i<=$depth; $i++) {
            // create an array of levels, each holding an array of child ids for that level
            $stack[$i] = $this->getChildren($stack[$i-1]);
        }
        return $stack;
    }

    public function countLevel($level)
    {
        // expects an array of child ids
        settype($level, 'array');
        return sizeof($level);
    }

    private function getChildren($parent_ids = array())
    {
        $result = array();
        $placeholders = str_repeat('?,', count($parent_ids) - 1). '?';
        $sql="select id from users where pid in ($placeholders)";
        $stmt=$this->dbConnection->prepare($sql);
        $stmt->execute(array($parent_ids));
        while($row=$stmt->fetch()) {
            $results[] = $row->id;
        }
        return $results;
    }
}
```

I am using the class like this:

```
$id = 4;
$depth = 2;
// get the counts of his downline, only 2 deep.
$downline_array = $getTeam->getDownline($id, $depth=2);
```

I am getting these errors:

> Fatal error: Uncaught TypeError: count(): Argument #1 ($value) must be of type Countable|array, int given

and:

> Warning: PDOStatement::execute(): SQLSTATE[HY093]: Invalid parameter number: number of bound variables does not match number of tokens

I want to fetch the child user IDs down to 5 levels.
Error in reading .gz file in python using gzip
|gzip|python-3.9|
I hope you are well. My code is not working properly and I have no idea what is wrong or how to fix it. I only know that when I enter the numbers for a square or rectangle, it says my shape is a parallelogram. I have tried my code and the output is incorrect. I have also looked at YouTube videos, but nothing is helping me. Please advise as soon as possible.

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-js -->

    let side1 = prompt("Please enter the side of the shape");
    let side2 = prompt("Please enter the side of the shape");
    let side3 = prompt("Please enter the side of the shape");
    let side4 = prompt("Please enter the side of the shape");

    let corner1 = prompt("Please enter the corners of the shape");
    let corner2 = prompt("Please enter the corners of the shape");
    let corner3 = prompt("Please enter the corners of the shape");
    let corner4 = prompt("Please enter the corners of the shape");

    if (
      side1 === side2 &&
      side2 === side3 &&
      side3 === side4 &&
      ((corner1 === corner2) === corner3) === corner4
    ) {
      console.log(`The shape is a Square`);
    } else if (
      side1 === side3 &&
      side2 === side4 &&
      ((corner1 < 90 && corner3 > 90 && corner2 < 90 && corner4 > 90) ||
        (corner1 > 90 && corner3 < 90 && corner2 > 90 && corner4 < 90))
    ) {
      console.log(`The shape is a Rhombus`);
    } else if (
      side1 === side3 &&
      side2 === side4 &&
      corner1 === corner3 &&
      corner2 === corner4
    ) {
      console.log(`The shape is a Parallelogram`);
    } else if (
      side1 === side3 &&
      side2 === side4 &&
      corner1 === corner2 &&
      corner3 === corner4
    ) {
      console.log(`The shape is a Rectangle`);
    } else console.log("Your shape is weird");

<!-- end snippet -->
This should work: ```yaml components: schemas: Mammals: type: array items: anyOf: - $ref: '#/components/schemas/AquaticMammals/items' - $ref: '#/components/schemas/LandMammals/items' ``` <br/> Alternatively, you can create named schemas for the initial `anyOf` lists, this will make referencing a bit easier. ```yaml components: schemas: AquaticMammal: anyOf: - $ref: '#/components/schemas/Dolphin' - $ref: '#/components/schemas/Otter' - $ref: '#/components/schemas/Seal' - $ref: '#/components/schemas/Beaver' AquaticMammals: type: array items: $ref: '#/components/schemas/AquaticMammal' LandMammal: anyOf: - $ref: '#/components/schemas/Elephant' - $ref: '#/components/schemas/Bear' - $ref: '#/components/schemas/Monkey' - $ref: '#/components/schemas/Camel' LandMammals: type: array items: $ref: '#/components/schemas/LandMammal' Mammals: type: array items: anyOf: - $ref: '#/components/schemas/AquaticMammal' - $ref: '#/components/schemas/LandMammal' ```
I have a JSON:

```json
{
  "error" : {
    "code" : "validation_failed",
    "message" : "Validation failed",
    "fields" : {
      "date_end" : {
        "code" : "min_value",
        "message" : "Value is less than minimum allowed"
      }
    }
  }
}
```

`date_end` can also be `date_start`, `name`, or `geo`. I need to extract `message`, so I did this:

```json
[
  {
    "operation": "shift",
    "spec": {
      "error": {
        "fields": {
          "date_end|date_start|name|geo": {
            "message": "message"
          }
        }
      }
    }
  }
]
```

But I also need the name of this JSON object inside `message`, so I expect to get this result:

```json
{
  "message": "date_end|Value is less than minimum allowed"
}
```

Before `message` I need to add the name of this JSON object. How can I do this?
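Independent of the Jolt spec syntax, the target mapping can be stated precisely with a small plain-Python sketch (illustration of the expected behavior only, not a Jolt answer):

```python
import json

raw = """
{
  "error": {
    "code": "validation_failed",
    "message": "Validation failed",
    "fields": {
      "date_end": {
        "code": "min_value",
        "message": "Value is less than minimum allowed"
      }
    }
  }
}
"""

doc = json.loads(raw)

# Take the entry under "fields", whatever its name is,
# and join the name with its message using "|".
field, details = next(iter(doc["error"]["fields"].items()))
result = {"message": f"{field}|{details['message']}"}
print(result)  # {'message': 'date_end|Value is less than minimum allowed'}
```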
I'm new to Fortran, and I already have a hard time understanding the concept of broadcasting of arrays in Python, so it is even more difficult for me to implement it in Fortran.

Fortran code:

```
program test
implicit none

integer,parameter::N=6,t_size=500
real,dimension(t_size,N,2)::array1

contains

pure function f(a) result(dr)
    real::dr(N,N,2)
    real,intent(in)::a(N,2)
    real::b(N,N,2)
    real::c(N,N,2)
    b=spread(a,1,N)
    c=spread(a,2,N)
    dr=c-b
end function f

end program
```

Here N is the number of points and t_size is just the number of different time steps. I came up with this function, which uses *spread* in two different dimensions to create an NxNx2 array. I thought of using a line like `r = f(array1(1,:,:))` in order to get an array which holds all differences of the spatial coordinates of every 2 points.

I already wrote code that does this in Python (taken from a physics textbook for Python):

```
r = np.empty((t.size, N, 2))
r[0] = r0

def f(r):
    dr=r.reshape(N,1,2)-r
```

where I can later write, for example, `f(r[i])`. (In this case, I left the line r[0] = r0 because it shows that an initial condition is given; later I plan to do this in Fortran by using the random_number subroutine.)

I hope it is clear what my question is. If anyone has a better idea (which I am sure there is) to implement broadcasting in Fortran, please let me know. Please have a little patience with someone new to Fortran and also programming in general. Thanks in advance for your replies. I already tried it with the random_number subroutine and it worked, but I have no way of checking whether the output is correct.
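Since the question mentions having no way to check the output: the two `spread` calls can be mirrored in NumPy and compared against the reshape-based broadcast, which gives a reference result to test the Fortran function against. A sketch (variable names are illustrative):

```python
import numpy as np

N = 6
rng = np.random.default_rng(0)
r = rng.random((N, 2))

# Broadcasting: (N,1,2) - (N,2) -> (N,N,2), so dr[i,j] = r[i] - r[j]
dr = r.reshape(N, 1, 2) - r

# The spread-based formulation from the Fortran code, written explicitly:
# spread(a,2,N) gives c(i,j,:) = a(i,:); spread(a,1,N) gives b(i,j,:) = a(j,:)
b = np.broadcast_to(r[np.newaxis, :, :], (N, N, 2))  # b[i,j] = r[j]
c = np.broadcast_to(r[:, np.newaxis, :], (N, N, 2))  # c[i,j] = r[i]

# The two formulations agree, so dr = c - b matches the textbook version.
assert np.allclose(dr, c - b)
print(dr.shape)  # (6, 6, 2)
```

Printing a few entries of `dr` for a fixed seed and comparing them with the Fortran output (same input array) would confirm the `spread`-based function is correct.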
I'm building an API in Go using chi. I created a wrapper around my handler functions to intercept and identify errors:

```
func BuildHandler(db *sql.DB) *chi.Mux {
	// Repositories
	userRepository := user.NewRepository(db)

	// Api Handlers
	authHandler := auth.NewAuthAPIHandler(userRepository)

	r := chi.NewRouter()
	// r.Use(middleware.Recoverer)
	r.Method("POST", "/auth/login", handler.Handler(authHandler.Login()))

	return r
}
```

```
type Handler func(w http.ResponseWriter, r *http.Request) error

func (h Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if err := h(w, r); err != nil {
		fmt.Println(reflect.TypeOf(err))
		var valErrs validator.ValidationErrors
		if errors.As(err, &valErrs) {
			fmt.Println("called here")
		}
		w.Write([]byte("error"))
	}
}
```

```
func (a *AuthAPI) Login() handler.Handler {
	return func(w http.ResponseWriter, r *http.Request) error {
		var loginDTO LoginDTO
		err := json.NewDecoder(r.Body).Decode(&loginDTO)
		if err != nil {
			return err
		}
		email, err := a.service.Login(r.Context(), loginDTO)
		if err != nil {
			fmt.Println(reflect.TypeOf(err))
			return err
		}
		w.Write([]byte(email))
		return nil
	}
}
```

Both places with reflect report the type `validator.ValidationErrors`; however, `errors.As` does not work within ServeHTTP, but works within the login handler. What can it be? I would like `errors.As` to work within ServeHTTP.
Curious behavior with errors.As in go
|go|
How about:

```java
String hql = "select ln " +
        "from Document ln " +
        "left join Data cd on ln.documentId = cd.dataId and ln.status = 'Active' " +
        "where ln.status = 'Active' and cd.dataId is null";

List<Document> documents = session.createQuery(hql).list();
```
This command facilitates restoration but may be time-consuming: ``` sudo mongod --verbose --port 27017 --bind_ip 10.0.0.1 --keyFile /key_pair.pem --storageEngine wiredTiger --dbpath /db/data/ --repair --directoryperdb ``` For further details, refer to the [MongoDB documentation](https://www.mongodb.com/docs/manual/reference/program/mongod/#core-options).
I can't comment, so I'm answering here. You should refer to `url` property of your `listing.image` object rather than the object itself: `<img class="image1" src="{{ listing.image.url }}">`
I'm encountering an issue with React Router where I'm trying to implement authentication for my application using React Router's Route components. I have an Auth component that checks the user's authentication status by making an asynchronous request to my server's /users/check-auth endpoint. If the user is authenticated, it should render the protected routes using <Outlet />; otherwise, it should redirect to the login page.

This is my Auth.jsx:

```
function Auth() {
  const navigate = useNavigate();

  useEffect(() => {
    const checkAuth = async () => {
      try {
        const response = await axios.get(`${base}/users/check-auth`, {
          withCredentials: true,
        });
        if (response.status === 200) {
          navigate("/");
        }
      } catch (error) {
        console.log("Authentication error:", error);
      }
    };
    checkAuth();
  }, [navigate]);

  return <Login />;
}

export default Auth;
```

These are my Routes:

```
function App() {
  return (
    <Provider store={store}>
      <Toaster richColors position="top-center" />
      <Router>
        <Routes>
          <Route path="/signup" element={<Register />} />
          <Route path="/login" element={<Login />} />
          <Route element={<Auth />}>
            <Route path="/" element={<Home />} />
            <Route path="/user-details" element={<UserDetails />} />
          </Route>
        </Routes>
      </Router>
    </Provider>
  );
}

export default App;
```

and this code is in Login.jsx:

```
const handleSubmit = async (e) => {
  e.preventDefault();
  const userDetails = {
    email: data.email,
    password: data.password,
  };
  try {
    const response = await axios.post(`${base}/users/login`, userDetails);
    toast.success(response.data.message);
    Cookies.set("aToken", response.data.data.accessToken, {
      expires: 1,
      path: "",
    });
    navigate("/");
  } catch (error) {
    toast.error(error.response.data.message);
  }
};
```

After this didn't work, I tried a different approach using a reducer, where I set the isLoggedIn status to true after a successful login and then used it in Auth.jsx to check whether the user is authenticated or not. This was the code for it:

```
const { isLoggedIn } = useSelector((state) => state.user);
let token = Cookies.get("aToken") || isLoggedIn;

if (!token) {
  return <Login />;
}

return <Outlet />;
```

There is also a problem in the above code: it didn't work if I used only cookies or only isLoggedIn; I had to use both in order to make it work. https://daisyui.onrender.com/ was achieved using both cookies and isLoggedIn. I want to understand what the problem is. I think it has something to do with React's batch updates, or maybe something else that I don't know of. Please help me with it.
Error while reading a file whose name contains Japanese characters. Without the Japanese characters, the file can be read.

```python
import matplotlib.pyplot as plt
import cv2

image_path = r"FAX注文0004.tif"
image = cv2.imread(image_path)

plt.imshow(image)
plt.axis('off')
plt.show()
```

Note: after renaming the file from `FAX注文0004.tif` to `FAX0004.tif`, the file is easily read.

```
Error: TypeError: Image data of dtype object cannot be converted to float
```
Error while reading a file whose name contains Japanese characters. Without Japanese characters, the file can be read.

```python
import matplotlib.pyplot as plt
import cv2

image_path = r"FAX注文0004.tif"
image = cv2.imread(image_path)

plt.imshow(image)
plt.axis('off')
plt.show()
```

Note: After renaming the file from `FAX注文0004.tif` to `FAX0004.tif`, the file is easily read.

```
Error: TypeError: Image data of dtype object cannot be converted to float
```
I know this is too late to help OP, but for anyone else with this problem, you can force Qt Creator to recognize the latest Xcode by going to the Build directory and doing: rm -rf .qmake.stash rm -rf .qmake.cache This can happen when upgrading Xcode, for example. You can confirm what's going on by showing includes in your .PRO file: QMAKE_CFLAGS += -showIncludes # Windows QMAKE_CFLAGS += -H. # Mac Look for "-isysroot" in Compile Output. Or open `.qmake.stash` with a text editor.
As soon as changes are detected, you could disable scrolling and use a `DragGesture` to detect a swipe. Then you can prompt for confirmation. Something like this: ```swift struct ContentView: View { struct FormValues: Equatable { var aFlag = false var text = "" } @State private var isSheetShowing = false @State private var isAlertshowing = false @State private var formValues = FormValues() @State private var hasChanges = false @State private var dragOffset = CGFloat.zero var body: some View { Button("Show sheet") { isSheetShowing = true } .buttonStyle(.bordered) .sheet(isPresented: $isSheetShowing) { Form { Picker("On or off", selection: $formValues.aFlag) { Text("Off").tag(false) Text("On").tag(true) } .pickerStyle(.segmented) TextField("Text", text: $formValues.text) } .offset(y: dragOffset) .animation(.easeInOut, value: dragOffset) .interactiveDismissDisabled(hasChanges) .scrollDisabled(hasChanges) .onChange(of: formValues) { oldVal, newVal in hasChanges = true } .gesture( DragGesture() .onChanged { val in if hasChanges { dragOffset = val.translation.height } } .onEnded { val in dragOffset = 0 if hasChanges { isAlertshowing = true } } ) .confirmationDialog("Are you sure?", isPresented: $isAlertshowing) { Button("Yes") { isSheetShowing = false // Save changes } Button("No", role: .cancel) { // Do nothing } } message: { Text("Are you sure?") } } } } ``` This works, but there are two issues I couldn't resolve: - after an alert choice is selected, the alert disappears but then re-appears, before disappearing a second time - the drag gesture may interfere with some of the form content, for example, with a `Toggle` switch. Also, if the user needs to be able to scroll to reach the end of the form then of course this may be impacted. You may find that the first issue (with the re-appearing alert) happens with other solutions too.
You just have to unload it:

```
function unloadSplitScreen() {
    let ext = viewer.getExtension('Autodesk.SplitScreen');
    ext.unload();
}
```
Here is an approach using multiple mono-color colormaps, together with a legend: ```python import matplotlib.pyplot as plt from matplotlib.cm import ScalarMappable import seaborn as sns import pandas as pd import numpy as np fig, ax = plt.subplots(figsize=(6, 6)) cmaps = ['Blues', 'Oranges', 'Greens', 'Reds', 'Purples'] norm = plt.Normalize(vmin=0.25, vmax=1) handles = [] for rel, cmap in zip(np.unique(df_relationships.values), cmaps): sns.heatmap(df_times, mask=df_relationships.values != rel, cmap=cmap, norm=norm, annot=True, cbar=False) handles.append(plt.Rectangle((0, 0), 0, 0, color=plt.get_cmap(cmap)(0.55), lw=0, label=rel)) plt.colorbar(ScalarMappable(cmap='Greys', norm=norm), ax=ax) ax.legend(handles=handles, ncol=len(handles), bbox_to_anchor=(0, 1.01), loc='lower left', handlelength=0.7) plt.tight_layout() plt.show() ``` [![sns.heatmap with multiple color ranges][1]][1] [1]: https://i.stack.imgur.com/oUjI2.png
I'm not sure why, but I was sure that `Ctrl+F` can find all occurrences of a string in a file. Say I want to find all occurrences of variables that end with `_cfg`, or all imports ending with `time`, in a file; it turns out that `Ctrl+F` is not able to find them. It can only find full strings such as `user_cfg` or `datetime`, but not when you put a substring of the name into the search field. Is there an option I can turn on to get full substring search in a file?

```
PyCharm 2023.3.4 (Community Edition)
Build #PC-233.14475.56, built on February 26, 2024
Runtime version: 17.0.10+1-b1087.17 x86_64
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o.
macOS 14.3.1
GC: G1 Young Generation, G1 Old Generation
Memory: 4096M
Cores: 16
Metal Rendering is ON
Registry: ide.experimental.ui=true
Non-Bundled Plugins:
com.chesterccw.excelreader (2024.2.1-233)
Key Promoter X (2023.3.0)
com.dsoftware.ghtoolbar (1.17.0)
com.github.copilot (1.4.18.4775)
net.ashald.envfile (3.4.2)
```

I tried testing with multiple substrings.
PyCharm: Ctrl+F not finding substrings (only full strings)
|pycharm|jetbrains-ide|
null
I'm trying to calculate row or column means of a SpatRaster/SpatRaster stack using the R terra package in order to make a Hovmoller plot, but am finding myself stumped. I'd typically just use something like this:

```
# Test matrix
A = matrix(seq(1,6,1),4,3,byrow = TRUE)

# Usually lots of NAs in the data
A[c(2,4,5)] <- NA

# Calculate the mean of each row
m = apply(A, 1, "mean", na.rm =T)
```

How would someone calculate this using terra functions? Reading the documentation for the various app() functions, it isn't apparent how to calculate values along SpatRaster dimensions. Hopefully I'm missing something obvious.
Calculate row-wise or column-wise means using R terra functions
|r|raster|terra|
null
For anyone with these questions: the database is overwritten, and the themes and plugins are loaded by the restore process (via the UpdraftPlus plugin).
Add json object key as value
|jolt|
The class `VariableMap` is part of the following Maven artifact:

```
<groupId>org.camunda.commons</groupId>
<artifactId>camunda-commons-typed-values</artifactId>
```

So if you include that in the correct version in your Maven pom, this class should be found.
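For completeness, a dependency declaration would look roughly like the sketch below; the version is a placeholder I've added for illustration, so substitute the version matching your Camunda release:

```
<dependency>
    <groupId>org.camunda.commons</groupId>
    <artifactId>camunda-commons-typed-values</artifactId>
    <version><!-- match your Camunda version --></version>
</dependency>
```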
There has been an open issue about this problem on GitHub since August 2023. Currently the only temporary fix is to downgrade the notebook package with: `pip install notebook==6.5.6` After that it worked for me again. See the issue for reference: https://github.com/django-extensions/django-extensions/issues/1835
```
pip install --upgrade pip
pip install --upgrade setuptools
pip install pandas
```
How do I convert a 16-bit, single-channel image to a 16-bit, three-channel image in the HSV color space? I aim to maintain 16-bit depth throughout the process. First, I converted the image to an 8-bit single channel. Then I applied the COLOR_GRAY2BGR conversion, followed by the COLOR_BGR2HSV transformation. Subsequently, I divided the image into distinct regions based on intensity and assigned corresponding Hue (H), Saturation (S), and Value (V) intensity values to each channel. Finally, I converted it back to 16-bit depth and placed the first intensity as the Value (V) channel.
OpenCV: converting a 16-bit single-channel image to HSV while keeping 16-bit depth
|c++|opencv|
null
I recommend using `df.at[row, column]` ([source](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.at.html)) to iterate over all cells in a pandas DataFrame. For example:

```
for row in range(len(df)):
    print(df.at[row, 'c1'], df.at[row, 'c2'])
```

The output will be:

```lang-none
10 100
11 110
12 120
```

---

# Bonus

You can also modify the value of cells with `df.at[row, column] = newValue`.

```python
for row in range(len(df)):
    df.at[row, 'c1'] = 'data-' + str(df.at[row, 'c1'])
    print(df.at[row, 'c1'], df.at[row, 'c2'])
```

The output will be:

```lang-none
data-10 100
data-11 110
data-12 120
```
|webpack|babeljs|tailwind-css|twin.macro|rspack|
```
Widget customSnackbar({
  required String message,
  Color backgroundColor = Colors.black,
  Color borderColor = Colors.red,
  double borderWidth = 4.0,
  double cornerRadius = 12.0,
}) {
  return ClipRRect(
    borderRadius: BorderRadius.circular(cornerRadius),
    child: Container(
      padding: const EdgeInsets.symmetric(horizontal: 24.0, vertical: 12.0),
      decoration: const BoxDecoration(
        color: Colors.yellow,
        border: Border(
          bottom: BorderSide(color: Colors.red, width: 10),
          left: BorderSide(color: Colors.transparent),
          right: BorderSide(color: Colors.transparent),
          top: BorderSide(color: Colors.transparent),
        ),
      ),
      child: Text(
        message,
        style: const TextStyle(color: Colors.white),
      ),
    ),
  );
}
```
The problem with taking a screenshot is that the canvas is divided into several sections:

![enter image description here](https://i.stack.imgur.com/p7xKE.png)

I am using Angular and I don't see a way to capture it. I have tried to create a single canvas combining the existing ones, but the styles are not maintained in the new one. I tried:

```
downloadGrafo() {
  var sigmaContainer = document.getElementById('sigma-container');

  var combinedCanvas = document.createElement('canvas');
  combinedCanvas.width = sigmaContainer.offsetWidth;
  combinedCanvas.height = sigmaContainer.offsetHeight;

  var ctx = combinedCanvas.getContext('2d');

  sigmaContainer.querySelectorAll('canvas').forEach(function(canvas) {
    // Draw the content of each canvas onto the combined canvas
    ctx.drawImage(canvas, 0, 0);
  });

  var imageDataURL = combinedCanvas.toDataURL();

  var downloadLink = document.createElement('a');
  downloadLink.href = imageDataURL;
  downloadLink.download = 'sigma_image.png';
  downloadLink.click();
}
```
### Context I am trying to add a gui to the mapping script of the [concept-graphs mapping system](https://github.com/concept-graphs/concept-graphs/tree/c52043b177ee10816dfd9a1509e2ee746ae459b7). The code is ugly at the moment but you can have a look [here](https://github.com/concept-graphs/concept-graphs/blob/c52043b177ee10816dfd9a1509e2ee746ae459b7/conceptgraph/slam/gui_realtime_mapping.py). This mapping system takes as input rgb images + depth images + poses, and iteratively constructs an object based 3D map that also encodes semantic features. I want to be able to visualize the whole map building process, so I can inspect the map and debug things as it's being built. So I looked at the python examples and saw that the [multiple_windows.py](https://www.open3d.org/docs/release/python_example/visualization/index.html#multiple-windows-py) example seems to have what I want, which is to visualize the object pointclouds and the bounding boxes and so on. I can do that, it works great so far. ### The Problem The issue is that I want to also display more information. I would like to be able to see in a panel beside the map: the current rgb image, the current depth image, the current annotated image, the current frame, the current number of objects and so on. I want it to be like the [dense_slam_gui.py](https://github.com/isl-org/Open3D/blob/main/examples/python/t_reconstruction_system/dense_slam_gui.py) example in the docs. I have tried a bunch of stuff but I cannot seem to be able to add a panel to the o3d.visualization.O3DVisualizer thing, and I was also unable to add another window or something that I can update at the same time. I don't mind if its not the same window, I just want to be able to see this information displayed somewhere alongside the map, as it would help me improve the mapping system. ### My Code Here are some relevant code snippets. 
How I make the app (as you can see I have tried stuff to add another window or panel, but everything I have tried so far has resulted in some kind of error) ```python def run(self): app = o3d.visualization.gui.Application.instance app.initialize() self.main_vis = o3d.visualization.O3DVisualizer( "Open3D - Multi-Window Demo") self.main_vis.add_action("Take snapshot in new window", self.on_snapshot) self.main_vis.add_action("Pause/Resume updates", lambda vis: self.toggle_pause()) self.main_vis.set_on_close(self.on_main_window_closing) app.add_window(self.main_vis) # app.add_window(self.images_window) # # Setup the secondary window for images # # self.setup_image_display_window() # # self.image_window = app.create_window("RGB and Depth Images", 640, 480) # self.image_window = gui.Application.instance.create_window("RGB and Depth Images", 640, 480) # # self.image_window.create_window() # # self.image_window = o3d.visualization.Visualizer("RGB and Depth Images", 640, 480) # self.layout = o3d.visualization.gui.Vert(0, o3d.visualization.gui.Margins(10)) # self.image_window.add_child(self.layout) # # Create image widgets # self.rgb_widget = o3d.visualization.gui.ImageWidget() # self.depth_widget = o3d.visualization.gui.ImageWidget() # # Add image widgets to the layout # self.layout.add_child(self.rgb_widget) # self.layout.add_child(self.depth_widget) # app.add_window(self.image_window) self.snapshot_pos = (self.main_vis.os_frame.x, self.main_vis.os_frame.y) threading.Thread(target=self.update_thread).start() app.run() ``` Here is how I do the update thread: ```python def update_thread(self): # This is NOT the UI thread, need to call post_to_main_thread() to update # the scene or any part of the UI. ... lots of code, basically the whole mapping script ... 
def my_update_cloud(): # Remove previous objects for obj_name in self.prev_obj_names: self.main_vis.remove_geometry(obj_name) self.prev_obj_names = [] # Remove previous bounding boxes for bbox_name in self.prev_bbox_names: self.main_vis.remove_geometry(bbox_name) self.prev_bbox_names = [] # Add the new objects and bounding boxes for obj_num, obj in enumerate(self.objects): obj_label = f"{obj['curr_obj_num']}_{obj['class_name']}" obj_name = f"obj_{obj_label}" bbox_name = f"bbox_{obj_label}" self.prev_obj_names.append(obj_name) self.main_vis.add_geometry(obj_name, obj['pcd']) self.prev_bbox_names.append(bbox_name) self.main_vis.add_geometry(bbox_name, obj['bbox'] ) if self.is_done: # might have changed while sleeping break o3d.visualization.gui.Application.instance.post_to_main_thread( self.main_vis, my_update_cloud) ``` ### What I want to do For starters, I just want to be able to update a display with the current rgb and depth image and maybe the current frame number?. I'm imagining something like: ```python def update_images(self): # Convert numpy images to Open3D images and update widgets o3d_rgb_image = o3d.geometry.Image(self.curr_image_rgb) o3d_depth_image = o3d.geometry.Image((self.curr_depth_array / self.curr_depth_array.max() * 255).astype(np.uint8)) # Example normalization self.rgb_widget.update_image(o3d_rgb_image) self.depth_widget.update_image(o3d_depth_image) # and then in the main update thread loop I would do self.curr_image_rgb = image_rgb self.curr_depth_array = depth_array o3d.visualization.gui.Application.instance.post_to_main_thread( self.image_window, self.update_images()) ``` I hope it makes sense, what I'm trying to achieve here. If anyone could point me in the right direction, this would be much appreciated. I have googled and asked chatGPT a lot to no avail. Thank you in advance for your time and help.
How to add another panel or window to the open3d.visualization.O3DVisualizer class? (In python open3d)
|python|3d|point-clouds|open3d|
null
I use vanilla-extract in a Next.js project. When I configure vanilla-extract in Storybook, I get this error:

```
Cannot find module '@vanilla-extract/css/recipe'
    at webpackMissingModule (vendors-node_modules_vanilla-extract_recipes_dist_vanilla-extract-recipes_esm_js-node_modules-2b7443.iframe.bundle.js:1960:50)
    at ./node_modules/@vanilla-extract/recipes/dist/vanilla-extract-recipes.esm.js (vendors-node_modules_vanilla-extract_recipes_dist_vanilla-extract-recipes_esm_js-node_modules-2b7443.iframe.bundle.js:1960:152)
    at options.factory (runtime~main.iframe.bundle.js:655:31)
    at __webpack_require__ (runtime~main.iframe.bundle.js:28:33)
    at fn (runtime~main.iframe.bundle.js:313:21)
    at ./components/base/atom/button/button.css.ts (component-base-atom-button-stories.iframe.bundle.js:140:82)
    at options.factory (runtime~main.iframe.bundle.js:655:31)
    at __webpack_require__ (runtime~main.iframe.bundle.js:28:33)
    at fn (runtime~main.iframe.bundle.js:313:21)
    at ./components/base/atom/button/index.tsx (component-base-atom-button-stories.iframe.bundle.js:259:69)
```

```
Can't resolve '@vanilla-extract/css/recipe' in '(project root)/node_modules/@vanilla-extract/recipes/dist'
```

I think there is something special about vanilla-extract.
This is my main.ts:

```
import type { StorybookConfig } from "@storybook/nextjs";
import * as path from "path";
import { TsconfigPathsPlugin } from "tsconfig-paths-webpack-plugin";

const config: StorybookConfig = {
  stories: [
    "../stories/**/*.mdx",
    "../stories/**/*.stories.@(js|jsx|mjs|ts|tsx)",
  ],
  addons: [
    "@storybook/addon-onboarding",
    "@storybook/addon-links",
    "@storybook/addon-essentials",
    "@chromatic-com/storybook",
    "@storybook/addon-interactions",
    "@storybook/addon-styling-webpack",
  ],
  framework: {
    name: "@storybook/nextjs",
    options: {},
  },
  docs: {
    autodocs: "tag",
  },
  staticDirs: ["../public"],
  features: {
    // for react server component
    experimentalRSC: true,
  },
  webpackFinal: (config) => {
    // for webpack plugin
    if (config.resolve) {
      config.resolve.plugins = config.resolve.plugins || [];
      // for alias @
      config.resolve.plugins.push(
        new TsconfigPathsPlugin({
          configFile: path.resolve(__dirname, "../tsconfig.json"),
        })
      );
    }
    if (config.resolve?.alias) {
      config.resolve.alias["@vanilla-extract/css"] = require.resolve(
        "@vanilla-extract/css"
      );
    }
    if (config.module?.rules) {
      config.module.rules.push({
        test: /\.css$/i,
        use: ["@vanilla-extract/webpack-plugin/loader"],
      });
    }
    return config;
  },
};

export default config;
```

I use these packages:

```
"@vanilla-extract/css": "^1.14.1",
"@vanilla-extract/recipes": "^0.5.2",
"@vanilla-extract/sprinkles": "^1.6.1",
"next": "14.1.4",
---dev---
"mini-css-extract-plugin": "^2.8.1",
"storybook": "^8.0.4",
"@vanilla-extract/next-plugin": "^2.3.7",
"@vanilla-extract/webpack-plugin": "^2.3.7",
"@storybook/addon-essentials": "^8.0.4",
```

I think if I manipulate webpack directly, I can work around this issue. Unfortunately, I am not familiar with webpack, so I have a hard time writing webpackFinal (reference: https://vanilla-extract.style/documentation/integrations/webpack/).
React Router: Authenticated Route Redirection Issue
|reactjs|authentication|react-router-dom|
null
You need to define the client details correctly, both in the Keycloak client and in the Angular app.

For my app the redirect URI is kept as `/`; it then gets redirected by my routing strategy. You'll have to update it as per your strategy.

You should define it in this format:

[![enter image description here][1]][1]

For the `silentCheckSsoRedirectUri` I've used the identity provider's redirect URI, which you can get from (side nav) Identity Providers -> "your provider" -> Redirect URI.

Also, for the Keycloak init you'll have to use `valid_redirect_url`, which as per my config is `http://localhost:4200/`:

```
initOptions: {
  flow: 'standard',
  onLoad: 'check-sso',
  redirectUri: valid_redirect_url,
  silentCheckSsoRedirectUri: valid_sso_redirect_url
}
```

[1]: https://i.stack.imgur.com/H8P7B.png
I have the regex below:

```js
const stringPattern = "[a-zA-Z0-9]*";
const stringRegex = new RegExp(stringPattern, "g");
const str = "there are 33 states and 7 union territory in india.";
const matches = str.match(stringRegex);
console.log({matches});
```

Why does the result also contain these whitespace/empty matches, when I have not used `\s` or ` ` in the regex? And how do we exclude whitespace?

```js
[ "there", "", "are", "", "33", "", "states", "", "and", "", "7", "", "union", "", "territory", "", "in", "", "india", "", "" ]
```
Why does this regex match whitespace too?
|javascript|regex|
How to get a 5-level hierarchy of users from a database using PHP and MySQL
null
Currently, there is no API to remove a property from a collection, but you can replace the object as explained in [hsm207's answer][1]. Here is how to do it with the v4 Python API:

```python
my_collections = client.collections.get("Product")  # Replace with your collection name
prop_to_remove = "description"  # Replace with the property name you want to remove

for obj in my_collections.iterator():
    if prop_to_remove in obj.properties:
        del obj.properties[prop_to_remove]
        my_collections.data.replace(uuid=obj.uuid, properties=obj.properties)
```

[1]: https://stackoverflow.com/a/76177363/8547757
I have a Spring Batch application which successfully runs Spring Batch jobs, but I get an exception when declaring multiple DirectChannels. The exception happens when I start the "firstJob". Here is the exception:

```
Caused by: org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'IFRSGoodBookService-1.secondReplies'.
    at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:76)
    at org.springframework.integration.channel.AbstractMessageChannel.sendInternal(AbstractMessageChannel.java:375)
    at org.springframework.integration.channel.AbstractMessageChannel.sendWithMetrics(AbstractMessageChannel.java:346)
    at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:326)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
    at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109)
    at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:499)
    at org.springframework.integration.handler.AbstractMessageProducingHandler.doProduceOutput(AbstractMessageProducingHandler.java:354)
    at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:283)
    at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:247)
    at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:142)
    at org.springframework.integration.handler.AbstractMessageHandler.doHandleMessage(AbstractMessageHandler.java:105)
    at org.springframework.integration.handler.AbstractMessageHandler.handleWithMetrics(AbstractMessageHandler.java:90)
    at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:70)
    at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:115)
    at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:133)
    at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106)
    at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:72)
    at org.springframework.integration.channel.AbstractMessageChannel.sendInternal(AbstractMessageChannel.java:375)
    at org.springframework.integration.channel.AbstractMessageChannel.sendWithMetrics(AbstractMessageChannel.java:346)
    at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:326)
    at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:299)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
    at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109)
    at org.springframework.integration.endpoint.MessageProducerSupport.lambda$sendMessage$1(MessageProducerSupport.java:262)
    at io.micrometer.observation.Observation.lambda$observe$0(Observation.java:493)
    at io.micrometer.observation.Observation.observeWithContext(Observation.java:603)
    at io.micrometer.observation.Observation.observe(Observation.java:492)
    at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:262)
    at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter.access$200(AmqpInboundChannelAdapter.java:69)
    at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter$Listener.createAndSend(AmqpInboundChannelAdapter.java:397)
    at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter$Listener.onMessage(AmqpInboundChannelAdapter.java:360)
    at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:1663)
    ... 14 common frames omitted
Caused by: org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers
    at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:139)
    at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106)
    at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:72)
    ...
50 common frames omitted
```

This is my FlowConfig class, which declares the channels:

```
@Configuration
@RequiredArgsConstructor
class FlowConfig {

    private final QueueConfig queueConfig;

    @Bean
    public DirectChannel firstRequests() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel firstReplies() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel secondRequests() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel secondReplies() {
        return new DirectChannel();
    }

    @Bean("secondManagerInBoundFlow")
    @Profile("manager")
    public IntegrationFlow secondManagerInBoundFlow() {
        return queueConfig.getInboundAdapter(true, secondReplies());
    }

    @Bean("secondWorkerInBoundFlow")
    @Profile("worker")
    public IntegrationFlow secondInBoundFlow() {
        return queueConfig.getInboundAdapter(false, secondRequests());
    }

    @Bean("secondManagerOutboundFlow")
    @Profile("manager")
    public IntegrationFlow secondManagerOutboundFlow() {
        return queueConfig.getOutboundAdapter(true, secondRequests());
    }

    @Bean("secondWorkerOutboundFlow")
    @Profile("worker")
    public IntegrationFlow secondWorkerOutboundFlow() {
        return queueConfig.getOutboundAdapter(false, secondReplies());
    }

    @Bean("firstManagerInBoundFlow")
    @Profile("manager")
    public IntegrationFlow firstManagerInBoundFlow() {
        return queueConfig.getInboundAdapter(true, firstReplies());
    }

    @Bean("firstWorkerInBoundFlow")
    @Profile("worker")
    public IntegrationFlow firstWorkerInBoundFlow() {
        return queueConfig.getInboundAdapter(false, firstRequests());
    }

    @Bean("firstManagerOutboundFlow")
    @Profile("manager")
    public IntegrationFlow firstManagerOutboundFlow() {
        return queueConfig.getOutboundAdapter(true, firstRequests());
    }

    @Bean("firstWorkerOutboundFlow")
    @Profile("worker")
    public IntegrationFlow firstWorkerOutboundFlow() {
        return queueConfig.getOutboundAdapter(false, firstReplies());
    }
}
```

This is the implementation of the inbound and outbound adapters:

```
@Configuration
@ConditionalOnProperty("spring.rabbitmq.enabled")
@RequiredArgsConstructor
public class RabbitMqQueueConfig implements QueueConfig {

    private final ConnectionFactory connectionFactory;
    private final RabbitTemplate defaultRabbitTemplate;
    private final QueueConstants queueConstants;

    @Override
    public IntegrationFlow getInboundAdapter(boolean isManager, DirectChannel channel) {
        String queueName = isManager
                ? queueConstants.getConstantWithPrefix(QueueConstants.JOB_REPLIES_QUEUE)
                : queueConstants.getConstantWithPrefix(QueueConstants.JOB_REQUESTS_QUEUE);
        return IntegrationFlow.from(Amqp.inboundAdapter(connectionFactory, queueName)).channel(channel).get();
    }

    @Override
    public IntegrationFlow getOutboundAdapter(boolean isManager, DirectChannel channel) {
        String queueName = isManager
                ? queueConstants.getConstantWithPrefix(QueueConstants.JOB_REQUESTS_QUEUE)
                : queueConstants.getConstantWithPrefix(QueueConstants.JOB_REPLIES_QUEUE);
        AmqpOutboundChannelAdapterSpec messageHandlerSpec = Amqp.outboundAdapter(defaultRabbitTemplate).routingKey(queueName);
        return IntegrationFlow.from(channel).handle(messageHandlerSpec).get();
    }
}
```

This is my JobManagerConfiguration:

```
@Configuration
@Profile("manager")
@EnableBatchIntegration
@AllArgsConstructor
public class JobManagerPartitionConfiguration {

    private final JobRepository jobRepository;
    private final RemotePartitioningManagerStepBuilderFactory managerStepBuilderFactory;
    private PlatformTransactionManager transactionManager;
    private DeleteDataTasklet deleteDataTasklet;
    private InstDataLoader instDataLoader;
    private final ApplicationProperties appProperties;
    private final DirectChannel firstRequests;
    private final DirectChannel firstReplies;
    private final DirectChannel secondRequests;
    private final DirectChannel secondReplies;
    private final ManagerJobListener managerJobListener;
    private final IdBoundaryPartitioner idBoundaryPartitioner;
    private final ContextService contextService;

    @Bean
    public Step firstJobManagerStep() {
        return managerStepBuilderFactory.get("firstJobManagerStep")
            .partitioner("remotefirstJobAsStep", idBoundaryPartitioner)
            .gridSize(appProperties.getJobParameters().getGridSize())
            .outputChannel(firstRequests)
            .inputChannel(firstReplies)
            .listener(new SyncStepContextWithJob())
            .build();
    }

    @Bean
    public Step secondJobManagerStep() {
        return managerStepBuilderFactory.get("secondJobManagerStep")
            .partitioner("remoteSecondJobAsStep", idBoundaryPartitioner)
            .gridSize(appProperties.getJobParameters().getGridSize())
            .outputChannel(secondRequests)
            .inputChannel(secondReplies)
            .listener(new SyncStepContextWithJob())
            .build();
    }

    @Bean
    public Job secondJob(Step secondJobManagerStep) {
        return new JobBuilder("secondJob", jobRepository).incrementer(new RunIdIncrementer())
            .start(instDataLoaderStep())
            .next(deleteTable())
            .next(secondJobManagerStep)
            .listener(contextService)
            .listener(managerJobListener)
            .build();
    }

    @Bean
    public Job firstJob(Step firstJobManagerStep) {
        return new JobBuilder("firstJob", jobRepository).incrementer(new RunIdIncrementer())
            .start(instDataLoaderStep())
            .next(deleteTable())
            .next(firstJobManagerStep)
            .listener(contextService)
            .listener(managerJobListener)
            .build();
    }
```

This is my JobWorkerConfiguration:

```
@Configuration
@Profile("worker")
@EnableBatchIntegration
@AllArgsConstructor
@Slf4j
public class JobWorkerPartitionConfiguration {

    private RemotePartitioningWorkerStepBuilderFactory workerStepBuilderFactory;
    private JobRepository jobRepository;
    private JobCache<I9Data[]> cache;
    private CacheJobListener<I9Data[]> jobListener;
    private WorkerJobListener workerJobListener;
    private StepMonitoringListener monitoringListener;
    private ApplicationProperties appProperties;
    private InstDataLoader instDataLoader;
    private ContractLoader contractLoader;
    private CacheItemWriter cacheItemWriter;
    private PlatformTransactionManager transactionManager;
    private InstDataJpaCache instDataJpaCache;
    private ContextService contextService;
    private MonitorService monitorService;

    @Bean
    @StepScope
    Job
remoteFirstJob(NamedParameterJdbcTemplate jdbcTemplate) {
        return new JobBuilder("remoteFirstJob", jobRepository).start(instDataLoaderStep())
            .next(contractLoaderTaskletStep())
            .next(calculateStep())
            .listener(contextService)
            .listener(jobListener)
            .listener(workerJobListener)
            .listener(monitoringListener)
            // .listener(new SyncStepContextWithJob(this.monitorService))
            .build();
    }

    @Bean
    public Step remoteFirstJobAsStep(
            DirectChannel firstRequests,
            DirectChannel firstReplies
    ) {
        return workerStepBuilderFactory.get("remoteFirstJobAsStep")
            .inputChannel(firstRequests)
            .outputChannel(firstReplies)
            .parametersExtractor(remoteJobParametersExtractor())
            .listener(new SyncStepContextWithJob())
            .build();
    }
```

The question is why I get this kind of exception when starting the first job. It should not care about secondReplies: as shown in the JobManagerPartitionConfiguration class, the firstJob is defined with inputChannel = "firstReplies" and outputChannel = "firstRequests", so it should use these channels, not the second channel configuration.
Starting first job causes "Dispatcher has no subscribers for channel" exception when multiple DirectChannels are declared
|java|spring|spring-batch|spring-integration|
Try the regular expression below:

```
^\(\d{3}\)\s?\d{3}-\d{4}
```

I added the `\s?` to the regular expression. It checks for an optional space.
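To illustrate the effect of the optional `\s?` (the sample phone numbers below are my own, not from the question), the pattern now matches both with and without a space after the area code:

```javascript
const pattern = /^\(\d{3}\)\s?\d{3}-\d{4}/;

// With a space after the area code
console.log(pattern.test("(123) 456-7890")); // true

// Without a space (this is what \s? makes optional)
console.log(pattern.test("(123)456-7890")); // true

// Missing parentheses does not match
console.log(pattern.test("123 456-7890")); // false
```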
WooCommerce - Place Order - Intercept functionality after validation but before processing order?
|woocommerce|
I have a two-column CSV file. Column 1 is string values, column 2 is integer values. If a term is found in a string in column 1, I want to return the corresponding value in column 2.

```lang-none
Col1   Col2
Green  5
Red    6
```

If Col1 contains "ed", return the corresponding row value in Col2, in this case 6. Thanks.

```
import pandas as pd

# Read the CSV file into a pandas DataFrame
file_name = input("Enter file name: ")
df = pd.read_csv(file_name)

string1 = input("Enter search term: ")

# check if each element in the DataFrame contains the partial string
matches = df.apply(lambda col: col.astype(str).str.contains(string1, case=False))

# get the row and column indices where the partial string matches
rows, cols = matches.values.nonzero()
for row, col in zip(rows, cols):
    print(f"Match found at Row: ", row)
```
Here is the final result in PowerShell, for future visitors, to clean the PostgreSQL archive location if you don't have replicas. Hope it helps someone.

```
Clear-Host
$archdir='\\uncpath\pgsql_arch'
$filename=Get-ChildItem -Path $archdir -filter *.backup | Sort-Object -Property LastWriteTime -Descending | Where-Object {$_.lastwritetime -gt (get-date).addDays(-31) -and -not $_.PSIsContainer} | Select -ExpandProperty Name -last 1
Write-Output "$archdir $filename"
$script="$archdir $filename"
Start-Process -FilePath "Your_PostgreSQL_bin_path\pg_archivecleanup.exe" -ArgumentList "-d $script" -Wait
```
I use a CloudFormation template from another AWS account, but today I get this error:

```
Resource handler returned message: "Invalid request provided: AWS::Cognito::UserPoolUser" (RequestToken: d55c7a26-473f-3eda-b1af-fdc97afc5364, HandlerErrorCode: InvalidRequest)
```

My CloudFormation template:

```
CGNAdminUser:
  Type: AWS::Cognito::UserPoolUser
  Properties:
    DesiredDeliveryMediums:
      - EMAIL
    UserAttributes:
      - Name: email
        Value: !Ref Email
    Username: !Ref Username
    UserPoolId: !Ref CGNUserPool
```

I tried adding this, but it does not work:

```
DependsOn: CGNUserPool
```
I have a development cluster in which I have two namespaces, ns-a and ns-b. I'd like to create a backup of only ns-b and restore a copy of it into a whole new cluster. How can I achieve this? Also, if there are any tutorials or documents on this, please share.
null
Is it recommended to customize the Keycloak database to add some tables and columns, or is there another method to do that? I don't want to use the attributes table to add data about a user. I want to add extra fields to the user table, like address, phone number, etc.
Customize user table used by Keycloak