Check if it happens the same way in all browsers, or only in one. In my case, I got the `Server Request Interrupted` message only when using **Chrome**, but not when using Firefox, Safari, or Edge. It turned out that Chrome sends `zstd` in the `Accept-Encoding` header, whereas the other browsers don't. I ended up removing `zstd` on the server.

```
Chrome request header
Accept-Encoding: gzip, deflate, br, zstd

Firefox request header
Accept-Encoding: gzip, deflate, br
```

I use Django on Heroku, and remove `zstd` with this middleware:

```python
class RemoveZstdEncodingMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        ae = request.META.get('HTTP_ACCEPT_ENCODING', '')
        if 'zstd' in ae:
            new_ae = ', '.join(enc for enc in ae.split(',') if 'zstd' not in enc)
            request.META['HTTP_ACCEPT_ENCODING'] = new_ae
        response = self.get_response(request)
        return response
```
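The header filtering can be sketched in isolation. One small detail worth noting (a minor refinement, not part of the original middleware): `split(',')` leaves a leading space on each token, so stripping the tokens before rejoining keeps the rebuilt header tidy:

```python
ae = 'gzip, deflate, br, zstd'

# strip each token so the rejoined header has single spaces between entries
new_ae = ', '.join(enc.strip() for enc in ae.split(',') if 'zstd' not in enc)
print(new_ae)
```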
This is my code:

```
console.log('%c Hello, World!', 'background:linear-gradient(45deg,#FF0099,#493240);padding:5px;font-weight:900;border-radius:5px;')
```

The `border-radius` and `padding-block` properties are not applied in Firefox, as the image shows:

[![enter image description here][1]][1]

but it's working fine in Chromium:

[![enter image description here][2]][2]

Do I have to enable something, or is Firefox just limited?

[1]: https://i.stack.imgur.com/ormx7.png
[2]: https://i.stack.imgur.com/HNoan.png
Some CSS properties work in Chrome's console but not in Firefox's
|css|firefox|console|
I know there have been a few posts on this, but my case is a little different and I wanted to get some help. I have a pandas dataframe `symbol_df` with 1-minute bars in the below format for each stock symbol:

```
id      Symbol_id  Date                 Open     High     Low      Close    Volume
1       1          2023-12-13 09:15:00  4730.95  4744.00  4713.95  4696.40  2300
2       1          2023-12-13 09:16:00  4713.20  4723.70  4717.85  4702.55  1522
3       1          2023-12-13 09:17:00  4716.40  4718.55  4701.00  4701.00  909
4       1          2023-12-13 09:18:00  4700.15  4702.80  4696.70  4696.00  715
5       1          2023-12-13 09:19:00  4696.70  4709.90  4702.00  4696.10  895
...     ...        ...                  ...      ...      ...      ...      ...
108001  1          2024-03-27 13:44:00  6289.95  6291.95  6289.00  6287.55  989
108002  1          2024-03-27 13:45:00  6288.95  6290.85  6289.00  6287.75  286
108003  1          2024-03-27 13:46:00  6291.25  6293.60  6292.05  6289.10  1433
108004  1          2024-03-27 13:47:00  6295.00  6299.00  6293.20  6293.15  2702
108005  1          2024-03-27 13:48:00  6292.05  6296.55  6291.95  6291.95  983
```

I would like to calculate the "Relative Volume Ratio" indicator and add the calculated value to `symbol_df` as a new column on a rolling basis. The indicator is calculated as follows: today's volume so far is compared with the mean volume of the last 10 days over the same period. To get the ratio, we simply divide "today's volume so far" by "the mean volume of the last 10 days over the same period".

For example, suppose the current bar time is now 13:48:

- `cumulativeVolumeOfToday` = the `Volume` of all 1-minute bars between 00:00 and 13:48 today, added up.
- `averageVolumeOfPrevious10DaysOfSamePeriod` = the average cumulative volume over the same period (00:00 - 13:48) of the last 10 days.
- `relativeVolumeRatio = cumulativeVolumeOfToday / averageVolumeOfPrevious10DaysOfSamePeriod`

Add this value as a new column to the dataframe.
Sample data download for the test case:

```python
import yfinance as yf  # pip install yfinance
from datetime import datetime
import pandas as pd

symbol_df = yf.download(tickers="AAPL", period="7d", interval="1m")["Volume"]
symbol_df = symbol_df.reset_index(inplace=False)
#symbol_df['Datetime'] = symbol_df['Datetime'].dt.strftime('%Y-%m-%d %H:%M')
symbol_df = symbol_df.rename(columns={'Datetime': 'Date'})
# We can only download 7 days of sample data, so use a 5-day mean for the calculations
```

How can I do this in Pandas?
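A minimal sketch of one way this could be done, assuming every trading day has bars at the same clock times, and using a tiny synthetic frame instead of the yfinance download (the helper name and column names are illustrative, not from the question):

```python
import pandas as pd

def add_relative_volume_ratio(df, n_days=10):
    """Hypothetical helper: today's cumulative volume divided by the mean
    cumulative volume at the same time-of-day over the previous n_days."""
    df = df.copy()
    day = df['Date'].dt.date
    tod = df['Date'].dt.time
    # cumulative volume within each day
    df['cum_vol'] = df.groupby(day)['Volume'].cumsum()
    # mean of the previous n_days cumulative volumes at the same time-of-day
    df['avg_cum_vol'] = (
        df.groupby(tod)['cum_vol']
          .transform(lambda s: s.shift(1).rolling(n_days, min_periods=1).mean())
    )
    df['rel_vol_ratio'] = df['cum_vol'] / df['avg_cum_vol']
    return df

# three days, two bars per day, volume doubling then tripling
dates = pd.to_datetime([
    '2024-01-01 09:15', '2024-01-01 09:16',
    '2024-01-02 09:15', '2024-01-02 09:16',
    '2024-01-03 09:15', '2024-01-03 09:16',
])
df = pd.DataFrame({'Date': dates, 'Volume': [10, 10, 20, 20, 30, 30]})
out = add_relative_volume_ratio(df, n_days=10)
print(out[['Date', 'cum_vol', 'avg_cum_vol', 'rel_vol_ratio']])
```

Grouping by time-of-day and shifting by one row per group lines each bar up against the same clock time on previous days; `min_periods=1` lets the ratio appear before a full 10-day history exists.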
How can I run a Python model in Snowflake?
|python|snowflake-cloud-data-platform|
A trick:

    create proc MyProc
        @MySub varchar(20) = ''
    as
    if @MySub = 'MySub' goto MySub

    print 'MyProc'
    exec MyProc MySub
    return

    MySub:
    print 'MySub'
    return
After setting up the GitHub path in the Jenkins pipeline definition SCM (using a GitHub webhook), can I set the Jenkinsfile script path below to be in another repository on GitHub?

[enter image description here][1]

Thanks.

For example:

- Webhook repository: https://github.example.com/example/test.git
- Jenkinsfile repository: https://github.example.com/jenkinsfile/jenkinsfile-test.git

[1]: https://i.stack.imgur.com/lmM9t.png
In the "[simple demonstration of coregionalisation][1]" example in GPflow, a VGP is used with Gaussian likelihoods for each GP. Is this approach equivalent to an exact regression model? I was trying to reimplement the example using GPR but have been unsuccessful so far, so I am not sure how to verify it otherwise. I'd appreciate any hint.

[1]: https://gpflow.github.io/GPflow/2.9.1/notebooks/advanced/coregionalisation.html
VGP in "simple demonstration of coregionalisation" with Gaussian likelihood
|gaussian-process|gpflow|
I am trying to build a Keras model for my reinforcement learning algorithm, but I am encountering an error about an invalid output shape. Here is the code for my `build_model` function:

```
def build_model(observation_space, action_space):
    # Input for possible_position_map matrix
    possible_position_map_input = Input(shape=observation_space['possible_position_map'].shape, name='possible_position_map_input')
    # Reshape input data to fit Conv2D format
    possible_position_map_reshaped = Reshape((100, 1, 1))(possible_position_map_input)
    # Convolutional layers for processing possible_position_map matrix
    possible_position_map_conv = Conv2D(filters=16, kernel_size=(3, 1), activation='relu')(possible_position_map_reshaped)
    possible_position_map_flatten = Flatten()(possible_position_map_conv)

    # Input for height_map matrix
    height_map_input = Input(shape=observation_space['height_map'].shape, name='height_map_input')
    # Reshape input data to fit Conv2D format
    height_map_input_reshaped = Reshape((100, 1, 1))(height_map_input)
    # Convolutional layers for processing height_map matrix
    height_map_conv = Conv2D(filters=16, kernel_size=(3, 1), activation='relu')(height_map_input_reshaped)
    height_map_flatten = Flatten()(height_map_conv)

    # Input for currentBox vector
    current_box_input = Input(shape=observation_space['currentBox'].shape, name='current_box_input')
    current_box_flatten = Flatten()(current_box_input)

    # Merge all branches
    merged = Concatenate()([possible_position_map_flatten, height_map_flatten, current_box_flatten])

    # Dense layers for action decisions
    dense1 = Dense(128, activation='relu')(merged)
    dense2 = Dense(64, activation='relu')(dense1)

    # Output layer for discrete actions
    output = Dense(action_space.n, activation='softmax')(dense2)

    model = tf.keras.Model(inputs=[possible_position_map_input, height_map_input, current_box_input], outputs=output)
    return model
```

Invocation:

```
env = ENV.env.BinEnv(False, False)
model = ENV.Model.build_model(env.observation_space, env.action_space)
model.summary()

policy = BoltzmannQPolicy()
memory = SequentialMemory(limit=50000, window_length=1)
dqn = DQNAgent(model=model, memory=memory, policy=policy,
               nb_actions=env.action_space.n, nb_steps_warmup=10,
               target_model_update=1e-2)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
```

However, I'm getting the following error:

```
File "C:\Users\konra\.conda\envs\magisterka\Lib\site-packages\rl\agents\dqn.py", line 107, in __init__
    raise ValueError(f'Model output "{model.output}" has invalid shape. DQN expects a model that has one dimension for each action, in this case {self.nb_actions}.')
ValueError: Model output "Tensor("dense_2/Softmax:0", shape=(?, 100), dtype=float32)" has invalid shape. DQN expects a model that has one dimension for each action, in this case 100.
```

Can someone help me identify the cause of this error and how to fix it?

```
Model: "model"
__________________________________________________________________________________________________
 Layer (type)                      Output Shape        Param #   Connected to
==================================================================================================
 possible_position_map_input (InputLayer)  [(None, 100)]       0  []
 height_map_input (InputLayer)             [(None, 100)]       0  []
 reshape (Reshape)                 (None, 100, 1, 1)   0         ['possible_position_map_input[0][0]']
 reshape_1 (Reshape)               (None, 100, 1, 1)   0         ['height_map_input[0][0]']
 conv2d (Conv2D)                   (None, 98, 1, 16)   64        ['reshape[0][0]']
 conv2d_1 (Conv2D)                 (None, 98, 1, 16)   64        ['reshape_1[0][0]']
 current_box_input (InputLayer)    [(None, 3)]         0         []
 flatten (Flatten)                 (None, 1568)        0         ['conv2d[0][0]']
 flatten_1 (Flatten)               (None, 1568)        0         ['conv2d_1[0][0]']
 flatten_2 (Flatten)               (None, 3)           0         ['current_box_input[0][0]']
 concatenate (Concatenate)         (None, 3139)        0         ['flatten[0][0]', 'flatten_1[0][0]', 'flatten_2[0][0]']
 dense (Dense)                     (None, 128)         401920    ['concatenate[0][0]']
 dense_1 (Dense)                   (None, 64)          8256      ['dense[0][0]']
 dense_2 (Dense)                   (None, 100)         6500      ['dense_1[0][0]']
==================================================================================================
```

I've tried verifying that the output shape of my model matches the number of actions in my environment (`env.action_space.n`), and it seems to be correct. I've also double-checked the input shapes and ensured that all layers are connected properly.

Entire traceback:

```
Model: "model"
__________________________________________________________________________________________________
 Layer (type)                      Output Shape        Param #   Connected to
==================================================================================================
 x (InputLayer)                    [(None, 100)]       0         []
 fx (InputLayer)                   [(None, 3)]         0         []
 theta (InputLayer)                [(None, 100)]       0         []
 dense (Dense)                     (None, 64)          6464      ['x[0][0]']
 dense_1 (Dense)                   (None, 64)          256       ['fx[0][0]']
 dense_2 (Dense)                   (None, 64)          6464      ['theta[0][0]']
 concatenate (Concatenate)         (None, 192)         0         ['dense[0][0]', 'dense_1[0][0]', 'dense_2[0][0]']
 dense_3 (Dense)                   (None, 64)          12352     ['concatenate[0][0]']
 dense_4 (Dense)                   (None, 100)         6500      ['dense_3[0][0]']
==================================================================================================
Total params: 32036 (125.14 KB)
Trainable params: 32036 (125.14 KB)
Non-trainable params: 0 (0.00 Byte)
__________________________________________________________________________________________________
Shape of the model output: (?, 100)
Expected shape for DQN: (None, 100)
Traceback (most recent call last):
  File "C:\Users\konra\.conda\envs\magisterka\Lib\site-packages\keyboard\_generic.py", line 22, in invoke_handlers
    if handler(event):
       ^^^^^^^^^^^^^^
  File "C:\Users\konra\.conda\envs\magisterka\Lib\site-packages\keyboard\__init__.py", line 474, in <lambda>
    return hook(lambda e: e.event_type == KEY_UP or callback(e), suppress=suppress)
                                                    ^^^^^^^^^^^
  File "E:\studia\ISA\magisterka\Magisterka\main.py", line 328, in obsluz_zdarzenie
    keras()
  File "E:\studia\ISA\magisterka\Magisterka\main.py", line 275, in keras
    dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory, nb_steps_warmup=2000,
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\konra\.conda\envs\magisterka\Lib\site-packages\rl\agents\dqn.py", line 107, in __init__
    raise ValueError(f'Model output "{model.output}" has invalid shape. DQN expects a model that has one dimension for each action, in this case {self.nb_actions}.')
ValueError: Model output "Tensor("dense_4/BiasAdd:0", shape=(?, 100), dtype=float32)" has invalid shape. DQN expects a model that has one dimension for each action, in this case 100.
```
Try this structure, using **one** `class MovieModel: ObservableObject` and passing this model to other views using `.environmentObject(movieModel)`. See this link; it gives you some good examples of how to manage data in your app: [monitoring data](https://developer.apple.com/documentation/swiftui/monitoring-model-data-changes-in-your-app)

    struct ContentView: View {
        var body: some View {
            ListView()
        }
    }

    class MovieModel: ObservableObject {
        @Published var movies: [Movie] = [Movie(id: 1, name: "Titanic")]

        func fetchMovie(id: Int) {
            //...request
            // for testing
            if let index = movies.firstIndex(where: {$0.id == id}) {
                movies[index].name = movies[index].name + " II"
            }
        }

        func fetchAllMovies() {
            // for testing
            movies = [Movie(id: 1, name: "Titanic"), Movie(id: 2, name: "Mickey Mouse")]
        }
    }

    struct Movie: Identifiable {
        let id: Int
        var name: String
        //....
    }

    struct ListView: View {
        @StateObject var movieModel = MovieModel()

        var body: some View {
            List {
                ForEach(movieModel.movies) { movie in
                    MovieItem(movie: movie)
                }
            }
            .onAppear {
                movieModel.fetchAllMovies()
            }
            .environmentObject(movieModel)
        }
    }

    struct MovieItem: View {
        @EnvironmentObject var movieModel: MovieModel
        var movie: Movie

        var body: some View {
            Text(movie.name)
                .onAppear {
                    movieModel.fetchMovie(id: movie.id)
                }
        }
    }

If you plan to use iOS 17, then have a look at this link for how to manage data in your app: [Managing model data in your app](https://developer.apple.com/documentation/swiftui/managing-model-data-in-your-app).
Chronicle queue version 25 and JDK17
|java|java-17|chronicle|chronicle-queue|
Or try my solution: http://robau.wordpress.com/2011/08/16/unobtrusive-table-column-resize-with-jquery-as-plugin/ :)

```
(function( $ ) {
    $.fn.simpleResizableTable = function() {
        /**
         * Author: Rob Audenaerde
         * Version: plugin version 0.5
         */
        $("<style type='text/css'> .srt-draghandle.dragged{border-left: 1px solid #333;}</style>").appendTo("head");
        $("<style type='text/css'> .srt-draghandle{ position: absolute; z-index:5; width:5px; cursor:e-resize;}</style>").appendTo("head");

        function resetTableSizes(table, change, columnIndex) {
            // calculate new width
            var tableId = table.attr('id');
            var myWidth = $('#'+tableId+' TR TH').get(columnIndex).offsetWidth;
            var newWidth = (myWidth + change) + 'px';
            $('#'+tableId+' TR').each(function() {
                $(this).find('TD').eq(columnIndex).css('width', newWidth);
                $(this).find('TH').eq(columnIndex).css('width', newWidth);
            });
            resetSliderPositions(table);
        }

        function resetSliderPositions(table) {
            var tableId = table.attr('id');
            // put all sliders in the correct position
            table.find(' TR:first TH').each(function(index) {
                var td = $(this);
                var newSliderPosition = td.offset().left + td.outerWidth();
                $("#"+tableId+"_id"+(index+1)).css({ left: newSliderPosition, height: table.height() + 'px' });
            });
        }

        function makeResizable(table) {
            // get number of columns
            var numberOfColumns = table.find('TR:first TH').size();
            // id is needed to create ids for the draghandles
            var tableId = table.attr('id');
            for (var i = 0; i <= numberOfColumns; i++) {
                // enjoy this nice chain :)
                $('<div class="srt-draghandle" id="'+tableId+'_id'+i+'"></div>')
                    .insertBefore(table)
                    .data('tableid', tableId)
                    .data('myindex', i)
                    .draggable({
                        axis: "x",
                        start: function () {
                            var tableId = $(this).data('tableid');
                            $(this).toggleClass("dragged");
                            // set the height of the draghandle to the current height of the table, to get the vertical ruler
                            $(this).css({ height: $('#'+tableId).height() + 'px' });
                        },
                        stop: function (event, ui) {
                            var tableId = $(this).data('tableid');
                            $(this).toggleClass("dragged");
                            var oldPos = $(this).data("draggable").originalPosition.left;
                            var newPos = ui.position.left;
                            var index = $(this).data("myindex");
                            resetTableSizes($('#'+tableId), newPos - oldPos, index - 1);
                        }
                    });
            }
            resetSliderPositions(table);
            return table;
        }

        return this.each(function() {
            makeResizable($(this));
        });
    };
})( jQuery );
```
Our security and pen-test team reported an issue that the resource below is missing the **X-Frame-Options** header. Any suggestions or thoughts on why only this page is missing that header, even though it is set in the Keycloak console?

```
<keycloak-domain>/auth/realms/<realm-name>/protocol/openid-connect/3p-cookies/step1.html
```

Keycloak version: 21.1.2

The realm's security defenses are already configured, but this is still reported as an issue.
X-FRAME-OPTIONS header missing on step1.html of Keycloak
I have created the following code for easy logging. It creates a UTF-8 file without BOM when executed in PowerShell 5. It works as expected for me. Feel free to customize it to your needs :-)

    Function myWriteLog{
        # $LogFilePath has to be defined before calling the function
        # And "$ScriptName = $MyInvocation.MyCommand.Name" has to be set before calling the function
        Param(
            [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
            [string]$content
        )
        # disallow a NULL or an EMPTY value
        if ([string]::IsNullOrEmpty($content.Trim())){
            throw "Found 'EMPTY or NULL': a non-null, non-empty string must be provided to function ""myWriteLog"""
            return 0
        }
        else {
            if((Test-Path $LogFilePath) -eq $False){
                # Creates the file; please note that option "-NoNewline" has to be set
                "" | Out-File -FilePath $LogFilePath -Encoding ascii -Force -NoNewline
                # Create a string as a line separator for a file header
                $t = "".PadLeft(("Logfile for : $ScriptName").Length,"#")
                Add-Content -Path $LogFilePath -Value "$t"
                Add-Content -Path $LogFilePath -Value "Logfile for : $ScriptName"
                Add-Content -Path $LogFilePath -Value "LogFile Created: $(Get-Date -F "yyyy-MM-dd-HH-mm-ss")"
                Add-Content -Path $LogFilePath -Value "$t"
                Add-Content -Path $LogFilePath -Value ""
                # and now add the content
                Add-Content -Path $LogFilePath -Value "$(Get-Date -F "yyyy-MM-dd-HH-mm-ss") : $content" -Encoding UTF8 -Force
            }else{
                Add-Content -Path $LogFilePath -Value "$(Get-Date -F "yyyy-MM-dd-HH-mm-ss") : $content" -Encoding UTF8 -Force
            }
        }
    }
I am trying to run a Node.js-based Docker container on a k8s cluster. The code refuses to run, and I continuously get these errors:

`Navigation frame was detached`
`Requesting main frame too early`

I cut it down to the minimal code that should do some work:

```
const puppeteer = require('puppeteer');
const os = require('os');

function delay(time) {
    return new Promise(resolve => setTimeout(resolve, time));
}

const platform = os.platform();

(async () => {
    console.log('started')
    let browserConfig = {}
    if (platform === 'linux') {
        browserConfig = {
            executablePath: '/usr/bin/google-chrome',
            headless: true,
            args: ['--no-sandbox', '--disable-web-security', '--disable-features=IsolateOrigins,site-per-process', '--disable-gpu', '--disable-dev-shm-usage']
        }
    } else {
        browserConfig = {
            headless: true,
            args: ['--no-sandbox', '--disable-web-security', '--disable-features=IsolateOrigins,site-per-process', '--disable-gpu', '--disable-dev-shm-usage']
        }
    }
    console.log('create browser')
    const browser = await puppeteer.launch(browserConfig)
    console.log('create page')
    const page = await browser.newPage()
    await page.setViewport({ width: 1920, height: 926 })
    console.log('browsing to url')
    await page.goto("https://www.example.com", { waitUntil: 'load', timeout: 3000000 })
    console.log('waiting')
    await delay(5000)
    console.log('get content')
    const s = await page.content();
    console.log('content', s);
    console.log('close browser')
    await browser.close()
    console.log('finished')
})();
```

That code runs on the local Node.js CLI, and also when packed into a Docker image using this Dockerfile:

```
# Install dependencies only when needed
FROM node:20.11.1-slim AS deps
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true
RUN apt-get update && \
    apt-get install -y libc6 && \
    apt-get install -y git && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --legacy-peer-deps

# Rebuild the source code only when needed
FROM node:20.11.1-slim AS builder
WORKDIR /app
COPY . .
COPY --from=deps /app/node_modules ./node_modules
ARG NODE_ENV=production
RUN echo ${NODE_ENV}
# RUN NODE_ENV=${NODE_ENV} npm run build

# Production image, copy all the files and run next
FROM node:20.11.1-slim AS runner
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true
WORKDIR /app
RUN apt-get update && apt-get install -y gnupg wget && \
    wget --quiet --output-document=- https://dl-ssl.google.com/linux/linux_signing_key.pub | gpg --dearmor > /etc/apt/trusted.gpg.d/google-archive.gpg && \
    echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list && \
    apt-get update && \
    apt-get install -y google-chrome-stable --no-install-recommends && \
    rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/app.js ./app.js

# Expose
EXPOSE 3000

# CMD ["node", "app.js"]
CMD node app.js
```

Running this image in local Docker does the job. However, when I try to deploy to the Kubernetes cluster, it fails with these errors:

```
started
create browser
create page
browsing to url
/app/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/LifecycleWatcher.js:99
    this.#terminationDeferred.resolve(new Error('Navigating frame was detached'));
                                      ^

Error: Navigating frame was detached
    at #onFrameDetached (/app/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/LifecycleWatcher.js:99:47)
    at /app/node_modules/puppeteer-core/lib/cjs/third_party/mitt/mitt.js:25:732
    at Array.map (<anonymous>)
    at Object.emit (/app/node_modules/puppeteer-core/lib/cjs/third_party/mitt/mitt.js:25:716)
    at CdpFrame.emit (/app/node_modules/puppeteer-core/lib/cjs/puppeteer/common/EventEmitter.js:83:23)
    at #removeFramesRecursively (/app/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/FrameManager.js:444:15)
    at #onClientDisconnect (/app/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/FrameManager.js:92:42)

Node.js v20.11.1
started
create browser
create page
browsing to url
waiting
get content
/app/node_modules/puppeteer-core/lib/cjs/puppeteer/util/assert.js:18
        throw new Error(message);
        ^

Error: Requesting main frame too early!
    at assert (/app/node_modules/puppeteer-core/lib/cjs/puppeteer/util/assert.js:18:15)
    at FrameManager.mainFrame (/app/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/FrameManager.js:213:32)
    at CdpPage.mainFrame (/app/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/Page.js:414:35)
    at CdpPage.content (/app/node_modules/puppeteer-core/lib/cjs/puppeteer/api/Page.js:519:31)
    at /app/app.js:40:26

Node.js v20.11.1
```

This is very frustrating :( Help will be much appreciated.
You could manually make the columns of both dataframes match, then union them. For example, with these dataframes:

```python
df1 = spark.createDataFrame([
    (1, 2),
    (3, 4)
], ['a', 'b'])

df2 = spark.createDataFrame([
    (5, 6, 7),
    (8, 9, 10)
], ['a2', 'c', 'd'])
```

The first step would be to make sure the column that is common to both dataframes has the same name:

```python
df2 = df2.withColumnRenamed('a2', 'a')
```

Then you can make the columns match:

```python
from pyspark.sql.functions import lit

for c in df2.columns:
    if c not in df1.columns:
        df1 = df1.withColumn(c, lit(None))

for c in df1.columns:
    if c not in df2.columns:
        df2 = df2.withColumn(c, lit(None))
```

Finally, you can take the union. I find `unionByName` to be safer:

```python
df_all = df1.unionByName(df2)
```
This is in the context of Okta. I am logged in as a superuser (admin) and am trying to perform user impersonation (switch user / masquerading) in Okta, but I could not locate the option in the user profile or anywhere else. How can we achieve this? The ultimate aim is to fetch a token for a specific user, so that on their behalf I can access the API, test, or debug without logging into the application as that user or using their password. Any help is appreciated. Thank you.
Okta user impersonation / masquerade
|okta|impersonation|okta-api|masquerade|
If you're fetching data from an API, you can handle it as follows:

```js
if (res.data) {
  chartData = {
    labels: getChartLabels(res.data),
    datasets: [
      //set your data set
    ],
  };
  //other assignments
} else {
  chartData.datasets.forEach((item) => (item.data = []));
}
```

It'll empty each dataset's data list when there are no records.

**Update**

In my case, I had to initialize the empty state of chartData. Something like this:

```js
const powerChartData = {
  labels: [],
  datasets: [
    {
      label: "Max",
      backgroundColor: "#0086C9",
      borderColor: "#0086C9",
      data: [],
      fill: false,
    },
    {
      label: "Average",
      backgroundColor: "#32D583",
      borderColor: "#32D583",
      data: [],
      fill: false,
    },
  ],
};
```

Later, I just need to fill in `powerChartData.datasets[].data`
I have two models with an `M2M field`. Because there won't be any updates or deletions (I just need to read data from the db), I'm looking to have a single db hit to retrieve all the required data. I used `prefetch_related` with `Prefetch` to be able to filter the data and also have the filtered objects in a cached list using `to_attr`. I tried to achieve the same result using `annotate` along with `Subquery`, but here I can't understand why the annotated field contains only one value instead of a list of values.

Let's review the code I have. Note that some Routes may have more than one special point (Point instances with `is_special=True`).

### models.py

```python
class Route(models.Model):
    indicator = models.CharField()


class Point(models.Model):
    indicator = models.CharField()
    route = models.ManyToManyField(to=Route, related_name="points")
    is_special = models.BooleanField(default=False)
```

### views.py

```python
routes = Route.objects.filter(...).prefetch_related(
    Prefetch(
        "points",
        queryset=Point.objects.filter(is_special=True),
        to_attr="special_points",
    )
)
```

This works as expected, but it results in a separate database query to fetch the points data. In the following code I tried to use `Subquery` instead, to get a single database hit:

```python
routes = Route.objects.filter(...).annotate(
    special_points=Subquery(
        Point.objects.filter(route=OuterRef("pk"), is_special=True).values("indicator")
    )
)
```

The problem is that with the second approach I get __either one or no__ special-point indicator when printing `route_instance.special_points`, even when the prefetch approach shows that the same Route instance has two or more special points.

- I know that in the first approach `route_instance.special_points` will contain the Point instances and not their indicators, but that is not the problem.
- I checked the SQL of the Subquery and there is no sign of a limit in the query, and I did not use slicing in the Python code either.
But again, the result is limited to either one indicator (if one or more exist) or none (if there isn't any special point).

### This is how I check db connections

```python
# Enable query counting
from django.db import connection

connection.force_debug_cursor = True

route_analyzer(data, err)

# Output the number of queries
print(f"Total number of database queries: {len(connection.queries)}")
for query in connection.queries:
    print(query["sql"])

# Disable query counting
connection.force_debug_cursor = False
```

With the help of GPT, I have raw SQL that gives the desired result (it is based on some Python code, so it's not a clean template):

```SQL
SELECT
    "general_route"."id",
    "general_route"."indicator",
    (SELECT GROUP_CONCAT(U0."indicator", ', ')
     FROM "points_point" U0
     INNER JOIN "points_point_route" U1 ON (U0."id" = U1."point_id")
     WHERE (U1."route_id" = "general_route"."id" AND U0."is_special")
    ) AS "special_points",
    (SELECT GROUP_CONCAT(U0."indicator", ', ')
     FROM "points_point" U0
     INNER JOIN "points_point_route" U1 ON (U0."id" = U1."point_id")
     WHERE (U1."route_id" = "general_route"."id" AND U0."indicator" IN ('CAK', 'NON'))
    ) AS "all_points"
FROM "general_route"
WHERE ("general_route"."indicator" LIKE 'OK%' OR "general_route"."indicator" LIKE 'OI%')
ORDER BY "general_route"."indicator" ASC
```
|keycloak|penetration-testing|
On my WordPress site I'm using an external cookie service; they provide me with a script to insert into the head tag:

```
<script src='https://acconsento.click/script.js' id='acconsento-script' data-key='dBLtIvrsCoonhdR1zC1boEzG4YVBO1ebja0gFyOW'></script>
```

The problem is that a popup should appear to accept cookies, but this does not happen. In the browser console I saw the following error message: `Failed to load resource: the server responded with a status of 403 ()`.

On the Network tab you can also see a blocked request. I'll attach all the screenshots; to see for yourself, the link is [caseigerolalogisticspark.com](https://caseigerolalogisticspark.com/). By inspecting any page you can see the error message.

![](https://i.stack.imgur.com/1rGuX.png)
![](https://i.stack.imgur.com/ZZBIP.png)

I would insert the .htaccess file here, but it should be the standard WP one. I thought it was an error with my GoDaddy hosting provider, something related to DNS pointing or a firewall, but they say that everything is OK and that nothing is blocking the request. What do you recommend I do?
I've been dealing with a problem for two days. I'm trying to store an image for a car website in the database; the creation is successful, and it appears in the database, but when I try to store the same image in a folder of my application, whatever I try, it simply doesn't work. Here are my models:

```
# models.py
class Cars(models.Model):
    marque = models.CharField(max_length=60)
    modele = models.CharField(max_length=60)
    motorisation = models.CharField(max_length=60)
    couleur = models.CharField(max_length=60)
    carburant = models.CharField(max_length=60)
    annee_modele = models.IntegerField()
    kilometrage = models.IntegerField()
    nbr_porte = models.IntegerField()
    nbr_place = models.IntegerField()
    puiss_fiscale = models.IntegerField()  # En chevaux fiscaux
    puiss_din = models.IntegerField()      # En chevaux DIN
    a_vendre = models.BooleanField(default=True)


class CarImage(models.Model):
    car = models.ForeignKey(Cars, related_name="images", on_delete=models.CASCADE)
    image = models.ImageField(upload_to='car_images/')
```

Post view:

```
class manage_cars(View):
    parser_classes = (MultiPartParser, FormParser)

    def post(self, request, *args, **kwargs):
        c_serializer = car_serializer(data=request.POST)
        if c_serializer.is_valid():
            car_instance = c_serializer.save()
            images = request.FILES.getlist('image')
            print(images)
            for image in images:
                image_path = handle_uploaded_image(car_instance, image)
                # Créer l'objet CarImage avec le chemin de l'image
                CarImage.objects.create(car=car_instance, image=image_path)
            return JsonResponse({"message": "Voiture et images sauvegardées avec succès"}, status=201)
        else:
            return JsonResponse(car_serializer.errors, status=400)
```

```
# serializers.py
from rest_framework import serializers
from .models import *

class car_serializer(serializers.ModelSerializer):
    class Meta:
        model = Cars
        fields = "__all__"
```

```
# settings.py
MEDIA_URL = "/media/"
MEDIA_ROOT = os.path.join(BASE_DIR, "media")
```

I'm just a French newbie in back-end development.
It's probably some stupid thing causing the error, but I'm so sick of it... I tried giving my "media" and "car_images" folders full-control rights, and other things that didn't work.
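For reference, here is a minimal sketch of what the (unshown) `handle_uploaded_image` helper might look like. Everything here is hypothetical, since the real helper isn't in the question; it assumes `image` behaves like Django's `UploadedFile`, exposing `.name` and `.chunks()`. (Assigning the uploaded file directly to the `ImageField` would normally let Django save the file itself.)

```python
import os

def handle_uploaded_image(car_instance, image, media_root='media'):
    """Hypothetical helper: write the uploaded file under MEDIA_ROOT/car_images/
    and return the path relative to MEDIA_ROOT (what ImageField stores)."""
    folder = os.path.join(media_root, 'car_images')
    os.makedirs(folder, exist_ok=True)
    dest_path = os.path.join(folder, image.name)
    with open(dest_path, 'wb') as dest:
        # UploadedFile.chunks() yields the file content in pieces
        for chunk in image.chunks():
            dest.write(chunk)
    return os.path.join('car_images', image.name)
```

The returned relative path is what would be handed to `CarImage.objects.create(...)` in the view above.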
    const words = [
      "cat", "dog", "bird", "elephant", "lion", "tiger",
      "zebra", "monkey", "ox", "b", "giraffe", "antelop",
    ];

    function findShortestWord(words) {
      var shortest = words.reduce((acc, current) => {
        if (current.length < acc.length) {
          return current;
        } else {
          return acc;
        }
      }, words[0]);
      return shortest;
    }

    var shortest = findShortestWord(words);
    console.log(shortest);
Coming with an update and additional information: these answers still work to date. Additionally, you want to make sure that when you install PowerToys, you do so globally. During initial setup you have the option to install for a particular user or globally. If the above answers aren't solving your problem, you'll want to uninstall and make sure a global installation happens.
I need this information: I have a COBOL 6.3 program "PROGRAM1" that includes a copy statement `COPY1`. `COPY1` contains only data definitions; it can be included in the WORKING-STORAGE or LINKAGE SECTION. I need to tailor the copy during the inclusion process: I need to include only the statements between two eye-catchers. The content of the eye-catchers doesn't matter, but they must be comments, because `COPY1` is included in a lot of other COBOL programs, which must include the copy as-is without significant modification. Below an example.

PROGRAM1:

```
IDENTIFICATION DIVISION.
PROGRAM-ID. PROGRAM1.
ENVIRONMENT DIVISION.
DATA DIVISION.
WORKING-STORAGE SECTION.
COPY COPY1 from eye-catcher 1 to eye-catcher 2**.
...
Rest of program data definition
...
PROCEDURE DIVISION.
...
Rest of program logic
...
STOP RUN.
```

** This isn't a valid COBOL statement; it's only for clarity.

COPY1:

```
01 AREA1 PIC X(10).
...eye-catcher 1...
01 AREA2 PIC X(10).
...eye-catcher 2...
01 AREA3 PIC X(10).
```

I need to include only:

```
01 AREA2 PIC X(10).
```

I've searched but found nothing. Any ideas?
Resolved: adding `'use server';` to my login form action solved all the problems. I _suppose_ this forced my next-auth code to run only server-side, so it finds the `.env` file and populates `process.env`.
|charts|gremlin|azure-cosmosdb-gremlinapi|
I have created a function with a `[scriptblock]` parameter, and I call it with a scriptblock that contains the `$_` automatic variable, like this:

```powershell
function Test-ScriptBlock {
    param (
        [Parameter(ValueFromPipeline)][PSObject] $InputObject,
        [ScriptBlock] $ScriptBlock
    )
    $_ = $input
    Invoke-Command $ScriptBlock
}
```

If I call the function by dot-sourcing a .ps1 file containing the function, or by pasting the function code directly at the command prompt, it works fine:

```powershell
3,2,1 | Test-ScriptBlock -ScriptBlock {$_ | Sort-Object}
1
2
3
```

However, if I add the function to a module and import it, it doesn't work anymore:

```powershell
3,2,1 | Test-ScriptBlock -ScriptBlock {$_ | Sort-Object}
<nothing>
```

I had to change the code like this to make it work:

```powershell
function Test-ScriptBlockModule {
    param (
        [Parameter(ValueFromPipeline)][PSObject] $InputObject,
        [ScriptBlock] $ScriptBlock
    )
    $_ = $input
    Invoke-Command ([scriptblock]::Create($ScriptBlock))
}
```

So it looks like the issue is when the scriptblock gets evaluated? If I force a recreation of the passed scriptblock after `$_` has been set inside the module function, it works. But I don't really know why it only does this inside a module. I'm sure some PowerShell guru can explain this to me!
Well, the current version of XSLT is 3.0; there you could use `xsl:where-populated` as follows:

    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        version="3.0"
        xmlns:xs="http://www.w3.org/2001/XMLSchema"
        exclude-result-prefixes="#all"
        expand-text="yes">

      <xsl:strip-space elements="*"/>
      <xsl:output indent="yes"/>

      <xsl:mode on-no-match="shallow-copy"/>

      <xsl:key name="lov" match="ValueGroup[@AttributeID='tec_att_ignore_basic_data_text']/Value" use="@QualifierID" />

      <xsl:template match="ValueGroup[@AttributeID='prd_att_description']/Value[key('lov', @QualifierID)/@ID='Y']"/>

      <xsl:template match="ValueGroup[@AttributeID='prd_att_description']">
        <xsl:where-populated>
          <xsl:next-match/>
        </xsl:where-populated>
      </xsl:template>

    </xsl:stylesheet>

I don't think there is a direct equivalent in XSLT 1/2, but for your use case I think you basically want an empty template for `ValueGroup`, as in

    <xsl:template match="ValueGroup[@AttributeID='prd_att_description'][not(Value[not(key('lov', @QualifierID)/@ID='Y')])]"/>

and of course keep your

    <xsl:template match="ValueGroup[@AttributeID='prd_att_description']/Value[key('lov', @QualifierID)/@ID='Y']"/>
I have two models, `Wilaya` and `Commune`, which are basically like `Country` and `Capital`; there is a one-to-many relationship between the two models.

This is the **Wilaya** model code:

```
class Wilaya extends Model
{
    protected $fillable = ['nom'];
    use HasFactory;

    public function communes()
    {
        return $this->hasMany(Commune::class);
    }
}
```

And this is the **Commune** model code:

```
class Commune extends Model
{
    protected $fillable = ['nom', 'wilaya_id'];
    use HasFactory;

    public function wilaya()
    {
        return $this->belongsTo(Wilaya::class);
    }
}
```

What I want to do is display the **WilayaName** & **CommuneName** instead of their **IDs** on my order view page.

Here is how I am fetching them in the controller and displaying them in the view:

```
public function show(Order $order, $id)
{
    $data = Order::find($id);
    //$data = Order::with('wilaya')->find($id);
    $arr = unserialize($data->products);
    //dd($data);
    return view('order.show', compact('data', 'arr'));
}
```

View code:

```
<div class="mb-3">
    <label for="exampleInputEmail1" class="form-label">Wilaya: </label>
    <input type="email" class="form-control" id="exampleInputEmail1" aria-describedby="emailHelp" name="email" placeholder="/" value="{{ $data->wilaya }}" disabled>
</div>
<div class="mb-3">
    <label for="exampleInputEmail1" class="form-label">Ville: </label>
    <input type="email" class="form-control" id="exampleInputEmail1" aria-describedby="emailHelp" name="email" placeholder="/" value="{{ $data->commune }}" disabled>
</div>
```

This will only display their `ID` instead of their `Name`.
Well, I tried to fetch the name like this: `value="{{ $data->commune->name }}"` and `value="{{ $data->wilaya }}"`, and I got an error saying **Attempt to read property "name" on int**. I also tried to eager-load the relationships in the controller like this: `$data = Order::with('wilaya.communes')->find($id);`, and it throws back an error saying **Call to undefined relationship [wilaya] on model [App\Models\Order]**, given that there's no relationship between the `Order` model and the `Wilaya` or `Commune` models. Feel free to tell me what I have done wrong or what I could have done better; I am new to this. Thanks!
My Django configuration fails to download images but stores them in the DB without raising any error
|python|django|django-models|django-views|
You expect that since you passed the correct enum value, only the case that matters is called, and you are correct in that. The issue is that if the enum value did not match the correct case, then you would be calling the wrong function. Since none of this checking is done at compile time, the compiler is forced to check the entire function to make sure all paths compile, and in your case they do not.

Since all of your `search_in` functions are the same except for the `pee` parameter, there is no reason to even use a switch statement; instead you can just rely on overload resolution to call the correct function for you, like:

```
template <typename T>
void analyze(/*vegetable& place - no longer needed, */const std::vector<std::uint8_t>& dee, T& pee, int ole, std::uint16_t cee)
{
    using beta::search_in;
    using gemma::search_in;
    using zeta::search_in;
    search_in(dee, pee, ole, cee);
}
```

You can see this working with your example code in this [live example][1].

[1]: https://godbolt.org/z/KbY7v6WPK
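As a stripped-down, self-contained illustration of the idea (the `type_b`/`type_g` structs and string returns here are hypothetical stand-ins for the real types and `search_in` overloads in your code):

```cpp
#include <string>

// The unqualified call in analyze() is resolved from the static type of the
// argument at compile time, so no switch on an enum is needed and no dead
// branch ever has to compile for the "wrong" type.
namespace beta  { struct type_b {}; inline std::string search_in(type_b&) { return "beta"; } }
namespace gemma { struct type_g {}; inline std::string search_in(type_g&) { return "gemma"; } }

template <typename T>
std::string analyze(T& pee) {
    using beta::search_in;
    using gemma::search_in;
    return search_in(pee);   // overload resolution picks the matching namespace's function
}
```

Calling `analyze` with a `beta::type_b` object instantiates only the `beta` overload; the `gemma` one is never touched for that instantiation.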
I would like to restrict the installation of npm packages in some of my company's projects to releases that are at least X weeks old. The reason being that, given enough time before installing a package version, it is likely that if the package maintainer was compromised and the version contains malicious code (in the package itself or in a `postinstall` script), it will have been caught and removed from the public npm registry. I could easily ask our developers to manually install versions that are at least X weeks old; however, they are likely to install transitive dependencies that are not locked to a specific version, which can result in the installation of a transitive dependency that is only a few hours old. Is there a way to force npm to only install packages that are at least X weeks old, including transitive dependencies?
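To illustrate the kind of filtering I'm after: given the version-to-publish-date map that `npm view <pkg> time --json` returns, I'd want the resolver to consider only versions older than the cutoff. A rough sketch of that selection logic (the package data below is made up):

```javascript
// Given a package's version -> publish-date map (the shape returned by
// `npm view <pkg> time --json`), return the versions that were published
// at least `minAgeDays` days before `now`.
function versionsOlderThan(timeMap, minAgeDays, now) {
  const cutoff = now.getTime() - minAgeDays * 24 * 60 * 60 * 1000;
  return Object.keys(timeMap)
    .filter((v) => v !== 'created' && v !== 'modified') // npm adds these non-version keys
    .filter((v) => new Date(timeMap[v]).getTime() <= cutoff);
}

// Made-up example data for a hypothetical package:
const timeMap = {
  created: '2024-01-01T00:00:00.000Z',
  modified: '2024-03-20T00:00:00.000Z',
  '1.0.0': '2024-01-01T00:00:00.000Z',
  '1.0.1': '2024-03-19T00:00:00.000Z',
};
```

The hard part, of course, is getting npm's own resolver to apply this to every transitive dependency rather than post-processing it myself.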
Only install packages that are at least X weeks old
|node.js|npm|
I don't quite understand your code or your example, but I think the following code sample shows (the principle of) how you can accomplish what you need:

    from matplotlib import pyplot as plt
    import matplotlib.colors as mcolors
    import shapely
    from shapely.plotting import plot_line, plot_polygon

    poly1 = shapely.box(2, 0, 4, 3)
    poly2 = shapely.box(0, 1, 2, 2)

    lines = []

    # Intersecting lines
    intersecting_lines = poly1.boundary.intersection(poly2.boundary)
    lines.extend(shapely.get_parts(shapely.line_merge(intersecting_lines)))

    # Non-intersecting boundaries
    lines.extend(
        shapely.get_parts(shapely.line_merge(poly1.boundary.difference(intersecting_lines)))
    )
    lines.extend(
        shapely.get_parts(shapely.line_merge(poly2.boundary.difference(intersecting_lines)))
    )

    # Plot
    fig, ax = plt.subplots(ncols=2, figsize=(15, 15))
    plot_polygon(poly1, ax=ax[0], color="red")
    plot_polygon(poly2, ax=ax[0])
    for line, color in zip(lines, mcolors.TABLEAU_COLORS):
        plot_line(line, ax=ax[1], color=color)
    plt.show()

Plotted image with input left, output right:

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/uhtSQ.png
How to display the country name and its corresponding capital on an order page instead of an ID in Laravel?
|laravel|foreign-keys|relationship|
I am following the instructions from the [official documentation][1] on how to install llama-cpp with GPU support on an Apple silicon Mac. Here is my Dockerfile:

```
FROM python:3.11-slim

WORKDIR /code

RUN pip uninstall llama-cpp-python -y
ENV CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1
RUN pip install -U llama-cpp-python --no-cache-dir
RUN pip install 'llama-cpp-python[server]'

COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

COPY . .

EXPOSE 8000

CMD ["panel", "serve", "--port", "8000", "chat.py", "--address", "0.0.0.0", "--allow-websocket-origin", "*"]
```

I am getting the following error:

```
[+] Building 6.1s (9/13) docker:desktop-linux => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 508B 0.0s => [internal] load metadata for docker.io/library/python:3.11-slim 0.9s => [auth] library/python:pull token for registry-1.docker.io 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [1/8] FROM docker.io/library/python:3.11-slim@sha256:90f8795536170fd08236d2ceb74fe7065dbf74f738d8 0.0s => => resolve docker.io/library/python:3.11-slim@sha256:90f8795536170fd08236d2ceb74fe7065dbf74f738d8 0.0s => [internal] load build context 0.0s => => transferring context: 2.19kB 0.0s => CACHED [2/8] WORKDIR /code 0.0s => CACHED [3/8] RUN pip uninstall llama-cpp-python -y 0.0s => ERROR [4/8] RUN pip install -U llama-cpp-python --no-cache-dir 5.2s ------ > [4/8] RUN pip install -U llama-cpp-python --no-cache-dir: 0.410 Collecting llama-cpp-python 0.516 Downloading llama_cpp_python-0.2.57.tar.gz (36.9 MB) 1.023 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 36.9/36.9 MB 99.0 MB/s eta 0:00:00 1.325 Installing build dependencies: started 2.285 Installing build dependencies: finished with status 'done' 2.285 Getting requirements to build wheel: started 2.336 Getting requirements to build wheel: finished with status 'done' 2.340 Installing backend dependencies: started
3.863 Installing backend dependencies: finished with status 'done' 3.864 Preparing metadata (pyproject.toml): started 3.955 Preparing metadata (pyproject.toml): finished with status 'done' 3.996 Collecting typing-extensions>=4.5.0 (from llama-cpp-python) 4.014 Downloading typing_extensions-4.10.0-py3-none-any.whl.metadata (3.0 kB) 4.181 Collecting numpy>=1.20.0 (from llama-cpp-python) 4.201 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (62 kB) 4.202 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.3/62.3 kB 96.9 MB/s eta 0:00:00 4.242 Collecting diskcache>=5.6.1 (from llama-cpp-python) 4.261 Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB) 4.298 Collecting jinja2>=2.11.3 (from llama-cpp-python) 4.317 Downloading Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB) 4.372 Collecting MarkupSafe>=2.0 (from jinja2>=2.11.3->llama-cpp-python) 4.393 Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (3.0 kB) 4.416 Downloading diskcache-5.6.3-py3-none-any.whl (45 kB) 4.418 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 412.0 MB/s eta 0:00:00 4.440 Downloading Jinja2-3.1.3-py3-none-any.whl (133 kB) 4.444 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.2/133.2 kB 63.9 MB/s eta 0:00:00 4.472 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (14.2 MB) 4.627 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.2/14.2 MB 94.7 MB/s eta 0:00:00 4.648 Downloading typing_extensions-4.10.0-py3-none-any.whl (33 kB) 4.671 Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (29 kB) 4.713 Building wheels for collected packages: llama-cpp-python 4.714 Building wheel for llama-cpp-python (pyproject.toml): started 4.910 Building wheel for llama-cpp-python (pyproject.toml): finished with status 'error' 4.912 error: subprocess-exited-with-error 4.912 4.912 × Building wheel for llama-cpp-python (pyproject.toml) did not run 
successfully. 4.912 │ exit code: 1 4.912 ╰─> [24 lines of output] 4.912 *** scikit-build-core 0.8.2 using CMake 3.29.0 (wheel) 4.912 *** Configuring CMake... 4.912 loading initial cache file /tmp/tmpk4ft3wii/build/CMakeInit.txt 4.912 -- The C compiler identification is unknown 4.912 -- The CXX compiler identification is unknown 4.912 CMake Error at CMakeLists.txt:3 (project): 4.912 No CMAKE_C_COMPILER could be found. 4.912 4.912 Tell CMake where to find the compiler by setting either the environment 4.912 variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to 4.912 the compiler, or to the compiler name if it is in the PATH. 4.912 4.912 4.912 CMake Error at CMakeLists.txt:3 (project): 4.912 No CMAKE_CXX_COMPILER could be found. 4.912 4.912 Tell CMake where to find the compiler by setting either the environment 4.912 variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path 4.912 to the compiler, or to the compiler name if it is in the PATH. 4.912 4.912 4.912 -- Configuring incomplete, errors occurred! 4.912 4.912 *** CMake configuration failed 4.912 [end of output] 4.912 4.912 note: This error originates from a subprocess, and is likely not a problem with pip. 4.913 ERROR: Failed building wheel for llama-cpp-python 4.913 Failed to build llama-cpp-python 4.913 ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects ------ Dockerfile:9 -------------------- 7 | ENV CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 8 | 9 | >>> RUN pip install -U llama-cpp-python --no-cache-dir 10 | 11 | RUN pip install 'llama-cpp-python[server]' -------------------- ERROR: failed to solve: process "/bin/sh -c pip install -U llama-cpp-python --no-cache-dir" did not complete successfully: exit code: 1 ``` I have tried different variations of the Dockerfile, but it always gives error on the same line, i.e., `RUN pip install -U llama-cpp-python`. Why? And how do I fix it? 
----------

**UPDATE:** Based on the comments, I modified my Dockerfile like so:

```
FROM python:3.11-slim

RUN apt-get update && apt-get install -y --no-install-recommends gcc

WORKDIR /code

RUN pip uninstall llama-cpp-python -y
ENV CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1
RUN pip install -U llama-cpp-python --no-cache-dir
RUN pip install 'llama-cpp-python[server]'

COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

COPY . .

EXPOSE 8000

CMD ["panel", "serve", "--port", "8000", "chat.py", "--address", "0.0.0.0", "--allow-websocket-origin", "*"]
```

And I still am not able to install llama-cpp:

```
[+] Building 12.3s (10/14) docker:desktop-linux => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 578B 0.0s => [internal] load metadata for docker.io/library/python:3.11-slim 0.9s => [auth] library/python:pull token for registry-1.docker.io 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => CACHED [1/9] FROM docker.io/library/python:3.11-slim@sha256:90f8795536170fd08236d2ceb74fe7065dbf7 0.0s => => resolve docker.io/library/python:3.11-slim@sha256:90f8795536170fd08236d2ceb74fe7065dbf74f738d8 0.0s => [internal] load build context 0.0s => => transferring context: 2.26kB 0.0s => [2/9] RUN apt-get update && apt-get install -y --no-install-recommends gcc 5.2s => [3/9] WORKDIR /code 0.0s => [4/9] RUN pip uninstall llama-cpp-python -y 1.0s => ERROR [5/9] RUN pip install -U llama-cpp-python --no-cache-dir 5.2s ------ > [5/9] RUN pip install -U llama-cpp-python --no-cache-dir: 0.500 Collecting llama-cpp-python 0.600 Downloading llama_cpp_python-0.2.57.tar.gz (36.9 MB) 1.093 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 36.9/36.9 MB 88.6 MB/s eta 0:00:00 1.373 Installing build dependencies: started 2.344 Installing build dependencies: finished with status 'done' 2.345 Getting requirements to build wheel: started 2.395 Getting requirements to build wheel:
finished with status 'done' 2.399 Installing backend dependencies: started 3.882 Installing backend dependencies: finished with status 'done' 3.882 Preparing metadata (pyproject.toml): started 3.972 Preparing metadata (pyproject.toml): finished with status 'done' 4.014 Collecting typing-extensions>=4.5.0 (from llama-cpp-python) 4.034 Downloading typing_extensions-4.10.0-py3-none-any.whl.metadata (3.0 kB) 4.200 Collecting numpy>=1.20.0 (from llama-cpp-python) 4.218 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (62 kB) 4.219 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.3/62.3 kB 95.0 MB/s eta 0:00:00 4.258 Collecting diskcache>=5.6.1 (from llama-cpp-python) 4.282 Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB) 4.316 Collecting jinja2>=2.11.3 (from llama-cpp-python) 4.335 Downloading Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB) 4.392 Collecting MarkupSafe>=2.0 (from jinja2>=2.11.3->llama-cpp-python) 4.410 Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (3.0 kB) 4.434 Downloading diskcache-5.6.3-py3-none-any.whl (45 kB) 4.435 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 345.0 MB/s eta 0:00:00 4.454 Downloading Jinja2-3.1.3-py3-none-any.whl (133 kB) 4.458 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.2/133.2 kB 103.0 MB/s eta 0:00:00 4.482 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (14.2 MB) 4.620 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.2/14.2 MB 106.0 MB/s eta 0:00:00 4.641 Downloading typing_extensions-4.10.0-py3-none-any.whl (33 kB) 4.665 Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (29 kB) 4.703 Building wheels for collected packages: llama-cpp-python 4.704 Building wheel for llama-cpp-python (pyproject.toml): started 4.938 Building wheel for llama-cpp-python (pyproject.toml): finished with status 'error' 4.941 error: subprocess-exited-with-error 4.941 
4.941 × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully. 4.941 │ exit code: 1 4.941 ╰─> [50 lines of output] 4.941 *** scikit-build-core 0.8.2 using CMake 3.29.0 (wheel) 4.941 *** Configuring CMake... 4.941 loading initial cache file /tmp/tmp839s_tl2/build/CMakeInit.txt 4.941 -- The C compiler identification is GNU 12.2.0 4.941 -- The CXX compiler identification is unknown 4.941 -- Detecting C compiler ABI info 4.941 -- Detecting C compiler ABI info - failed 4.941 -- Check for working C compiler: /usr/bin/cc 4.941 -- Check for working C compiler: /usr/bin/cc - broken 4.941 CMake Error at /tmp/pip-build-env-qdg4zwxu/normal/lib/python3.11/site-packages/cmake/data/share/cmake-3.29/Modules/CMakeTestCCompiler.cmake:67 (message): 4.941 The C compiler 4.941 4.941 "/usr/bin/cc" 4.941 4.941 is not able to compile a simple test program. 4.941 4.941 It fails with the following output: 4.941 4.941 Change Dir: '/tmp/tmp839s_tl2/build/CMakeFiles/CMakeScratch/TryCompile-UGSEbu' 4.941 4.941 Run Build Command(s): /tmp/pip-build-env-qdg4zwxu/normal/lib/python3.11/site-packages/ninja/data/bin/ninja -v cmTC_b1c9c 4.941 [1/2] /usr/bin/cc -o CMakeFiles/cmTC_b1c9c.dir/testCCompiler.c.o -c /tmp/tmp839s_tl2/build/CMakeFiles/CMakeScratch/TryCompile-UGSEbu/testCCompiler.c 4.941 [2/2] : && /usr/bin/cc CMakeFiles/cmTC_b1c9c.dir/testCCompiler.c.o -o cmTC_b1c9c && : 4.941 FAILED: cmTC_b1c9c 4.941 : && /usr/bin/cc CMakeFiles/cmTC_b1c9c.dir/testCCompiler.c.o -o cmTC_b1c9c && : 4.941 /usr/bin/ld: cannot find Scrt1.o: No such file or directory 4.941 /usr/bin/ld: cannot find crti.o: No such file or directory 4.941 collect2: error: ld returned 1 exit status 4.941 ninja: build stopped: subcommand failed. 4.941 4.941 4.941 4.941 4.941 4.941 CMake will not be able to correctly generate this project. 
4.941 Call Stack (most recent call first): 4.941 CMakeLists.txt:3 (project) 4.941 4.941 4.941 CMake Error at CMakeLists.txt:3 (project): 4.941 No CMAKE_CXX_COMPILER could be found. 4.941 4.941 Tell CMake where to find the compiler by setting either the environment 4.941 variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path 4.941 to the compiler, or to the compiler name if it is in the PATH. 4.941 4.941 4.941 -- Configuring incomplete, errors occurred! 4.941 4.941 *** CMake configuration failed 4.941 [end of output] 4.941 4.941 note: This error originates from a subprocess, and is likely not a problem with pip. 4.941 ERROR: Failed building wheel for llama-cpp-python 4.941 Failed to build llama-cpp-python 4.941 ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects ------ Dockerfile:11 -------------------- 9 | ENV CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 10 | 11 | >>> RUN pip install -U llama-cpp-python --no-cache-dir 12 | 13 | RUN pip install 'llama-cpp-python[server]' -------------------- ERROR: failed to solve: process "/bin/sh -c pip install -U llama-cpp-python --no-cache-dir" did not complete successfully: exit code: 1 ``` ---------- **UPDATE 2:** With the line in my Dockerfile: `RUN apt-get update && apt-get install -y build-essential` this is the error trace when building the Dockerfile: ``` [+] Building 17.5s (10/14) docker:desktop-linux => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 566B 0.0s => [internal] load metadata for docker.io/library/python:3.11-slim 0.9s => [auth] library/python:pull token for registry-1.docker.io 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => CACHED [1/9] FROM docker.io/library/python:3.11-slim@sha256:90f8795536170fd08236d2ceb74fe7065dbf7 0.0s => => resolve docker.io/library/python:3.11-slim@sha256:90f8795536170fd08236d2ceb74fe7065dbf74f738d8 0.0s => [internal] load build 
context 0.0s => => transferring context: 2.25kB 0.0s => [2/9] RUN apt-get update && apt-get install -y build-essential 10.1s => [3/9] WORKDIR /code 0.0s => [4/9] RUN pip uninstall llama-cpp-python -y 0.9s => ERROR [5/9] RUN pip install -U llama-cpp-python --no-cache-dir 5.5s ------ > [5/9] RUN pip install -U llama-cpp-python --no-cache-dir: 0.633 Collecting llama-cpp-python 0.765 Downloading llama_cpp_python-0.2.57.tar.gz (36.9 MB) 1.231 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 36.9/36.9 MB 109.6 MB/s eta 0:00:00 1.514 Installing build dependencies: started 2.479 Installing build dependencies: finished with status 'done' 2.479 Getting requirements to build wheel: started 2.530 Getting requirements to build wheel: finished with status 'done' 2.534 Installing backend dependencies: started 4.027 Installing backend dependencies: finished with status 'done' 4.028 Preparing metadata (pyproject.toml): started 4.119 Preparing metadata (pyproject.toml): finished with status 'done' 4.161 Collecting typing-extensions>=4.5.0 (from llama-cpp-python) 4.182 Downloading typing_extensions-4.10.0-py3-none-any.whl.metadata (3.0 kB) 4.355 Collecting numpy>=1.20.0 (from llama-cpp-python) 4.374 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (62 kB) 4.376 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.3/62.3 kB 121.2 MB/s eta 0:00:00 4.416 Collecting diskcache>=5.6.1 (from llama-cpp-python) 4.436 Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB) 4.472 Collecting jinja2>=2.11.3 (from llama-cpp-python) 4.491 Downloading Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB) 4.549 Collecting MarkupSafe>=2.0 (from jinja2>=2.11.3->llama-cpp-python) 4.569 Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (3.0 kB) 4.594 Downloading diskcache-5.6.3-py3-none-any.whl (45 kB) 4.596 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 295.7 MB/s eta 0:00:00 4.618 Downloading 
Jinja2-3.1.3-py3-none-any.whl (133 kB) 4.621 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.2/133.2 kB 138.4 MB/s eta 0:00:00 4.646 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (14.2 MB) 4.820 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.2/14.2 MB 76.3 MB/s eta 0:00:00 4.843 Downloading typing_extensions-4.10.0-py3-none-any.whl (33 kB) 4.868 Downloading MarkupSafe-2.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (29 kB) 4.910 Building wheels for collected packages: llama-cpp-python 4.911 Building wheel for llama-cpp-python (pyproject.toml): started 5.212 Building wheel for llama-cpp-python (pyproject.toml): finished with status 'error' 5.214 error: subprocess-exited-with-error 5.214 5.214 × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully. 5.214 │ exit code: 1 5.214 ╰─> [32 lines of output] 5.214 *** scikit-build-core 0.8.2 using CMake 3.29.0 (wheel) 5.214 *** Configuring CMake... 5.214 loading initial cache file /tmp/tmp5q9e9my4/build/CMakeInit.txt 5.214 -- The C compiler identification is GNU 12.2.0 5.214 -- The CXX compiler identification is GNU 12.2.0 5.214 -- Detecting C compiler ABI info 5.214 -- Detecting C compiler ABI info - done 5.214 -- Check for working C compiler: /usr/bin/cc - skipped 5.214 -- Detecting C compile features 5.214 -- Detecting C compile features - done 5.214 -- Detecting CXX compiler ABI info 5.214 -- Detecting CXX compiler ABI info - done 5.214 -- Check for working CXX compiler: /usr/bin/c++ - skipped 5.214 -- Detecting CXX compile features 5.214 -- Detecting CXX compile features - done 5.214 -- Could NOT find Git (missing: GIT_EXECUTABLE) 5.214 CMake Warning at vendor/llama.cpp/scripts/build-info.cmake:14 (message): 5.214 Git not found. Build info will not be accurate. 
5.214 Call Stack (most recent call first): 5.214 vendor/llama.cpp/CMakeLists.txt:132 (include) 5.214 5.214 5.214 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD 5.214 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success 5.214 -- Found Threads: TRUE 5.214 CMake Error at vendor/llama.cpp/CMakeLists.txt:191 (find_library): 5.214 Could not find FOUNDATION_LIBRARY using the following names: Foundation 5.214 5.214 5.214 -- Configuring incomplete, errors occurred! 5.214 5.214 *** CMake configuration failed 5.214 [end of output] 5.214 5.214 note: This error originates from a subprocess, and is likely not a problem with pip. 5.215 ERROR: Failed building wheel for llama-cpp-python 5.215 Failed to build llama-cpp-python 5.215 ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects ------ Dockerfile:11 -------------------- 9 | ENV CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 10 | 11 | >>> RUN pip install -U llama-cpp-python --no-cache-dir 12 | 13 | RUN pip install 'llama-cpp-python[server]' -------------------- ERROR: failed to solve: process "/bin/sh -c pip install -U llama-cpp-python --no-cache-dir" did not complete successfully: exit code: 1 ``` [1]: https://llama-cpp-python.readthedocs.io/en/latest/install/macos/
I constructed ECharts confidence bands around the main series, similar to [this example](https://echarts.apache.org/examples/en/editor.html?c=confidence-band). However, in the example there is no legend. I use a legend, and I want to show the main data as one legend item and the confidence band as another. Since the confidence band is in fact realized as two series stacked one on another, they both appear in the legend, and the user can enable or disable one or the other, which is not the desired behaviour. Both confidence-band series should appear as one item in the legend, and clicking the symbol in the legend should show/hide both of them at the same time. One solution comes to mind: show only one of the series in the legend and use the `legendselectchanged` event to catch clicks on it and hide/show both series together, but it feels a bit hacky; perhaps there is a better solution?
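For reference, the event-based workaround I have in mind would look roughly like this (the series names `'Band upper'`/`'Band lower'` are hypothetical); I'd still prefer something built in:

```javascript
// Only "Band upper" is listed in legend.data; its selected state is mirrored
// onto the hidden twin series so both halves of the band toggle together.
function bandLegendPatch(selected) {
  // Return an updated legend-selection map where "Band lower" always
  // follows whatever state "Band upper" was just toggled to.
  return Object.assign({}, selected, { 'Band lower': selected['Band upper'] });
}

// Wiring it up on the chart instance (sketch):
// chart.on('legendselectchanged', function (e) {
//   chart.setOption({ legend: { selected: bandLegendPatch(e.selected) } });
// });
```

The drawback is exactly what makes it feel hacky: the coupling between the two band series lives in application code rather than in the chart option.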
ECharts confidence bands in legend as one item
|legend|echarts|confidence-interval|
If you are trying to remove the "3" from the declaration of the array, the machine will not know how much memory to allocate. So how can you loop through 3 items in the array if the memory was never allocated? You cannot; that would be an out-of-bounds access. You need to allocate the space in the array (3) before you loop that many times through it.
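A minimal sketch of the point: with the size present in the declaration, the compiler reserves space for exactly three elements, and a loop bounded by the same number stays inside the array.

```cpp
// With "3" in the declaration the compiler knows to reserve space for
// exactly three ints, so a loop bounded by 3 stays within the allocation.
int sum_three(const int (&a)[3]) {
    int sum = 0;
    for (int i = 0; i < 3; ++i) sum += a[i];  // bound matches the declared size
    return sum;
}
```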
I can reproduce the issue with `databricks CLI` as well. ``` databricks workspace import /Shared/TestFile-LocalZip.py --file TestFile-Local.zip --format SOURCE databricks workspace import /Shared/TestFile-LocalPython.py --file TestFile-Local.py --language PYTHON ``` [![enter image description here][1]][1] [![enter image description here][2]][2] According to the [document](https://learn.microsoft.com/en-us/azure/databricks/files/#work-with-workspace-files), it is more likely a [limitation](https://learn.microsoft.com/en-us/azure/databricks/files/#workspace-files-limitations) of `Azure Databricks` rather than an issue caused by `Azure DevOps`. [![enter image description here][3]][3] Since it says *There is limited support for workspace file operations from **serverless compute***, I figured out a workaround with the help of `My Personal Compute Cluster`, where I could run `curl` command to copy file from Azure Storage Account into Azure Databricks Workspace as a *File* not a `Notebook`. Here are my steps. Upload my `.py` file into Azure Storage Account blob and generate `Blob SAS URL`; [![enter image description here][4]][4] Connect to `My Personal Compute Cluster` -> In the `Terminal` run the `curl` command; ``` BlobSASURL="https://xxxxxxxx.blob.core.windows.net/testcontainer/TestFile-Storage.py?xxxxxxxxx" curl "$BlobSASURL" -o /Workspace/Shared/TestFile-Storage.py ``` [![enter image description here][5]][5] [![enter image description here][6]][6] [1]: https://i.stack.imgur.com/YWzk5.png [2]: https://i.stack.imgur.com/ML03O.png [3]: https://i.stack.imgur.com/r4kTg.png [4]: https://i.stack.imgur.com/pF0fP.png [5]: https://i.stack.imgur.com/67Dvb.png [6]: https://i.stack.imgur.com/W1iHa.png
I am using the code below to insert a star into my `TextBoxSafety` at the current cursor position. The internet says the code I am using, `ChrW(&H2B50)`, is the yellow star emoji, which is also what I want. But when I click on `Label9`, it appears colorless, i.e. black. How can I get it to show in color? I also tried other codes/emojis.

    Private Sub Label9_Click()
        Dim currentText As String
        Dim cursorPosition As Integer

        currentText = TextBoxSafety.Text
        cursorPosition = TextBoxSafety.SelStart

        TextBoxSafety.Text = Left(currentText, cursorPosition) & ChrW(&H2B50) & Mid(currentText, cursorPosition + 1)
        TextBoxSafety.SelStart = cursorPosition + 1
    End Sub

(Note: I fixed a typo in my original code; it read `TextBox1SafetyText` instead of `TextBoxSafety.Text`.)

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/nbxPq.png
Excel VBA: Adding emoji to textbox via click but no color
|excel|vba|
I am trying to run a pipeline in Azure DevOps that checks the code with Snyk, then validates the dependencies using Composer, builds the Docker image and pushes it to ACR. Finally, I will implement either an ArgoCD or FluxCD solution for a GitOps-based approach to fetch the new image from the Azure Repo when there's a new change. However, I keep getting this error on the Docker build task as well as the Install dependencies task. I've been trying to troubleshoot the error but didn't find anything relevant. Here's the YAML so far:

```
trigger:
- main

pr:
- main

pool:
  vmImage: 'ubuntu-latest'

variables:
  ACRServiceConnection: 'ACRServiceConEH'

jobs:
- job: security
  displayName: 'Security'
  steps:
  - checkout: self
  - task: SnykSecurityScan@1
    inputs:
      serviceConnectionEndpoint: 'SnykConnection'
      testType: 'code'
      codeSeverityThreshold: 'high'
      failOnIssues: false
      projectName: '$(SNYK_PROJECT)'
      organization: '$(SNYK_ORG)'

- job: composer
  dependsOn: security
  displayName: 'Composer Validation'
  steps:
  - checkout: self
  - task: Bash@3
    inputs:
      script: |
        targetDirectory=$(System.DefaultWorkingDirectory)/my_project_directory
        if [ ! -d "$targetDirectory" ]; then
          mkdir -p "$targetDirectory"
        fi
        cd "$targetDirectory"
        composer install --prefer-dist --no-progress
    displayName: 'Install dependencies'
  - task: Bash@3
    inputs:
      script: composer validate --strict
    displayName: 'Validate composer.json and composer.lock'

- job: build
  displayName: 'Build'
  dependsOn: security
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - checkout: self
  - task: Bash@3
    inputs:
      script: |
        docker build . --file Dockerfile --tag $(imageName):$(Build.SourceVersion)
    displayName: 'Docker build'
  - task: Docker@2
    inputs:
      containerRegistry: $(ACRServiceConnection)
      repository: $(imageName)
      command: 'push'
      tags: '$(Build.SourceVersion)'
    displayName: 'Docker push'
```

The error output is:

```
Starting: Install dependencies
==============================================================================
Task : Bash
Description : Run a Bash script on macOS, Linux, or Windows
Version : 3.236.1
Author : Microsoft Corporation
Help : https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/bash
==============================================================================
##[error]Invalid file path '/home/vsts/work/1/s'.
Finishing: Install dependencies
```
Scriptblock function parameter containing $_ not working inside module
|powershell|
Instead of drawing the bitmap at a fixed position (0, 0) in its original size, scale it to the canvas: compute scale factors from the canvas and bitmap dimensions, apply them with a `Matrix`, and draw the scaled bitmap.

    Bitmap backgroundBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.backgame);

    // Scale factors that stretch the bitmap to fill the canvas
    float scaleX = (float) canvas.getWidth() / backgroundBitmap.getWidth();
    float scaleY = (float) canvas.getHeight() / backgroundBitmap.getHeight();

    Matrix matrix = new Matrix();
    matrix.postScale(scaleX, scaleY);

    // Create a scaled copy of the bitmap and draw it at the origin
    Bitmap scaledBackgroundBitmap = Bitmap.createBitmap(backgroundBitmap, 0, 0,
            backgroundBitmap.getWidth(), backgroundBitmap.getHeight(), matrix, true);
    canvas.drawBitmap(scaledBackgroundBitmap, 0, 0, paint);
I'm running this command:

    Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Confirm:$false -Force; Install-Module PSWindowsUpdate; Get-WindowsUpdate -KBArticleID KB5035845 -Install -IgnoreReboot

and regardless of what I try, the terminal keeps asking me for user input. I just want everything to be accepted automatically so I can push this command onto hundreds of devices. I tried adding `-Confirm:$false` at the end of each command and still nothing, as you can see from the image below.

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/EiyPZ.png
Installing PSWindowsUpdate non-interactively
|powershell|
This scikit-learn model still does not support a progress bar. Models which do support one (via their `verbose` parameter) are the ensemble models: https://scikit-learn.org/stable/modules/classes.html#module-sklearn.ensemble
I'm trying to run a Spring Boot + Hibernate + REST application on an external Tomcat 9.0.0 server. When running locally from IntelliJ (which also uses an embedded Tomcat server) everything works fine. On the external Tomcat it initially looks fine and the server starts up, but after a moment the application stops working:

```
[Hibernate Connection Pool Validation Thread] org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [org.apache.logging.log4j.message.SimpleMessage]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [org.apache.logging.log4j.message.SimpleMessage]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1289) at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1277) at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1142) at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1104) at org.apache.logging.log4j.message.AbstractMessageFactory.newMessage(AbstractMessageFactory.java:59) at org.jboss.logging.Log4j2Logger.doLog(Log4j2Logger.java:55) at org.jboss.logging.Logger.debug(Logger.java:552) at org.jboss.logging.DelegatingBasicLogger.debug(DelegatingBasicLogger.java:284) at org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl$PooledConnections.validate(DriverManagerConnectionProviderImpl.java:332) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) ``` Any ideas what may cause this issue?
I have a Chronicle Queue version 5.25ea12 and I receive the following error:

```
at net.openhft.chronicle.core.internal.ClassUtil.getSetAccessible0Method(ClassUtil.java:32)
at net.openhft.chronicle.core.internal.ClassUtil.<clinit>(ClassUtil.java:13)
at net.openhft.chronicle.core.Jvm.getField(Jvm.java:541)
at net.openhft.chronicle.core.Jvm.maxDirectMemory0(Jvm.java:906)
at net.openhft.chronicle.core.Jvm.<clinit>(Jvm.java:145)
Caused by: java.lang.IllegalAccessException: module java.base does not open java.lang.reflect to unnamed module @566776ad
at java.base/java.lang.invoke.MethodHandles.privateLookupIn(MethodHandles.java:279)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
... 20 more
```

I'm trying to run the application under JDK 17 with the following parameters:

```
-Dio.netty.tryReflectionSetAccessible=true
--add-exports=java.base/jdk.internal.ref=ALL-UNNAMED
--add-exports=java.base/sun.nio.ch=ALL-UNNAMED
--add-exports=jdk.unsupported/sun.misc=ALL-UNNAMED
--add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED
--add-opens=jdk.compiler/com.sun.tools.javac=ALL-UNNAMED
--add-opens=java.base/java.lang=ALL-UNNAMED
--add-opens=java.base/java.lang.reflect=ALL-UNNAMED
--add-opens=java.base/java.io=ALL-UNNAMED
--add-opens=java.base/java.util=ALL-UNNAMED
--add-opens=java.base/sun.nio.ch=ALL-UNNAMED
```

Any suggestions? Thanks in advance.
I'm sure the solution to my issue is somewhere in the documentation, but I cannot figure out how to configure a default service that gets used when setting up my global external load balancer on GCP. It might be relevant to mention that all my services behind the load balancer are Cloud Run services.

I have my base URL, say example.com, and a bunch of services that are attached to it. For example, I have an example.com/login and an example.com/api. This part works perfectly fine so far. Now I would like to add a "default" service that gets called when the user accesses the base URL.

What I have (and is working as expected) is something like this:

    gcloud compute network-endpoint-groups create $SERVERLESS_NEG_NAME \
        --region=$REGION \
        --network-endpoint-type=serverless \
        --cloud-run-url-mask="${BASE_URL}/<service>"

    # create backend service
    gcloud compute backend-services create $BACKEND_SERVICE_NAME \
        --load-balancing-scheme=EXTERNAL_MANAGED \
        --global

    # add serverless NEG to backend service
    gcloud compute backend-services add-backend $BACKEND_SERVICE_NAME \
        --global \
        --network-endpoint-group=$SERVERLESS_NEG_NAME \
        --network-endpoint-group-region=$REGION

    # create URL map with just one backend service
    gcloud compute url-maps create $URL_MAP_NAME \
        --default-service=$BACKEND_SERVICE_NAME

How would I add a default service (e.g. default-service) which gets forwarded to when the base URL is accessed?
default service for GCP load balancer
|google-cloud-platform|google-cloud-run|gcp-load-balancer|
The problem in my code was that I was manually calling **tabsContent.SelectPanel(name)**, which ends up invoking **OnSelectedPanelChanged** and **OnSelectedTabChanged**.

Changing the code to the following fixed it:

```
<Row>
    <Column ColumnSize="ColumnSize.IsAuto.OnDesktop">
        <Tabs @ref="@tabs" @bind-SelectedTab="@selectedTab">
            <Items>
                <Tab Name="FirstTab">First tab</Tab>
                <Tab Name="SecondTab">Second Tab</Tab>
                <Tab Name="ThirdTab">Third tab</Tab>
            </Items>
        </Tabs>
    </Column>
</Row>

<TabsContent @bind-SelectedPanel="@selectedTab">
    <TabPanel Name="FirstTab"></TabPanel>
    <TabPanel Name="SecondTab"></TabPanel>
    <TabPanel Name="ThirdTab"></TabPanel>
    ...
</TabsContent>

@code {
    public string selectedTab { get; set; }

    Tabs tabs = default!;
    TabsContent tabsContent = default!;
}
```

Controlling the tabs in code via the **selectedTab** variable was enough. Details in this [github issue][1] from the creator.

[1]: https://github.com/Megabit/Blazorise/issues/5431
    import scala.reflect.runtime.universe._

    object CaseClassUtils {
      // Returns the names of a case class's constructor parameters by
      // collecting its case accessor methods via runtime reflection.
      def productElementNames[T: TypeTag]: Set[String] =
        typeOf[T].members.collect {
          case m: MethodSymbol if m.isCaseAccessor => m.name.toString
        }.toSet
    }

    // Usage:
    // case class Person(name: String, age: Int)
    // CaseClassUtils.productElementNames[Person]  // Set("name", "age")
I have a Pandas DataFrame containing a dataset `D` of instances drawn from a distribution `x`. `x` could be say uniform, or gaussian for example. I want to draw `n` samples from `D` according to some new `target_distribution` that is in general different than `x`. How can I do this efficiently?

Right now, I sample a value `x`, subset `D` such that it contains all `x +- eps` and sample from that. But this is quite slow when the datasets get bigger. People must have come up with a better solution. Maybe the solution is already good but could be implemented more efficiently? I could split `x` into strata, which would be faster, but is there a solution without this?

My current code, which works fine but is slow (1 min for 30k/100k, but I have 200k/700k or so.)

```python
import numpy as np
import pandas as pd
import numpy.random as rnd
from matplotlib import pyplot as plt
from tqdm import tqdm

n_target = 30000
n_dataset = 100000

x_target_distribution = rnd.normal(size=n_target)
# In reality this would be x_target_distribution = my_dataset["x"].sample(n_target, replace=True)

df = pd.DataFrame({
    'instances': np.arange(n_dataset),
    'x': rnd.uniform(-5, 5, size=n_dataset)
})
plt.hist(df["x"], histtype="step", density=True)
plt.hist(x_target_distribution, histtype="step", density=True)

def sample_instance_with_x(x, eps=0.2):
    try:
        return df.loc[abs(df["x"] - x) < eps].sample(1)
    except ValueError:  # fallback if no instance possible
        return df.sample(1)

df_sampled_ = [sample_instance_with_x(x) for x in tqdm(x_target_distribution)]
df_sampled = pd.concat(df_sampled_)

plt.hist(df_sampled["x"], histtype="step", density=True)
plt.hist(x_target_distribution, histtype="step", density=True)
```
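For what it's worth, here is a sketch of one standard speed-up under my own assumptions (plain stdlib rather than pandas, invented sizes): sort the dataset's `x` values once, then binary-search the nearest stored instance for each target draw, so each sample costs O(log n) instead of a full boolean mask over the frame.

```python
import bisect
import random

def sample_nearest(sorted_x, targets):
    """For each target value, return the index (into sorted_x) of the
    nearest stored x -- one bisect per draw instead of a full scan."""
    out = []
    for t in targets:
        i = bisect.bisect_left(sorted_x, t)
        # Compare the neighbours on either side of the insertion point.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(sorted_x)]
        out.append(min(candidates, key=lambda j: abs(sorted_x[j] - t)))
    return out

random.seed(0)
dataset_x = sorted(random.uniform(-5, 5) for _ in range(100_000))
targets = [random.gauss(0, 1) for _ in range(30_000)]
picked = sample_nearest(dataset_x, targets)
```

In pandas terms you would sort the frame by `x` once, run `np.searchsorted` on the sorted column for all targets at once, and take the rows with `.iloc`; the `eps` fallback disappears because a nearest neighbour always exists.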
The problem is fixed! 1. After checking OVH, I discovered that the database was full and had become **Read Only**, preventing access to pages that require data insertion. 2. Identified the table with the maximum size; in my case, I found that the cache table had the largest size. Cleared the cache table to free up space. 3. Restarted the quota on OVH, and verified the database size. 4. Now it's green, with more space, and the pages work fine. 5. Configured cache performance in Drupal and activated cache clearing.
I think I'm on the right track by adding these packages:

- **facebook_app_events: 0.19.1**
- **flutter_login_facebook: 1.9.0**

But I would like to know if anyone has done it with Flutter and the Facebook Pixel. An example of using the Facebook Pixel in Flutter would be appreciated. Thanks for the help!
How to implement the Facebook Pixel in Flutter in 2024 (any example)?
|flutter|facebook|package|facebook-pixel|
I am trying to fit FTIR spectra with other reference spectra.

Input files (paths omitted) and the contents of each .mat file:

    spectra_path = ''
    dict_keys(['__header__', '__version__', '__globals__', 'AB', 'wh'])
    Shape of 'AB': (3301, 117)
    Shape of 'wh': (1, 2)

    wavenumber_path = ''
    dict_keys(['__header__', '__version__', '__globals__', 'wavenumber'])
    Shape of 'wavenumber': (1, 3301)

    refnames_path = ''
    dict_keys(['__header__', '__version__', '__globals__', 'refnames'])
    Shape of 'refnames': (117,)

    input_path = ''
    dict_keys(['__header__', '__version__', '__globals__', 'ab', 'wn'])
    Shape of 'ab': (467, 65537)
    Shape of 'wn': (1, 467)

My code:

    import itertools

    import numpy as np
    import pandas as pd
    import scipy.io as sio

    def create_data_frame(spectra_file, refnames_file, wavenumber_file):
        spectra_mat = sio.loadmat(spectra_file)
        data = spectra_mat['AB'].T
        data = data[:, ::-1]

        wavenumber_mat = sio.loadmat(wavenumber_file)
        wavenumber = wavenumber_mat['wavenumber'][0]
        wavenumber = wavenumber[::-1]

        refnames_mat = sio.loadmat(refnames_file)
        names = refnames_mat['refnames'].flatten()

        wav = wavenumber.tolist()
        names = np.char.array(names).tolist()

        final_data_frame = pd.DataFrame(data, columns=wav)
        final_data_frame['ID'] = names  # File ID or reference names
        return final_data_frame

    final_data_frame = create_data_frame(spectra_path, refnames_path, wavenumber_path)
    print(final_data_frame.shape)
    final_data_frame

    ref_spectra_df = create_data_frame(spectra_path, refnames_path, wavenumber_path)
    wavenumber_columns = [col for col in ref_spectra_df.columns
                          if col != 'ID' and 900 <= float(col) <= 1800]
    ref_spectra = ref_spectra_df[wavenumber_columns]
    print(ref_spectra.shape)

    dat2 = sio.loadmat(refnames_path)
    names = dat2['refnames'].flatten()
    names = [str(name[0]) for name in names]
    names = list(itertools.chain.from_iterable(names)
                 if any(isinstance(i, list) for i in names) else names)

Output:

    (117, 901)

After this, when I perform the regression for fitting, I end up with an error. I suspect there is a mismatch in the shapes of the data but couldn't figure it out.
Regression code snippet:

    from sklearn.linear_model import LinearRegression
    from scipy.interpolate import interp1d
    import pandas as pd
    import numpy as np

    X = df_input

    ref_wavenumbers = np.linspace(1800, 900, ref_spectra.shape[1])
    input_wavenumbers = np.linspace(1800, 900, X.shape[1])

    interpolator = interp1d(ref_wavenumbers, ref_spectra, axis=1, kind='linear')
    ref_spectra_interpolated = interpolator(input_wavenumbers)

    regression_model = LinearRegression(positive=True)
    regression_model.fit(ref_spectra_interpolated.T, X.T)

    coefs = regression_model.coef_
    coefs_df = pd.DataFrame(coefs.T, index=X.index)
    coefs_df.columns = names
    coefs_df = coefs_df.reindex(coefs_df.mean().sort_values(ascending=False).index, axis=1)
    norm = coefs_df.mean()
    coefs_df = coefs_df / norm
    coefs_df = coefs_df.dropna(axis=1, how='any')
    coefs_df.to_csv('CLS fitting results - Serum.csv')

`X = df_input` is the input DataFrame. Its size is (256, 256, 467), which I reorganized as (256*256, 467), i.e. (65536, 467), so I further modified the regression code to iterate over each pixel.
    regression_model = LinearRegression(positive=True)
    coefs = np.zeros((X.shape[0], ref_spectra_interpolated.shape[1]))

    for i in range(X.shape[0]):
        if isinstance(X, pd.DataFrame):
            y = X.iloc[i, :].values
        else:
            y = X[i, :]
        regression_model.fit(ref_spectra_interpolated, y.reshape(-1, 1))
        coefs[i, :] = regression_model.coef_

This raises:

    ValueError                                Traceback (most recent call last)
    <ipython-input-35-fdd8039b578c> in <cell line: 6>()
          9     else:
         10         y = X[i, :]
    ---> 11     regression_model.fit(ref_spectra_interpolated, y.reshape(-1, 1))
         12     coefs[i, :] = regression_model.coef_

    3 frames
    /usr/local/lib/python3.10/dist-packages/sklearn/utils/validation.py in check_consistent_length(*arrays)
        395     uniques = np.unique(lengths)
        396     if len(uniques) > 1:
    --> 397         raise ValueError(
        398             "Found input variables with inconsistent numbers of samples: %r"
        399             % [int(l) for l in lengths]

    ValueError: Found input variables with inconsistent numbers of samples: [117, 467]

**Additional Information**

    Final DataFrame shape: (117, 3302)
       4000     3999     3998     3997     3996     3995     3994     3993 \
    0   0.0 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.010017
    1   0.0 0.076237 0.075786 0.092208 0.083860 0.089052 0.084329 0.093639
    2   0.0 0.006262 0.009665 0.011625 0.016771 0.020251 0.021349 0.016950
    3   0.0 0.073635 0.108422 0.166655 0.219240 0.290049 0.360189 0.441440
    4   0.0 0.293801 0.386299 0.399522 0.278570 0.125340 0.022092 0.000000
           3992     3991 ...      708      707      706      705      704 \
    0  0.006314 0.031430 ... 0.000000 0.000000 0.000000 0.000000 0.000000
    1  0.101848 0.128834 ... 0.000000 0.000000 0.000000 0.000000 0.000000
    2  0.010785 0.004299 ... 0.006163 0.004709 0.003217 0.001170 0.000319
    3  0.519642 0.607271 ... 0.000000 0.000000 0.000000 0.000000 0.000000
    4  0.001895 0.040333 ... 0.019329 0.022445 0.023159 0.017931 0.006998
            703      702      701  700  ID
    0  0.000000 0.000000 0.000000  0.0 ATP
    1  0.000000 0.000000 0.000000  0.0 Acid-phosphatase
    2  0.000000 0.000000 0.000185  0.0 Actin
    3  0.000000 0.000000 0.000000  0.0 Adenine
    4  0.003938 0.004616 0.022361  0.0 Ala-Phe
    [5 rows x 3302 columns]

    Reference Spectra shape: (117, 901)
       1800     1799     1798     1797     1796     1795     1794 \
    0  0.398656 0.400172 0.401489 0.404073 0.408325 0.414702 0.421624
    1  0.690701 0.688172 0.684635 0.681072 0.677156 0.673283 0.669346
    2  0.191147 0.197091 0.203515 0.210285 0.216416 0.223856 0.233328
    3  0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
    4  0.090553 0.101870 0.105062 0.107536 0.110965 0.119128 0.127208
           1793     1792     1791 ...      909      908      907      906 \
    0  0.429084 0.437233 0.447926 ... 0.003007 0.003197 0.003463 0.003506
    1  0.666719 0.665631 0.666432 ... 0.002496 0.002457 0.002665 0.002789
    2  0.245261 0.257144 0.268808 ... 0.000177 0.000900 0.002059 0.002893
    3  0.000000 0.000000 0.000000 ... 0.055016 0.057460 0.060365 0.061447
    4  0.139671 0.156604 0.181457 ... 0.000000 0.000000 0.000000 0.000000
            905      904      903      902      901      900
    0  0.003749 0.004070 0.004753 0.005364 0.006069 0.006608
    1  0.003047 0.003060 0.003115 0.002932 0.002834 0.002494
    2  0.003389 0.003392 0.003278 0.002821 0.002229 0.001372
    3  0.060328 0.056912 0.053430 0.050287 0.048376 0.046766
    4  0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
    [5 rows x 901 columns]

    Names length: 117
    First few names: ['A', 'A', 'A', 'A', 'A']
    Interpolated Reference Spectra shape: (117, 467)
    Coefficients shape: (65537, 117)

[`plt.plot(input_wavenumbers, ref_spectra_interpolated[:, i], label=name)`][1]

[1]: https://i.stack.imgur.com/V1QNI.png
Could you try to be a bit more specific? If you want to dynamically obtain a file given some key, or match your keys with keys present in the bucket you can list the items in the bucket then just filter a list of strings with regex. This is the most naive approach, and better solutions can be implemented specific to your task. You can split the key of each file at some character (for example `/`). In pseudocode (assuming you have boto3 setup in the environment you are executing your python code): ``` import boto3 ### # Necessary boto3 setup and auth would normally be here ### s3_client = boto3.client("s3") objects = s3_client.list_objects_v2(Bucket=bucket_name)["Contents"] for file in objects: # file key is the URI string # if the names do not match check with a debugger if the URI # looks like you would expect it to look file_key: str = file["Key"] key_split = file_key.split('/') # ensure the file_key is unquoted ``` If you have already defined the **exact** keys inside the curly brackets, then to download a json file from s3 you would ``` file_content = s3_client.get_object(Bucket=bucket_name, Key=file_key) ## process the data accordingly ``` but again, the intent of your question is missing here. Could you try to provide a minimal (at least theoretical) working example of what you would like to accomplish?
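A boto3-free sketch of that filtering step, with the bucket keys below invented purely for illustration:

```python
import re

# Hypothetical keys as they might come back from list_objects_v2.
keys = [
    "exports/2024/01/report.json",
    "exports/2024/02/report.json",
    "logs/2024/01/app.log",
]

# Option 1: split on '/' and match on path components.
json_keys = [k for k in keys if k.split("/")[0] == "exports"]

# Option 2: a regex over the whole key.
pattern = re.compile(r"^exports/\d{4}/\d{2}/.+\.json$")
matched = [k for k in keys if pattern.match(k)]
```

Either filtered list can then be fed to `get_object` one key at a time, exactly as in the snippet above.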
Illegal access: this web application instance has been stopped already. Could not load [org.apache.logging.log4j.message.SimpleMessage]
|java|spring|spring-boot|hibernate|tomcat|
>Access to the path '/https/webapi-docker-demo.pfx' is denied.

It is the path "inside the container", so what you need is to make sure the user inside the container has permission to access that path. If you change your Dockerfile like below, it will work:

```
FROM mcr.microsoft.com/dotnet/aspnet:8.0-jammy AS base
# USER app

RUN mkdir /https
RUN chmod 777 /https

WORKDIR /app
...
...
# USER $APP_UID
```
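If you'd rather not make the directory world-writable, a tighter sketch (assuming the non-root `app` user and the `APP_UID` variable that the .NET 8 base images define) is to `chown` the directory to that user and keep the `USER` instruction:

```
FROM mcr.microsoft.com/dotnet/aspnet:8.0-jammy AS base

# Create the certificate directory as root, then hand ownership
# to the image's non-root application user.
RUN mkdir /https && chown $APP_UID /https

WORKDIR /app

# Drop privileges once the directory is owned by the app user.
USER $APP_UID
```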
I have a Symfony 6.4 application that uses a few custom bundles, some of which include JavaScript files. The source files are in src/MyBundle/Resources/public/js/. I use the assets:install command to compile them into the public/bundles/ folder. My issue is that when I make changes to these JS files, then run assets:install again, the filenames don't change and so user browsers tend to keep using cached files. How can I bust the cache on bundle assets? I already use Webpack to generate file hashes for my application assets, but I can't get it to work on files in my bundles.
What's the solution for cache busting on Symfony 6 bundle assets?
|symfony|webpack|caching|
I have a table I'm trying to sort by column values. It looks something like this:

|CaseID|Stage|EventDate|
|------|-----|---------|
|1 |A |01/01/10 |
|1 |B |01/03/10 |
|1 |B |01/04/10 |
|1 |C |01/05/10 |
|2 |A |02/01/10 |
|2 |B |02/02/10 |
|2 |C |02/03/10 |
|2 |C |02/05/10 |

I'm trying to organize the data by Stage so that only the latest EventDate shows, something like this:

|CaseID|A |B |C |
|------|--------|--------|--------|
|1 |01/01/10|01/04/10|01/05/10|
|2 |02/01/10|02/02/10|02/05/10|

I wrote a GROUP BY statement:

```
SELECT CaseID
    ,CASE WHEN Stage = 'A' THEN MAX(EventDate) END AS A
    ,CASE WHEN Stage = 'B' THEN MAX(EventDate) END AS B
    ,CASE WHEN Stage = 'C' THEN MAX(EventDate) END AS C
FROM StageTable
GROUP BY CaseID, Stage
```

But this returned too many rows with NULL placeholders:

|CaseID|A |B |C |
|------|--------|--------|--------|
|1 |01/01/10|NULL |NULL |
|1 |NULL |01/04/10|NULL |
|1 |NULL |NULL |01/05/10|
|2 |02/01/10|NULL |NULL |
|2 |NULL |02/02/10|NULL |
|2 |NULL |NULL |02/05/10|

I'd like each row to condense, but I don't know where I went wrong. I've seen other questions with similar issues, but they all seemed to involve joined tables showing duplicate results. Any suggestions would be helpful.
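As an aside, the pivot pattern being attempted here (conditional aggregation) only collapses to one row per case when the CASE sits *inside* the aggregate and `Stage` is left out of the GROUP BY. A self-contained sketch using Python's built-in sqlite3 rather than the asker's (unnamed) database, with ISO dates so that MAX over the text column orders correctly:

```python
import sqlite3

# In-memory demo table mirroring the post's sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE StageTable (CaseID INT, Stage TEXT, EventDate TEXT)")
conn.executemany(
    "INSERT INTO StageTable VALUES (?, ?, ?)",
    [
        (1, "A", "2010-01-01"), (1, "B", "2010-01-03"),
        (1, "B", "2010-01-04"), (1, "C", "2010-01-05"),
        (2, "A", "2010-02-01"), (2, "B", "2010-02-02"),
        (2, "C", "2010-02-03"), (2, "C", "2010-02-05"),
    ],
)

# MAX(CASE ...) per stage, grouped by CaseID only, yields one row per case;
# grouping by Stage as well is what produced the NULL-padded extra rows.
rows = conn.execute("""
    SELECT CaseID,
           MAX(CASE WHEN Stage = 'A' THEN EventDate END) AS A,
           MAX(CASE WHEN Stage = 'B' THEN EventDate END) AS B,
           MAX(CASE WHEN Stage = 'C' THEN EventDate END) AS C
    FROM StageTable
    GROUP BY CaseID
    ORDER BY CaseID
""").fetchall()

for row in rows:
    print(row)
# (1, '2010-01-01', '2010-01-04', '2010-01-05')
# (2, '2010-02-01', '2010-02-02', '2010-02-05')
```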
Group By Clause Giving Too Many Return Rows
|sql|group-by|null|
|reactjs|next.js|next.js13|
Why do I get error "Value does not fall within the expected range." # Path to the shortcut you want to change $shortcutPath = "C:\folder with space\terminal.exe - Portable.lnk" # New EXE target path $newTargetPath = """C:\folder with space\terminal.exe"" /portable" $shell = New-Object -ComObject WScript.Shell $Location = "C:\folder with space\" $shortcut = $shell.CreateShortcut("$Location\terminal.exe - Portable.lnk") $shortcut.TargetPath = $newTargetPath #$shortcut.IconLocation = "shell32.dll,21" $shortcut.Save()
I am exporting a CSV file with 2 million records and 150+ columns. It is using a lot of RAM, around 30-40 GB. What is the best way to write this data to a CSV file?

```csharp
using (Stream stream = File.OpenWrite(fileInfo))
{
    stream.SetLength(0);
    using (StreamWriter writer = new StreamWriter(stream))
    {
        writer.WriteLine(string.Join(Seperator, newColumnList.ToArray()));
        foreach (DataRow row in dataTable.Rows)
        {
            IEnumerable<string> fields = null;
            if (fileextension.ToLower() == "txt")
            {
                fields = row.ItemArray.Select(field => field.ToString());
            }
            else
            {
                fields = row.ItemArray.Select(field =>
                    string.Concat("\"", field.ToString().Replace("\"", "\"\""), "\""));
            }
            writer.WriteLine(String.Join(Seperator, fields));
        }
        writer.Flush();
    }
}
```