I'm trying to get my first Laravel project running, and I'm using SQL Server. I've just installed sqlsrv and my normal connections work, but in Laravel I get the error "could not find driver" when I run the development server with `artisan serve`.
My php.ini shows that sqlsrv is enabled:
sqlsrv
sqlsrv support enabled
ExtensionVer 5.11.1+17301
Directive Local Value Master Value
sqlsrv.ClientBufferMaxKBSize 10240 10240
sqlsrv.LogSeverity 0 0
sqlsrv.LogSubsystems 0 0
sqlsrv.WarningsReturnAsErrors On On
My .env looks like this:
DB_CONNECTION=sqlsrv
DB_HOST=192.168.xx.xx
DB_PORT=1433
DB_DATABASE=webdev_laravel
DB_USERNAME=webdev_applikation
DB_PASSWORD=password
I tried:
- restarting the server
- clearing the cache
I read a lot of other threads about the same error, but I don't want to use PDO, and I think it's optional, or isn't it?
Thank you!
|
PHP Laravel SQLSRV could not find driver |
|php|laravel| |
A quick aside: we can cast back and forth from one composite type to another, with restrictions:
/** create a TYPE **/
CREATE TYPE tyNewNames AS ( a int, b int , c int ) ;
SELECT
/** note: order, type & # of columns must match exactly**/
ROW((rec).*)::tyNewNames AS "rec newNames" -- f1,f2,f3 --> a,b,c
, (ROW((rec).*)::tyNewNames).* -- expand the new names
,'<new vs. old>' AS "<new vs. old>"
,*
FROM
(
SELECT
/** inspecting rec: PG assigned stand-in names f1, f2, f3, etc... **/
rec /* a record*/
,(rec).* -- expanded fields f1, f2, f3
FROM (
SELECT ( 1, 2, 3 ) AS rec -- an anon type record
) cte0
)cte1
;
+------------+-+-+-+-------------+--------+--+--+--+
|rec         |a|b|c|<new vs. old>|rec     |f1|f2|f3|
|newnames    | | | |             |oldnames|  |  |  |
+------------+-+-+-+-------------+--------+--+--+--+
|(1,2,3)     |1|2|3|<new vs. old>|(1,2,3) |1 |2 |3 |
+------------+-+-+-+-------------+--------+--+--+--+
A compressed example of this code might look like this:
SELECT ( ( ROW( (rec).* ) )::tyNewNames ).* ;
db fiddle(uk)
[https://dbfiddle.uk/dlTxd8Y3][1]
However, as Erwin points out, except in a few cases, ROW() is a noise word. It's also possible to simply recast the record on the fly with the new field names, without the nested CTEs:
/** create a TYPE **/
CREATE TYPE tyNewNames AS ( a int, b int , c int ) ;
SELECT ( 1, 2, 3 ) -- an anon type record
, (ROW( 1, 2, 3 )).* -- anon field names f1, f2, f3 with ROW() wrapper
, ( ( 1, 2, 3 )).* -- anon field names f1, f2, f3 w/out ROW() wrapper
/** cast to new names OTF , f1,f2,f3 --> a,b,c **/
, ( ROW( 1 ,2 ,3 )::tyNewNames ).* -- row() wrapper
, ( ( 1 ,2 ,3 )::tyNewNames ).* -- no row() wrapper
;
+-------+--+--+--+--+--+--+-+-+-+-+-+-+
|row    |f1|f2|f3|f1|f2|f3|a|b|c|a|b|c|
+-------+--+--+--+--+--+--+-+-+-+-+-+-+
|(1,2,3)|1 |2 |3 |1 |2 |3 |1|2|3|1|2|3|
+-------+--+--+--+--+--+--+-+-+-+-+-+-+
[https://dbfiddle.uk/OPJcuzs7][2]
[1]: https://dbfiddle.uk/dlTxd8Y3
[2]: https://dbfiddle.uk/OPJcuzs7 |
The answer above is correct. Since Angular 17, by default you don't need to import anything into a module. All the necessary modules and components are imported directly into the components, because the `standalone: true` option is now the default. But if you want the behaviour of the old versions, you can always set it to false. |
Android Studio Using recently added resources in compose preview in multi-module project |
The matrix you've posted is symmetric, and real-valued. (In other words, `A = A.T`, and it has no complex numbers.) This matters because all matrices which are symmetric and real-valued are [normal matrices](https://en.wikipedia.org/wiki/Normal_matrix). [Source](https://en.wikipedia.org/wiki/Symmetric_matrix#Symmetry_implies_normality). If the matrix is normal, then any polar decomposition of it follows `P @ U = U @ P`. [Source](https://math.stackexchange.com/questions/3038582/prove-that-the-polar-decomposition-of-normal-matrices-a-su-is-such-that-su).
Any square diagonal matrix is also symmetric. [Source](https://en.wikipedia.org/wiki/Diagonal_matrix#Properties). However, technically the matrix you have posted is not diagonal - it has entries outside its main diagonal. The matrix is only [tri-diagonal](https://en.wikipedia.org/wiki/Tridiagonal_matrix). These matrices are not necessarily symmetric. However, if your tridiagonal matrix is symmetric and real-valued, then its polar decomposition is commutative.
In addition to mathematically proving this idea, you can also check it experimentally. The following code generates thousands of matrices, and their polar decompositions, and checks if they are commutative.
```
import numpy as np
from scipy.linalg import polar

N = 4
iterations = 10000

for i in range(iterations):
    A = np.random.randn(N, N)
    # A = A + A.T
    U, P = polar(A)
    are_equal = np.allclose(U @ P, P @ U)
    if not are_equal:
        print("Matrix A does not have commutative polar decomposition!")
        print("Value of A:")
        print(A)
        break
    if (i + 1) % (iterations // 10) == 0:
        print(f"Checked {i + 1} matrices, all had commutative polar decompositions")
```
If you run this, it will immediately find a counter-example, because the matrix is not symmetric. However, if you uncomment `A = A + A.T`, which forces the random matrix to be symmetric, then all of the matrices work.
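As a quick standalone sketch of just the symmetric case, using the same NumPy/SciPy tools as above (the seed and matrix size are arbitrary choices), you can verify a single symmetrized matrix directly:

```python
import numpy as np
from scipy.linalg import polar

# Build one random matrix and symmetrize it, making it normal
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = A + A.T

# Right polar decomposition: A = U @ P
U, P = polar(A)

# The factorization holds, and because A is normal the factors commute
assert np.allclose(A, U @ P)
assert np.allclose(U @ P, P @ U)
print("Polar factors of the symmetric matrix commute")
```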
Lastly, if you need a left-sided polar decomposition, you can use `polar(A, side='left')` to get that. The [documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.polar.html) explains how to do this. |
I have a problem showing the Lifetime amount using the *ALL()* or *REMOVEFILTERS()* functions in my report.
I have created a new date table as a slicer that includes **Yesterday, Last 7 Days, Same Day Last Month, All, Custom**, etc., and it works perfectly. But when I select any slicer value other than **"All"**, Lifetime Revenue does not show the correct amount.
Since the new slicer table contains duplicate dates, the cross-filter direction on the relationship is not single, as shown below.
[![enter image description here][1]][1]
The DAX query I've used to get Lifetime Revenue:
Revenue_Lifetime =
CALCULATE(
SUM(Revenue[Revenue]),
ALL('Date Periods'[Date],'Date Periods'[Type])
)
Here is a sample report that shows the problem:
[SampleReport][1]
Thank you,
Kisa
[1]: https://wetransfer.com/downloads/5166e4fe6803a1ceafff5554f32f1a2620240330183511/5ca780 |
To achieve what you're looking for, you need to listen for a few different events.
Here's a code snippet I wrote based on your CodePen. It detects the events you'll need to look for:
1. Window Resize
2. Image Scaling up/down
3. Extra check: checking if the image itself was resized (if you add the ability later in your code)
```
document.addEventListener('DOMContentLoaded', function() {
  // Get the reference to the div container
  const mapContainer = document.getElementById('map1');
  // Find the image inside the container
  const imageElement = mapContainer.querySelector('img');

  // Add event listener to the window resize event
  window.addEventListener('resize', function() {
    console.log('Window resized');
    // Detect if the image was scaled
    detectImageScaling();
  });

  // Add event listener to image resize events
  const imageResizeObserver = new ResizeObserver(entries => {
    for (let entry of entries) {
      console.log('Image resized');
    }
  });
  // Start observing the image element
  imageResizeObserver.observe(imageElement);

  // Function to detect scaling changes of your map1 img
  function detectImageScaling() {
    const currentWidth = imageElement.clientWidth;
    const currentHeight = imageElement.clientHeight;
    // Check if the width or height has changed since the last check
    // (dataset values are strings, so convert them before comparing)
    if (Number(imageElement.dataset.prevWidth) !== currentWidth || Number(imageElement.dataset.prevHeight) !== currentHeight) {
      if (imageElement.dataset.prevWidth && imageElement.dataset.prevHeight) {
        const scalingFactorX = currentWidth / imageElement.dataset.prevWidth;
        const scalingFactorY = currentHeight / imageElement.dataset.prevHeight;
        if (scalingFactorX > 1 || scalingFactorY > 1) {
          console.log('Image scaled up');
        } else if (scalingFactorX < 1 || scalingFactorY < 1) {
          console.log('Image scaled down');
        }
      }
      // Update previous width and height
      imageElement.dataset.prevWidth = currentWidth;
      imageElement.dataset.prevHeight = currentHeight;
    }
    // Schedule this function again to keep detecting scaling changes
    requestAnimationFrame(detectImageScaling);
  }
});
``` |
I'm trying to debug after successfully compiling the DLL to use in the Godot engine. I configured my launch.vs.json:
{
  "version": "0.2.1",
  "defaults": {},
  "configurations": [
    {
      "type": "default",
      "project": "C:\\dev\\my\\godot\\Godot_v4.2.1-stable_win64.exe",
      "name": "Godot Game",
      "args": [ "--path", "C:\\dev\\my\\godot\\godotcpp\\test1\\godot-cpp-template\\demo" ]
    },
    {
      "type": "default",
      "project": "C:\\dev\\my\\godot\\Godot_v4.2.1-stable_win64.exe",
      "name": "Godot Editor",
      "args": [ "C:\\dev\\my\\godot\\godotcpp\\test1\\godot-cpp-template\\demo\\project.godot" ]
    }
  ]
}
Godot_v4.2.1-stable_win64.exe is in the correct path:
[![enter image description here][1]][1]
I can see the debug entries in Visual Studio:
[![enter image description here][2]][2]
but when I click one of them I'm getting this error:
[![enter image description here][3]][3]
And it's not read only:
[![enter image description here][4]][4]
[1]: https://i.stack.imgur.com/sS5yb.png
[2]: https://i.stack.imgur.com/Rl3H6.png
[3]: https://i.stack.imgur.com/4k8nO.png
[4]: https://i.stack.imgur.com/P32JG.png |
I have an OpenCart 2.3 store and I have downloaded an extension that shows all products on one page. It creates a category called "All" and displays all products in that category. It's all working OK, but I would like to have the product filter displayed in the left column so it's the same as the other categories. For example, the category here https://www.beechwoodsolutions.co.uk/sites/simply-heavenly-foods/index.php?route=product/category&path=271 has the product filter in the left column. On the all-products category page here https://www.beechwoodsolutions.co.uk/sites/simply-heavenly-foods/index.php?route=product/category&path=-1 I would like to have the product filter displayed. The extension I downloaded is https://www.opencart.com/index.php?route=marketplace/extension/info&extension_id=29713. I have contacted the developer, but I don't think they are active anymore, so I'm seeing if anyone is able to help, please |
Display filter on all products category like the other category pages in opencart 2.3 |
|php|opencart|opencart2.3| |
When I click MATLAB Registration (using the NFRI function), MATLAB shows "MATLAB has encountered an internal problem and needs to close".
This is the error detail:
```
------------------------------------------------------------------------
Access violation detected at Sun Mar 31 11:08:39 2024
------------------------------------------------------------------------
Configuration:
Crash Decoding : Disabled
Default Encoding : GBK
MATLAB Architecture: win64
MATLAB Root : E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013
MATLAB Version : 8.2.0.701 (R2013b)
Operating System : Microsoft Windows 8
Processor ID : x86 Family 175 Model 80 Stepping 0, AuthenticAMD
Virtual Machine : Java 1.7.0_11-b21 with Oracle Corporation Java HotSpot(TM) 64-Bit Server VM mixed mode
Window System : Version 6.2 (Build 9200)
Fault Count: 2
Abnormal termination:
Illegal instruction
Register State (from fault):
RAX = 0000000096366000 RBX = 0000000096367000
RCX = 00000000043ef6b0 RDX = 0000000096797800
RSP = 00000000043ef4c0 RBP = 00000000043eff80
RSI = 00000000043eff88 RDI = 00000000043eff80
R8 = 0000000000000080 R9 = 0000000096367000
R10 = 0000000000000780 R11 = 0000000000000280
R12 = 0000000000000004 R13 = 0000000000000048
R14 = 0000000000000004 R15 = 0000000000000004
RIP = 00000001317805b2 EFL = 00010212
CS = 0033 FS = 0053 GS = 002b
Stack Trace (from fault):
[ 0] 0x00000001317805b2 E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\mkl.dll+24642994 xerbla+22640098
[ 1] 0x0000000130dbd484 E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\mkl.dll+14406788 xerbla+12403892
[ 2] 0x0000000130db2db8 E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\mkl.dll+14364088 xerbla+12361192
[ 3] 0x00000001300d10e5 E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\mkl.dll+00856293 mkl_cbwr_set+00472453
[ 4] 0x0000000130019d5a E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\mkl.dll+00105818 dgemm+00000362
[ 5] 0x000000005d0eac3b E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\libmwmathlinalg.dll+00437307 UMFPACK_ComplexLUFactor::extractFactors+00103211
[ 6] 0x000000005d0eb21d E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\libmwmathlinalg.dll+00438813 mfMatrixMult+00000125
[ 7] 0x000000005d0ecec3 E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\libmwmathlinalg.dll+00446147 mfMatrixMult+00007459
[ 8] 0x000000005d0ecfd6 E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\libmwmathlinalg.dll+00446422 mfMatrixMult+00007734
[ 9] 0x000000005d0ed1e3 E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\libmwmathlinalg.dll+00446947 mfMatrixMult+00008259
[ 10] 0x000000018000d7ff E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\m_dispatcher.dll+00055295 Mfh_file::dispatch_fh+00001167
[ 11] 0x000000018000ddb7 E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\m_dispatcher.dll+00056759 Mfunction_handle::dispatch+00000487
[ 12] 0x0000000011c268a7 E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\m_interpreter.dll+00420007 inFunctionHandleInterface::DestroyWorkspace+00242727
[ 13] 0x0000000011c29b38 E:\Matlab2013b\Mingchuan Matlab2013B\matlab2013\bin\win64\m_interpreter.dll+00432952 inFunctionHandleInterface::DestroyWorkspace+00255672
```
1. I uninstalled and reinstalled, but the problem still occurs.
Expected result:
MATLAB runs successfully |
ALL() and REMOVEFILTERS() Don't Work Correctly in My Report |
|powerbi|dax| |
I use hibernate for a desktop application and the database server is in another country.
Unfortunately, connection problems are very common at the moment.
**These are excerpts from the log file on the database server:**
1. 2024-03-19 14:08:42 2378 [Warning] Aborted connection 2378 to db: 'CMS_DB' user: 'JOHN' host: 'bba-83-130-102-145.alshamil.net.ae' ( Got an error reading communication packets)
2. 2024-03-19 13:44:45 1803 [Warning] Aborted connection 1803 to db: 'CMS_DB' user: 'REMA' host: '188.137.160.92' (Got timeout reading communication packets)
3. 2024-03-19 11:51:08 1526 [Warning] Aborted connection 1526 to db: 'unconnected' user: 'unauthenticated' host: '92.216.164.102' (Got an error reading packet communications)
4. 2024-03-19 11:51:08 1526 [Warning] Aborted connection 1526 to db: 'unconnected' user: 'unauthenticated' host: '92.216.164.102' (This connection closed normally without authentication)
5. 2024-03-19 11:55:26 1545 [Warning] IP address '94.202.229.78' could not be resolved: No such host is known.
**In addition, these error messages often appear on the client-side:**
javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: Unable to acquire JDBC Connection
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:154)
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1542)
at org.hibernate.query.Query.getResultList(Query.java:165)
at
**Also this:**
Caused by: java.sql.SQLTransactionRollbackException: (conn=9398) Deadlock found when trying to get lock; try restarting transaction
at org.mariadb.jdbc.internal.util.exceptions.ExceptionFactory.createException(ExceptionFactory.java:76)
at org.mariadb.jdbc.internal.util.exceptions.ExceptionFactory.create(ExceptionFactory.java:153)
at org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(MariaDbStatement.java:274)
at org.mariadb.jdbc.ClientSidePreparedStatement.executeInternal(ClientSidePreparedStatement.java:229)
at org.mariadb.jdbc.ClientSidePreparedStatement.execute(ClientSidePreparedStatement.java:149)
at org.mariadb.jdbc.ClientSidePreparedStatement.executeUpdate(ClientSidePreparedStatement.java:181)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:1502)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:197)
... 41 more
Caused by: org.mariadb.jdbc.internal.util.exceptions.MariaDbSqlException: Deadlock found when trying to get lock; try restarting transaction
at org.mariadb.jdbc.internal.util.exceptions.MariaDbSqlException.of(MariaDbSqlException.java:34)
at
So far I had the following c3p0 configuration in my hibernate.cfg.xml.
```
<!-- Related to the connection START -->
<property name="connection.driver_class">org.mariadb.jdbc.Driver</property>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<!-- Related to Hibernate properties START -->
<property name="hibernate.connection.driver_class">org.mariadb.jdbc.Driver</property>
<property name="hibernate.show_sql">false</property>
<property name="hibernate.format_sql">false</property>
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.temp.use_jdbc_metadata_defaults">false</property>
<property name="hibernate.generate_statistics">true</property>
<property name="hibernate.enable_lazy_load_no_trans">true</property>
<!-- c3p0 Setting -->
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.min_size">4</property>
<property name="hibernate.c3p0.max_size">15</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">20</property>
<property name="hibernate.c3p0.acquire_increment">3</property>
<property name="hibernate.c3p0.idle_test_period">100</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.unreturnedConnectionTimeout">30</property>
<property name="hibernate.c3p0.debugUnreturnedConnectionStackTraces">true</property>
```
Can someone look into whether the values for the remote connection make sense? Any change recommendation is warmly welcomed!
Thanks in advance! |
I have this data; the id here (lr_01HT89SX627dFDPCAXBS4H2T9d) is dynamic.
https://www.example.com/api/pub/v2/verifications/lr_01HT89SX627dFDPCAXBS4H2T9d
I need to extract the id (lr_01HT89SX627dFDPCAXBS4H2T9d) from https://www.example.com/api/pub/v2/verifications/; the id will be different for every response.
Header: HTTP/2 201
Header: date: Sat, 30 Mar 2024 18:26:29 GMT
Header: content-length: 0
Header: location: https://www.example.com/api/pub/v2/verifications/lr_01HT89SX627dFDPCAXBS4H2T9d
Header: cf-cache-status: DYNAMIC
I can't figure out how to do it. |
I have three email accounts; one of them seems to have been deleted and I don't know why. All three were connected, but I can't get into the other two because I don't have my old phone, and I don't have the two-step codes that I need to get into one of them. What application can I use to access my Gmail account?
I've only tried, on the Google site, the ways to access these accounts, and nothing has worked |
How to access a Gmail account |
|gmail| |
Flextable or dplyr R - change order of rows |
|r|dplyr|row|flextable| |
There is more to this answer. In the specific case of 1999.99, the two functions give the same numeric result. However, as others have noted, intval returns an int and floor returns a float. (Per the PHP manual, floor returns a float because the value range of float is bigger than that of int.) Also, as others have noted, intval simply chops off the decimal, where floor returns the next lowest integer, which impacts negative numbers.
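To see the negative-number difference concretely, here is a small sketch; it is written in Python purely as an analogy, since Python's `int()` truncates toward zero like `intval` and `math.floor` behaves like PHP's `floor`:

```python
import math

value = -1999.99

# Truncation chops off the decimal part (rounds toward zero)
print(int(value))         # -1999

# floor always moves down to the next lower integer
print(math.floor(value))  # -2000

# On positive numbers the two agree
print(int(1999.99), math.floor(1999.99))  # 1999 1999
```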
The remaining difference is that intval accepts many more variable types and attempts to return a sensible answer, where floor expects only an int or float. For example, if false or true is fed into floor, you will get an error, but intval will convert them to 0 and 1 respectively. |
[See the image for how it is getting](https://i.stack.imgur.com/222LJ.png)
[My installed plugins in WP](https://i.stack.imgur.com/qbClG.png)
I want to remove the "span" element from the add-to-cart product title. I provided both images: one showing the problem, and the other showing my installed plugins.
I think it's in the Boostify Header Footer Builder (the client says that). |
I found a "span" element in the product title after adding to the cart. How do I solve this? |
|wordpress|woocommerce| |
I'm asking how labels are managed by C.
EX:
    int a = atoi(argv[1]);
    if(a>3)
        lbl: printf("A");
    printf("B");
    goto lbl;
is "equivalent" to
    if(a>3)
    {
        // label handled as an instruction
        lbl: ;
    }
    printf("A");
    printf("B");
    goto lbl;
or to
    if(a>3)
    {
        // label handled like a preprocessor directive
        lbl:
        printf("A");
    }
    printf("B");
    goto lbl;
The code does no actual work, but it is useful to highlight the question.
Tests on GCC suggest that the 2nd way is how it's handled, but I'm not sure whether that's to be expected or whether it's undefined behaviour |
Visual Studio C++ Access to path is denied |
Here is an example of how to break a string into multiple lines in a `@dataclass` for use with Qt style sheets. Note the indentation and the trailing backslashes:
```
@dataclass
class button_a:
    i: str = "QPushButton:hover {background: qradialgradient(cx:0, cy:0, radius:1,fx:0.5,fy:0.5,stop:0 white, stop:1 green);"\
        "color:qradialgradient(cx:0, cy:0, radius:1,fx:0.5,fy:0.5,stop:0 yellow, stop:1 brown);"\
        "border-color: purple;}"
```
|
Running this should work:
gphoto2 --set-config eosremoterelease="2" --wait-event=1s --set-config eosremoterelease="4"
|
I am trying to run an ionic/angular app locally using an Android emulator.
I receive the error below when using "ionic capacitor run android".
Any ideas?
Thanks
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/2IvZj.png |
Unable to run ionic app on Android emulator - [error] ADBs is unresponsive after 5000ms |
|android|angular|ionic-framework| |
After drop_na it shows 0 obs. of 68 variables:
[![enter image description here][1]][1]
[![enter image description here][2]][2]
[![enter image description here][3]][3]
After drop_na I don't see any results in the table except dates. This was not the case when I tried it before; I could see the values in the table.
```r
library(tidyverse)
WDI_GDP <- read_csv("C:/Users/ASYA/Desktop/P_Data_Extract_From_World_Development_Indicators/b0351889-13b3-4cbe-a5c0-a2dd9d633eab_Data.csv")
WDI_GDP <- WDI_GDP %>%
  mutate(across(contains("[YR"), ~na_if(.x, ".."))) %>%
  mutate(across(contains("[YR"), as.numeric))
WDI_GDP <- drop_na(WDI_GDP)
```
[1]: https://i.stack.imgur.com/oqjo8.png
[2]: https://i.stack.imgur.com/elrGR.png
[3]: https://i.stack.imgur.com/txJEv.png |
I'm trying to use javascript to place an image overlay over an existing HTML image when the user clicks on it. I have that part working, but am wondering if it is possible to reposition the superimposed images when the window resizes so they maintain their position relative to the base image. I'd also like to scale the inserted images on window resize. I have a little codepen here: <https://codepen.io/Matt-Denko/pen/abxLqrR>.
<!-- begin snippet: js hide: false console: false babel: false -->
<!-- language: lang-js -->
window.addEventListener("load", function () {
document.getElementById('myImg').onclick = function(event) {
var i = new Image();
i.src = 'https://picsum.photos/30/30?grayscale';
i.style.position = "absolute";
var yOffset = event.clientY;
var xOffset = event.clientX;
// Apply offsets for CSS margins etc.
yOffset -= 148;
xOffset -= 40;
// Set image insertion coordinates
i.style.top = yOffset + 'px';
i.style.left = xOffset + 'px';
// Append the image to the DOM
document.getElementById('map1').appendChild(i);
}
});
<!-- language: lang-css -->
.responsive {
position: relative;
max-width: 1200px;
width: 100%;
height: 100%;
}
<!-- language: lang-html -->
<div class="container">
<div style="height: auto; padding: 3px 0 3px 0;">
<div class="card">
<div class="card_content">
<h1>Map PNG Testing</h1>
<div id="map1" class="responsive">
<img src="https://picsum.photos/seed/picsum/200/300" id="myImg" class="responsive">
</div>
</div>
</div>
</div>
</div>
<!-- end snippet -->
|
matlab has encountered an internal problem needs to close |
|matlab| |
The problem you're running into is a square-peg-in-a-round-hole situation caused by the way Livewire handles scripts in its component files. Livewire gets confused when it sees standard JavaScript mixed in with its own system directly in the blade files.
When you put your script tag directly in the blade with @script, Livewire tries to process it in its own special way, but it doesn't reliably handle ordinary JavaScript declarations like var or const written that way. That's why you're seeing the "Unexpected token 'var'" error: it's as if Livewire is saying, "I don't know what you're trying to do here!"
Moving your JavaScript code (var options...) to a different spot changes how Livewire sees and processes the script, which is why it suddenly works even though the reason isn't immediately obvious.
Leverage @push and @stack: these Blade directives are great for managing where and when your scripts load on the page, giving you more control and avoiding conflicts.
@push('scripts')
    <script src="https://cdn.jsdelivr.net/npm/apexcharts"></script>
    <script>
        document.addEventListener('livewire:load', function () {
            // Your script
        });
    </script>
@endpush
|
I had a similar issue. I was trying to publish a .NET Core API to Azure from Visual Studio Community edition on a Mac, but my web app was not appearing in Visual Studio. Eventually I figured out that I had created my Azure web app on Linux; when I created a new app on Windows, it started appearing in Visual Studio.
I am sure the OS should not be the reason for this, or perhaps I need to change some setting in Visual Studio that I am not aware of, but it does what I wanted.
I am sharing my experience here in case it helps someone.
Thanks |
|visual-studio|network-security| |
> However, it seems that some line will still be skipped. I find that the default compile optimization is -O2.
This is normal, kernel code is not written to be compiled with optimizations turned off. In fact, it heavily relies on compiler optimizations, and from [my own experience][1] I know for a fact that it won't compile with `-O0`. It may compile with `-O1` in some cases, but I don't think that's a guarantee.
You can try with `-O1`, that will get you somewhere, but I would recommend keeping the default of `-O2`. And of course, enable the generation of debugging information in your configuration (in `make menuconfig`, under *"Kernel hacking" -> "Compile-time checks and compiler options"*). There is also `CONFIG_READABLE_ASM` that if enabled will try generating more human readable assembly code.
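As a sketch, a debug-friendly configuration fragment might include options like these (exact option names and availability vary by kernel version, so treat this as illustrative):

```
CONFIG_DEBUG_INFO=y
CONFIG_GDB_SCRIPTS=y
CONFIG_FRAME_POINTER=y
CONFIG_READABLE_ASM=y
```

`CONFIG_GDB_SCRIPTS` additionally builds the kernel's GDB helper scripts, which make inspecting kernel data structures from GDB considerably easier.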
> is there a way to debug the kernel with finer granularity?
Not really. The only real way to do this is to be proficient enough in assembly to understand what is going on, what got optimized and what did not. Even with lower optimization levels, a lot of kernel code uses static and inline functions that will get simplified and inlined by the compiler. Debugging that kind of code you will see GDB jumping all over the place constantly. You cannot do much about it.
Using GDB with `layout split` is the way to go, to show both code and assembly together. Have a look at [this GDB doc page][2] and [this other question][3] for more insights.
If you want to get proficient at debugging in this way, start by studying what is the calling convention on the architecture you are working on and what instructions are used for calls, returns and branches. That will make you understand what parts of the assembly match the C code at a glance. Other than that, it's only a matter of practice.
[1]: https://github.com/mebeim/systrack/blob/56f3652fc7653349f7f50f5c663468992cb81a34/src/systrack/kernel.py#L637
[2]: https://sourceware.org/gdb/current/onlinedocs/gdb.html/TUI-Commands.html
[3]: https://stackoverflow.com/q/10115540/3889449 |
I have an application in C# that needs to share some binary data with Python, and I'm planning to use shared memory (memory mapped file). Thus, I need to have the same binary structure on both sides.
I create a struct in C# (`ST_Layer`) that has an array of items of another struct (`ST_Point`). To access those items I have defined a function `getPoint` that will return a pointer to the position of the requested `ST_Point`.
```cs
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;
class TestProgram
{
[StructLayout(LayoutKind.Explicit, Pack = 8)]
public struct ST_Point
{
[FieldOffset(0)] public double X;
[FieldOffset(8)] public double Y;
}
[StructLayout(LayoutKind.Explicit, Pack = 8)]
public unsafe struct ST_Layer
{
[FieldOffset(0)] public ushort nPoints;
[FieldOffset(8)] public double fHeight;
[MarshalAs(UnmanagedType.ByValArray, SizeConst = 10)]
[FieldOffset(16)] public ST_Point[] Points;
public unsafe ST_Point* getPoint(int n)
{
fixed (ST_Layer* thisPtr = &this)
{
int ptr = (int)&(thisPtr->Points) + n * ((16 + 3) / 4) * 4; // Sizeof ST_Point 16
return (ST_Point*)ptr;
}
}
}
static void Main(string[] args)
{
unsafe
{
ST_Layer layer = new ST_Layer();
layer.nPoints = 3;
layer.fHeight = 4;
layer.getPoint(0)->X = 5;
layer.getPoint(3)->Y = 6;
for (int i = 0; i < 10; i++)
Debug.WriteLine("Data [" + i + "] " + layer.getPoint(i)->X + ", " + layer.getPoint(i)->Y + " - ");
while (true)
Thread.Sleep(50);
}
}
}
```
I have 2 questions:
- The previous code works if I compile the code for x86, but not for x64 or 'Any CPU' ('System.AccessViolationException' exception in 'layer.getPoint(0)-\>X'). Is it an expected behaviour? What can I do to solve this exception?
- I'm using the `getPoint` function to access the `ST_Point` array because I have not seen a better way of accessing it. Is it possible to access it as a normal array (i.e. `layer.Points[0].X`)? I can't create the array of `ST_Point` (`layer.Points = new ST_Point[10]`), as it'll be created outside the `ST_Layer`, and data will not be passed to Python.
I've seen [this](https://stackoverflow.com/questions/19168658/accommodating-nested-unsafe-structs-in-c-sharp) and [this](https://stackoverflow.com/questions/7309838/improper-marshaling-c-sharp-array-to-a-c-unmanaged-array), but don't know how to access the individual fields of `ST_Point`.
Thank you for your help.
\[Also, thank you R. Martinho Fernandes for your [hint regarding the x86/x64 issue](https://stackoverflow.com/a/18252141/3744282)\] |
|c#|pointers|struct|pinvoke|shared-memory| |
Good morning!
I've attached my code for your reference. I hope you find it helpful.
```
public function changeCategoryId(Request $request) {
    $selectedCategory = $request->input('category_id');
    $filteredContent = []; // Replace with logic to filter content based on $selectedCategory
    return view('content', ['ranges' => $filteredContent])->render();
}
```
Make sure to create a blade partial view for the filtered content (e.g., content.blade.php). This partial view should contain the HTML structure you want to update dynamically.
With this setup, when the user selects a category from the dropdown, an Ajax request is sent to your Laravel backend, which filters the content based on the selected category ID and returns the filtered content as a partial view. Finally, the JavaScript updates the content on the page with the received partial view. |
The top-rated answer does not work with the new reflection implementation of [JEP 416](https://openjdk.org/jeps/416) (e.g. in Java 21), which uses MethodHandles and ignores the flags value on the Field abstraction object.
One solution is to use Unsafe; however, with [this JEP](https://openjdk.org/jeps/8323072), Unsafe and the important `long objectFieldOffset(Field f)` and
`long staticFieldOffset(Field f)` methods are being deprecated for removal, so, for example, this will not work in the future:
```java
final Unsafe unsafe = //..get Unsafe (...and add subsequent --add-opens statements for this to work)
final Field ourField = Example.class.getDeclaredField("changeThis");
final Object staticFieldBase = unsafe.staticFieldBase(ourField);
final long staticFieldOffset = unsafe.staticFieldOffset(ourField);
unsafe.putObject(staticFieldBase, staticFieldOffset, "it works");
```
I do not recommend this but it is possible in Java 21 with the new reflection implementation when making heavy use of the internal API if really needed.
# Java 21+ solution without `Unsafe`
The gist of it is to use a `MethodHandle` that can write to a static final field by getting it from the internal `getDirectFieldCommon(...)` method of the Lookup.
```java
MethodHandles.Lookup mh = MethodHandles.privateLookupIn(MyClassWithStaticFinalField.class, MethodHandles.lookup());
Method getDirectFieldCommonMethod = mh.getClass().getDeclaredMethod("getDirectFieldCommon", byte.class, Class.class, memberNameClass, boolean.class);
getDirectFieldCommonMethod.setAccessible(true);
//Invoke last method to obtain the method handle
MethodHandle o = (MethodHandle) getDirectFieldCommonMethod.invoke(mh, manipulatedReferenceKind, myStaticFinalField.getDeclaringClass(), memberNameInstanceForField, false);
o.invoke("new Value for static final field");
```
See my answer [here](https://stackoverflow.com/a/77705202/23144795) for a full working example on how to leverage the internal API to set a final field in Java 21 without Unsafe.
|
I'm putting this answer here for future me:
I was able to use MongoDbBuilder to get a single node replica set working with the following which allows me to use transactions:
```csharp
var cancellationTokenSource = new CancellationTokenSource();
cancellationTokenSource.CancelAfter(TimeSpan.FromSeconds(10));

mongoContainer = new MongoDbBuilder()
    .WithUsername("")
    .WithPassword("")
    .WithImage("mongo:latest")
    .WithExtraHost("host.docker.internal", "host-gateway")
    .WithCommand("--replSet", "rs0")
    .WithWaitStrategy(Wait.ForUnixContainer())
    .Build();

await mongoContainer.StartAsync(cancellationTokenSource.Token);

await mongoContainer.ExecScriptAsync($"rs.initiate({{_id:'rs0',members:[{{_id:0,host:'host.docker.internal:{mongoContainer.GetMappedPublicPort(27017)}'}}]}})", cancellationTokenSource.Token);

var connectionString = mongoContainer.GetConnectionString();
var client = new MongoClient(connectionString);
```
Using an empty string for `.WithUserName("")` and `.WithPassword("")` seems to be equivalent to "No Auth".
> Important: Make sure your hosts file (on host machine) has the entry `127.0.0.1 host.docker.internal` |
Is there any difference between using a map and using a function if everything is known at compile time? (I'm new to Kotlin/Java and couldn't find an answer to this.) The maximum number of items will never exceed 200, and most of the time it will be below 10 |
Example:
```kt
val mappings = mapOf(
"PL" to "Poland",
"EN" to "England",
"DE" to "Germany",
"US" to "United States of America",
)
fun mappingsFunc(code: String): String? {
return when (code) {
"PL" -> "Poland"
"EN" -> "England"
"DE" -> "Germany"
"US" -> "United States of America"
else -> null
}
}
fun main() {
println(mappings["PL"])
println(mappingsFunc("US")!!)
}
```
Playground: https://pl.kotl.in/87YJH2onA
Both of them work and both syntaxes are fine to me, but I don't know which one is recommended. |
I have some data in my Java application that I want to send to PHP as JSON, so it can later be inserted into a database. The problem is with the JSON string.
Main.java

```java
String args = "{\"nom\":\""+hostName+"\",\"host_name\":\""+hostName+"\", \"os_name\":\""+nameOS+"\",\"os_type\":\""+osType+"\",\"os_version\":\""+osVersion+"\"}";
Cpu.main(args);
```
Cpu.java

```java
package webservice;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintStream;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;

public class Cpu {

    public static void main(String args) {
        try {
            // make json string, try also hamburger
            // send as http get request
            URL url = new URL("http://localhost:8080/parc/index.php?order=" + args);
            URLConnection conn = url.openConnection();

            // Get the response
            BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = rd.readLine()) != null) {
                System.out.println(line);
            }
            rd.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```
PHPFILE.php

```php
<?php
$order = $_GET["order"];
$obj = json_decode($order);

$nom = $obj->{"nom"};
$host_name = $obj->{"host_name"};
$os_name = $obj->{"os_name"};
$os_type = $obj->{"os_type"};
$os_version = $obj->{"os_version"};

echo $host_name;
echo json_last_error();     // 4 (JSON_ERROR_SYNTAX)
echo json_last_error_msg(); // unexpected character

$array = array("nom" => $nom, "host_name" => $host_name, "os_name" => $os_name, "os_type" => $os_type, "os_version" => $os_version);
echo json_encode($array);
?>
```
Now I think the problem is with the format of the JSON string in `args`, because when I replace the variables (`hostName`, `osType`, ...) with a literal like this: (`String json = "{\"name\":\"Frank\",\"food\":\"pizza\",\"quantity\":3}";`) (from a tutorial) it works normally. |
sending data from java to PHP with JSON string doesn't work |
Using Interfaces vs Lambda Functions in Jetpack Compose Composables |
|android|kotlin|android-jetpack-compose| |
I am attempting to create a PowerShell script to do the above, but I keep coming unstuck: most of the threads I can find only cover servers. Any help would be massively appreciated.
As above - hoping to output this to XLS/CSV |
Powershell Script to collate all users within a number of local PC 'Remote Desktop Users' |
|powershell| |
null |
My query
```
select
custname, case when date < '11/26/2023' then -1 else datepart(wk,date) end 'week#', sum(amount) sales,
count(salesid) orders from SalesTable inner join CustomerTable c
on salestable.CustID=c.CustID
where date < '1/27/2024' and c.CustID = 10285 or c.CustID = -2
group by c.custid,custname, [address],case when date < '11/26/2023' then -1 else datepart(wk,date) end,
case when date < '11/26/2023' then '11/25/2023' else DATEADD(dd,7-(DATEPART(dw,date)),date) end
order by 1,2
```
This returns every customer's sales (summed amount, week number, order count), one week per row,
like:
[enter image description here](https://i.stack.imgur.com/3qC7j.png)
How do I PIVOT this to one row per customer, with each 'week' number becoming a pair of columns (amount, orders), then again for the next week number, like this example:
[enter image description here](https://i.stack.imgur.com/Z8sHj.png) |
The function responsible is [here in dask/utils](https://github.com/dask/dask/blob/main/dask/utils.py#L1777) ([permalink](https://github.com/dask/dask/blob/1b711beb3985efd7b895db2709c7b7cd869216f0/dask/utils.py#L1777)) and it only supports powers of 2, not 10. This in contrast to the time units immediately below. You could ask for this to be a configurable thing, but someone would have to put in a little work. |
When I run `./waf configure -v` locally, I get
```
Checking for 'gcc' (C compiler) : 13:52:02 runner ['/usr/bin/gcc', '-dM', '-E', '-']
/usr/bin/gcc
Checking for 'gcc' (C compiler) : 13:52:02 runner ['clang', '-dM', '-E', '-']
not found
Checking for 'clang' (C compiler) : 13:52:02 runner ['clang', '-dM', '-E', '-']
clang
Checking for 'gcc' (C compiler) : 13:52:02 runner ['/usr/bin/gcc', '-dM', '-E', '-']
/usr/bin/gcc
Checking for 'gcc' (C compiler) : 13:52:02 runner ['clang', '-dM', '-E', '-']
not found
Checking for 'clang' (C compiler) : 13:52:02 runner ['clang', '-dM', '-E', '-']
clang
Checking for 'gcc' (C compiler) : 13:52:02 runner ['x86_64-w64-mingw32-gcc', '-dM', '-E', '-']
x86_64-w64-mingw32-gcc
```
However, when I run it on github actions, I NEED it to specifically use `clang-15` and `gcc-13` which are installed manually. It actually does discover `clang-15` on its own if I just run `./waf configure -v`, but it still gives up for absolutely no reason.
So I tried running `CC=clang-15 ./waf configure -v` - what happens next may shock you:
```
Checking for 'gcc' (C compiler) : 18:43:33 runner ['clang-15', '-dM', '-E', '-']
not found
Checking for 'clang' (C compiler) : 18:43:33 runner ['clang-15', '-dM', '-E', '-']
clang-15
Checking for 'gcc' (C compiler) : 18:43:33 runner ['clang-15', '-dM', '-E', '-']
not found
Checking for 'clang' (C compiler) : 18:43:33 runner ['clang-15', '-dM', '-E', '-']
clang-15
Checking for 'gcc' (C compiler) : 18:43:33 runner ['clang-15', '-dM', '-E', '-']
not found
Checking for 'clang' (C compiler) : 18:43:33 runner ['clang-15', '-dM', '-E', '-']
clang-15
Checking for 'gcc' (C compiler) : 18:43:33 runner ['clang-15', '-dM', '-E', '-']
not found
Checking for 'clang' (C compiler) : 18:43:34 runner ['clang-15', '-dM', '-E', '-']
clang-15
Checking for 'gcc' (C compiler) : 18:43:34 runner ['clang-15', '-dM', '-E', '-']
not found
Checking for 'clang' (C compiler) : 18:43:34 runner ['clang-15', '-dM', '-E', '-']
not found
Checking for 'icc' (C compiler) : 18:43:34 runner ['clang-15', '-dM', '-E', '-']
not found
could not configure a C compiler!
```
The astute reader will notice that (a) these outputs are different, and (b) on gha, Waf has very mysteriously decided that it will refuse to work even though it patently found `clang-15` successfully.
What is going on here? This doesn't make any sense at all. It's incoherent.
Like, fair play it's finding different compilers on my system vs gha; they have different compilers installed so this makes sense. But, truly, WHY is it attempting to find `icc` on gha but not my computer, and WHY is it giving up due to not finding a C compiler on gha but not my computer when, again, it finds a C compiler on both?
I certainly have no idea what could possibly be causing this failure, but it should be obvious already to anyone who does. Nevertheless, here is the [gha workflow](https://github.com/hacatu/Number-Theory-Utils/blob/master/.github/workflows/cov_and_docs.yml) and here is the [wscript](https://github.com/hacatu/Number-Theory-Utils/blob/master/wscript) (beware it's chonky).
I've searched Waf's gitlab issues, stackoverflow, and google, but I didn't find much useful. I did find that Waf will display the output of the compiler on an empty program if you look in its log files, since this was relevant in one gitlab issue where the user had a custom compiler that Waf didn't recognize, but I really have gcc and clang so this should not be the problem. I also found that there's some way to make ubuntu use clang-15 and gcc-13 by default, I think this might actually help but I remember it (shock of shocks) not working on gha in the past. |
Encountering
`Fatal error: Uncaught mysqli_sql_exception: Table 'kopsis.setting_email' doesn't exist in /www/wwwroot/kopsis/config.php:66 Stack trace: #0 /www/wwwroot/kopsis/config.php(66): mysqli->query() #1 /www/wwwroot/kopsis/index.php(2): include('...') #2 {main} thrown in /www/wwwroot/kopsis/config.php on line 66`
This error occurs when running my code on a VPS with AAPanel. It works fine on cPanel hosting with PHP 7.4. How can I resolve this issue on the VPS with AAPanel, also on PHP 7.4?
I've already tried switching to PHP 7.3 and 8.1, but I'm still encountering the same issue, and it even triggers new errors |
Encountering 'Fatal error: Uncaught mysqli_sql_exception |
|mysql|panel|vps| |
null |
It depends on what you are trying to do.
It looks like you are trying to do something from within a MonoBehaviour:
```csharp
class MyClass : MonoBehaviour
{
    // Drag the sprite sheet's sprites from assets to this inspector slot.
    public Sprite[] sprites;

    void Awake() {
        foreach (Sprite sprite in sprites) {
            // ... do something with sprite
        }
    }
}
```
If you want to modify the sprites in the texture from code, you should do it in an editor script. The solutions provided here will work, but if you are using newer versions of Unity (i.e. 2022.2 or newer) you will get `Property 'UnityEditor.TextureImporter.spritesheet' is obsolete: Support for accessing sprite meta data through spritesheet has been removed.` You should be using `ISpriteEditorDataProvider` instead.
Code taken from [https://docs.unity3d.com/2022.2/Documentation/Manual/Sprite-data-provider-api.html][1]
```csharp
public class SpriteSheetProcessor : AssetPostprocessor
{
    private void OnPreprocessTexture()
    {
        var textureImporter = (TextureImporter)assetImporter;
        textureImporter.textureType = TextureImporterType.Sprite;
        textureImporter.mipmapEnabled = false;
        textureImporter.filterMode = FilterMode.Point;
        textureImporter.spritePixelsPerUnit = 8;
        textureImporter.spriteImportMode = SpriteImportMode.Multiple;

        var factory = new SpriteDataProviderFactories();
        factory.Init();
        var dataProvider = factory.GetSpriteEditorDataProviderFromObject(assetImporter);
        dataProvider.InitSpriteEditorDataProvider();

        /* Use the data provider */
        CreateNewSprites(dataProvider);
        // or
        SetPivot(dataProvider);

        // Apply the changes made to the data provider
        dataProvider.Apply();

        // Reimport the asset to have the changes applied
        textureImporter.SaveAndReimport();
    }

    private void CreateNewSprites(ISpriteEditorDataProvider dataProvider)
    {
        var spriteRects = dataProvider.GetSpriteRects().ToList();
        if (spriteRects.Count >= 2) return;

        var spriteNameFileIdDataProvider = dataProvider.GetDataProvider<ISpriteNameFileIdDataProvider>();
        var nameFileIdPairs = new List<SpriteNameFileIdPair>();
        var spriteIndex = 0;
        spriteRects = new List<SpriteRect>();

        // width/height are the texture's dimensions, and textureImporter comes from
        // the calling context (not shown in the docs excerpt)
        for (var x = 0; x < width; x += 8)
        {
            for (var y = 0; y < height; y += 8)
            {
                var sprite = new SpriteRect
                {
                    name = $"{textureImporter.name}_{spriteIndex++}",
                    spriteID = GUID.Generate(),
                    rect = new Rect(x, y, 8, 8),
                    alignment = SpriteAlignment.Center
                };
                spriteRects.Add(sprite);

                // Register the new Sprite Rect's name and GUID with the ISpriteNameFileIdDataProvider
                nameFileIdPairs.Add(new SpriteNameFileIdPair(sprite.name, sprite.spriteID));
            }
        }

        dataProvider.SetSpriteRects(spriteRects.ToArray());
        spriteNameFileIdDataProvider.SetNameFileIdPairs(nameFileIdPairs);
    }

    private void SetPivot(ISpriteEditorDataProvider dataProvider)
    {
        // Get all the existing Sprites
        var spriteRects = dataProvider.GetSpriteRects();

        // Loop over all Sprites and update the pivots
        foreach (var rect in spriteRects)
        {
            rect.pivot = new Vector2(0.1f, 0.2f);
            rect.alignment = SpriteAlignment.Custom;
        }

        // Write the updated data back to the data provider
        dataProvider.SetSpriteRects(spriteRects);
    }
}
```
Or you can use [https://docs.unity3d.com/2022.2/Documentation/ScriptReference/AssetDatabase.LoadAllAssetsAtPath.html][2] or
[https://docs.unity3d.com/2022.2/Documentation/ScriptReference/AssetDatabase.LoadAllAssetRepresentationsAtPath.html][3]
[1]: https://docs.unity3d.com/2022.2/Documentation/Manual/Sprite-data-provider-api.html
[2]: https://docs.unity3d.com/2022.2/Documentation/ScriptReference/AssetDatabase.LoadAllAssetsAtPath.html
[3]: https://docs.unity3d.com/2022.2/Documentation/ScriptReference/AssetDatabase.LoadAllAssetRepresentationsAtPath.html |
I came across a workaround that was suggested in an unofficial capacity by someone at AppDynamics during their local lab explorations. While this solution isn't officially supported by AppDynamics, it has proven to be effective for adjusting the log levels for both the Proxy and the Watchdog components within my AppDynamics setup. I'd like to share the steps involved, but please proceed with caution and understand that this is not a sanctioned solution.
I recommend changing only the log4j2.xml file, because the proxy messages look like are responsible for almost 99% of the log messages.
Here's a summary of the steps:
- **Proxy Log Level:** The `log4j2.xml` file controls this. You can find it within the appdynamics_bindeps module. For example, in my WSL setup, it's located at `/home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml`. In the Docker image python:3.9, the path is `/usr/local/lib/python3.9/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml`. Modify the seven `<AsyncLogger>` log-level items within the `<Loggers>` section to one of the following: **debug, info, warn, error, or fatal**.
- **Watch Dog Log Level:** This can be adjusted in the `proxy.py` file found within the appdynamics Python module. For example, in my WSL setup, it's located at `/home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics/scripts/pyagent/commands/proxy.py`. In the Docker image python:3.9, the path is `/usr/local/lib/python3.9/site-packages/appdynamics/scripts/pyagent/commands/proxy.py`. You will need to hardcode the log level in the configure_proxy_logger and configure_watchdog_logger functions by changing the level variable.
## My versions
```bash
$ pip freeze | grep appdynamics
appdynamics==24.2.0.6567
appdynamics-bindeps-linux-x64==24.2.0
appdynamics-proxysupport-linux-x64==11.68.3
```
## Original files
### log4j2.xml
```
<Loggers>
<!-- Modify each <AsyncLogger> level as needed -->
<AsyncLogger name="com.singularity" level="info" additivity="false">
<AppenderRef ref="Default"/>
<AppenderRef ref="RESTAppender"/>
<AppenderRef ref="Console"/>
</AsyncLogger>
</Loggers>
```
### proxy.py
```python
def configure_proxy_logger(debug):
logger = logging.getLogger('appdynamics.proxy')
level = logging.DEBUG if debug else logging.INFO
pass
def configure_watchdog_logger(debug):
logger = logging.getLogger('appdynamics.proxy')
level = logging.DEBUG if debug else logging.INFO
pass
```
## My script to wire environment variables into log4j2.xml and proxy.py
### update_appdynamics_log_level.sh
```bash
#!/bin/sh
# Check if PYENV_ROOT is not set
if [ -z "$PYENV_ROOT" ]; then
# If PYENV_ROOT is not set, then set it to the default value
export PYENV_ROOT="/usr/local/lib"
echo "PYENV_ROOT was not set. Setting it to default: $PYENV_ROOT"
else
echo "PYENV_ROOT is already set to: $PYENV_ROOT"
fi
echo "=========================== log4j2 - appdynamics_bindeps module ========================="
# Find the appdynamics_bindeps directory
APP_APPD_BINDEPS_DIR=$(find "$PYENV_ROOT" -type d -name "appdynamics_bindeps" -print -quit)
if [ -z "$APP_APPD_BINDEPS_DIR" ]; then
echo "Error: appdynamics_bindeps directory not found."
exit 1
fi
echo "Found appdynamics_bindeps directory at $APP_APPD_BINDEPS_DIR"
# Find the log4j2.xml file within the appdynamics_bindeps directory
APP_LOG4J2_FILE=$(find "$APP_APPD_BINDEPS_DIR" -type f -name "log4j2.xml" -print -quit)
if [ -z "$APP_LOG4J2_FILE" ]; then
echo "Error: log4j2.xml file not found within the appdynamics_bindeps directory."
exit 1
fi
echo "Found log4j2.xml file at $APP_LOG4J2_FILE"
# Modify the log level in the log4j2.xml file
echo "Modifying log level in log4j2.xml file"
sed -i 's/level="info"/level="${env:APP_APPD_LOG4J2_LOG_LEVEL:-info}"/g' "$APP_LOG4J2_FILE"
echo "log4j2.xml file modified successfully."
echo "=========================== watchdog - appdynamics module ==============================="
# Find the appdynamics directory
APP_APPD_DIR=$(find "$PYENV_ROOT" -type d -name "appdynamics" -print -quit)
if [ -z "$APP_APPD_DIR" ]; then
echo "Error: appdynamics directory not found."
exit 1
fi
echo "Found appdynamics directory at $APP_APPD_DIR"
# Find the proxy.py file within the appdynamics directory
APP_PROXY_PY_FILE=$(find "$APP_APPD_DIR" -type f -name "proxy.py" -print -quit)
if [ -z "$APP_PROXY_PY_FILE" ]; then
echo "Error: proxy.py file not found within the appdynamics directory."
exit 1
fi
echo "Found proxy.py file at $APP_PROXY_PY_FILE"
# Modify the log level in the proxy.py file
echo "Modifying log level in proxy.py file"
sed -i 's/logging.DEBUG if debug else logging.INFO/os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()/g' "$APP_PROXY_PY_FILE"
echo "proxy.py file modified successfully."
```
## Dockerfile
Dockerfile to run pyagent with FastAPI and run this script
```dockerfile
# Use a specific version of the python image
FROM python:3.9
# Set the working directory in the container
WORKDIR /app
# First, copy only the requirements file and install dependencies to leverage Docker cache
COPY requirements.txt ./
RUN python3 -m pip install --no-cache-dir -r requirements.txt
# Now copy the rest of the application to the container
COPY . .
# Make the update_appdynamics_log_level.sh executable and run it
RUN chmod +x update_appdynamics_log_level.sh && \
./update_appdynamics_log_level.sh
# Set environment variables
ENV APP_APPD_LOG4J2_LOG_LEVEL="warn" \
APP_APPD_WATCHDOG_LOG_LEVEL="warn"
EXPOSE 8000
# Command to run the FastAPI application with pyagent
CMD ["pyagent", "run", "uvicorn", "main:app", "--proxy-headers", "--host","0.0.0.0", "--port","8000"]
```
## Files changed by the script
### log4j2.xml
```xml
<Loggers>
<!-- Modify each <AsyncLogger> level as needed -->
<AsyncLogger name="com.singularity" level="${env:APP_APPD_LOG4J2_LOG_LEVEL:-info}" additivity="false">
<AppenderRef ref="Default"/>
<AppenderRef ref="RESTAppender"/>
<AppenderRef ref="Console"/>
</AsyncLogger>
</Loggers>
```
### proxy.py
```python
def configure_proxy_logger(debug):
logger = logging.getLogger('appdynamics.proxy')
level = os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()
pass
def configure_watchdog_logger(debug):
logger = logging.getLogger('appdynamics.proxy')
level = os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()
pass
```
## Warning
Please note, these paths and methods may vary based on your AppDynamics version and environment setup. Always backup files before making changes and be aware that updates to AppDynamics may overwrite your customizations.
I hope this helps! |
Not sure this is really an answer but it took me some time to figure out so I'm going to share this anyway.
I wanted to change
Array
(
[0] => email= joeri@bespired.nl
[1] => name = Joeri
[2] => token= AB2240824==
)
into
Array
(
[email] => joeri@bespired.nl
[name] => Joeri
[token] => AB2240824==
)
And got that solved with:
$values = array_combine(
array_map(fn($v) => trim(explode("=", $v, 2)[0]), $values),
array_map(fn($v) => trim(explode("=", $v, 2)[1]), $values)
);
Hope this helps someone. |
I had missed the `Content-Type` part in the headers, and the `language` parameter.
The final code example:
```python
import http.client
import uuid

conn = http.client.HTTPSConnection("partner.steam-api.com")
headers = {'Content-Type': 'application/x-www-form-urlencoded'}

orderid = uuid.uuid4().int & (1 << 64) - 1
print("orderid = ", orderid)

key = "xxxxxxxxxxxxxxxxxxx"      # omitted for security reasons
steamid = "xxxxxxxxxxxxxxxxxxx"  # omitted for security reasons
pid = "testItem1"
appid = "480"
itemcount = 1
currency = 'CNY'
amount = 350
language = 'zh-CN'
description = 'testing_description'
urlSandbox = "/ISteamMicroTxnSandbox/"

s = f'key={key}&orderid={orderid}&appid={appid}&steamid={steamid}&itemcount={itemcount}&language={language}&currency={currency}&itemid[0]={pid}&qty[0]={1}&amount[0]={amount}&description[0]={description}'

conn.request('POST', url=f'{urlSandbox}InitTxn/v3/', headers=headers, body=s)
r = conn.getresponse()
print("InitTxn result = ", r.read())
```
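As an aside, instead of hand-assembling the form body with an f-string, `urllib.parse.urlencode` can build and percent-encode it; bracketed keys come out as `itemid%5B0%5D`, which the server decodes back to `itemid[0]`. Placeholder values below:

```python
from urllib.parse import urlencode

# Placeholder values mirroring the fields above
params = {
    "key": "xxxxxxxxxxxxxxxxxxx",
    "orderid": 9829049831725601326,
    "appid": "480",
    "itemcount": 1,
    "itemid[0]": "testItem1",
    "qty[0]": 1,
    "amount[0]": 350,
}

body = urlencode(params)  # percent-encodes keys and values, including the []
print(body)
```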
Though I now send the HTTPS request in the proper format, I still got a failure response: ```b'{"response":{"result":"Failure","params":{"orderid":"9829049831725601326"},"error":{"errorcode":3,"errordesc":"Item 0 - Invalid item id"}}}'``` |
Contrary to what romainl writes in [his answer](https://stackoverflow.com/a/78250848/796259), Vim's own [`:help :!`](https://vimhelp.org/various.txt.html#%3A%21) suggests the following:
>On Unix the command normally runs in a non-interactive
>shell. If you want an interactive shell to be used
>(to use aliases) set 'shellcmdflag' to "-ic".
The caveat is that your shell will not terminate after running your command but prompt you for input instead. You'll have to `fg` to return to Vim. I would guess that's why romainl discarded it as a solution.
FWIW, I wrote a tiny alias a while ago which will return me to Vim when I type `fg` regardless of whether I'm in the shell through Vim's `:shell` or `:terminal` commands or <kbd>Ctrl-Z</kbd>:
alias fg="[[ -v VIM ]] && exit || fg"
This will work for `:!` with `'shellcmdflag'` set to "-ic" as well. |
|docker|apache-kafka|docker-compose|spring-kafka| |
In both C and C++, pointers of *unrelated* types are NOT implicitly convertible to each other (however, in C but not C++, converting from a `void*` pointer to any other pointer type is allowed implicitly).
In the question's code, since `fte_t` is a different type than `uint8_t`, a type-cast is required. |
So I have a scientific data Excel file validation form in django that works well. It works iteratively. Users can upload files as they accumulate new data that they add to their study. The `DataValidationView` inspects the files each time and presents the user with an error report that lists issues in their data that they must fix.
We realized recently that a number of errors (but not all) can be fixed automatically, so I've been working on a way to generate a copy of the file with a number of fixes. So we rebranded the "validation" form page as a "build a submission page". Each time they upload a new set of files, the intention is for them to still get the error report, but also automatically receive a downloaded file with a number of fixes in it.
I learned just today that there's no way to both render a template and kick off a download at the same time, which makes sense. However, I had been planning to not let the generated file with fixes hit the disk.
Is there a way to present the template with the errors and automatically trigger the download without previously saving the file to disk?
This is my `form_valid` method currently (without the triggered download, but I had started to do the file creation before I realized that both downloading and rendering a template wouldn't work):
```
def form_valid(self, form):
"""
Upon valid file submission, adds validation messages to the context of
the validation page.
"""
# This buffers errors associated with the study data
self.validate_study()
# This generates a dict representation of the study data with fixes and
# removes the errors it fixed
self.perform_fixes()
# This sets self.results (i.e. the error report)
self.format_validation_results_for_template()
# HERE IS WHERE I REALIZED MY PROBLEM. I WANTED TO CREATE A STREAM HERE
# TO START A DOWNLOAD, BUT REALIZED I CANNOT BOTH PRESENT THE ERROR REPORT
# AND START THE DOWNLOAD FOR THE USER
return self.render_to_response(
self.get_context_data(
results=self.results,
form=form,
submission_url=self.submission_url,
)
)
```
Before I got to that problem, I was compiling some pseudocode to stream the file... This is totally untested:
```
import pandas as pd
from django.http import HttpResponse
from io import BytesIO
def download_fixes(self):
excel_file = BytesIO()
xlwriter = pd.ExcelWriter(excel_file, engine='xlsxwriter')
df_output = {}
for sheet in self.fixed_study_data.keys():
df_output[sheet] = pd.DataFrame.from_dict(self.fixed_study_data[sheet])
df_output[sheet].to_excel(xlwriter, sheet)
xlwriter.close()  # ExcelWriter.save() was removed in newer pandas; close() writes the file
# important step, rewind the buffer or when it is read() you'll get nothing
# but an error message when you try to open your zero length file in Excel
excel_file.seek(0)
# set the mime type so that the browser knows what to do with the file
response = HttpResponse(excel_file.read(), content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')
# set the file name in the Content-Disposition header
response['Content-Disposition'] = 'attachment; filename=myfile.xlsx'
return response
```
So I'm thinking either I need to:
1. Save the file to disk and then figure out a way to make the results page start its download
2. Somehow send the data embedded in the results template and sent it back via javascript to be turned into a file download stream
3. Save the file somehow in memory and trigger its download from the results template?
What's the best way to accomplish this?
*UPDATED THOUGHTS*:
I recently had done a simple trick with a `tsv` file where I embedded the file content in the resulting template with a download button that used javascript to grab the `innerHTML` of the tags around the data and start a "download".
I thought, if I encode the data, I could likely do something similar with the excel file content. I could base64 encode it.
I reviewed past study submissions. The largest one was 115kb. That size is likely to grow by an order of magnitude, but for now 115kb is the ceiling.
I googled to find a way to embed the data in the template and I got [this][1]:
```
import base64
with open(image_path, "rb") as image_file:
image_data = base64.b64encode(image_file.read()).decode('utf-8')
ctx["image"] = image_data
return render(request, 'index.html', ctx)
```
I recently was playing around with base64 encoding in javascript for some unrelated work, which leads me to believe that embedding is do-able. I could even trigger it automatically. Anyone have any caveats to doing it this way?
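To make that plan concrete, the round trip I'm counting on is just this (toy bytes standing in for the real xlsx content):

```python
import base64

# Toy bytes standing in for the real ~115 kB xlsx payload
excel_bytes = b"PK\x03\x04 pretend xlsx content"

# Server side: what would be embedded in the template context
encoded = base64.b64encode(excel_bytes).decode("ascii")

# Client side (atob()/Blob in JS) must recover the exact original bytes
assert base64.b64decode(encoded) == excel_bytes
print(len(excel_bytes), "->", len(encoded))  # base64 grows the payload by roughly a third
```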
## Update
I have spent all day trying to implement @Chukwujiobi_Canon's suggestion, but after working through a lot of errors and things I'm inexperienced with, I'm at the point where I am stuck. I don't have any errors, and my server console shows a request that comes in, but the form data doesn't seem to be (correctly) accompanying the `post`.
I implemented the django code first and I think it is working correctly. When I submit the form without the javascript, the browser downloads the multipart stream, and it looks as expected:
```
--3d6b6a416f9b5
Content-Type: application/octet-stream
Content-Range: bytes 0-9560/9561
PK?N˝Ö€]'[Content_Types].xm...
...
--3d6b6a416f9b5
Content-Type: text/html
Content-Range: bytes 0-16493/16494
<!--use Bootstrap CSS and JS 5.0.2-->
...
</html>
--3d6b6a416f9b5--
```
In the chat, @Chukwujiobi_Canon advised me how to submit the form in javascript and make the resulting multipart stream start a download and open a new tab for the error resort/html. I worked with it until I got past all the errors, but I don't know why the form data isn't getting sent and/or processed. Here's the javascript:
```
document.addEventListener("DOMContentLoaded", function(){
validation_form = document.getElementById("submission-validation");
// Take over form submission
validation_form.addEventListener("submit", (event) => {
event.preventDefault();
submit_validation_form();
});
async function submit_validation_form() {
// Put all of the form data into a variable (formdata)
const formdata = new URLSearchParams();
for (const pair of new FormData(validation_form)) {
formdata.append(pair[0], pair[1]);
}
try {
// Submit the form and get a response (which can only be done inside an async function)
alert("Submitting form");
let response;
response = await fetch("{% url 'validate' %}", {
method: "post",
body: formdata,
//I commented out the headers based on my googling on the topic...
//headers: {
//"Content-Type": "multipart/form-data",
// "Content-Type": "application/x-www-form-urlencoded",
//},
})
alert("Processing result");
let result;
result = await response.text();
const parsed = parseMultipartBody(result, "{{ boundary }}");
alert("Starting download and rendering page");
parsed.forEach(part => {
if (part["headers"]["content-type"] === "text/html") {
const url = URL.createObjectURL(
Blob(
part["body"],
{type: "text/html"}
)
);
windows.open(url, "_blank");
}
else if (part["headers"]["content-type"] === "application/octet-stream") {
const url = URL.createObjectURL(
Blob(
part["body"],
{type: "application/octet-stream"}
)
);
window.location = url;
}
});
} catch (e) {
console.error(e);
}
}
function parseMultipartBody (body, boundary) {
return body.split(`--${boundary}`).reduce((parts, part) => {
if (part && part !== '--') {
const [ head, body ] = part.trim().split(/\r\n\r\n/g)
parts.push({
body: body,
headers: head.split(/\r\n/g).reduce((headers, header) => {
const [ key, value ] = header.split(/:\s+/)
headers[key.toLowerCase()] = value
return headers
}, {})
})
}
return parts
}, [])
}
})
```
The alerts all come up, but happen too quickly. The file I'm submitting should generate about 20 seconds' worth of server console output, but all I get is one line:
```
[30/Mar/2024 18:52:49] "POST /DataRepo/validate HTTP/1.1" 200 19974
```
[1]: https://nemecek.be/blog/8/django-how-to-send-image-file-as-part-of-response |
The method this.aQueue.add(*args), which returns a job instance, is awaited endlessly. After receiving and deserializing the data, I try to get a job instance in the RequestService.sendData method of app.service.ts, but I can't, and the rest of the code never runs.
app.service.ts
```
import { Injectable } from '@nestjs/common';
import { Job, Queue } from 'bull';
import { InjectQueue } from '@nestjs/bull';
import { plainToClass } from 'class-transformer';
import { RequestScheme, ResponseScheme } from './app.schemes';
@Injectable()
export class RequestService {
constructor(
@InjectQueue('request') private requestQueue: Queue
){}
async sendData(data: RequestScheme): Promise<ResponseScheme> {
let responseData: ResponseScheme
data = plainToClass(RequestScheme, data)
console.log("data in controller", data) // data is deserialized as i expect
const jobInstance = await this.requestQueue.add(
'request', data, { delay: data.wait }
) // this method is running and never awaited
console.log(`Job: ${jobInstance}`)
async function setJobData(jobInstance: Job){
return new Promise((resolve, reject) => {
this.requestQueue.on('completed', function(job: Job, result: ResponseScheme, error){
if (jobInstance.id == job.id) {
responseData = result
job.remove()
resolve(result)
}
if (error) reject(error)
})
})}
await setJobData(jobInstance)
return responseData
}
}
```
app.processor.ts
```
import { Job } from 'bull';
import {
Processor,
Process,
OnQueueActive,
OnQueueCompleted
} from '@nestjs/bull';
import { ResponseScheme } from './app.schemes';
@Processor('request')
export class RequestConsumer {
@Process('request')
async process_request(job: Job){
console.log(`Job ${job.id} proceed`)
}
@OnQueueActive()
onActive(job: Job){
console.log(`Data ${job.data} were sended`)
}
@OnQueueCompleted()
onComplete(job: Job){
const response = new ResponseScheme()
response.answer = job.data.answer
return response
}
}
```
app.module.ts
```
import { Global, Module, NestModule } from '@nestjs/common';
import { BullModule } from '@nestjs/bull';
import { RequestController } from './app.controller';
import { RequestService } from './app.service';
import { RequestConsumer } from './app.processor';
@Module({
imports: [
BullModule.forRoot({
redis: {
host: 'localhost',
port: 6379,
maxRetriesPerRequest: null
}
}),
BullModule.registerQueue({
name: 'request'
})
],
controllers: [RequestController],
providers: [
RequestService
],
exports: [
RequestService
]
})
export class AppModule {
configure(consumer: RequestConsumer){}
}
```
app.controller.ts
```
import { Job } from 'bull';
import { Body, Controller, Get, HttpException, HttpStatus, Post, Res } from '@nestjs/common';
import { RequestService } from './app.service';
import { RequestScheme, ResponseScheme } from './app.schemes';
@Controller('request')
export class RequestController {
constructor(private readonly requestService: RequestService) {}
@Get()//this works good
about(): string{
return "Hello! This is the request"
}
@Post()
async getHello(@Body() data: RequestScheme): Promise<ResponseScheme> {
console.log("POST", "data", data) //client data, good as it's expected
let responseData: ResponseScheme
responseData = await this.requestService.sendData(data)
return responseData
}
}
```
Based on the [manual](https://docs.nestjs.com/techniques/queues), this method is standard for NestJS:
```
const job = await this.audioQueue.add(
{
foo: 'bar',
},
{ lifo: true },
);
```
But my variant awaits endlessly, no matter what data I have in wait (string or number):
```
const jobInstance = await this.requestQueue.add(
'request', data, { delay: data.wait }
)
```
Also, I tried returning hardcoded data from RequestService.sendData with the "add a job" call commented out, and that works. But I need to add the job.
|
The "add a job" method is awaited but never resolves
|nestjs| |
null |
I have a ViewPager in the main activity, and I load another ViewPager with two fragments. But that fragment gets loaded while the activity is loading; it should only be loaded when the user taps on it.
```
class HomeActivity : BaseActivity(), DialogUtils.DialogManager {
    companion object {
        var currentFragmentPosition: Int = 0
    }
    private var isFromSearchHistory: Boolean = false
    private var isFromVisitHistory: Boolean = false
    lateinit var binding: ActivityHomeBinding
    val homeFragment = HomeFragment.newInstance()
    val bookingFragment = BookingFragment.newInstance()
    val searchHistoryFragment = SearchHistoryFragment.newInstance()
    // val tripFragment = TripFragment.newInstance()
    val visitedStationsFragment = VisitedStationFragment.newInstance()

class VisitedStationFragment : BaseFragment() {
    lateinit var binding: FragmentVisitedStationsBinding
    var tabLayout: TabLayout? = null
    var viewPager: ViewPager? = null

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View {
        binding = FragmentVisitedStationsBinding.inflate(inflater, container, false)
        return binding.root
    }
```
ViewPager lifecycle: fragment loads with the activity instead of when tapped
|android|kotlin| |
The top-rated answer does not work with the new reflection implementation of [JEP 416](https://openjdk.org/jeps/416) in e.g. Java 21, which uses MethodHandles and ignores the flags value on the Field abstraction object.
One solution is to use Unsafe. However, with [this JEP](https://openjdk.org/jeps/8323072), Unsafe and the important `long objectFieldOffset(Field f)` and
`long staticFieldOffset(Field f)` methods are being deprecated for removal, so for example this will not work in the future:
```java
final Unsafe unsafe = //..get Unsafe (...and add subsequent --add-opens statements for this to work)
final Field ourField = Example.class.getDeclaredField("changeThis");
final Object staticFieldBase = unsafe.staticFieldBase(ourField);
final long staticFieldOffset = unsafe.staticFieldOffset(ourField);
unsafe.putObject(staticFieldBase, staticFieldOffset, "it works");
```
I do not recommend this but it is possible in Java 21 with the new reflection implementation when making heavy use of the internal API if really needed.
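As an aside, the first snippet above glosses over how to obtain the `Unsafe` instance. The usual (unsupported) way is to read its private static `theUnsafe` field via reflection; this is just a sketch, may emit illegal-access warnings, and can break in future JDKs:

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeAccess {
    // Read the private static "theUnsafe" singleton field via reflection.
    // This works because the jdk.unsupported module opens sun.misc.
    public static Unsafe getUnsafe() throws ReflectiveOperationException {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        return (Unsafe) f.get(null);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(getUnsafe().getClass().getName());
    }
}
```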
# Java 21+ solution without `Unsafe`
The gist of it is to use a `MethodHandle` that can write to a static final field by getting it from the internal `getDirectFieldCommon(...)` method of the Lookup.
```java
MethodHandles.Lookup lookup = MethodHandles.privateLookupIn(MyClassWithStaticFinalField.class, MethodHandles.lookup());
Method getDirectFieldCommonMethod = lookup.getClass().getDeclaredMethod("getDirectFieldCommon", byte.class, Class.class, memberNameClass, boolean.class);
getDirectFieldCommonMethod.setAccessible(true);
//Invoke last method to obtain the method handle
MethodHandle finalFieldHandle = (MethodHandle) getDirectFieldCommonMethod.invoke(lookup, manipulatedReferenceKind, myStaticFinalField.getDeclaringClass(), memberNameInstanceForField, false);
finalFieldHandle.invoke("new Value for static final field");
```
See my answer [here](https://stackoverflow.com/a/77705202/23144795) for a full working example on how to leverage the internal API to set a final field in Java 21 without Unsafe.
|
I'm trying to migrate my database from MySQL to Oracle. But after switching, Hibernate does not create the tables on Oracle and the error below appears. All the required sequences are created on Oracle, but the tables are not.
Error:
```
Caused by: java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist
    at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:628)
    at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:562)
    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1145)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:726)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:291)
    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:492)
    at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:108)
    at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:997)
    at oracle.jdbc.driver.OracleStatement.executeSQLStatement(OracleStatement.java:1507)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1287)
    at oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:2137)
    at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:2092)
    at oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:328)
    at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95)
    at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
    at org.hibernate.tool.schema.internal.exec.GenerationTargetToDatabase.accept(GenerationTargetToDatabase.java:54)
    ... 38 common frames omitted
Caused by: oracle.jdbc.OracleDatabaseException: ORA-00942: table or view does not exist
    at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:632)
    ... 53 common frames omitted
Hibernate: drop temp_his cascade constraints
```
How to give JPA create-table privileges on Oracle
|spring|oracle-database|jpa|oracle-sqldeveloper|oracle19c| |
From the comments, it appears the real question is not how to `UPDATE` an `ALWAYS GENERATED` column, but how to migrate data from a temporal table into a *new* temporal table. If you need to copy data from a different (temporal) table to a new one, and retain the history dates, then create the new table(s) *not* as a temporal table and then `ALTER` it to one **after** you `INSERT` all the data. If you start by creating the new table as a temporal table then retaining the Start/End dates isn't possible, as you cannot `INSERT` into, or `UPDATE` the values of your `GENERATED ALWAYS` columns.
This means you create the table, and history tables, *not* as a temporal table. Such as like below (the `CONSTRAINT`s are **important**):
```sql
CREATE TABLE dbo.MyTemporalTable (ID int IDENTITY PRIMARY KEY,
SomeString varchar(10) NULL,
ValidFrom datetime2(7) NOT NULL CONSTRAINT DF_MyTemporalTable_ValidFrom DEFAULT SYSDATETIME(),
ValidTo datetime2(7) NOT NULL CONSTRAINT DF_MyTemporalTable_ValidTo DEFAULT '9999-12-31T23:59:59.9999999')
GO
--Assumes existence of a history schema
CREATE TABLE history.MyTemporalTable (ID int NOT NULL,
SomeString varchar(10) NULL,
ValidFrom datetime2(7) NOT NULL,
ValidTo datetime2(7) NOT NULL);
```
Now you `INSERT` all your data into these 2 tables. I'm just going to use some made up data here, it doesn't come from another table:
```sql
DECLARE @SomeArbitraryTime datetime2(7)= SYSDATETIME();
INSERT INTO dbo.MyTemporalTable (SomeString,ValidFrom)
VALUES('abc', DATEADD(DAY, -1, @SomeArbitraryTime)),
('def',@SomeArbitraryTime);
INSERT INTO history.MyTemporalTable (ID,
SomeString,
ValidFrom,
ValidTo)
VALUES(2,'xyz',DATEADD(DAY, -1, @SomeArbitraryTime),@SomeArbitraryTime);
```
Now all the data is in the 2 tables, we can make the tables temporal:
```sql
ALTER TABLE dbo.MyTemporalTable ADD PERIOD FOR SYSTEM_TIME (ValidFrom,ValidTo);
GO
ALTER TABLE dbo.MyTemporalTable SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = history.MyTemporalTable));
```
And now we can do a couple of queries, and get the expected results:
```sql
SELECT *
FROM dbo.MyTemporalTable;
DECLARE @12HoursAgo datetime2(7) = DATEADD(HOUR, -12, SYSDATETIME());
SELECT *
FROM dbo.MyTemporalTable FOR SYSTEM_TIME AS OF @12HoursAgo;
```
Which returns the following 2 data sets:
ID |SomeString |ValidFrom |ValidTo
-|-|-|-
1 |abc |2024-03-25 12:38:49.1272105 |9999-12-31 23:59:59.9999999
2 |def |2024-03-26 12:38:49.1272105 |9999-12-31 23:59:59.9999999

ID |SomeString |ValidFrom |ValidTo
-|-|-|-
1 |abc |2024-03-25 12:38:49.1272105 |9999-12-31 23:59:59.9999999
2 |xyz |2024-03-25 12:38:49.1272105 |2024-03-26 12:38:49.1272105
---
*Clean up:*
```sql
ALTER TABLE dbo.MyTemporalTable SET (SYSTEM_VERSIONING = OFF);
GO
DROP TABLE dbo.MyTemporalTable;
DROP TABLE history.MyTemporalTable;
```
[db<>fiddle](https://dbfiddle.uk/hhp0Om3n) |
I wrote an [ansible-role for openwisp2][1] (a series of Django apps) to ease its deployment. To make the deployment as simple as possible, I wrote a simple (probably trivial) [SECRET_KEY generator script][2]:
```python
#!/usr/bin/env python
"""
Pseudo-random django secret key generator
"""
from __future__ import print_function
import random
chars = 'abcdefghijklmnopqrstuvwxyz' \
'ABCDEFGHIJKLMNOPQRSTUVXYZ' \
'0123456789' \
'#()^[]-_*%&=+/'
SECRET_KEY = ''.join([random.SystemRandom().choice(chars) for i in range(50)])
print(SECRET_KEY)
```
which is called by ansible to generate the secret key the first time the playbook is run.
Now, that works fine, BUT I think it defeats the built-in security measures Django uses to generate a strong key that is also very hard to guess.
At the time I looked at other ways of doing it but didn't find much, so now I wonder: **is there a function for generating settings.SECRET_KEY in Django?**
That would avoid this kind of home-baked solution, which works but may not be effective when it comes to security.
[1]: https://github.com/nemesisdesign/ansible-openwisp2/
[2]: https://github.com/nemesisdesign/ansible-openwisp2/blob/master/files/generate_django_secret_key.py |
null |
I have a Flink job with an overall parallelism of 4. There are two components which are memory- and CPU-intensive, for which I have created a slot sharing group (H). Currently I have 2 task managers with 2 CPU cores each, each having 4 slots. When I deploy my job, most of the time the slot-sharing components are not evenly distributed among the task managers, which means that at times one task manager has all 4 instances of slot sharing group (H). The ideal would be 2 on each task manager, so that the memory and CPU allocated to each task manager can be used optimally by slot sharing group (H).
I started looking into fine-grained resource management:
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/finegrained_resource/
Two questions:
- If I specify 0.75 CPU for SlotSharingGroup (H) and 0.25 for the default SlotSharingGroup (D), will it force only 2 instances of operators from H onto each machine?
- How will GC behave? Say the TM has 4GB and I allocate 3GB to H and 1GB to D; if H gets close to 3GB but D is still free, will ZGC start cleaning up unused memory for H? This is where the main confusion is: since ZGC operates at the JVM level, how can ZGC know that the memory allocated to a particular slot is about to fill up?
|