Change the version of multer: install version 1.4.4 |
I am trying to go from Spring Boot 2.7 to 3.1; it's regarding Security.
What I have under old version 2.
public class SecurityConfiguration extends WebSecurityConfigurerAdapter
{
@Override
protected void configure (HttpSecurity http) throws Exception
{
http.cors ().and ()
.csrf ().disable ()
.authorizeRequests ()
.antMatchers ("/web/test").permitAll ()
.antMatchers ("/web/**").hasAnyRole ("USER")
.anyRequest ().authenticated ()
.and ()
.addFilter (new SecurityAuthenticationFilter (authenticationManager ()))
.addFilter (new SecurityAuthorizationFilter (authenticationManager ()))
.sessionManagement ()
.sessionCreationPolicy (SessionCreationPolicy.STATELESS);
}
What I already have for version 3.
@Bean
public SecurityFilterChain securityFilterChain (HttpSecurity http) throws Exception
{
http
.cors (Customizer.withDefaults ())
.csrf (AbstractHttpConfigurer::disable)
.authorizeHttpRequests ((requests) -> requests
.requestMatchers ("/web/test").permitAll ()
.requestMatchers ("/web/**").hasRole ("USER")
.anyRequest ().authenticated ()
)
//.addFilter (new SecurityAuthenticationFilter (authenticationManager ()))
.sessionManagement (httpSecuritySessionManagementConfigurer ->
httpSecuritySessionManagementConfigurer.sessionCreationPolicy (SessionCreationPolicy.STATELESS))
;
return http.build();
But here I struggle with the authenticationManager () of the former WebSecurityConfigurerAdapter for my 2 custom filters.
They are
public class SecurityAuthorizationFilter extends BasicAuthenticationFilter
{
public SecurityAuthorizationFilter (AuthenticationManager authenticationManager)
{
super (authenticationManager);
}
@Override
protected void doFilterInternal (HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws IOException, ServletException
{
UsernamePasswordAuthenticationToken upa = getAuthentication (request);
if (upa == null)
{
filterChain.doFilter (request, response);
}
else
{
SecurityContextHolder.getContext ().setAuthentication (upa);
filterChain.doFilter (request, response);
}
}
@SuppressWarnings ("unchecked")
private UsernamePasswordAuthenticationToken getAuthentication (HttpServletRequest request)
{
String token = request.getHeader (SecurityConstants.TOKEN_HEADER);
if (token != null && token.startsWith (SecurityConstants.TOKEN_PREFIX) == true)
{
byte [] signingKey = SecurityConstants.JWT_SECRET.getBytes ();
token = token.replace (SecurityConstants.TOKEN_PREFIX, "");
Jws <Claims> claim = Jwts.parserBuilder ().setSigningKey (signingKey).build ().parseClaimsJws (token);
String usr = claim.getBody ().getSubject ();
List <LinkedHashMap <?, ?>> cs = claim.getBody ().get ("roles", List.class);
List <SimpleGrantedAuthority> claims = new ArrayList <SimpleGrantedAuthority> ();
for (int i = 0; i < cs.size (); i ++)
{
claims.add (new SimpleGrantedAuthority (cs.get (i).get ("authority"). toString ()));
}
if (usr.length () > 0)
{
return new UsernamePasswordAuthenticationToken (usr, null, claims);
}
}
return null;
}
}
and
public class SecurityAuthenticationFilter extends UsernamePasswordAuthenticationFilter
{
private final AuthenticationManager authenticationManager;
public SecurityAuthenticationFilter (AuthenticationManager authenticationManager)
{
this.authenticationManager = authenticationManager;
setFilterProcessesUrl (SecurityConstants.AUTH_LOGIN_URL);
}
@Override
public Authentication attemptAuthentication (HttpServletRequest request, HttpServletResponse response)
{
String usr = request.getParameter ("username");
String pwd = request.getParameter ("password");
UsernamePasswordAuthenticationToken upat = new UsernamePasswordAuthenticationToken (usr, pwd);
return authenticationManager.authenticate (upat);
}
@Override
protected void successfulAuthentication (HttpServletRequest request, HttpServletResponse response, FilterChain filterChain, Authentication authentication) throws java.io.IOException, ServletException
{
UserDetails user = ((UserDetails) authentication.getPrincipal ());
@SuppressWarnings ("unchecked")
Collection <GrantedAuthority> roles = (Collection <GrantedAuthority>) user.getAuthorities ();
String token = Jwts.builder ()
.signWith (Keys.hmacShaKeyFor (SecurityConstants.JWT_SECRET.getBytes ()), SignatureAlgorithm.HS512)
.setHeaderParam ("typ", SecurityConstants.TOKEN_TYPE)
.setIssuer (SecurityConstants.TOKEN_ISSUER)
.setAudience (SecurityConstants.TOKEN_AUDIENCE)
.setSubject (user.getUsername ())
.setExpiration (new java.util.Date (System.currentTimeMillis () + 60 * 60 * 24 * 1000)) // 24h lifetime of token
.claim ("roles", roles)
.compact ();
response.addHeader (SecurityConstants.TOKEN_HEADER, SecurityConstants.TOKEN_PREFIX + token);
}
}
**Question**
How can I integrate my 2 filters in Spring Security 6 (Spring Boot 3)?
authenticationManager () is not available there. |
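One way to keep the two custom filters in Spring Security 6 is to expose the `AuthenticationManager` from `AuthenticationConfiguration` as a bean and inject it into the filter chain. The following is a sketch, not the only possible wiring (imports omitted; it assumes the two filter classes stay as they are):

```java
@Configuration
@EnableWebSecurity
public class SecurityConfiguration
{
    // Expose the globally configured AuthenticationManager as a bean.
    @Bean
    public AuthenticationManager authenticationManager (AuthenticationConfiguration configuration) throws Exception
    {
        return configuration.getAuthenticationManager ();
    }

    // The manager bean is injected here and handed to the custom filters.
    @Bean
    public SecurityFilterChain securityFilterChain (HttpSecurity http, AuthenticationManager authenticationManager) throws Exception
    {
        http
            .cors (Customizer.withDefaults ())
            .csrf (AbstractHttpConfigurer::disable)
            .authorizeHttpRequests ((requests) -> requests
                .requestMatchers ("/web/test").permitAll ()
                .requestMatchers ("/web/**").hasRole ("USER")
                .anyRequest ().authenticated ()
            )
            .addFilter (new SecurityAuthenticationFilter (authenticationManager))
            .addFilter (new SecurityAuthorizationFilter (authenticationManager))
            .sessionManagement (c -> c.sessionCreationPolicy (SessionCreationPolicy.STATELESS));
        return http.build ();
    }
}
```

`AuthenticationConfiguration.getAuthenticationManager()` should return the same manager the removed `WebSecurityConfigurerAdapter.authenticationManager()` used to provide, so the filter constructors need no changes.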
My controller code:
$user = Auth::user();
$account = $user->account;
$loans = $user->loans;
$transactions = $user->transactions;
return view('User.dashboard', compact('user', 'account', 'loans', 'transactions'));
My view code:
@isset($loans)
@foreach($loans as $loan)
<div class="widget__status-title text-grey">Transactions</div>
<div class="widget__spacer"></div>
<div class="widget__trade"><span class="widget__trade-count">{{$loan->amount}}</span>
</div>
@endforeach
@endisset
Please, I need it to display pending loans and successful transactions in the dashboard. |
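A hedged controller sketch for this kind of dashboard (the `dashboard` method name and the `status` column names on `loans` and `transactions` are assumptions; adjust them to the actual schema and relationships):

```php
public function dashboard()
{
    $user = Auth::user();

    return view('User.dashboard', [
        'user'         => $user,
        'account'      => $user->account,
        // filter on the relationship query instead of the loaded collection
        'loans'        => $user->loans()->where('status', 'pending')->get(),
        'transactions' => $user->transactions()->where('status', 'successful')->get(),
    ]);
}
```

Calling `$user->loans()` (with parentheses) returns a query builder, so the `where` clause runs in SQL rather than filtering an already-loaded collection.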
Can you programmatically generate a link to open a Word document and navigate to a particular location within it (preferably a comment)? Essentially I want to reproduce the "Create link" feature that is available on comments in the UI, and use a hyperlink.
I have inspected the link that is created by "Create link" and it contains a parameter like this: `nav=eyJjIjo2OTk2Njg1ODd9`. I inspected the OpenXML in the Word document and the value isn't in it.
I've also tried using bookmarks including their name as an anchor in the url, but that did not work. |
|c#|sharepoint|ms-word|office365|openxml-sdk| |
I am new to IronPython scripting and I am trying to do the following.
I have a page in Spotfire where I created a document property input field and a listbox filter.
All I am trying to do is, based on the table name given in the input, set the listbox filter to the unique values of a column of the selected table.
from Spotfire.Dxp.Application.Filters import ListBoxFilter
# Get the input variable select_table (assuming it's a Document Property)
select_table = Document.Properties["select_table"]
# Get the ListBox filter named "ReFC.WellList"
well_list_filter = None
for filter in Document.FilteringSchemes:
if filter.Title == "ReFC.WellList" and isinstance(filter, ListBoxFilter):
well_list_filter = filter
break
# Check if the ListBox filter was found
if well_list_filter is not None:
# Clear existing values from the ListBox filter
well_list_filter.Reset()
# Determine which table to use based on the selection
if select_table == "A":
table = Document.Data.Tables["Table A"]
column_name = "Well" # Assuming the column name in Table A is "Well"
elif select_table == "B":
table = Document.Data.Tables["Table B"]
column_name = "Well" # Assuming the column name in Table B is "Well"
else:
print("Invalid selection")
# Get unique values from the selected table column
unique_values = set(row[column_name] for row in table.GetRows())
# Add the unique values to the ListBox filter
well_list_filter.SetSelection(unique_values)
# Optionally, select all values in the ListBox filter
well_list_filter.IncludeAllValues = True
else:
print("ListBox filter 'ReFC.WellList' not found")
It doesn't seem to be working. Any idea where I am going wrong? |
IronPython Spotfire: Set Listbox Filter Values from user input of Table Name |
|ironpython|spotfire|tibco|spotfire-analyst| |
null |
null |
I was given this question:
**a** Given the page reference string: **1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6**
Compare the number of page faults for **LRU, FIFO and Optimal** page replacement algorithms,
assume **4 frames** are available.
**b** Which method can you rank as the best and why?
I have tried solving it myself but keep getting stuck.
I even used tools like ChatGPT and Gemini, but they both kept giving contradictory values every time I regenerated the response. |
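For reference, the three policies can be checked with a short simulation in Python (a sketch written for this question's numbers; `refs` is the reference string and 4 is the frame count):

```python
def count_faults(refs, frames, policy):
    """Count page faults for FIFO, LRU or OPT on a reference string."""
    mem, faults = [], 0
    for i, page in enumerate(refs):
        if page in mem:
            if policy == 'LRU':              # a hit refreshes recency
                mem.remove(page)
                mem.append(page)
            continue
        faults += 1
        if len(mem) == frames:               # memory full: pick a victim
            if policy in ('FIFO', 'LRU'):
                # front of the list = oldest arrival (FIFO) or least recently used (LRU)
                mem.pop(0)
            else:                            # OPT: evict the page used farthest in the future
                future = refs[i + 1:]
                victim = max(mem, key=lambda p: future.index(p) if p in future else len(future) + 1)
                mem.remove(victim)
        mem.append(page)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]
for policy in ('FIFO', 'LRU', 'OPT'):
    print(policy, count_faults(refs, 4, policy))
```

With 4 frames this prints FIFO 14, LRU 10, OPT 8. For part b: OPT has the fewest faults by construction but requires knowledge of future references, so it serves as a lower bound; of the two implementable policies, LRU does better on this string.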
I have the following controller's action:
```C#
[HttpPost("api/v1/addProduct")]
[Consumes("multipart/form-data")]
public Task<ProductDto?> AddProduct([FromForm] ProductRequest request, CancellationToken cancellationToken)
{
return _productService.AddProduct(request, cancellationToken);
}
```
and the following model:
```C#
public class ProductRequest
{
[Required]
public string Title { get; set; }
public string? Description { get; set; }
[Required]
public string PreviewDescription { get; set; }
[Required]
public IFormFile PreviewImage { get; set; }
public IFormFile[]? Images { get; set; }
[Required]
public int[] CategoryIds { get; set; }
public bool? IsAvailable { get; set; }
}
```
I successfully sent data to the server:
[![enter image description here][1]][1]
But for some reason, I see the following in the debugger:
[![enter image description here][2]][2]
Everything is OK with `previewImage`, but where is `images`? Why is `previewImage` here while there is no `images`? I sent them in exactly the same way as I sent `previewImage`; you can see it in the request payload screenshot. Please help me figure it out.
[1]: https://i.stack.imgur.com/N6lJx.png
[2]: https://i.stack.imgur.com/oruB8.png |
Problem to upload several images per one request |
|.net|asp.net-core-webapi|form-data| |
To extract the 7 specific items, request them with tokens=1-7.
Token 1 will be %%G, and each successive token uses the next letter: %%H, %%I, etc.
To print one line per token, echo each token's variable separately, either one echo per line or grouped sequentially with &s:
do (echo %%G) & (echo %%H) & (echo %%I) ... etc
In total, the following worked for me:
```
for /f "tokens=1-7" %%G in ("1 2 7 16 21 26 688") do (
echo %%G
echo %%H
echo %%I
echo %%J
echo %%K
echo %%L
echo %%M
)
``` |
You have to use a `\U` + 8-hexadecimal code instead of `\u` + 4-hexadecimal code when the code point is more than four digits. You'll also need an appropriate font so it may or may not show up with the correct glyph on your system.
```py
import unicodedata as ud
mytable = str.maketrans('ABC', '\U00011D72\U00011D73\U00011D74')
mystr_translated = 'ABC'.translate(mytable)
for c in mystr_translated:
print(c, ud.name(c))
```
Output:
```none
GUNJALA GONDI LETTER KHA
GUNJALA GONDI LETTER TA
GUNJALA GONDI LETTER THA
``` |
Please, how do I fetch the user's account balance, withdrawals, loans and transactions to display in the dashboard? |
|php|html|laravel|controller|laravel-blade| |
null |
So I want to add logging to my c++ project, and people say spdlog is good.
I think spdlog has very clear installation instructions here:
https://github.com/gabime/spdlog/tree/v1.x
Clone the repo, create build folder and run cmake.
And then an example of a cmakeFile.
But I don't know CMake, and I know too little about libraries.
CMake, to my understanding, is a file where you can specify everything needed to build a project. But it seems my IDE (currently CodeBlocks for Windows) does all that for me, so I have never learnt CMake.
Libraries to my understanding is prebuilt classes/functions that I can include in my project.
I already use SDL in my project. To get that one to work I think I just ran some install program that created the library files like this
[![SDL Folders][1]][1]
and then I have added include path in
Codeblocks->project->build options->Search directories->compiler
and lib path in
Codeblocks->project->build options->Search directories->linker
and some settings in
Codeblocks->project->build options->Linker settings->other linker options
And then I can Include headers and use functions and classes.
What confuses me about the spdlog cmake file example (and the cmake in [this question][2]) is that they reference external files (example.cpp and drumhero.h). Shouldn't it just specify the spdlog files and an output directory where the created library files will be put?
**Why does the library cmake need to know about my project files?** It should just create a library usable by any project, not just mine.
I feel like I have fundamentally misunderstood something. Can someone point me in the right direction?
[1]: https://i.stack.imgur.com/etmpo.png
[2]: https://stackoverflow.com/questions/66211808/visual-studio-not-finding-spdlog-lib |
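For what it's worth, the library's CMakeLists and the consuming project's CMakeLists are separate files: spdlog's references to example.cpp only build spdlog's own bundled examples. A minimal consumer-side sketch (a hypothetical project, assuming spdlog was built and installed from the cloned repo) looks like:

```cmake
# Hypothetical consumer project -- not part of spdlog itself.
cmake_minimum_required(VERSION 3.14)
project(my_app CXX)

# Locate the spdlog package produced by spdlog's own install step.
find_package(spdlog REQUIRED)

add_executable(my_app main.cpp)          # your sources, not spdlog's
target_link_libraries(my_app PRIVATE spdlog::spdlog)
```

The CodeBlocks search-directory and linker settings described above are the IDE equivalent of the include and link lines this file generates.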
How to install spdlog library? |
|c++|cmake|libraries|spdlog| |
In v2 this is now configurable, see https://xunit.net/docs/configuration-files
So you need to add to app.config:
<appSettings>
<add key="xunit.methodDisplay" value="method"/>
</appSettings>
and as noted in the comments, you might need to restart Visual Studio.
|
I'd love to know the reason for this slowdown as well, as I have encountered it with a similar task. I 'solved' it by using h5py instead of pickle. All tested on Windows 11; I will run it on Linux next week.
My task is to read millions of numpy images and get a region dynamically. The application dictates that the images are stored in batches of about 3000 to 6000 in files.
# pickle
```
imagesDict = {i: np.random.randint(0, 255, (300, 300), dtype=np.uint8) for i in range(4000)}
with open(filePath, 'wb') as file:
pickle.dump(imagesDict, file, pickle.HIGHEST_PROTOCOL)
thumbs = []
num_image_sets = 0
durations_s_sum = 0.
for i in range(500):
start_s = time.perf_counter()
with open(filePath, 'rb') as file:
imagesDict: dict[int, np.ndarray] = pickle.load(file)
for key in imagesDict.keys():
image = imagesDict[key]
thumb = image[:50, :50].copy()
thumbs.append(thumb)
durations_s_sum += (time.perf_counter() - start_s)
num_image_sets += 1
if 50 <= num_image_sets:
memory_info = psutil.Process(os.getpid()).memory_info()
print(f"{durations_s_sum:4.1f}s for 50 image sets of 4000 images, rss={memory_info.rss/1024/1024:6,.0f}MB, vms={memory_info.vms/1024/1024:6,.0f}MB")
durations_s_sum = 0.
num_image_sets = 0
```
The speed of pickle.load() slows down with every iteration, quickly getting to an unacceptable level:
```
10.6s for 50 image sets of 4000 images, rss= 1,575MB, vms= 1,579MB
10.0s for 50 image sets of 4000 images, rss= 2,117MB, vms= 2,134MB
11.5s for 50 image sets of 4000 images, rss= 2,632MB, vms= 2,662MB
14.2s for 50 image sets of 4000 images, rss= 3,150MB, vms= 3,193MB
16.3s for 50 image sets of 4000 images, rss= 3,670MB, vms= 3,726MB
19.1s for 50 image sets of 4000 images, rss= 4,212MB, vms= 4,280MB
22.6s for 50 image sets of 4000 images, rss= 4,746MB, vms= 4,824MB
25.4s for 50 image sets of 4000 images, rss= 5,276MB, vms= 5,367MB
29.2s for 50 image sets of 4000 images, rss= 5,817MB, vms= 5,919MB
35.3s for 50 image sets of 4000 images, rss= 6,360MB, vms= 6,472MB
```
# h5py
```
with h5py.File(filePath, 'w') as h5:
for i in range(4000):
image = np.random.randint(0, 255, (300, 300), dtype=np.uint8)
h5.create_dataset(str(i), data=image)
thumbs = []
num_image_sets = 0
durations_s_sum = 0.
for i in range(500):
start_s = time.perf_counter()
with h5py.File(filePath, "r") as h5:
for key in h5.keys():
image = h5[key]
thumb = image[:50, :50]
thumbs.append(thumb)
durations_s_sum += (time.perf_counter() - start_s)
num_image_sets += 1
if 50 <= num_image_sets:
memory_info = psutil.Process(os.getpid()).memory_info()
print(f"{durations_s_sum:4.1f}s for 50 image sets of 4000 images, rss={memory_info.rss/1024/1024:6,.0f}MB, vms={memory_info.vms/1024/1024:6,.0f}MB")
durations_s_sum = 0.
num_image_sets = 0
```
h5py is slower but the duration is almost constant at about 19s, so it wins over time:
```
20.3s for 50 image sets of 4000 images, rss= 646MB, vms= 637MB
20.3s for 50 image sets of 4000 images, rss= 1,166MB, vms= 1,167MB
19.7s for 50 image sets of 4000 images, rss= 1,685MB, vms= 1,697MB
19.4s for 50 image sets of 4000 images, rss= 2,208MB, vms= 2,229MB
19.7s for 50 image sets of 4000 images, rss= 2,731MB, vms= 2,764MB
19.8s for 50 image sets of 4000 images, rss= 3,255MB, vms= 3,298MB
19.4s for 50 image sets of 4000 images, rss= 3,778MB, vms= 3,832MB
19.9s for 50 image sets of 4000 images, rss= 4,303MB, vms= 4,366MB
19.6s for 50 image sets of 4000 images, rss= 4,826MB, vms= 4,899MB
19.9s for 50 image sets of 4000 images, rss= 5,349MB, vms= 5,434MB
```
Also, if memory fragmentation were the issue, why does h5py not show similar behaviour? |
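Regarding the open question at the end: one candidate explanation (an assumption to test, not a confirmed diagnosis) is CPython's cyclic garbage collector, which scans every tracked container object when it runs; since `thumbs` keeps growing, each `pickle.load()` can trigger collections over an ever larger set of objects, while the h5py loop creates far fewer tracked objects per file. Pausing the GC around the load is a cheap experiment:

```python
import gc
import pickle

def load_pickle_no_gc(file_path):
    """Load a pickle with the cyclic GC paused; a large load allocates many
    container objects, and each allocation can trigger a collection that
    scans every tracked object in the process."""
    gc.disable()
    try:
        with open(file_path, 'rb') as f:
            return pickle.load(f)
    finally:
        gc.enable()   # always restore the collector
```

If the per-iteration time stops growing with this wrapper, the slowdown is GC pressure rather than fragmentation.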
Airflow does not show the overridden values in the config list command (EmailOperator DAG case) |
|docker|docker-compose|airflow| |
null |
{"OriginalQuestionIds":[1129216],"Voters":[{"Id":5648954,"DisplayName":"Nick Parsons","BindingReason":{"GoldTagBadge":"javascript"}}]} |
Make sure that you are using a FULLTEXT INDEX on the QTestId column. Also, this query will match based on '18 ~ 14 ~ 10' as a single value, not 18, 14 and 10 separately. |
|kubernetes|kubernetes-helm|minikube|fluxcd| |
I would like to receive the coordinates of a point when I click on the map.
I have a Google Maps map in an Angular project, I want to receive coordinates to place a marker there.
Please help me.
[my html code](https://i.stack.imgur.com/3J8rD.png)
[TS code](https://i.stack.imgur.com/bUvQc.png) |
Getting coordinates in Google Maps. Angular |
|javascript| |
null |
The following steps worked for me (after I added `C:\emsdk\upstream\emscripten` to the Path variable):
```
> cd assimp-5.2.5 && mkdir build && cd build
> emcmake cmake ..
> emmake make
```
These commands generated the `libassimp.a` library:
[![enter image description here][1]][1]
I tried to set up this library in Qt Creator like this:
```
INCLUDEPATH += "E:/libs/assimp-5.2.5/include"
LIBS += -L"E:/libs/assimp-5.2.5/build/lib"
LIBS += -lassimp
```
I had this error: `error: 'assimp/config.h' file not found`. I solved this error by copying this file `E:\libs\assimp-5.2.5\build\include\assimp\config.h` to this directory: `E:\libs\assimp-5.2.5\include\assimp`.
Issues: `:-1: error: [Makefile:69: .\load-with-assimp-wasm-opengles2-qt6-cpp.js] Error 1`. But I have already built other examples with OpenGL ES 2.0 to WASM without this problem.
[![enter image description here][2]][2]
Compiler output:
```
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(AssbinLoader.cpp.o): undefined symbol: uncompress
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(Compression.cpp.o): undefined symbol: inflateInit_
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(Compression.cpp.o): undefined symbol: inflateInit2_
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(Compression.cpp.o): undefined symbol: inflate
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(Compression.cpp.o): undefined symbol: inflate
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(Compression.cpp.o): undefined symbol: inflate
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(Compression.cpp.o): undefined symbol: inflateReset
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(Compression.cpp.o): undefined symbol: inflateSetDictionary
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(Compression.cpp.o): undefined symbol: inflateEnd
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(Compression.cpp.o): undefined symbol: inflateEnd
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(Compression.cpp.o): undefined symbol: inflateInit2_
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(unzip.c.o): undefined symbol: get_crc_table
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(unzip.c.o): undefined symbol: crc32
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(Compression.cpp.o): undefined symbol: inflate
wasm-ld: error: E:/libs/assimp-5.2.5/build/lib\libassimp.a(unzip.c.o): undefined symbol: crc32
em++: error: 'C:/emsdk/upstream/bin\wasm-ld.exe -o .\load-with-assimp-wasm-opengl2-qt6-cpp.wasm C:/Qt/6.7.0/wasm_singlethread/lib/libQt6BundledFreetype.a C:/Qt/6.7.0/wasm_singlethread/lib/libQt6BundledLibpng.a main.obj load-with-assimp-wasm-opengl2-qt6-cpp.js_plugin_import.obj -LE:/libs/assimp-5.2.5/build/lib E:/libs/assimp-5.2.5/build/lib\libassimp.a C:/Qt/6.7.0/wasm_singlethread/lib/objects-Release/QWasmIntegrationPlugin_resources_1/.rcc/qrc_wasmfonts_init.cpp.o C:/Qt/6.7.0/wasm_singlethread/lib/objects-Release/QWasmIntegrationPlugin_resources_2/.rcc/qrc_wasmwindow_init.cpp.o C:/Qt/6.7.0/wasm_singlethread/lib/objects-Release/Gui_resources_1/.rcc/qrc_qpdf_init.cpp.o C:/Qt/6.7.0/wasm_singlethread/lib/objects-Release/Gui_resources_2/.rcc/qrc_gui_shaders_init.cpp.o C:/Qt/6.7.0/wasm_singlethread/plugins/platforms/libqwasm.a C:/Qt/6.7.0/wasm_singlethread/plugins/iconengines/libqsvgicon.a C:/Qt/6.7.0/wasm_singlethread/plugins/imageformats/libqgif.a C:/Qt/6.7.0/wasm_singlethread/plugins/imageformats/libqico.a C:/Qt/6.7.0/wasm_singlethread/plugins/imageformats/libqjpeg.a C:/Qt/6.7.0/wasm_singlethread/lib/libQt6BundledLibjpeg.a C:/Qt/6.7.0/wasm_singlethread/plugins/imageformats/libqsvg.a C:/Qt/6.7.0/wasm_singlethread/lib/libQt6Svg.a C:/Qt/6.7.0/wasm_singlethread/lib/objects-Release/Widgets_resources_1/.rcc/qrc_qstyle_init.cpp.o C:/Qt/6.7.0/wasm_singlethread/lib/objects-Release/Widgets_resources_2/.rcc/qrc_qstyle1_init.cpp.o C:/Qt/6.7.0/wasm_singlethread/lib/objects-Release/Widgets_resources_3/.rcc/qrc_qstyle_fusion_init.cpp.o C:/Qt/6.7.0/wasm_singlethread/lib/objects-Release/Widgets_resources_4/.rcc/qrc_qmessagebox_init.cpp.o C:/Qt/6.7.0/wasm_singlethread/lib/libQt6OpenGLWidgets.a C:/Qt/6.7.0/wasm_singlethread/lib/libQt6OpenGL.a C:/Qt/6.7.0/wasm_singlethread/lib/libQt6Widgets.a C:/Qt/6.7.0/wasm_singlethread/lib/libQt6Gui.a C:/Qt/6.7.0/wasm_singlethread/lib/libQt6BundledHarfbuzz.a C:/Qt/6.7.0/wasm_singlethread/lib/libQt6BundledFreetype.a 
C:/Qt/6.7.0/wasm_singlethread/lib/libQt6BundledLibpng.a C:/Qt/6.7.0/wasm_singlethread/lib/libQt6Core.a C:/Qt/6.7.0/wasm_singlethread/lib/libQt6BundledZLIB.a C:/Qt/6.7.0/wasm_singlethread/lib/libQt6BundledPcre2.a -LC:\emsdk\upstream\emscripten\cache\sysroot\lib\wasm32-emscripten --whole-archive -lfetch -lembind-rtti --no-whole-archive -lGL-webgl2 -lal -lhtml5 -lstubs-debug -lnoexit -lc-debug -ldlmalloc -lcompiler_rt -lc++-noexcept -lc++abi-debug-noexcept -lsockets -mllvm -combiner-global-alias-analysis=false -mllvm -enable-emscripten-sjlj -mllvm -disable-lsr C:\Users\8OBSER~1\AppData\Local\Temp\tmp1971j9eilibemscripten_js_symbols.so --export-if-defined=main --export-if-defined=__start_em_asm --export-if-defined=__stop_em_asm --export-if-defined=__start_em_lib_deps --export-if-defined=__stop_em_lib_deps --export-if-defined=__start_em_js --export-if-defined=__stop_em_js --export-if-defined=__main_argc_argv --export-if-defined=fflush --export=emscripten_stack_get_end --export=emscripten_stack_get_free --export=emscripten_stack_get_base --export=emscripten_stack_get_current --export=emscripten_stack_init --export=__cxa_demangle --export=stackSave --export=stackRestore --export=stackAlloc --export=__errno_location --export=malloc --export=free --export=__wasm_call_ctors --export-table -z stack-size=5242880 --initial-memory=52428800 --no-entry --max-memory=2147483648 --stack-first' failed (returned 1)
mingw32-make: *** [Makefile:69: .\load-with-assimp-wasm-opengl2-qt6-cpp.js] Error 1
15:05:07: The process "C:\Qt\Tools\mingw1120_64\bin\mingw32-make.exe" exited with code 2.
Error while building/deploying project load-with-assimp-wasm-opengl2-qt6-cpp (kit: WebAssembly Qt 6.7.0 (single-threaded))
When executing step "Make"
```
load-with-assimp-wasm-opengles2-qt6-cpp.pro
```
QT += core gui openglwidgets widgets
win32: LIBS += -lopengl32
INCLUDEPATH += "E:/libs/assimp-5.2.5/include"
LIBS += -L"E:/libs/assimp-5.2.5/build/lib"
LIBS += -lassimp
CONFIG += c++11
# You can make your code fail to compile if it uses deprecated APIs.
# In order to do so, uncomment the following line.
# disables all the APIs deprecated before Qt 6.0.0
DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000
SOURCES += \
main.cpp
```
main.cpp
```cpp
#include <QtCore/QDebug>
#include <QtGui/QMatrix4x4>
#include <QtGui/QVector3D>
#include <QtGui/QOpenGLFunctions>
#include <QtOpenGL/QOpenGLBuffer>
#include <QtOpenGL/QOpenGLShaderProgram>
#include <QtOpenGLWidgets/QOpenGLWidget>
#include <QtWidgets/QApplication>
#include <QtWidgets/QMessageBox>
#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>
class OpenGLWidget : public QOpenGLWidget, private QOpenGLFunctions
{
public:
OpenGLWidget()
{
setWindowTitle("OpenGL ES 2.0, Qt6, C++");
resize(380, 380);
}
private:
QOpenGLBuffer m_vertPosBuffer;
QOpenGLShaderProgram m_program;
int m_numVertices;
QMatrix4x4 m_modelMatrix;
void initializeGL() override
{
initializeOpenGLFunctions();
glClearColor(0.1f, 0.1f, 0.1f, 1.f);
glEnable(GL_DEPTH_TEST);
Assimp::Importer importer;
const char *path = "assets/models/plane-blender.dae";
const aiScene *scene = importer.ReadFile(path, aiProcess_Triangulate | aiProcess_FlipUVs);
if (!scene || scene->mFlags & AI_SCENE_FLAGS_INCOMPLETE || !scene->mRootNode)
{
qDebug() << "Assimp Error:" << importer.GetErrorString();
QMessageBox::critical(this, "Assimp Error:", importer.GetErrorString());
return;
}
m_numVertices = scene->mMeshes[0]->mNumVertices;
float vertPositions[m_numVertices * 3];
int vertPosIndex = 0;
for (int i = 0; i < m_numVertices; i++)
{
vertPositions[vertPosIndex++] = scene->mMeshes[0]->mVertices[i].x;
vertPositions[vertPosIndex++] = scene->mMeshes[0]->mVertices[i].y;
vertPositions[vertPosIndex++] = scene->mMeshes[0]->mVertices[i].z;
// qDebug() << scene->mMeshes[0]->mVertices[i].x << ", "
// << scene->mMeshes[0]->mVertices[i].y << ", "
// << scene->mMeshes[0]->mVertices[i].z;
// qDebug() << "\n";
}
m_vertPosBuffer.create();
m_vertPosBuffer.bind();
m_vertPosBuffer.allocate(vertPositions, sizeof(vertPositions));
const char *vertShaderSrc =
"attribute vec3 aPosition;"
"uniform mat4 uModelMatrix;"
"void main()"
"{"
" gl_Position = uModelMatrix * vec4(aPosition, 1.0);"
"}";
const char *fragShaderSrc =
"void main()"
"{"
" gl_FragColor = vec4(0.5, 0.2, 0.7, 1.0);"
"}";
m_program.create();
m_program.addShaderFromSourceCode(QOpenGLShader::ShaderTypeBit::Vertex,
vertShaderSrc);
m_program.addShaderFromSourceCode(QOpenGLShader::ShaderTypeBit::Fragment,
fragShaderSrc);
m_program.link();
}
void resizeGL(int w, int h) override
{
glViewport(0, 0, w, h);
}
void paintGL() override
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
m_modelMatrix.setToIdentity();
m_modelMatrix.scale(0.5);
m_program.bind();
m_program.setUniformValue("uModelMatrix", m_modelMatrix);
m_vertPosBuffer.bind();
m_program.setAttributeBuffer("aPosition", GL_FLOAT, 0, 3);
m_program.enableAttributeArray("aPosition");
glDrawArrays(GL_TRIANGLES, 0, m_numVertices);
}
};
int main(int argc, char *argv[])
{
QApplication::setAttribute(Qt::ApplicationAttribute::AA_UseDesktopOpenGL);
QApplication app(argc, argv);
OpenGLWidget w;
w.show();
return app.exec();
}
```
[1]: https://i.stack.imgur.com/TuWBm.png
[2]: https://i.stack.imgur.com/ll38s.png |
I think you should do that not while creating content in ckeditor, but **after submitting** this html and **before saving** it.
For example, by using a scraping library or by searching for *img* tags in the ready ckeditor html code, you can then manipulate the 'style' attribute.
----------
Another approach is to override the *img* styles with a stylesheet on the page where you want to show the ckeditor content.
.html
<div id="content">
**ckeditor html**
</div>
.css
#content img {
max-width: 100% !important;
height: auto !important;
}
----------
Of course you could also make changes on the ckeditor side by using ckeditor plugins or overriding the config, but that can cause unexpected problems.
I got the same problem (an extra colon at the end of an input label). The solution I used is to set label = '' when defining the form fields and to write the label directly in the HTML, followed by inserting the form field.
I'm currently in the process of transitioning from using Terraform for managing my GitLab CI/CD pipelines to using OpenTofu. In this migration, I also need to integrate OIDC (OpenID Connect) authentication into my GitLab pipelines.
Previously, my .gitlab-ci.yml file looked something like this with Terraform:
```
# You should upgrade to the latest version. You can find the latest version at
# https://gitlab.com/gitlab-com/gl-security/security-operations/infrastructure-security-public/oidc-modules/-/releases
include:
- remote: 'https://gitlab.com/gitlab-com/gl-security/security-operations/infrastructure-security-public/oidc-modules/-/raw/3.1.2/templates/gcp_auth.yaml'
- template: "Terraform/Base.gitlab-ci.yml"
variables:
WI_POOL_PROVIDER: //iam.googleapis.com/projects/$GCP_PROJECT_NUMBER/locations/global/workloadIdentityPools/$WORKLOAD_IDENTITY_POOL/providers/$WORKLOAD_IDENTITY_POOL_PROVIDER
SERVICE_ACCOUNT: $SERVICE_ACCOUNT
TF_ROOT: infrastructure
TF_STATE_NAME: tfstate
stages:
- validate
- test
- build
- deploy
validate:
extends: .terraform:validate
needs: []
build:
extends:
- .google-oidc:auth
- .terraform:build
deploy:
extends:
- .google-oidc:auth
- .terraform:deploy
dependencies:
- build
```
Now, I want to replace the Terraform-based setup with OpenTofu, while also incorporating OIDC authentication into my pipeline. However, I'm unsure about how to structure the .gitlab-ci.yml file and configure OpenTofu to achieve this.
Could someone provide guidance on how to migrate from Terraform to OpenTofu for GitLab CI/CD pipelines, particularly focusing on integrating OIDC authentication into the pipeline setup? Any examples, tips, or resources would be greatly appreciated. Thank you! |
How do I solve a page-fault problem involving LRU, FIFO and Optimal page replacement algorithms? |
|page-fault| |
null |
I am creating a memoization example with a function that adds up / averages the elements of an array and compares it with the cached ones to retrieve them in case they are already stored.
In addition, I want to store only if the result of the function differs considerably (passes a threshold e.g. 5000 below).
I created an example using a decorator to do so. The results using the decorator are slightly slower than without the memoization, which is not OK. Also, is the logic of the decorator correct?
My code is attached below:
```
import time
import random
from collections import OrderedDict

def memoize(f):
    cache = {}
    def g(*args):
        if args[1] == 'avg':
            sum_key_arr = sum(args[0]) / len(list(args[0]))
        elif args[1] == 'sum':
            sum_key_arr = sum(args[0])
        print(sum_key_arr)
        if sum_key_arr not in cache:
            # key in dict cannot be an array so I use the sum of the array as the key
            for key, value in OrderedDict(sorted(cache.items())).items():
                if abs(sum_key_arr - key) <= 5000:  # threshold is great here so that all values are approximated!
                    # print('approximated')
                    return cache[key]
            else:
                # print('not approximated')
                cache[sum_key_arr] = f(args[0], args[1])
        return cache[sum_key_arr]
    return g

@memoize
def aggregate(dict_list_arr, operation):
    if operation == 'avg':
        return sum(dict_list_arr) / len(list(dict_list_arr))
    if operation == 'sum':
        return sum(dict_list_arr)
    return None

t = time.time()
for i in range(200, 150000):
    res = aggregate(list(range(i)), 'avg')
elapsed = time.time() - t
print(res)
print(elapsed)
```
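For comparison, a sketch of the same idea with a bucketed key (an assumption of mine: snapping the aggregate to a multiple of the threshold) turns the linear scan over `cache.items()` into a single dict lookup, which is where the posted decorator loses its speed advantage:

```python
def memoize_bucketed(f, threshold=5000):
    cache = {}
    def g(arr, operation):
        key_val = sum(arr) / len(arr) if operation == 'avg' else sum(arr)
        # snap the key to the threshold grid: one dict probe instead of scanning every cached key
        bucket = (operation, round(key_val / threshold))
        if bucket not in cache:
            cache[bucket] = f(arr, operation)
        return cache[bucket]
    return g

def aggregate(arr, operation):
    if operation == 'avg':
        return sum(arr) / len(arr)
    if operation == 'sum':
        return sum(arr)
    return None

fast_aggregate = memoize_bucketed(aggregate)
print(fast_aggregate(list(range(1000)), 'sum'))   # 499500, computed and cached
print(fast_aggregate(list(range(1001)), 'sum'))   # same bucket -> cached 499500
```

Note the trade-off: bucketing changes which inputs share a result (nearest grid point rather than "within 5000 of any cached key"), so check that this approximation is acceptable for your use case.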
|
How do I get the green block?
I have 1 mesh; it has been colored with a texture.
[enter image description here][1]
[1]: https://i.stack.imgur.com/N7jYR.png |
Get block in Mesh Unity |
|unity-game-engine|unityscript| |
My confusion was due to my ignorance of order of evaluation.
I thought the evaluation went as follows(which is wrong):
1. In `.*`, `.` is evaluated first, say `.` is replaced by 'x'. Here 'x' is just a literal character.
2. Then `x*` would be evaluated leading to strings like "", "x", "xx", "xxx", ...etc.
3. Thus my doubt as to how it produced strings like "ab12b3v34".
But the correct order of evaluation is:
1. In `.*`, `*` is evaluated first, thus resulting in another regex pattern like ` `, `.`, `..`, `...`, ...etc., where the `.`s are still wildcards.
2. Then each wildcard `.` is replaced by any character, thus resulting in a string (without newline characters) such as "ab12b3v34" (in this case from `.........`).
Thanks to all who commented for your patience.
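This can be checked directly; a small sketch with Python's `re` (any regex engine behaves the same way here):

```python
import re

# `*` applies to the wildcard `.` itself, so each repetition may match
# a DIFFERENT character; it is not one fixed literal repeated.
m = re.fullmatch(r'.*', 'ab12b3v34')
print(m.group(0))  # 'ab12b3v34'
```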
|
[[enter image description here](https://i.stack.imgur.com/ASnrs.png)]
this is my code
```
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

# departure/destination cities (assumed example values; not defined in the original snippet)
dep_city = "Mumbai"
des_city = "Delhi"

driver = webdriver.Edge(r"C:\Users\siddhi.tharval\Downloads\edgedriver_win64 (1).exe")
driver.get('https://www.makemytrip.com/flights/')
driver.maximize_window()
#to choose round trip
round_trip = driver.find_element(By.XPATH,'//*[@id="top-banner"]/div[2]/div/div/div/div[1]/ul/li[2]')
round_trip.click()
time.sleep(4)
#From city
fromcity = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.ID, 'fromCity'))).click()
#from city input box
fromfield=driver.find_element(By.XPATH, "//input[@placeholder='From']")
fromfield.send_keys(dep_city)
#dropdown first suggestion
firstsuggestion= driver.find_element(By.XPATH, '//*[@id="react-autowhatever-1-section-0-item-0"]')
firstsuggestion.click()
#tocity
tocity= WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.ID, 'toCity'))).click()
#tocity input box
tofield= driver.find_element(By.XPATH, "//input[@placeholder='To']")
tofield.send_keys(des_city)
#dropdown first suggestion
sugges=driver.find_element(By.XPATH, '//*[@id="react-autowhatever-1-section-0-item-0"]').click()
time.sleep(10)
#start date
startdate = driver.find_element(By.XPATH, "//div[@class='DayPicker-Day'][2]").click()
time.sleep(10)
#end date
enddate = driver.find_element(By.XPATH, "//div[@class='DayPicker-Day'][3]").click()
#From
search= WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.XPATH, '//a[contains(@class,"widgetSearchBtn")]'))).click()
```
Please give me a solution to get the flights page in my webdriver instead of this network error. After clicking the search button on the website, it does not open the flights list page and instead shows a network problem, as in the error image I shared. |
I want to read out linux kernel statistics of a single thread using _netlink_ socket and _taskstats_.
I could get _taskstats_ to work using a _Python_ wrapper (https://github.com/facebook/gnlpy) but I want to do a C implementation.
After setting up the socket, the message parameters and sending, the receiving `nl_recvmsgs_default(sock)` always returns an error code: `-7 ("Invalid input data or parameter")` or `-12 ("Object not found")`, depending on how I create the message to send.
I checked all method invocations before `nl_recvmsgs_default(sock)` but don't get any error back. I guess I am missing a part in setting up the message or socket, but don't know which part it is.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <linux/taskstats.h>
#include <netlink/netlink.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>
int callback_message(struct nl_msg *, void *);
int main(int argc, char ** argv) {
struct nl_sock * sock;
struct nl_msg * msg;
int family;
sock = nl_socket_alloc();
// Connect to generic netlink socket on kernel side
genl_connect(sock);
// get the id for the TASKSTATS generic family
family = genl_ctrl_resolve(sock, "TASKSTATS");
// Allocate a new netlink message and inherit netlink message header.
msg = nlmsg_alloc();
genlmsg_put(msg, NL_AUTO_PID, NL_AUTO_SEQ, family, 0, 0, TASKSTATS_CMD_GET, TASKSTATS_VERSION);
//error code: -7 NLE_INVAL "Invalid input data or parameter",
nla_put_string(msg, TASKSTATS_CMD_ATTR_REGISTER_CPUMASK, "0");
//error code: -12 NLE_OBJ_NOTFOUND "Obj not found"
//nla_put_string(msg, TASKSTATS_CMD_ATTR_PID, "583");
nl_send_auto(sock, msg);
nlmsg_free(msg);
// specify a callback for inbound messages
nl_socket_modify_cb(sock, NL_CB_MSG_IN, NL_CB_CUSTOM, callback_message, NULL);
// gives error code -7 or -12 depending on the two nla_put_string alternatives above
printf("recv code (0 = success): %d", nl_recvmsgs_default(sock));
}
int callback_message(struct nl_msg * nlmsg, void * arg) {
struct nlmsghdr * nlhdr;
struct nlattr * nlattrs[TASKSTATS_TYPE_MAX + 1];
struct nlattr * nlattr;
struct taskstats * stats;
int rem;
nlhdr = nlmsg_hdr(nlmsg);
int answer;
if ((answer = genlmsg_parse(nlhdr, 0, nlattrs, TASKSTATS_TYPE_MAX, NULL))
< 0) {
printf("error parsing msg\n");
}
if ((nlattr = nlattrs[TASKSTATS_TYPE_AGGR_TGID]) || (nlattr =
nlattrs[TASKSTATS_TYPE_AGGR_PID]) || (nlattr =
nlattrs[TASKSTATS_TYPE_NULL])) {
stats = nla_data(nla_next(nla_data(nlattr), &rem));
printf("---\n");
printf("pid: %u\n", stats->ac_pid);
printf("command: %s\n", stats->ac_comm);
printf("status: %u\n", stats->ac_exitcode);
printf("time:\n");
printf(" start: %u\n", stats->ac_btime);
printf(" elapsed: %llu\n", stats->ac_etime);
printf(" user: %llu\n", stats->ac_utime);
printf(" system: %llu\n", stats->ac_stime);
printf("memory:\n");
printf(" bytetime:\n");
printf(" rss: %llu\n", stats->coremem);
printf(" vsz: %llu\n", stats->virtmem);
printf(" peak:\n");
printf(" rss: %llu\n", stats->hiwater_rss);
printf(" vsz: %llu\n", stats->hiwater_vm);
printf("io:\n");
printf(" bytes:\n");
printf(" read: %llu\n", stats->read_char);
printf(" write: %llu\n", stats->write_char);
printf(" syscalls:\n");
printf(" read: %llu\n", stats->read_syscalls);
printf(" write: %llu\n", stats->write_syscalls);
} else {
printf("unknown attribute format received\n");
}
return 0;
} |
I am using pg-promise for performing a multi row insert of around 8 million records and facing the following error:
Error: Connection terminated unexpectedly
at Connection.<anonymous> (/usr/src/app/node_modules/pg/lib/client.js:132:73)
at Object.onceWrapper (node:events:509:28)
at Connection.emit (node:events:390:28)
at Socket.<anonymous> (/usr/src/app/node_modules/pg/lib/connection.js:63:12)
at Socket.emit (node:events:390:28)
at TCP.<anonymous> (node:net:687:12)
I have already tried to configure the connection with parameters like idleTimeoutMillis, connectionTimeoutMillis (also tried adding and removing parameters like keepAlive, keepalives_idle, statement_timeout, query_timeout) and still facing the issue.
```javascript
const db = pgp({
    host: host,
    port: port,
    database: database,
    user: user,
    password: pass,
    idleTimeoutMillis: 0,        // tried with multiple values
    connectionTimeoutMillis: 0,  // tried with multiple values
    keepAlive: true,
    keepalives_idle: 300,
    statement_timeout: false,
    query_timeout: false,
});
```
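One thing worth checking at this scale is whether all 8 million rows go into a single statement; a common mitigation is batching. This is a hedged sketch: the `chunk` helper, the batch size, and the table/column names are illustrative, not taken from the question (pg-promise's `helpers.ColumnSet` and `helpers.insert` are its documented multi-row insert API):

```javascript
// Split a large array of rows into batches so that no single INSERT
// statement keeps the connection busy long enough to be terminated.
function chunk(rows, size) {
  const batches = [];
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size));
  }
  return batches;
}

// Illustrative usage with pg-promise (table/columns are placeholders):
// const cs = new pgp.helpers.ColumnSet(['col_a', 'col_b'], { table: 'my_table' });
// for (const batch of chunk(records, 10000)) {
//   await db.none(pgp.helpers.insert(batch, cs));
// }

console.log(chunk([1, 2, 3, 4, 5], 2)); // [[1, 2], [3, 4], [5]]
```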
|
I have a project that I've created with Laravel 10. After publishing my project, I encountered a CORS error in the console on the client side, which uses React. Despite trying many methods, I was unable to resolve the issue. While researching, I found a package called Fruitcake CORS but encountered an error when I tried to install it. Can you help?
I've tried many methods but couldn't resolve the issue on the published website. This was the only method left, and it results in an error when I try to install it with Composer. |
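Worth noting: the Fruitcake CORS handling was absorbed into the framework around Laravel 9, so Laravel 10 already ships the CORS middleware and a `config/cors.php` file; installing the package separately is unnecessary. A sketch of that config (the origin value is a placeholder assumption):

```php
<?php

// config/cors.php -- shipped with the Laravel 10 skeleton.
// 'https://example-client.com' below is a placeholder origin.
return [
    'paths' => ['api/*', 'sanctum/csrf-cookie'],
    'allowed_methods' => ['*'],
    'allowed_origins' => ['https://example-client.com'],
    'allowed_origins_patterns' => [],
    'allowed_headers' => ['*'],
    'exposed_headers' => [],
    'max_age' => 0,
    'supports_credentials' => false,
];
```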
Composer installation fails and reverts ./composer.json and ./composer.lock to original content |
|php|laravel|cors|composer-php|laravel-10| |
This is from a .ipynb Jupyter notebook in VS Code on Windows 10.
Even when I `pip install pandas` in the terminal, it still gives this error:
```
ModuleNotFoundError Traceback (most recent call last)
Cell In[1], line 2
1 import numpy as np
----> 2 import pandas as pd
3 import matplotlib.pyplot as plt
5 # pip install wheel webencodings jinja2 packaging markupsafe werkzeug flatbuffers libclang typing-extensions wrapt
ModuleNotFoundError: No module named 'pandas'
```
I tried typing 'pip install pandas' in my vscode terminal and it showed this:
```
Requirement already satisfied: pandas in c:\users\s2hun\desktop\machine learning intro\magic\lib\site-packages (2.2.1)
Requirement already satisfied: numpy<2,>=1.26.0 in c:\users\s2hun\desktop\machine learning intro\magic\lib\site-packages (from pandas) (1.26.4)
Requirement already satisfied: python-dateutil>=2.8.2 in c:\users\s2hun\desktop\machine learning intro\magic\lib\site-packages (from pandas) (2.9.0.post0)
Requirement already satisfied: pytz>=2020.1 in c:\users\s2hun\desktop\machine learning intro\magic\lib\site-packages (from pandas) (2024.1)
Requirement already satisfied: tzdata>=2022.7 in c:\users\s2hun\desktop\machine learning intro\magic\lib\site-packages (from pandas) (2024.1)
Requirement already satisfied: six>=1.5 in c:\users\s2hun\desktop\machine learning intro\magic\lib\site-packages (from python-dateutil>=2.8.2->pandas) (1.16.0)
```
I then ran the cell which contained this code:
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
When I did so the following error appeared:
```
ModuleNotFoundError Traceback (most recent call last)
Cell In[2], line 2
1 import numpy as np
----> 2 import pandas as pd
3 import matplotlib.pyplot as plt
5 # pip install wheel webencodings jinja2 packaging markupsafe werkzeug flatbuffers libclang typing-extensions wrapt
ModuleNotFoundError: No module named 'pandas'
```
I don't know how to fix this, and I have had the same problem installing various other Python modules in the past. |
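A common cause (not certain from the question alone) is that the notebook kernel uses a different interpreter than the terminal; the pip output above points at a `magic` virtual environment. A quick diagnostic sketch; run it both in a notebook cell and in the terminal's `python`:

```python
import sys

# If these paths differ between the notebook and the terminal, pandas was
# installed into an environment the notebook kernel is not using; pick the
# matching interpreter via VS Code's "Select Kernel" command.
print(sys.executable)  # path of the interpreter actually running this code
print(sys.prefix)      # root of the active (virtual) environment
```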
I am web scraping a URL https://ephtracking.cdc.gov/DataExplorer/ using the below code
```python
# imports and driver service assumed from the surrounding script
from selenium import webdriver
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as ExpectedConditions

chrome_service = Service()  # assumed; driver path not shown in the question
url = 'https://ephtracking.cdc.gov/DataExplorer/'  # assumed from the question text

options = webdriver.ChromeOptions()
options.headless = False
options.add_argument("window-size=1920,1080")
options.page_load_strategy = 'none'
options.add_argument("--enable-javascript")
options.add_argument('--ignore-certificate-errors')
driver = Chrome(options=options, service=chrome_service)
driver.get(url)
wait= WebDriverWait(driver,20)
step1Content_click = wait.until(ExpectedConditions.presence_of_element_located((
By.XPATH,'//select[@id="contentArea"]//option[text()="Tornadoes"]')))
driver.execute_script("arguments[0].click();", step1Content_click)
```
However, it's unable to select the option `Tornadoes` from the drop-down list of the `Step1 Content` -> `Select Content Area`. |
It happens because you don't close your URL path with '/', and, by default, `django.middleware.common.CommonMiddleware` is switched on.
In your case:
# encyclopedia/urls.py
from django.urls import path
from . import views

# List of urls
urlpatterns = [
    path("", views.index, name="index"),
    path("<str:title>/", views.entry, name="page"),
] |
```
TITLE
——
1 nuit
1 nuit
1 nuit
1 nuit
1 nuit
total : 5 nuits
——
1 nuit
1 nuit
1 nuit
total : 3 nuits
——
1 nuit
1 nuit
1 nuit
1 nuit
1 nuit
1 nuit
and so on...
```
I have this paragraph in which I'd like to select the last lines, after the occurrence of the last `——`.
It should match and group the 6 lines right after that `——`... I've tried pretty much everything that crossed my mind so far, but I must be missing something here.
I tested `(?s:.*\s)\K——`, which is able to match the last `——` of the document. But I can't seem to be able to select the lines after that match.
Thanks.
The point here is to count the lines after that, so if I'm only able to select the "1" or "nuit", that's fine...
The expected capture:
```
1 nuit
1 nuit
1 nuit
1 nuit
1 nuit
1 nuit
``` |
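If the tool allows more than a single regex pass, a non-regex sketch of the same idea is to take everything after the last `——` and count its non-empty lines (the sample text below is shortened from the question):

```python
text = """TITLE
——
1 nuit
1 nuit
total : 5 nuits
——
1 nuit
1 nuit
1 nuit
"""

# Everything after the LAST '——' separator, no regex needed:
tail = text.rsplit('——', 1)[-1]
lines = [line for line in tail.splitlines() if line.strip()]
print(len(lines))  # 3 non-empty lines after the last separator in this sample
```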
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
# Inicializa el driver del navegador (en este caso, Chrome)
driver = webdriver.Chrome()
# Navega a la página web
driver.get('https://yopmail.com/es/email-generator')
# Localiza el elemento que quieres copiar (necesitarás el selector CSS correcto aquí)
WebDriverWait(driver, 5)\
.until(EC.element_to_be_clickable((By.XPATH,
"/html/body/div/div[2]/main/div/div[2]/div/div[1]/div[1]/div")))
mails = driver.find_element_by_xpath("/html/body/div/div[2]/main/div/div[2]/div/div[1]/div[1]/div")
mails = mails.text
print(mails)
driver.quit()
```
The error:
```
Traceback (most recent call last):
File "C:\Users\mback\PycharmProjects\pythonProject\.idea\G-Mails.py", line 18, in <module>
mails = driver.find_element_by_xpath("/html/body/div/div[2]/main/div/div[2]/div/div[1]/div[1]/div")
AttributeError: 'WebDriver' object has no attribute 'find_element_by_xpath'
Process finished with exit code 1
```
[This is the piece of text i want to copy(just the Email)](https://i.stack.imgur.com/yih3s.png)
I want to copy the generated e-mail from the website and I can't, please help me.
|
I've got a component which does this:
```jsx
<CodeBlock language="css" code={`p { color: hsl(1deg, 100%, 50%)};`} />
```
Inside the component I run Prism.js for code highlighting. This is all good. The problem I have is when the indentation of the general code leads to a situation like this:
```jsx
<Box>
<CodeBlock code={`p {
color: hsl(0deg, 100%, 50%);
}`} language="css" />
</Box>
```
and lots of extra whitespace gets added.
I want to be able to process and format the strings for whitespace. I can trim the edges of the string but I need to sort out the indentation. I have looked at [indent.js][1] but that doesn't remove whitespace from the beginning.
Does anyone know how I can solve this?
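For reference, a minimal hand-rolled sketch of the dedent idea (not from any library): strip the smallest indentation shared by all non-blank lines, then trim the edges.

```javascript
// Minimal dedent sketch: removes the smallest indentation shared by
// all non-blank lines, then trims leading/trailing whitespace.
function dedent(str) {
  const lines = str.replace(/^\n/, '').split('\n');
  const indents = lines
    .filter((line) => line.trim())
    .map((line) => line.match(/^\s*/)[0].length);
  if (indents.length === 0) return str.trim();
  const min = Math.min(...indents);
  return lines.map((line) => line.slice(min)).join('\n').trim();
}

console.log(dedent(`
      p {
        color: red;
      }`));
// p {
//   color: red;
// }
```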
EDIT (on request):
Current Output:
```css
p {
color: hsl(0deg, 100%, 50%);
}
```
Required Output:
```css
p {
color: hsl(0deg, 100%, 50%);
}
```
[1]: https://zebzhao.github.io/indent.js/ |
I am working on a project and am trying to release the app but it is not working in the released version.
The app is perfectly fine in the debug version:

but this app is not working in the released version:
 |
Flutter project is working in debug version but not in the release version |
I am trying to call an external API from AWS Lambda, but it currently does not give me anything: no errors or any sign of the API request even being sent out. I am frankly not sure what is causing the issue. I have tried the following.
1. Change the permissions to allow for HTTP requests
2. Connected the Lambda Function to a VPC
Below is my code, please let me know if there is anything else I should be looking into.
```
import fetch from 'node-fetch';
function findWordsInQuotations(sentence) {
// Regular expression to match words within double quotations
// This regex pattern captures words within double quotes
const regex = /"([^"]+)"/g;
// Use the match method to find all words in quotations
const matches = sentence.match(regex);
if (matches) {
// Extract and print the words within quotations
const wordsInQuotations = matches.map(match => match.replace(/"/g, '')); // Remove double quotes
return wordsInQuotations;
} else {
return [];
}
}
async function getReverseDictionary() {
let apiKey = process.env.ChatGPT_API_KEY;
const inputPhrase = "smart but evil";
const inputType = "";
const inputTone = "";
// Use the OpenAI GPT-3 API to find a singular word
let prompt = `What are the five best words for "${inputPhrase}", when you respond, just respond with the words, with the first letter capitalized and all of the words delimited by commas.`;
if (inputType.trim() === "") {
console.log("InputType is empty!");
} else {
prompt = prompt + 'For reference, this is for a ' + inputType;
}
if (inputTone.trim() === "") {
console.log("InputTone is empty!");
} else {
prompt = prompt + '. For reference, I am going for a ' + inputTone + ' tone.';
}
console.log(prompt);
const response = await fetch('https://api.openai.com/v1/chat/completions', {
method: 'POST',
headers: {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: 'gpt-3.5-turbo',
messages: [
{
role: 'system',
content: "You are a dictionary"
},
{
role: 'user',
content: prompt
}
],
max_tokens: 50, // Limit the response to one token (word)
}),
});
console.log(response);
const data = await response.json();
console.log('the CHATGPT RESPONSE:' + data);
const chatGPToutput = data.choices[0].message.content;
console.log(chatGPToutput);
const singularWord = findWordsInQuotations(chatGPToutput);
const wordsArray = chatGPToutput.split(',');
console.log(wordsArray);
let resultOne = wordsArray[0];
let resultTwo = wordsArray[1];
let resultThree = wordsArray[2];
let resultFour = wordsArray[3];
let resultFive = wordsArray[4];
return resultFive;
}
export const handler = async (event) => {
// TODO implement
;
const response = {
Result_One: getReverseDictionary(),
body: JSON.stringify('Hello from Lambda!'),
};
return response;
};
``` |
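Separately from permissions and VPC settings, note that the handler in the snippet above never awaits `getReverseDictionary()`, so the function can return (and the runtime can be frozen) before the fetch completes. A minimal sketch of an awaited async handler, where `fetchWords` is a stand-in stub for the real API call (an assumption for the sketch):

```javascript
// Stub standing in for the async API call above (assumption for the sketch).
async function fetchWords() {
  return 'Evil';
}

// The handler awaits its async work before returning; without `await`,
// `Result_One` would be a pending Promise and Lambda could freeze the
// runtime before the HTTP request ever goes out.
async function handler(event) {
  const resultOne = await fetchWords();
  return {
    Result_One: resultOne,
    body: JSON.stringify('Hello from Lambda!'),
  };
}

handler({}).then((r) => console.log(r.Result_One)); // prints "Evil"
```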
I'm trying to set up a scheduled task with ECS Fargate. The task was dockerized and will be run through AWS ECS with Fargate. Unfortunately, the service I want to run needs to access a partner's API where the IP address needs to be whitelisted. I see that for each execution of the task with Fargate, a new ENI with a different IP is assigned.
How is it possible to assign a static IP address to a AWS ECS Fargate Task?
|
How can I have an ECS Fargate scheduled job access API with an IP address whitelist policy? |
I am not sure what is wrong. I have a basic understanding of git, but I am not an expert.
In my remote branch, I have a file that has a bunch of changes, etc in it.
In my local branch, this file is completely empty.
When I open up github, and the file, I notice it has something like this, so I believe something is out of sync:
<<<<<<< HEAD:my_repo/category_1/run_daily_jobs/summary_daily_job.kjb
<xloc>1328</xloc>
<yloc>80</yloc>
=======
<xloc>1280</xloc>
<yloc>128</yloc>
>>>>>>> 44abcxyzbunchofvalues:my_repo/summary_daily_job.kjb
Based on the comments, here is what I have done:
1. Went to github
2. Found the file
3. Googled "how to read a merge conflict in github" and edited the file appropriately.
4. Ran "git pull origin/my_branch"
When I ran this command here is what the message said:
From my_repository
* branch my_branch -> FETCH_HEAD
updating some value
Fast-forward
.../name_of_file.kjb | 367 --------------------
1 file changed, 367 deletions(-)
5. For some reason, the file is still blank when I open it in the software I am using (Pentaho). Maybe this is a software issue at this point, though? |
I just tried the AWS base image and it works! <br>
A new Dockerfile:
```dockerfile
FROM public.ecr.aws/lambda/nodejs:20 as builder
COPY . ${LAMBDA_TASK_ROOT}
RUN npm install
RUN npm install datadog-lambda-js dd-trace
COPY --from=public.ecr.aws/datadog/lambda-extension:55 /opt/extensions/ /opt/extensions
ENV DD_LAMBDA_HANDLER="handler.handler"
CMD ["node_modules/datadog-lambda-js/dist/handler.handler"]
```
|
I am trying to install the xlwings add-in in a conda cmd, but I'm getting this error:
>xlwings addin install
xlwings version: 0.29.1
FileNotFoundError(2, 'No such file or directory')
I downloaded https://github.com/ZoomerAnalytics/xlwings/releases/download/v0.11.4/xlwings.xlam as suggested here: https://stackoverflow.com/questions/45757157/unable-to-install-xlwings-add-in-into-excel, and saved it to C:\Users\$user\, but it still gives me the error. What am I missing here? |
"FileNotFoundError" for xlwings add-in installation |
|windows|conda|excel-addins|xlwings|filenotfounderror| |
Put the `&` on the command you want to put in the background. In this context, that might be your whole loop:
```
#!/usr/bin/env bash
# ^^^^- ensures that array support is available
hValues=( 1 10 100 ) # best practice is to iterate over array elements, not...
kValues=( 0.01 0.1 ) # ...words in strings!
for hVal in "${hValues[@]}"; do
for kVal in "${kValues[@]}"; do
python program.py "$hVal" "$kVal"
done
done &
```
Notice how the `&` was moved to be after the `done` -- that way we background the entire loop, not a single command within it, so the backgrounded process -- running that loop -- invokes only one copy of the Python interpreter at a time. |
### Problem description
I'm writing a Python library and I am planning to upload both the sdist (.tar.gz) and the wheel to PyPI. The [build docs say](https://build.pypa.io/) that by running
```
python -m build
```
I get an sdist created from the source tree and a *wheel created from the sdist*, which is nice since the sdist gets tested here "for free". Now I want to run tests (pytest) against the wheel with multiple Python versions. What is the easiest way to do that?
I have been using tox and I see there's an option for [setting package to "wheel"](https://tox.wiki/en/latest/config.html#package):
```
[testenv]
description = run the tests with pytest
package = wheel
wheel_build_env = .pkg
```
But that does not say *how* the wheel is produced. From the tox logs (with `-vvv`) I can see this:
```
.pkg: 760 I exit None (0.01 seconds) /home/niko/code/myproj> python /home/niko/code/myproj/.tox/.tox/lib/python3.10/site-packages/pyproject_api/_backend.py True flit_core.buildapi pid=251971 [tox/execute/api.py:280]
.pkg: 761 W build_wheel> python /home/niko/code/myproj/.tox/.tox/lib/python3.10/site-packages/pyproject_api/_backend.py True flit_core.buildapi [tox/tox_env/api.py:425]
Backend: run command build_wheel with args {'wheel_directory': '/home/niko/code/myproj/.tox/.pkg/dist', 'config_settings': None, 'metadata_directory': None}
```
These are the commands related to creating the wheel. So this one uses [tox-dev/pyproject-api](https://github.com/tox-dev/pyproject-api) under the hood. But it is still a bit unclear whether tox
a) creates the wheel directly from the source tree<br>
b) creates the wheel from an sdist which is created from the source tree in a way which *is identical to* `python -m build`<br>
c) creates the wheel from an sdist which is created from the source tree in a way which *differs from* `python -m build`
## Question
I want to create a wheel from sdist which is created from the source tree, and I want to run unit tests against the wheel(s) with multiple python versions. I prefer using `python -m build` for the wheel creation. What would be the idiomatic way to run the tests against the wheel(s) ? Can I use tox for that?
|
1) Make your url names unique.
2) Make url parameters as below.
```python
path("products/",product_list_view,name="product_list"),
path("product/<int:pid>/",product_detail_view,name="product_detail"),
path("product/<slug:slug>/",get_product,name="get_product")
```
[ref][1]
[1]: https://docs.djangoproject.com/en/5.0/topics/http/urls/#example
3) Make an error page template and show this page on exception:
```
def get_product(request, slug):
try:
product = Product.objects.get(slug=slug)
if request.GET.get('size'):
size = request.GET.get('size')
print(size)
return render(request, "core/product_detail.html")
except Exception as e:
print(e)
return render(request, "core/error_page.html")
```
4) Obviously, your model has no field called `slug`, so the code below wouldn't work:
```python
product = Product.objects.get(slug=slug)
``` |
You can use item-level targeting in Preferences Section of the Group Policy Object (GPO).
"Within a single Group Policy object (GPO), you can include multiple preference items, each customized for selected users or computers and each targeted to
apply settings only to the relevant users or computers"
Ref: https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn789189(v=ws.11)
So you can apply user preferences to set default printers based on the name of the computer they log on to.
"A Computer Name targeting item allows a preference item to be applied to computers or users only if the computer's name matches the specified computer name in the targeting item."
Please note that using item-level targeting with computer-name selection does not imply a computer policy; this is still a single GPO applied to users, containing several preference settings that set different default printers for domain users based on the logged-on computer name.
|
Because it's actually a fun problem, I wrote an algorithm that doesn't rely on building then counting permutations, but rather on mathematically calculating them;
the `count_permutations_beginning_with` function literally does what its name says, and the `build_x` function recursively builds the rank-th permutation.
```
from collections import Counter
lst = [2, 2, 2, 2, 4, 4, 5, 5, 5, 6, 6, 6, 8, 8]
expected_diff = 5_617_961
initial_digits = Counter(str(d) for d in sorted(lst))
def fac(n):
return n * fac(n-1) if n else 1
def count_permutations_beginning_with(n):
digits = initial_digits.copy()
digits.subtract(n)
prod = 1
for c in digits.values():
prod *= fac(c)
return fac(digits.total()) // prod
def build_x(rank, root='', remaining_digits=None):
if remaining_digits is None:
remaining_digits = initial_digits.copy()
print (rank, root, remaining_digits)
if not remaining_digits.total():
return root
for d in remaining_digits:
if not remaining_digits[d]:
continue
if (d_quantity := count_permutations_beginning_with(root + d)) < rank:
rank -= d_quantity
else:
break
remaining_digits.subtract(d)
return build_x(rank, root + d, remaining_digits)
x_rank = (count_permutations_beginning_with('') + 1 + expected_diff) // 2
print(build_x(x_rank))
# '58225522644668'
```
|
How to handle the Failed to load Clerk error |
|next.js|clerk| |
I cloned an extension from GitHub, trying to modify a shortcut to work for me.
[The extension][1]
But when I load the extension from the "app" folder (the one containing manifest.json), Chrome gives back several errors:
- Unrecognized manifest key 'browser_specific_settings'.
- Manifest version 2 is deprecated, and support will be removed in 2024.
- Uncaught SyntaxError: Cannot use import statement outside a module
And I believe this is because I haven't figured out how to tell Google Chrome to load the extension from the main folder.
Correct me if I'm wrong, because I don't have any knowledge about building a Chrome extension; I'm just trying to get the shortcuts to work.
[1]: https://github.com/pc035860/yt-timetag |
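For what it's worth, selecting the `app` folder (the one containing manifest.json) is the correct way to use "Load unpacked": Chrome always expects manifest.json at the root of the chosen directory. The remaining messages come from the manifest itself: `browser_specific_settings` is a Firefox-only key, Manifest V2 is deprecated, and bare `import` statements need a module-type background service worker under V3. A minimal Manifest V3 sketch; all names, file paths, and the shortcut below are placeholders, not taken from the yt-timetag repo:

```json
{
  "manifest_version": 3,
  "name": "yt-timetag (local build)",
  "version": "0.0.1",
  "background": { "service_worker": "background.js", "type": "module" },
  "commands": {
    "add-timetag": {
      "suggested_key": { "default": "Alt+T" },
      "description": "Add a time tag"
    }
  }
}
```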
How do I load a Chrome extension when manifest.json is in a subfolder (app) instead of the main folder |
|google-chrome-extension|google-chrome-devtools| |
I want to make my Python IDE, PyCharm, scroll to the bottom of the console window automatically, so that I can watch the latest output messages of my running code.[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/hJn2P.jpg |
How to scroll to the bottom of console window in PyCharm2019 automatically? |
|debugging|scroll|pycharm|console|auto| |
I am trying to use an EMR Studio workspace (notebooks) with an EMR Serverless application, but it gives me this error when I go to select the kernel (like python3). I have explored all the docs on the policies and trust policies, but I don't understand why I am getting this error as a root user.
My trust policy for the role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "elasticmapreduce.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"aws:SourceAccount": "123"
},
"ArnLike": {
"aws:SourceArn": "arn:aws:elasticmapreduce:us-east-2:123:*"
}
}
},
{
"Effect": "Allow",
"Principal": {
"Service": "emr-serverless.amazonaws.com"
},
"Action": [
"sts:AssumeRole",
"sts:SetContext"
]
}
]
}
My policies attached to the role:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "EMRServerlessInteractiveAccess",
"Effect": "Allow",
"Action": "emr-serverless:AccessInteractiveEndpoints",
"Resource": "arn:aws:emr-serverless:us-east-2:123:/applications/*"
},
{
"Sid": "ReadAccessForEMRSamples",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::*.elasticmapreduce",
"arn:aws:s3:::*.elasticmapreduce/*"
]
},
{
"Sid": "EMRServerlessRuntimeRoleAccess",
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "*"
},
{
"Sid": "FullAccessToOutputBucket",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:GetEncryptionConfiguration",
"s3:ListBucket",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::s3bu",
"arn:aws:s3:::s3bu/*"
]
},
{
"Sid": "GlueCreateAndReadDataCatalog",
"Effect": "Allow",
"Action": [
"glue:GetDatabase",
"glue:CreateDatabase",
"glue:GetDataBases",
"glue:CreateTable",
"glue:GetTable",
"glue:UpdateTable",
"glue:DeleteTable",
"glue:GetTables",
"glue:GetPartition",
"glue:GetPartitions",
"glue:CreatePartition",
"glue:BatchCreatePartition",
"glue:GetUserDefinedFunctions"
],
"Resource": [
"*"
]
},
{
"Sid": "AllowEMRReadOnlyActions",
"Effect": "Allow",
"Action": [
"elasticmapreduce:ListInstances",
"elasticmapreduce:DescribeCluster",
"elasticmapreduce:ListSteps"
],
"Resource": "*"
},
{
"Sid": "AllowEC2ENIActionsWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterfacePermission",
"ec2:DeleteNetworkInterface"
],
"Resource": [
"arn:aws:ec2:*:*:network-interface/*"
],
"Condition": {
"StringEquals": {
"aws:ResourceTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowEC2ENIAttributeAction",
"Effect": "Allow",
"Action": [
"ec2:ModifyNetworkInterfaceAttribute"
],
"Resource": [
"arn:aws:ec2:*:*:instance/*",
"arn:aws:ec2:*:*:network-interface/*",
"arn:aws:ec2:*:*:security-group/*"
]
},
{
"Sid": "AllowEC2SecurityGroupActionsWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:AuthorizeSecurityGroupEgress",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:RevokeSecurityGroupEgress",
"ec2:RevokeSecurityGroupIngress",
"ec2:DeleteNetworkInterfacePermission"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowDefaultEC2SecurityGroupsCreationWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:CreateSecurityGroup"
],
"Resource": [
"arn:aws:ec2:*:*:security-group/*"
],
"Condition": {
"StringEquals": {
"aws:RequestTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowDefaultEC2SecurityGroupsCreationInVPCWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:CreateSecurityGroup"
],
"Resource": [
"arn:aws:ec2:*:*:vpc/*"
],
"Condition": {
"StringEquals": {
"aws:ResourceTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowAddingEMRTagsDuringDefaultSecurityGroupCreation",
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:*:*:security-group/*",
"Condition": {
"StringEquals": {
"aws:RequestTag/for-use-with-amazon-emr-managed-policies": "true",
"ec2:CreateAction": "CreateSecurityGroup"
}
}
},
{
"Sid": "AllowEC2ENICreationWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterface"
],
"Resource": [
"arn:aws:ec2:*:*:network-interface/*"
],
"Condition": {
"StringEquals": {
"aws:RequestTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowEC2ENICreationInSubnetAndSecurityGroupWithEMRTags",
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterface"
],
"Resource": [
"arn:aws:ec2:*:*:subnet/*",
"arn:aws:ec2:*:*:security-group/*"
],
"Condition": {
"StringEquals": {
"aws:ResourceTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowAddingTagsDuringEC2ENICreation",
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:*:*:network-interface/*",
"Condition": {
"StringEquals": {
"ec2:CreateAction": "CreateNetworkInterface"
}
}
},
{
"Sid": "AllowEC2ReadOnlyActions",
"Effect": "Allow",
"Action": [
"ec2:DescribeSecurityGroups",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeTags",
"ec2:DescribeInstances",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs"
],
"Resource": "*"
},
{
"Sid": "AllowSecretsManagerReadOnlyActionsWithEMRTags",
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue"
],
"Resource": "arn:aws:secretsmanager:*:*:secret:*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/for-use-with-amazon-emr-managed-policies": "true"
}
}
},
{
"Sid": "AllowWorkspaceCollaboration",
"Effect": "Allow",
"Action": [
"iam:GetUser",
"iam:GetRole",
"iam:ListUsers",
"iam:ListRoles",
"sso:GetManagedApplicationInstance",
"sso-directory:SearchUsers"
],
"Resource": "*"
}
]
}
[Error when selecting kernel][1]
[1]: https://i.stack.imgur.com/8gjIp.png |
How to use EMR Studio notebooks with EMR Serverless |
|amazon-web-services|amazon-emr| |
{"Voters":[{"Id":9718056,"DisplayName":"Arkellys"},{"Id":4541695,"DisplayName":"DVT"},{"Id":2422778,"DisplayName":"Mike Szyndel"}],"SiteSpecificCloseReasonIds":[13]} |
I have the following simplified code. I want the manager's TryHandle function to run only when the mutex is not locked; otherwise the call should be skipped. TryHandle can be called from multiple goroutines:
var (
	mu sync.Mutex
)

// can be called by multiple goroutines
func (m *Manager) TryHandle(work *Work) {
	if mu.TryLock() {
		defer mu.Unlock()
		defer func() {
			if r := recover(); r != nil {
				fmt.Printf("Recovered from panic. Error: %s\n", r)
				fmt.Println("stacktrace from panic: \n" + string(debug.Stack()))
			}
		}()
		m.HandleWork(work)
	} else {
		fmt.Println("mutex is busy.")
	}
}
At first it seemed to work; the program ran for some hours without issues. TryHandle was called frequently, up to 5-10 times per second, and only very rarely was the mutex actually busy.
But this morning I saw in the logs that "mutex is busy" was printed on every call to TryHandle, and one CPU core was running at 100%.
Of course, the most obvious explanation would be that one call to m.HandleWork() never returned, so the mutex was never unlocked. Unfortunately the log buffer is small, so I have no proof.
But apart from that, and since I am (almost!) sure m.HandleWork cannot be the culprit: is it possible that a panic occurred and my deferred recover function somehow prevented the deferred mu.Unlock() from running?
Or is it a problem that I declare mu as a standalone package-level variable instead of making it a field on my Manager struct?
Call an External API from AWS Lambda |
|amazon-web-services|api|aws-lambda| |
null |
I came across a workaround that was suggested in an unofficial capacity by someone at AppDynamics during their local lab explorations. While this solution isn't officially supported by AppDynamics, it has proven effective for adjusting the log levels of both the Proxy and the Watchdog components in my AppDynamics setup. I'd like to share the steps involved, but please proceed with caution and understand that this is not a sanctioned solution.
I recommend changing only the log4j2.xml file, because the proxy messages appear to be responsible for almost 99% of the log volume.
Here's a summary of the steps:
- **Proxy Log Level:** The `log4j2.xml` file controls this. You can find it within the `appdynamics_bindeps` module. For example, in my WSL setup it's located at `/home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml`. In the Docker image `python:3.9`, the path is `/usr/local/lib/python3.9/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml`. Modify the `<AsyncLogger>` level within the `<Loggers>` section to one of: debug, info, warn, error, or fatal.
- **Watchdog Log Level:** This can be adjusted in the `proxy.py` file found within the `appdynamics` Python module. For example, in my WSL setup it's located at `/home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics/scripts/pyagent/commands/proxy.py`. In the Docker image `python:3.9`, the path is `/usr/local/lib/python3.9/site-packages/appdynamics/scripts/pyagent/commands/proxy.py`. You will need to hardcode the log level in the `configure_proxy_logger` and `configure_watchdog_logger` functions by changing the `level` variable.
## My versions
```bash
$ pip freeze | grep appdynamics
appdynamics==24.2.0.6567
appdynamics-bindeps-linux-x64==24.2.0
appdynamics-proxysupport-linux-x64==11.68.3
```
## Original files
### log4j2.xml
```xml
<Loggers>
<!-- Modify each <AsyncLogger> level as needed -->
<AsyncLogger name="com.singularity" level="info" additivity="false">
<AppenderRef ref="Default"/>
<AppenderRef ref="RESTAppender"/>
<AppenderRef ref="Console"/>
</AsyncLogger>
</Loggers>
```
### proxy.py
```python
def configure_proxy_logger(debug):
logger = logging.getLogger('appdynamics.proxy')
level = logging.DEBUG if debug else logging.INFO
pass
def configure_watchdog_logger(debug):
logger = logging.getLogger('appdynamics.proxy')
level = logging.DEBUG if debug else logging.INFO
pass
```
## My Script to create environment variables to log4j2.xml and proxy.py
### update_appdynamics_log_level.sh
```bash
#!/bin/sh
# Check if PYENV_ROOT is not set
if [ -z "$PYENV_ROOT" ]; then
# If PYENV_ROOT is not set, then set it to the default value
export PYENV_ROOT="/usr/local/lib"
echo "PYENV_ROOT was not set. Setting it to default: $PYENV_ROOT"
else
echo "PYENV_ROOT is already set to: $PYENV_ROOT"
fi
echo "=========================== log4j2 - appdynamics_bindeps module ========================="
# Find the appdynamics_bindeps directory
APP_APPD_BINDEPS_DIR=$(find "$PYENV_ROOT" -type d -name "appdynamics_bindeps" -print -quit)
if [ -z "$APP_APPD_BINDEPS_DIR" ]; then
echo "Error: appdynamics_bindeps directory not found."
exit 1
fi
echo "Found appdynamics_bindeps directory at $APP_APPD_BINDEPS_DIR"
# Find the log4j2.xml file within the appdynamics_bindeps directory
APP_LOG4J2_FILE=$(find "$APP_APPD_BINDEPS_DIR" -type f -name "log4j2.xml" -print -quit)
if [ -z "$APP_LOG4J2_FILE" ]; then
echo "Error: log4j2.xml file not found within the appdynamics_bindeps directory."
exit 1
fi
echo "Found log4j2.xml file at $APP_LOG4J2_FILE"
# Modify the log level in the log4j2.xml file
echo "Modifying log level in log4j2.xml file"
sed -i 's/level="info"/level="${env:APP_APPD_LOG4J2_LOG_LEVEL:-info}"/g' "$APP_LOG4J2_FILE"
echo "log4j2.xml file modified successfully."
echo "=========================== watchdog - appdynamics module ==============================="
# Find the appdynamics directory
APP_APPD_DIR=$(find "$PYENV_ROOT" -type d -name "appdynamics" -print -quit)
if [ -z "$APP_APPD_DIR" ]; then
echo "Error: appdynamics directory not found."
exit 1
fi
echo "Found appdynamics directory at $APP_APPD_DIR"
# Find the proxy.py file within the appdynamics directory
APP_PROXY_PY_FILE=$(find "$APP_APPD_DIR" -type f -name "proxy.py" -print -quit)
if [ -z "$APP_PROXY_PY_FILE" ]; then
echo "Error: proxy.py file not found within the appdynamics directory."
exit 1
fi
echo "Found proxy.py file at $APP_PROXY_PY_FILE"
# Modify the log level in the proxy.py file
echo "Modifying log level in proxy.py file"
sed -i 's/logging.DEBUG if debug else logging.INFO/os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()/g' "$APP_PROXY_PY_FILE"
echo "proxy.py file modified successfully."
```
## Dockerfile
Dockerfile to run pyagent with FastAPI and run this script
```dockerfile
# Use a specific version of the python image
FROM python:3.9
# Set the working directory in the container
WORKDIR /app
# First, copy only the requirements file and install dependencies to leverage Docker cache
COPY requirements.txt ./
RUN python3 -m pip install --no-cache-dir -r requirements.txt
# Now copy the rest of the application to the container
COPY . .
# Make the update_log4j2.sh and update_watchdog.sh scripts executable and run them
RUN chmod +x update_appdynamics_log_level.sh && \
./update_appdynamics_log_level.sh
# Set environment variables
ENV APP_APPD_LOG4J2_LOG_LEVEL="warn" \
APP_APPD_WATCHDOG_LOG_LEVEL="warn"
EXPOSE 8000
# Command to run the FastAPI application with pyagent
CMD ["pyagent", "run", "uvicorn", "main:app", "--proxy-headers", "--host","0.0.0.0", "--port","8000"]
```
## Files changed by the script
### log4j2.xml
```xml
<Loggers>
<!-- Modify each <AsyncLogger> level as needed -->
<AsyncLogger name="com.singularity" level="${env:APP_APPD_LOG4J2_LOG_LEVEL:-info}" additivity="false">
<AppenderRef ref="Default"/>
<AppenderRef ref="RESTAppender"/>
<AppenderRef ref="Console"/>
</AsyncLogger>
</Loggers>
```
### proxy.py
```python
def configure_proxy_logger(debug):
logger = logging.getLogger('appdynamics.proxy')
level = os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()
pass
def configure_watchdog_logger(debug):
logger = logging.getLogger('appdynamics.proxy')
level = os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()
pass
```
## Warning
Please note, these paths and methods may vary based on your AppDynamics version and environment setup. Always backup files before making changes and be aware that updates to AppDynamics may overwrite your customizations.
I hope this helps! |
{"Voters":[{"Id":6243352,"DisplayName":"ggorlen"},{"Id":354577,"DisplayName":"Chris"},{"Id":16217248,"DisplayName":"CPlus"}],"SiteSpecificCloseReasonIds":[16]} |
{"Voters":[{"Id":13317,"DisplayName":"Kenster"},{"Id":354577,"DisplayName":"Chris"},{"Id":16217248,"DisplayName":"CPlus"}],"SiteSpecificCloseReasonIds":[18]} |
This is my application.yml of Spring Authorization Server:
```
spring:
security:
user:
name: "victoria"
password: "password"
oauth2:
authorization-server:
client:
portfolio-client:
registration:
client-id: "portfolio-client"
client-secret: "{noop}secret"
client-authentication-methods:
- "client_secret_basic"
authorization-grant-types:
- "authorization_code"
scopes:
- "openid"
- "profile"
require-authorization-consent: true
redirect-uris:
- "https://oauth.pstmn.io/v1/browser-callback"
logging:
level:
root: info
org.springframework.security: TRACE
server:
port: 8091
```
After you start the authorization server, go to http://localhost:8091/.well-known/openid-configuration in a browser and you will see the OpenID configuration. You will need the "authorization_endpoint" and "token_endpoint" URLs for Postman.
```
{
"issuer": "http://localhost:8091",
"authorization_endpoint": "http://localhost:8091/oauth2/authorize",
"device_authorization_endpoint": "http://localhost:8091/oauth2/device_authorization",
"token_endpoint": "http://localhost:8091/oauth2/token",
"token_endpoint_auth_methods_supported": [
"client_secret_basic",
"client_secret_post",
"client_secret_jwt",
"private_key_jwt"
],
"jwks_uri": "http://localhost:8091/oauth2/jwks",
"userinfo_endpoint": "http://localhost:8091/userinfo",
"end_session_endpoint": "http://localhost:8091/connect/logout",
"response_types_supported": [
"code"
],
"grant_types_supported": [
"authorization_code",
"client_credentials",
"refresh_token",
"urn:ietf:params:oauth:grant-type:device_code"
],
"revocation_endpoint": "http://localhost:8091/oauth2/revoke",
"revocation_endpoint_auth_methods_supported": [
"client_secret_basic",
"client_secret_post",
"client_secret_jwt",
"private_key_jwt"
],
"introspection_endpoint": "http://localhost:8091/oauth2/introspect",
"introspection_endpoint_auth_methods_supported": [
"client_secret_basic",
"client_secret_post",
"client_secret_jwt",
"private_key_jwt"
],
"code_challenge_methods_supported": [
"S256"
],
"subject_types_supported": [
"public"
],
"id_token_signing_alg_values_supported": [
"RS256"
],
"scopes_supported": [
"openid"
]
}
```
You need to put the rest of the values from application.yml into Postman, like this:
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/JFNSS.jpg
Then, after you enter "victoria" and "password" in the login dialog that pops up and click "Get New Access Token", you should receive your token.
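For readers not using Postman, the same authorization_code exchange can be sketched with curl, assuming the server from the application.yml above is running on port 8091. `REPLACE_WITH_CODE` is a placeholder: first visit the `/oauth2/authorize` endpoint in a browser, log in, and copy the `code` query parameter from the redirect back to the registered redirect URI.

```shell
# Exchange an authorization code for tokens, authenticating the client
# with client_secret_basic (the -u flag sends client-id:client-secret).
curl -u portfolio-client:secret \
  -d "grant_type=authorization_code" \
  -d "code=REPLACE_WITH_CODE" \
  -d "redirect_uri=https://oauth.pstmn.io/v1/browser-callback" \
  http://localhost:8091/oauth2/token
```

On success the token endpoint returns a JSON body containing `access_token`, `token_type`, and (because the `openid` scope was requested) an `id_token`.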
|