doc_23530300
{{- if .Values.cxvxcxcvxcvx -}}
{{- if .Values.xcvxcvcxv.enabled }}
---
apiVersion: xx
kind: xcvxcv
metadata:
name: xxxxxxxxxxxx
namespace: xxxxxxxx
type: zxczxc
stringData:
app-sql-host: gdsfgdfgdfg.dfgdfgdfgdg.zxczxczcc.xcvx.zxczxczc.com
app-sql-database: xxxxxxxxxxxxxxxx
app-sql-password: xxxxxxxxxxxxxxxx
xx-sql-zxczx: xxxxxxxx
xx-sql-zxczxc: xxxxxxxx
sql-use-xx-user: 'no'
app-sql-username: xxxxxxxx
{{- end }}
{{- if .Values.xcvxcv.enabled }}
---
apiVersion: xx
kind: xcvxcv
metadata:
name: xxxxxxxxxxxx
namespace: xxxxxxxx
type: xcv
stringData:
xxx-url: xcvc:xcv://xxxxxxxxxxxxxxxxxxxxxxxxxxxx;
username: xxxx
password: xxxx
{{- end }}
{{- end -}}
Here is the pseudo-code/steps I need to perform:
*
*Get the text located between the first --- and {.
*Move/copy the text into a temporary file.
*Repeat for each occurrence of text between any following --- and {.
*Run cat -n <temp_file> > <temp_file> (in practice write to an intermediate file first, since redirecting a command's output straight back into its own input file truncates it before it is read).
*Insert each file's content back between the --- and { where it came from.
Expected output:
{{- if .Values.cxvxcxcvxcvx -}}
{{- if .Values.xcvxcvcxv.enabled }}
---
apiVersion: xx
kind: xcvxcv
metadata:
name: xxxxxxxxxxxx
namespace: xxxxxxxx
type: zxczxc
encryptedData:
app-sql-host: W8l4BFOTOzg4PDjjVVS7jrjhNy0WEcSjkWY0r93SK54cM1KM77bzSxAiNfwDW9liom8nESqezUcmX3V8A2IFMQJufNkOLUsUHAVPYWzAd0RqHhRHD6x2RkOsasGr46Jm3p14Dcz6oDYzETtwUhMJCVfAL7FXqqQWNOPXQyFhX7Oo05L7v34W4GGEDSwQo4ZqE9rJeYd4OXlsxkvfcIVqkZ5YsacczTXSsJWGdqPr3tsmplznaKZfkLqQgUzayXWdNI5H12FtBILqOr7TEdVEOVk2VkP7Nj4WZyrF00hbRhBi
app-sql-database: W8l4BFOTOzg4PDjjVVS7jrjhNy0WEcSjkWY0r93SK54cM1KM77bzSxAiNfwDW9liom8nESqezUcmX3V8A2IFMQJufNkOLUsUHAVPYWzAd0RqHhRHD6x2RkOsasGr46Jm3p14Dcz6oDYzETtwUhMJCVfAL7FXqqQWNOPXQyFhX7Oo05L7v34W4GGEDSwQo4ZqE9rJeYd4OXlsxkvfcIVqkZ5YsacczTXSsJWGdqPr3tsmplznaKZfkLqQgUzayXWdNI5H12FtBILqOr7TEdVEOVk2VkP7Nj4WZyrF00hbRhBi
app-sql-password: W8l4BFOTOzg4PDjjVVS7jrjhNy0WEcSjkWY0r93SK54cM1KM77bzSxAiNfwDW9liom8nESqezUcmX3V8A2IFMQJufNkOLUsUHAVPYWzAd0RqHhRHD6x2RkOsasGr46Jm3p14Dcz6oDYzETtwUhMJCVfAL7FXqqQWNOPXQyFhX7Oo05L7v34W4GGEDSwQo4ZqE9rJeYd4OXlsxkvfcIVqkZ5YsacczTXSsJWGdqPr3tsmplznaKZfkLqQgUzayXWdNI5H12FtBILqOr7TEdVEOVk2VkP7Nj4WZyrF00hbRhBi
xx-sql-zxczx: W8l4BFOTOzg4PDjjVVS7jrjhNy0WEcSjkWY0r93SK54cM1KM77bzSxAiNfwDW9liom8nESqezUcmX3V8A2IFMQJufNkOLUsUHAVPYWzAd0RqHhRHD6x2RkOsasGr46Jm3p14Dcz6oDYzETtwUhMJCVfAL7FXqqQWNOPXQyFhX7Oo05L7v34W4GGEDSwQo4ZqE9rJeYd4OXlsxkvfcIVqkZ5YsacczTXSsJWGdqPr3tsmplznaKZfkLqQgUzayXWdNI5H12FtBILqOr7TEdVEOVk2VkP7Nj4WZyrF00hbRhBi
xx-sql-zxczxc: W8l4BFOTOzg4PDjjVVS7jrjhNy0WEcSjkWY0r93SK54cM1KM77bzSxAiNfwDW9liom8nESqezUcmX3V8A2IFMQJufNkOLUsUHAVPYWzAd0RqHhRHD6x2RkOsasGr46Jm3p14Dcz6oDYzETtwUhMJCVfAL7FXqqQWNOPXQyFhX7Oo05L7v34W4GGEDSwQo4ZqE9rJeYd4OXlsxkvfcIVqkZ5YsacczTXSsJWGdqPr3tsmplznaKZfkLqQgUzayXWdNI5H12FtBILqOr7TEdVEOVk2VkP7Nj4WZyrF00hbRhBi
sql-use-xx-user: W8l4BFOTOzg4PDjjVVS7jrjhNy0WEcSjkWY0r93SK54cM1KM77bzSxAiNfwDW9liom8nESqezUcmX3V8A2IFMQJufNkOLUsUHAVPYWzAd0RqHhRHD6x2RkOsasGr46Jm3p14Dcz6oDYzETtwUhMJCVfAL7FXqqQWNOPXQyFhX7Oo05L7v34W4GGEDSwQo4ZqE9rJeYd4OXlsxkvfcIVqkZ5YsacczTXSsJWGdqPr3tsmplznaKZfkLqQgUzayXWdNI5H12FtBILqOr7TEdVEOVk2VkP7Nj4WZyrF00hbRhBi
app-sql-username: W8l4BFOTOzg4PDjjVVS7jrjhNy0WEcSjkWY0r93SK54cM1KM77bzSxAiNfwDW9liom8nESqezUcmX3V8A2IFMQJufNkOLUsUHAVPYWzAd0RqHhRHD6x2RkOsasGr46Jm3p14Dcz6oDYzETtwUhMJCVfAL7FXqqQWNOPXQyFhX7Oo05L7v34W4GGEDSwQo4ZqE9rJeYd4OXlsxkvfcIVqkZ5YsacczTXSsJWGdqPr3tsmplznaKZfkLqQgUzayXWdNI5H12FtBILqOr7TEdVEOVk2VkP7Nj4WZyrF00hbRhBi
{{- end }}
{{- if .Values.xcvxcv.enabled }}
---
apiVersion: xx
kind: xcvxcv
metadata:
name: xxxxxxxxxxxx
namespace: xxxxxxxx
type: xcv
encryptedData:
xxx-url: W8l4BFOTOzg4PDjjVVS7jrjhNy0WEcSjkWY0r93SK54cM1KM77bzSxAiNfwDW9liom8nESqezUcmX3V8A2IFMQJufNkOLUsUHAVPYWzAd0RqHhRHD6x2RkOsasGr46Jm3p14Dcz6oDYzETtwUhMJCVfAL7FXqqQWNOPXQyFhX7Oo05L7v34W4GGEDSwQo4ZqE9rJeYd4OXlsxkvfcIVqkZ5YsacczTXSsJWGdqPr3tsmplznaKZfkLqQgUzayXWdNI5H12FtBILqOr7TEdVEOVk2VkP7Nj4WZyrF00hbRhBi
username: W8l4BFOTOzg4PDjjVVS7jrjhNy0WEcSjkWY0r93SK54cM1KM77bzSxAiNfwDW9liom8nESqezUcmX3V8A2IFMQJufNkOLUsUHAVPYWzAd0RqHhRHD6x2RkOsasGr46Jm3p14Dcz6oDYzETtwUhMJCVfAL7FXqqQWNOPXQyFhX7Oo05L7v34W4GGEDSwQo4ZqE9rJeYd4OXlsxkvfcIVqkZ5YsacczTXSsJWGdqPr3tsmplznaKZfkLqQgUzayXWdNI5H12FtBILqOr7TEdVEOVk2VkP7Nj4WZyrF00hbRhBi
password: W8l4BFOTOzg4PDjjVVS7jrjhNy0WEcSjkWY0r93SK54cM1KM77bzSxAiNfwDW9liom8nESqezUcmX3V8A2IFMQJufNkOLUsUHAVPYWzAd0RqHhRHD6x2RkOsasGr46Jm3p14Dcz6oDYzETtwUhMJCVfAL7FXqqQWNOPXQyFhX7Oo05L7v34W4GGEDSwQo4ZqE9rJeYd4OXlsxkvfcIVqkZ5YsacczTXSsJWGdqPr3tsmplznaKZfkLqQgUzayXWdNI5H12FtBILqOr7TEdVEOVk2VkP7Nj4WZyrF00hbRhBi
template:
metadata:
annotations:
sealedsecrets.bitnami.com/<example-pathparameter>: "true"
{{- end }}
{{- end -}}
I think AWK is the best way to do this but I'm open to other Bash solutions. gawk is not available to me.
A: I'm really not convinced you need your command to modify the temp file, so here's a solution that works with any awk that implements fflush() (which is most of them, and it will be required by POSIX in the upcoming 2022 standards release; see https://www.austingroupbugs.net/view.php?id=634 for details). It uses a temp file (probably not necessary at all if you have GNU awk for co-processes) but assumes the command you run on it can print to stdout:
$ cat tst.awk
inBlock {
if ( /^{/ ) {
fflush()
close(tmp)
cmd = "cat -n \047" tmp "\047"
while ( (cmd | getline line) > 0 ) {
print line
}
close(cmd)
inBlock = 0
}
else {
print > tmp
}
}
!inBlock
/^---/ { inBlock = 1 }
$ tmp=$(mktemp) && awk -v tmp="$tmp" -f tst.awk file && rm -f "$tmp"
{{- if .Values.cxvxcxcvxcvx -}}
{{- if .Values.xcvxcvcxv.enabled }}
---
1 apiVersion: xx
2 kind: xcvxcv
3 metadata:
4 name: xxxxxxxxxxxx
5 namespace: xxxxxxxx
6 type: zxczxc
7 stringData:
8 app-sql-host: gdsfgdfgdfg.dfgdfgdfgdg.zxczxczcc.xcvx.zxczxczc.com
9 app-sql-database: xxxxxxxxxxxxxxxx
10 app-sql-password: xxxxxxxxxxxxxxxx
11 xx-sql-zxczx: xxxxxxxx
12 xx-sql-zxczxc: xxxxxxxx
13 sql-use-xx-user: 'no'
14 app-sql-username: xxxxxxxx
{{- end }}
{{- if .Values.xcvxcv.enabled }}
---
1 apiVersion: xx
2 kind: xcvxcv
3 metadata:
4 name: xxxxxxxxxxxx
5 namespace: xxxxxxxx
6 type: xcv
7 stringData:
8 xxx-url: xcvc:xcv://xxxxxxxxxxxxxxxxxxxxxxxxxxxx;
9 username: xxxx
10 password: xxxx
{{- end }}
{{- end -}}
Obviously replace cat -n with whatever command you really want to run on the temp files.
EDIT: without using a temp file:
$ cat tst.awk
inBlock {
if ( /^{/ ) {
fflush()
cmd = "cat -n"
print buf | cmd
close(cmd)
buf = ""
inBlock = 0
}
else {
buf = (inBlock++ > 1 ? buf ORS : "") $0
}
}
!inBlock
/^---/ { inBlock = 1 }
$ awk -f tst.awk file
{{- if .Values.cxvxcxcvxcvx -}}
{{- if .Values.xcvxcvcxv.enabled }}
---
1 apiVersion: xx
2 kind: xcvxcv
3 metadata:
4 name: xxxxxxxxxxxx
5 namespace: xxxxxxxx
6 type: zxczxc
7 stringData:
8 app-sql-host: gdsfgdfgdfg.dfgdfgdfgdg.zxczxczcc.xcvx.zxczxczc.com
9 app-sql-database: xxxxxxxxxxxxxxxx
10 app-sql-password: xxxxxxxxxxxxxxxx
11 xx-sql-zxczx: xxxxxxxx
12 xx-sql-zxczxc: xxxxxxxx
13 sql-use-xx-user: 'no'
14 app-sql-username: xxxxxxxx
{{- end }}
{{- if .Values.xcvxcv.enabled }}
---
1 apiVersion: xx
2 kind: xcvxcv
3 metadata:
4 name: xxxxxxxxxxxx
5 namespace: xxxxxxxx
6 type: xcv
7 stringData:
8 xxx-url: xcvc:xcv://xxxxxxxxxxxxxxxxxxxxxxxxxxxx;
9 username: xxxx
10 password: xxxx
{{- end }}
{{- end -}}
doc_23530301
Things I have done:
*
*Verified my texture is loaded properly with RenderDoc
*Verified that my vertex attribute pointers are compliant with ImGui's convention (array of structs).
Below is my rendering code. You can also see the developer's example code for OpenGL here: https://github.com/ocornut/imgui/blob/master/examples/opengl3_example/imgui_impl_glfw_gl3.cpp
// Setup some GL state
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDisable(GL_CULL_FACE);
glDisable(GL_DEPTH_TEST);
glEnable(GL_SCISSOR_TEST);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
// Setup orthographic projection
glViewport(0, 0, (GLsizei)fb_width, (GLsizei)fb_height);
const float ortho_projection[4][4] =
{
{ 2.0f/io.DisplaySize.x, 0.0f, 0.0f, 0.0f },
{ 0.0f, 2.0f/-io.DisplaySize.y, 0.0f, 0.0f },
{ 0.0f, 0.0f, -1.0f, 0.0f },
{-1.0f, 1.0f, 0.0f, 1.0f },
};
// Setup the shader. bind() calls glUseProgram and enables/disables the proper vertex attributes
shadeTextured->bind();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, g_FontTexture);
shadeTextured->setUniformM4(Shader::uhi_transform, *(glm::mat4*)&ortho_projection[0][0]);
shadeTextured->setUniformSampler(1, 0);
// Set my vertex attribute pointers for position and tex coords
glVertexAttribPointer(0,
2,
GL_FLOAT,
GL_FALSE,
sizeof(ImDrawVert),
(GLvoid*)IM_OFFSETOF(ImDrawVert, pos));
glVertexAttribPointer(1,
2,
GL_FLOAT,
GL_FALSE,
sizeof(ImDrawVert),
(GLvoid*)IM_OFFSETOF(ImDrawVert, uv));
// Loop through all commands ImGui has
for (int n = 0; n < draw_data->CmdListsCount; n++) {
const ImDrawList* cmd_list = draw_data->CmdLists[n];
const ImDrawIdx* idx_buffer_offset = 0;
glBindBuffer(GL_ARRAY_BUFFER, g_VboHandle);
glBufferData(GL_ARRAY_BUFFER,
(GLsizeiptr)cmd_list->VtxBuffer.Size * sizeof(ImDrawVert),
(const GLvoid*)cmd_list->VtxBuffer.Data,
GL_STREAM_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, g_ElementsHandle);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
(GLsizeiptr)cmd_list->IdxBuffer.Size * sizeof(ImDrawIdx),
(const GLvoid*)cmd_list->IdxBuffer.Data,
GL_STREAM_DRAW);
for (int cmd_i = 0; cmd_i < cmd_list->CmdBuffer.Size; cmd_i++) {
const ImDrawCmd* pcmd = &cmd_list->CmdBuffer[cmd_i];
glScissor((int)pcmd->ClipRect.x,
(int)(fb_height - pcmd->ClipRect.w),
(int)(pcmd->ClipRect.z - pcmd->ClipRect.x),
(int)(pcmd->ClipRect.w - pcmd->ClipRect.y));
glDrawElements(GL_TRIANGLES,
(GLsizei)pcmd->ElemCount,
sizeof(ImDrawIdx) == 2 ? GL_UNSIGNED_SHORT : GL_UNSIGNED_INT,
idx_buffer_offset);
idx_buffer_offset += pcmd->ElemCount;
}
}
And here are the (very, very simple) shaders I have written. The shaders have worked texturing a button before, so I am assuming they are functionally correct.
Vertex shader:
#version 330 core
layout (location = 0) in vec2 pos;
layout (location = 1) in vec2 texCoord;
out vec2 fragTexCoord;
uniform mat4 transform;
void main() {
gl_Position = transform * vec4(pos, 0.0, 1.0);
fragTexCoord = texCoord;
}
Fragment shader:
#version 330 core
out vec4 fragColor;
in vec2 fragTexCoord;
uniform sampler2D sampler;
void main() {
fragColor = texture(sampler, fragTexCoord);
}
I'm at a total loss! Any help would be greatly appreciated
A: Debugging an incorrect OpenGL setup/state can be quite difficult. It's unclear why you are rewriting your own renderer rather than using exactly the code provided in imgui_impl_glfw_gl3.cpp, but what you can do is:
*
*Start again from the supposedly working imgui_impl_glfw_gl3.cpp and turn it step by step into your own and see what makes it break?
*Disable scissor temporarily.
*Since you are using RenderDoc already: does it show you the correct mesh? Are the vertices that it shows you ok?
doc_23530302
In my project, I have different attachment types: image, audio file, video file, and PDF. I want to classify them based on the extension, e.g. .mp4 is video and .pdf is a document. How can we write this in Java?
One method we can follow is to get the extension and compare it with the required extensions. Is there a better way to do it?
A: You can use a MIME database; nginx, for example, uses this one by default.
But this is not always enough. For example, the mp4 MIME type is video/mp4, but an mp4 COULD be an audio file, and you can't know for sure without downloading it.
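As a sketch of the extension-matching approach in Java (the category names and the extension table are illustrative assumptions, not a standard API):

```java
import java.util.Map;

public class AttachmentKind {
    // Hypothetical mapping from file extension to coarse attachment category.
    private static final Map<String, String> KIND_BY_EXT = Map.of(
            "mp4", "video",
            "mp3", "audio",
            "pdf", "document",
            "jpg", "image",
            "png", "image");

    public static String kindOf(String filename) {
        int dot = filename.lastIndexOf('.');
        if (dot < 0 || dot == filename.length() - 1) {
            return "unknown"; // no extension at all
        }
        String ext = filename.substring(dot + 1).toLowerCase();
        return KIND_BY_EXT.getOrDefault(ext, "unknown");
    }

    public static void main(String[] args) {
        System.out.println(kindOf("movie.MP4")); // video
        System.out.println(kindOf("scan.pdf"));  // document
        System.out.println(kindOf("notes"));     // unknown
    }
}
```

If you want to sniff the actual content rather than trust the name, java.nio.file.Files.probeContentType is the closest standard-library equivalent of the MIME-database approach.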
doc_23530303
Situation:
Implement a table with drag & drop capability for the red regions with the following limitations:
Limitations:
*
*The Brand cannot be dragged and dropped at all.
*The Main Category cannot be dragged and dropped beyond its parent Brand.
*The Sub Category cannot be dragged and dropped beyond its Main Category.
*The Product cannot be dragged and dropped beyond its parent Sub Category.
I have added my work in Plunkr here: http://plnkr.co/edit/XkMGWwgh50iz9c2e4W4W?p=preview
As you can see in the Plunkr above, it is not able to sort as shown in the image examples above.
Any help would be highly appreciated.
doc_23530304
App.xaml:
DispatcherUnhandledException="Application_DispatcherUnhandledException"
App.xaml.cs:
private void Application_DispatcherUnhandledException(object sender, System.Windows.Threading.DispatcherUnhandledExceptionEventArgs e)
{
MessageBox.Show(e.Exception.Message, "Error", MessageBoxButton.OK, MessageBoxImage.Warning);
e.Handled = true;
}
MainWindow.xaml.cs:
public MainWindow()
{
InitializeComponent();
// Faulty code to trigger unhandled exception
string s = null;
s.Trim();
}
I'm stuck; this won't work. Instead of showing an error message, the faulty code returns me to the Visual Studio code editor window and highlights the erroneous line.
Update: I have already tried adding the faulty code to a button click event rather than Window Load, but the same issue persisted.
btnClickMe_Click(object sender, RoutedEventArgs e)
A: This is just a matter of the debugger intervening before your event handler marks the exception as handled.
Just hit "continue" in the debugger, and the event handler will be triggered as normal. (You may also be able to configure how specific exceptions are handled in VS Code - I know you can in VS, but I don't know about VS Code.)
doc_23530305
go get github.com/githubnemo/CompileDaemon
go: added github.com/fatih/color v1.9.0
go: added github.com/fsnotify/fsnotify v1.4.9
go: added github.com/githubnemo/CompileDaemon v1.4.0
go: added github.com/mattn/go-colorable v0.1.4
go: added github.com/mattn/go-isatty v0.0.11
go: added github.com/radovskyb/watcher v1.0.7
go: added golang.org/x/sys v0.0.0-20191026070338-33540a1f6037
OR
go install -mod=mod github.com/githubnemo/CompileDaemon
Then when I run CompileDaemon --command="./folder_name"
returns:
bash: CompileDaemon: command not found
A: I faced the same issue; here is how I solved it.
It seems that $GOPATH/bin is not on your PATH environment variable as expected, so binaries installed by go install (like CompileDaemon) are not found.
vim ~/.zshrc
...
export GOPATH="/Users/YOUR_PROFILE_NAME/go" # set GOPATH
export PATH="$PATH:$GOPATH/bin" # append $GOPATH/bin to PATH
don't forget to
source ~/.zshrc
doc_23530306
*
*I have a file from which I get the volume names (ldap_volList); it contains roughly 500 volume names.
*From the main ldap_volList I create further variables based on defined volume-name prefixes, e.g. array_fxn1 and array_fxn2.
*I also check that the variable is not empty, i.e. if [ ! -z "$ldap_volList" ]; then, using the project name provided as input to the script, before entering the if condition.
*Then I check whether a volume starts with fxn1, i.e. if [[ $ldap_volList == fxn1* ]], and if so run the ssh command on a predefined host with the array_fxn1 volumes, and similarly for the other prefix.
*But it only works for the first if condition and just exits without moving on to the second if condition.
fxn1: fsx1002_Ploverdose_scratch fsx1002_Ploverdose
fxn2: fsx2002_Ploverdose_workareas fsx2002_Ploverdose_cache
The bash code is below:
#!/bin/bash
echo "Plese enter LDAP Project Name"
read -p "Enter LDAP Project Name:" ldap_proj
ldap_volList=$(cat archive_dashboard/ldap-project-nismap.csv | grep "$ldap_proj" | tr "," "\t"| awk '{print $NF}'| tr -d '"')
array_fxn1=$(echo "$ldap_volList" | grep ^fxn1)
array_fxn2=$(echo "$ldap_volList" | grep ^fxn2)
if [ ! -z "$ldap_volList" ]; then
if [[ $ldap_volList == fxn1* ]] ; then
for i in $array_fxn1;do ssh dbcl101 "row 0; vol show $i";done
elif [[ $ldap_volList == fxn2* ]] ; then
for i in $array_fxn2;do ssh dbcl201 "row 0; vol show $i";done
fi
fi
Output:
$ bash hotvscold-test.sh
Plese enter LDAP Project Name
Enter LDAP Project Name:Ploverdose
Last login time: 7/15/2022 06:02:12
(rows)
Vserver Volume Aggregate State Type Size Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
fsx1002 fsx1002_Ploverdose_scratch dbc014_ssd02 online RW 7.32TB 2.81TB 23%
Last login time: 7/15/2022 06:02:39
(rows)
Vserver Volume Aggregate State Type Size Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
fsx1002 fsx1002_Ploverdose dbc012_ssd02 online RW 15GB 7.47GB 0%
Please suggest any help.
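Note that the if/elif chain in the script means at most one branch can ever run per invocation, which matches the observed behaviour; a minimal sketch of testing each prefix independently (volume names and hosts below are placeholders, not the real data):

```shell
#!/bin/bash
# Placeholder data standing in for the real ldap_volList contents.
ldap_volList="fxn1_volA
fxn2_volB"

# Separate `if` statements instead of if/elif, so both prefixes get handled.
if echo "$ldap_volList" | grep -q '^fxn1'; then
    echo "would ssh dbcl101 for: $(echo "$ldap_volList" | grep '^fxn1' | tr '\n' ' ')"
fi
if echo "$ldap_volList" | grep -q '^fxn2'; then
    echo "would ssh dbcl201 for: $(echo "$ldap_volList" | grep '^fxn2' | tr '\n' ' ')"
fi
```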
doc_23530307
import { serve } from "https://deno.land/std@0.91.0/http/server.ts";
import { serveFile } from 'https://deno.land/std@0.91.0/http/file_server.ts';
const server = serve({ port: 8000 });
console.log("http://localhost:8000/");
for await (const req of server) {
console.log(req.url);
if(req.url === '/')
await serveFile(req, 'index.html');
}
So why is serveFile not working in this instance?
A: The call to serveFile only creates a Response (status, headers, body) but doesn't send it.
You have to send it with a separate call to req.respond():
import { serve } from "https://deno.land/std@0.91.0/http/server.ts";
import { serveFile } from 'https://deno.land/std@0.91.0/http/file_server.ts';
const server = serve({ port: 8000 });
console.log("http://localhost:8000/");
for await (const req of server) {
console.log(req.url);
if(req.url === '/') {
const response = await serveFile(req, 'index.html');
req.respond(response)
}
}
doc_23530308
When the server is executed it loads dynamic_lib.so dynamically, but on that code path dynamic_lib.so actually expects some symbols from static_lib.a. What I'm seeing is that dynamic_lib.so pulls in static_lib.so, so essentially I have two copies of static_lib in memory.
Let's assume there's no way we can change dynamic_lib.so, because it's a 3rd-party library.
My question is: is it possible to make dynamic_lib.so, or the dynamic loader itself, search the current binary first, or even skip searching the loader's path entirely and just use the binary's symbol, or abort?
I tried to find some related docs about it, but it's not easy for noobs about linkers like me :-)
A: You cannot change the library to stop it loading static_lib.so, but you can trick it into using static_lib.a instead.
By default, ld does not export any symbols from executables, but you can change this via -rdynamic. This option is quite crude, as it exports all symbols, so for finer-grained control you can use -Wl,--dynamic-list (see example use in the Clang sources).
doc_23530309
The project directory structure is as follows:
/
visits.js
/static
/css
style.css
/js
jquery.flot.js
/views
In the app configuration, I am setting up the static file server with the following:
app.configure(function(){
app.use(express.logger());
app.use(express.static(__dirname + '/static'));
app.set('views', __dirname + '/views');
app.set('view options', { layout : true });
app.set('view engine', 'jade');
});
When I run the app on my local machine, everything works great. My network console shows:
GET http://myserver.com/ [HTTP/1.1 200 OK 13ms]
GET http://myserver.com/css/style.css [HTTP/1.1 200 OK 3ms]
GET http://myserver.com/js/jquery.flot.js [HTTP/1.1 200 OK 4ms]
However, when I push the same application up to my test server, the static files are not found (404 errors):
GET http://myserver.com/visits/ [HTTP/1.1 200 OK 195ms]
GET http://myserver.com/css/style.css [HTTP/1.1 404 Not Found 95ms]
GET http://myserver.com/js/jquery.flot.js [HTTP/1.1 404 Not Found 93ms]
The only difference that I can think of is on my local machine, I simply access the application at:
127.0.0.1:4000/
But on the test server, I have the app sitting behind an nginx server. I use a proxy_pass that forwards requests to:
myserver.com/visits/
I'm wondering if the forwarding is affecting the path that the static file server is using to look up the css and js files?
Any ideas / help would be greatly appreciated.
Thanks!
A: After further investigation, this was an issue with my use of proxy_pass in the nginx configuration. The initial GET request was being properly forwarded to my Express app, but the subsequent calls for css or js files was not being properly routed. This is not an issue with the Express framework. Thanks for looking.
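For reference, the usual shape of such a fix is to forward the asset prefixes as well as the app root; a hypothetical nginx fragment (upstream address, port, and asset paths are assumptions):

```nginx
location /visits/ {
    proxy_pass http://127.0.0.1:4000/;   # app root
}

# The rendered HTML links to /css/... and /js/... at the site root,
# so those prefixes must be forwarded too:
location /css/ {
    proxy_pass http://127.0.0.1:4000/css/;
}
location /js/ {
    proxy_pass http://127.0.0.1:4000/js/;
}
```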
doc_23530310
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent" >
<ImageView
android:id="@+id/imgContactPhoto"
android:contentDescription="Contact photo"
android:layout_width="90sp"
android:layout_height="90sp" />
<TextView
android:id="@+id/lblMsg"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_toRightOf="@id/imgContactPhoto"
android:paddingLeft="10sp" />
</RelativeLayout>
And my custom adapter class:
public class SmsListAdapter extends ArrayAdapter{
private int resource;
private LayoutInflater inflater;
private Context context;
@SuppressWarnings("unchecked")
public SmsListAdapter(Context ctx, int resourceId, List objects) {
super(ctx, resourceId, objects);
resource = resourceId;
inflater = LayoutInflater.from( ctx );
context=ctx;
}
@Override
public View getView (int position, View convertView, ViewGroup parent) {
//create a new view of the layout and inflate it in the row
convertView = ( RelativeLayout ) inflater.inflate( resource, null );
// Extract the object to show
Sms msg = (Sms) getItem(position);
// Take the TextView from layout and set the message
TextView lblMsg = (TextView) convertView.findViewById(R.id.lblMsg);
lblMsg.setText(msg.getBody());
//Take the ImageView from layout and set the contact image
long contactId = fetchContactId(msg.getSenderNum());
String uriContactImg = getPhotoUri(contactId).toString();
ImageView imgContactPhoto = (ImageView) convertView.findViewById(R.id.imgContactPhoto);
int imageResource = context.getResources().getIdentifier(uriContactImg, null, context.getPackageName());
Drawable image = context.getResources().getDrawable(imageResource);
imgContactPhoto.setImageDrawable(image);
return convertView;
}
}
When I attempt to activate the adapter, I get an error on the first line of getView() saying that a TextView cannot be cast to a RelativeLayout.
What I'm not clear on is why that is a TextView in the first place. My list item layout is set as a RelativeLayout and that's what should be being inflated, unless I'm mistaken. Can anyone help me debug this?
A: On this line:
convertView = ( RelativeLayout ) inflater.inflate( resource, null );
instead of resource, try using the full id of the layout: R.layout.whatever
Also, you should only inflate the layout if the incoming view is null. Otherwise the view is already inflated and you should just overwrite all its values.
A: Use a BaseAdapter to create your custom adapter. This is a bit harder to manage from the start, but makes your work easier later on.
public class GeneralAdapter extends BaseAdapter{
private LayoutInflater mInflater = null;
private ArrayList<String> info = null;
public GeneralAdapter( Context ctx, ArrayList<String> info ) {
this.info = info;
this.mInflater = LayoutInflater.from( ctx ); // the inflater must be initialized, or getView() throws a NullPointerException
}
@Override
public int getCount() {
return info.size();
}
@Override
public Object getItem(int position) {
return info.get(position);
}
@Override
public long getItemId(int position) {
return position;
}
@Override
public View getView(int position, View convertView, ViewGroup parent) {
ViewHolder holder;
if (convertView == null) {
convertView = mInflater.inflate(R.layout.YOURLAYOUT, null);
holder = new ViewHolder();
holder.generalTV = (TextView) convertView.findViewById(R.id.lblMsg);
holder.generalIV = (ImageView) convertView.findViewById(R.id.imgContactPhoto);
convertView.setTag(holder);
} else {
holder = (ViewHolder) convertView.getTag();
}
holder.generalTV.setText( info.get(position) );
holder.generalIV.setBackgroundResource(R.drawable.YOURIMAGE); // a drawable id, not an R.id
return convertView; // return the recycled/inflated row, not null
}
private class ViewHolder {
TextView generalTV;
ImageView generalIV;
}
}
A: Removing the explicit cast of convertView to RelativeLayout should help.
doc_23530311
Error during parsing. repetition constraint is more restrictive: can not merge type required binary MyTime into optional binary MyTime.
Maybe one of the files is corrupted but I don't know how to skip it.
Thanks
A: This happens when reading multiple parquet files that have slightly different metadata in their schemas. Either you have a mixed collection of files in a single directory or you are giving the LOAD statement a glob and the resulting collection of files is mixed in this respect.
Rather than specifying the schema in an AS() clause or making a bare call to the loader function the solution is to override the schema in the loader function's argument like this:
data = LOAD 'data'
USING parquet.pig.ParquetLoader('n1:int, n2:float, n3:double, n4:long');
Otherwise the loader function infers the schema from the first file it encounters which then conflicts with one of the others.
If you still have trouble, try using type bytearray in the schema specification and then cast to the desired types in a subsequent FOREACH.
According to the Parquet source code there is another argument to the loader function that allows columns to be specified by position rather than name (the default) but I have not experimented with that.
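A sketch of that bytearray fallback, reusing the field names from the loader example above:

```
data = LOAD 'data'
    USING parquet.pig.ParquetLoader('n1:bytearray, n2:bytearray, n3:bytearray, n4:bytearray');
typed = FOREACH data GENERATE (int)n1, (float)n2, (double)n3, (long)n4;
```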
doc_23530312
Thanks so much for your help, you guys are amazing!
A: Instead of opening up a single image using the default implementation:
<a href="#" data-featherlight="myimage.png">Open image in lightbox</a>
You could try instead to use FeatherLight's iFrame feature and call an external Div:
<a href="#" data-featherlight="#mylightbox">Open element in lightbox</a>
<div id="mylightbox"><img src="myimage.jpg" /><p>This div will be opened in a lightbox</p></div>
You could then style this to have the image float left, the text float right etc.
Unsure as to whether this will work with FeatherLight's gallery extension though!
doc_23530313
Here are the steps I've used:
1) I created a class CollapsablePanel that I've placed in my site_code directory. I am using a Web App not Web Site so App_Code is not really available to me.
Namespace webstation.WebControls
Public Class CollapsablePanel
Inherits System.Web.UI.WebControls.Panel
End Class
End Namespace
2) In the .aspx file I've added <%@ Register TagPrefix="webstation" Namespace="MyApplication.webstation.WebControls" %>
I've built the project but my custom tag prefix does not appear. If I just go ahead and type the editor does not throw an error, however the page does when I publish it and try to access it. If I try to access the class from the codebehind (Imports MyApplication.webstation.WebControls) the custom control appears in intellisense, so I know that Visual Studio is reading the class information to some extent.
What am I doing wrong here? Any thoughts are greatly appreciated.
Mike
A: seems like you may be missing the TagName attribute
as
<%@ Register TagPrefix="webstation" TagName="CollapsiblePanel" Namespace="MyApplication.webstation.WebControls" %>
once you do this you should be able to access it as
<webstation:CollapsiblePanel id="usercontrol1" runat="server" />
A: Check out Scott Gu's blog post on registering controls, I like registering them in the web.config file myself.
http://weblogs.asp.net/scottgu/archive/2006/11/26/tip-trick-how-to-register-user-controls-and-custom-controls-in-web-config.aspx
You need to make sure you have a fully qualified reference to the control class, meaning the library name and namespace. I place my controls in a class library, but you can include them in your App_Code folder. You can also register user controls in the web.config, both examples follow:
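For reference, a hypothetical web.config registration in the style that blog post describes (the assembly name, namespace, and user-control path are assumptions):

```xml
<system.web>
  <pages>
    <controls>
      <!-- A custom control compiled into a class library -->
      <add tagPrefix="webstation"
           namespace="MyApplication.webstation.WebControls"
           assembly="MyApplication" />
      <!-- A user control registered by path -->
      <add tagPrefix="webstation" tagName="CollapsablePanel"
           src="~/Controls/CollapsablePanel.ascx" />
    </controls>
  </pages>
</system.web>
```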
A: I was able to get it work like expected once I went ahead and created a Class Library project in the same solution, built the project, copied the DLL from the bin of the Class Library and placed it in a folder of my Web Project used for holding external binaries. I also added a reference to the .dll in the project. Once I did that, then the syntax:
<%@ Register Assembly="webstation.WebControls" Namespace="webstation.WebControls" TagPrefix="webstation" %>
began to work. Does anyone know if I am able to somehow automatically update the compiled .DLL in my web project from the class library in the same solution? Or do I just have to go into the bin folder each time and manually copy it into the web project?
Thanks,
Mike
doc_23530314
'This is where I try and call the function
Sub Test1()
Dim ChartCells1 As Range
Set ChartCells1 = GetValues(ActiveSheet.Range("BV2", Range("BV2").End(xlDown)), ActiveSheet.Range("BV2"))
ChartCells1.Select
End Sub
'this is the function am trying to call
Function GetValues(Column As Range, Value As Range) As Range
Dim ChartCells As Range
Dim Count As Range
Dim Cells As Range
Dim Number As Range
DataSheetArea1Zone16.Activate
Set Number = Range(Value)
Set Cells = Range(Column)
Set ChartCells = Range(Value).Offset(0, -36)
For Each Count In Cells
If Count.Value <> Number Then
Set ChartCells = Union(ChartCells, Count.Offset(0, -36))
Set Number = Count
End If
Next Count
GetValues = ChartCells
End Function
I keep receiving the error 91 on the line GetValues = ChartCells or on the line ChartCells1.Select
A: In the code, the following were problematic:
Set Number = Range(Value)
Set Cells = Range(Column)
Set ChartCells = Range(Value).Offset(0, -36)
'----
GetValues = ChartCells
As Value and Column were already declared as ranges, the Range(Value) call was causing the error. Also, since the function returns type Range, an object, its result must be assigned with Set, e.g. Set GetValues = ChartCells.
In general, working with Activate and Select in VBA is not a best practice - How to avoid using Select in Excel VBA
This works (but it is a good idea to refactor the ActiveSheet and .Activate away):
Sub Main()
Dim ChartCells1 As Range
Set ChartCells1 = GetValues(ActiveSheet.Range("BV2", Range("BV2").End(xlDown)), ActiveSheet.Range("BV2"))
ChartCells1.Select
End Sub
'this is the function am trying to call
Function GetValues(Column As Range, Value As Range) As Range
Dim ChartCells As Range
Dim Count As Range
Dim Cells As Range
Dim Number As Range
Worksheets(1).Activate
Set Number = Value
Set Cells = Column
Set ChartCells = Value.Offset(0, -36)
For Each Count In Cells
If Count.Value <> Number Then
Set ChartCells = Union(ChartCells, Count.Offset(0, -36))
Set Number = Count
End If
Next Count
Set GetValues = ChartCells
End Function
doc_23530315
os.system("curl --head www.google.com")
If I run that, it prints out:
HTTP/1.1 200 OK
Date: Sun, 15 Apr 2012 00:50:13 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Set-Cookie: PREF=ID=3e39ad65c9fa03f3:FF=0:TM=1334451013:LM=1334451013:S=IyFnmKZh0Ck4xfJ4; expires=Tue, 15-Apr-2014 00:50:13 GMT; path=/; domain=.google.com
Set-Cookie: NID=58=Giz8e5-6p4cDNmx9j9QLwCbqhRksc907LDDO6WYeeV-hRbugTLTLvyjswf6Vk1xd6FPAGi8VOPaJVXm14TBm-0Seu1_331zS6gPHfFp4u4rRkXtSR9Un0hg-smEqByZO; expires=Mon, 15-Oct-2012 00:50:13 GMT; path=/; domain=.google.com; HttpOnly
P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Transfer-Encoding: chunked
What I want to do, is be able to match the 200 in it using a regex (i don't need help with that), but, I can't find a way to convert all the text above into a string. How do I do that?
I tried: info = os.system("curl --head www.google.com") but info was just 0.
A: import os
cmd = 'curl https://randomuser.me/api/'
os.system(cmd)
Result
{"results":[{"gender":"male","name":{"title":"mr","first":"çetin","last":"nebioğlu"},"location":{"street":"5919 abanoz sk","city":"adana","state":"kayseri","postcode":53537},"email":"çetin.nebioğlu@example.com","login":{"username":"heavyleopard188","password":"forgot","salt":"91TJOXWX","md5":"2b1124732ed2716af7d87ff3b140d178","sha1":"cb13fddef0e2ce14fa08a1731b66f5a603e32abe","sha256":"cbc252db886cc20e13f1fe000af1762be9f05e4f6372c289f993b89f1013a68c"},"dob":"1977-05-10 18:26:56","registered":"2009-09-08 15:57:32","phone":"(518)-816-4122","cell":"(605)-165-1900","id":{"name":"","value":null},"picture":{"large":"https://randomuser.me/api/portraits/men/38.jpg","medium":"https://randomuser.me/api/portraits/med/men/38.jpg","thumbnail":"https://randomuser.me/api/portraits/thumb/men/38.jpg"},"nat":"TR"}],"info":{"seed":"0b38b702ef718e83","results":1,"page":1,"version":"1.1"}}
A: For some reason... I need to use curl (no pycurl, httplib2...); maybe this can help somebody:
import os
result = os.popen("curl http://google.es").read()
print result
A: Try this, using subprocess.Popen():
import subprocess
proc = subprocess.Popen(["curl", "--head", "www.google.com"], stdout=subprocess.PIPE)
(out, err) = proc.communicate()
print out
As stated in the documentation:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, such as:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
A: You could use an HTTP client library in Python instead of calling a curl command. In fact, there is a curl library (pycurl) that you can install (as long as you have a compiler on your OS).
Other choices are httplib2 (recommended), which is a fairly complete HTTP protocol client supporting caching as well, or just plain httplib, or a library named Requests.
If you really, really want to just run the curl command and capture its output, then you can do this with Popen in the builtin subprocess module documented here: http://docs.python.org/library/subprocess.html
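A minimal sketch of that Popen capture approach, in Python 3 syntax. To keep the example runnable without network access it spawns the Python interpreter printing a fake status line; swap the argument list for ["curl", "--head", "www.google.com"] in practice:

```python
import re
import subprocess
import sys

# Spawn a child process and capture its stdout as bytes instead of
# letting it print to the terminal.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('HTTP/1.1 200 OK')"],
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate()

# The captured output can now be decoded and matched with a regex.
code = re.search(r"HTTP/\S+\s+(\d{3})", out.decode()).group(1)
print(code)
```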
A: Well, there is an easier to read, but messier way to do it. Here it is:
import os
outfile='' #put your file path there
os.system("curl --head www.google.com>>{x}".format(x=str(outfile))) #Outputs command to log file (and creates it if it doesn't exist).
readOut=open(outfile,"r") #Opens file in reading mode.
for line in readOut:
    print line #Prints lines in file
readOut.close() #Closes file
os.system("del {c}".format(c=str(outfile))) #This is optional, as it just deletes the log file after use (use 'rm' instead of 'del' on non-Windows systems).
This should work properly for your needs. :)
A: Try this:
import httplib
conn = httplib.HTTPConnection("www.python.org")
conn.request("GET", "/index.html")
r1 = conn.getresponse()
print r1.status, r1.reason
| |
doc_23530316
|
I am trying to see if I can use afterLabelTextTpl, but I didn't find an example online of how to use it.
Here is an idea of what I am trying to do. Can someone correct me syntactically?
{
xtype: 'combobox',
fieldLabel: 'Role',
editable:false,
store: roles_store,
triggerAction:'all',
name: 'role_id',
valueField: 'role_id',
displayField:'role_name',
afterLabelTextTpl : function() {
{
xtype: 'image',
src:'../www/css/slate/btn/question.png',
padding: '5 0 0 0',
cls:'pointer',
listeners: {
el:{
click: function() {
Ext.create('Ext.tip.Tip', {
closable:true,
padding: '0 0 0 0',
maxWidth:300,
html: "<b>read-only</b>: Has read access to all pages, but can make no changes.<br><br><b>user</b>: Can edit rules and commit to production.<br><br><b>admin</b>: Can edit rules, commit to production, and add/delete users.<br><br> "+supertext
}).showAt([810, 340]);
}
}
}
}
}
A: It's an XTemplate instance or a string.
From the docs:
An optional string or XTemplate configuration to insert
in the field markup after the label text. If an XTemplate is used, the
component's render data serves as the context.
So a string "Hello" will do, or anything that XTemplate will take
A: The afterLabelTextTpl is a template config, meaning that it takes either a string, an array of strings, or an instance of Ext.XTemplate and uses that to generate HTML.
There's no built-in way of creating a component via XTemplate (although a forum member created an extension called CTemplate that allows this). So if you want to go pure Ext JS, you're going to do a little more work.
NOTE: I don't have access to Ext JS 4.1.3 so what follows is an approximation, based on my experience with 4.1.0. You may need to tweak the code to get it just right, but it should provide a sufficient starting point.
Step 1: Setting up your afterLabelTextTpl. Try something like this:
afterLabelTextTpl:'<img id="combo_icon" style="padding-top:5px" class="pointer" src="../www/css/slate/btn/question.png"/>'
That should get your icon showing.
Step 2: Adding the click listener. There are two ways to go about this. Both methods assume you have a function called My.Name.Space.onImageClick. Obviously you can replace this name with whatever you want. Here's the function:
My.Name.Space.onImageClick = function(){
Ext.create('Ext.tip.Tip', {
closable: true,
padding: 0,
maxWidth: 300,
html: '<b>whatever you want here</b>'
}).showAt([810, 340]);
};
One method is to add the listener directly to the DOM.
'<img onclick="My.Name.Space.onImageClick();" /* the rest of the HTML here */ />'
The other method is to add the listener via Ext.dom.Element which is probably the better choice. You would need something like this in your combobox config:
listeners: {
afterrender: function(me){
var imgEl = Ext.get("combo_icon");
if(imgEl){
imgEl.on("click", My.Name.Space.onImageClick);
}
}
}
If you're having specific trouble getting this working, leave a comment and I'll help clarify what I can.
| |
doc_23530317
|
I found 2 issues.
The first is that the clear button (a small cross in a circle, which should clear the text in the search box) does not work in iOS Safari (or in the Worklight app, which uses Safari).
The only thing that happens is that the cursor moves to the right side of the text area in the search box. That's it. It does not remove the text.
And the second one.
I need to call a function by pressing the search button on the virtual keyboard.
If I set type="search" on the search box, there is no search button on the keyboard.
So I put my search box inside the form tag.
So the search button appears on the virtual keyboard.
But after pressing this button, the form submits and the page reloads.
And I just need to call a function.
A: I've resolved both issues))
1) About the event on pressing the Search button (enter)
There is an issue in iOS Safari with the Search button appearing on the virtual keyboard:
your text input with type="search" must be inside a form tag.
Show 'Search' button in iPhone/iPad Safari keyboard
(second answer)
To call some function on pressing the Search button and not submit the form, I put the following javascript into the form tag:
<form onsubmit="myFunction(...);return false;">
Pressing the Search button starts the Submit action. This javascript calls my function at that moment and stops the submission. That's what I need!
2)
The second problem is with the clear button of the search box.
This is a dojo bug: https://bugs.dojotoolkit.org/ticket/16672
I've found a workaround: http://dojo-toolkit.33424.n3.nabble.com/dojox-mobile-SearchBox-Clear-Button-x-fails-in-iPad-iOS-16672-td3995707.html
But I changed it a little, because it did not work in my case.
This is my variant:
<form onsubmit="myFunction(...);return false;">
<input id="searchBox" ontouchstart="clearButtonSupport(event);" data-dojo-type="dojox.mobile.SearchBox"
data-dojo-props="type:'search'" type="search"
placeholder="Some placeholder...">
</form>
This is the clearButtonSupport function:
function clearButtonSupport(evt) {
require([ "dijit/registry", "dojox/mobile/SearchBox" ], function(registry) {
var searchBox = registry.byId('searchBox');
var rect = document.getElementById('searchBox').getBoundingClientRect();
// if touched in the right-most 20 pels of the search box
if (rect.right - evt.touches[0].clientX < 20) {
evt.preventDefault();
searchBox.set("value", "");
}
});
}
The onclick and onmouseup events in iOS Safari work only when the text input is not focused.
If the focus is on the search box (cursor is inside), these events are not thrown.
So I used the ontouchstart event.
ontouchstart is a multitouch event in iOS Safari.
It's thrown every time you touch the element.
So I take the coordinates of the first (and only) touch,
check whether it's less than 20px away from the right side of the element (the position of the clear button),
and clear the search box.
That's it!
| |
doc_23530318
|
but I still haven't found an answer to my question.
I work on a quite complex application with tens of fragments and several activities in which I want to use DI (Dagger 2). For all of those fragments and activities I have one BaseActivity and one BaseFragment. However, as far as I have read and tried, in order to use @Inject in, let's say, my MainActivity, I have to specify it in the Component interface and also invoke getApplicationComponent().inject(this) in the onCreate method. When I do this for BaseActivity only, the @Inject-annotated fields in MainActivity are never injected. And what is even worse, I do not find out about that until that specific part of the code is executed and an NPE is thrown.
So far it is a deal breaker for me, because this can be a source of many crashes. I would need to specify tens of fragments and activities in the Component interface and not forget to call inject in each onCreate method.
I would be very glad to hear any solution to this, since I would really like to use DI.
code example:
@Singleton
@Component(modules = ApplicationModule.class)
public interface ApplicationComponent {
void inject(BaseActivity baseActivity);
Analytics analytics();
}
public class BaseActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
this.getApplicationComponent().inject(this);
}
}
public class MainActivity extends BaseActivity {
@Inject
Analytics analytics;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
analytics.log("event1"); // THROWS NPE!
}
}
A: You can not inject properties in your subclass by injecting the super (since dagger2 works at compile time and there is no way to dynamically check subclasses for annotated properties.)
You can move analytics up to the super, then it will be injected there. To inject annotated fields in your subclass you will have to call the injection there again.
You can make an abstract method in your base class, e.g. inject(App app), where you just handle the injection. That way you can't 'miss' it.
As stated in the official documentation:
While a members-injection method for a type will accept instances of its subtypes, only Inject-annotated members of the parameter type and its supertypes will be injected; members of subtypes will not.
A: Move the
@Inject
Analytics analytics;
to your BaseActivity class. The Analytics object is initialized in the superclass and is inherited by subclasses automatically, therefore you wouldn't get null anymore.
public class MainActivity extends BaseActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
analytics.log("event1");
} }
| |
doc_23530319
|
reducer:
case "VIDEOS_SUCCESS": {
const { data } = action;
let videos = [];
let questions = [];
data.map(item => {
const {
id,
title: { rendered: title },
content: { rendered: description },
youtubeVideo,
...
icon
} = item;
const newVideo = {
id,
title,
preview: trimText(description, 100),
description,
youtubeVideo,
bookMarked: false,
...
current: 0,
correctScore: 5,
totalScore: 50,
results: {
score: 0,
correctAnswers: 0
},
completed: false,
icon
};
videos.push(newVideo);
});
return videos;
}
Store:
import { createStore, applyMiddleware } from "redux";
import { persistStore, persistReducer, autoRehydrate } from "redux-persist";
import storage from "redux-persist/lib/storage";
import thunk from "redux-thunk";
import logger from "redux-logger";
import rootReducer from "../reducers";
const persistConfig = {
key: "root",
storage: storage,
whitelist: ["videos"],
timeout: null
};
const pReducer = persistReducer(persistConfig, rootReducer);
export const store = createStore(pReducer, applyMiddleware(thunk, logger));
export const persistor = persistStore(store);
export default store;
App:
import { store, persistor } from "./config/configureStore";
import { PersistGate } from "redux-persist/integration/react";
import AppNavigator from "./navigation/AppNavigator";
import NavigationService from "./navigation/actions";
export default class App extends React.Component {
render() {
return (
<ImageBackground
source={require("./assets/images/TC_background.jpg")}
style={styles.container}
>
<Provider store={store}>
<PersistGate loading={null} persistor={persistor}>
<AppNavigator
ref={navigatorRef => {
NavigationService.setTopLevelNavigator(navigatorRef);
}}
/>
</PersistGate>
</Provider>
</ImageBackground>
);
}
}
I fetch my data from my default route:
import DrawerHeader from "../navigation/DrawerHeader";
class HomeScreen extends React.Component {
constructor(props) {
super(props);
}
componentDidMount() {
const { fetchData } = this.props;
fetchData();
}
render() {
const { videos } = this.props;
return (
<View style={styles.container}>
<HomeMenu {...this.props} />
</View>
);
}
}
const mapDispatchToProps = dispatch => ({
fetchData: () => dispatch(fetchData())
});
const mapStateToProps = state => {
return {
videos: state.videos
};
};
export default connect(
mapStateToProps,
mapDispatchToProps
)(HomeScreen);
My rehydrate is working, but the VIDEOS_SUCCESS nextState shows the initial fetch data.
A: If you are using the latest version of redux-persist, there is no need to do this,
as rehydration is called automatically as long as you are persisting state (the root reducer).
case "persist/REHYDRATE": {
if (action && action.payload) {
const {
payload: { videos }
} = action;
return videos;
}
return [];
}
But make sure that your app is wrapped with a persistGate.
const App = () => {
return (
<Provider store={store}>
<PersistGate loading={null} persistor={persistor}>
<RootComponent />
</PersistGate>
</Provider>
);
};
Transforms allow you to customize the state object that gets persisted and rehydrated.
Look at redux-persist docs, if you need help with that.
A: You can remove this check:
// check if state exists from persist, and return, otherwise fetch from
if (Array.isArray(state) && state.length > 0) {
return state;
}
| |
doc_23530320
|
(TTD, WST, TDG, ODFL, VMI) and put them in a list using selenium.
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome()
driver.get('https://finance.yahoo.com/gainers')
change = driver.find_element_by_xpath('//span[text()="Change"]')
actions = ActionChains(driver)
#stockname = driver.find_element_by_id('')
for i in range(2):
    WebDriverWait(driver, 3600).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="scr-res-table"]/div[1]/table/thead/tr/th[4]'))).click()
link = driver.find_elements_by_class_name('Fw(600)')
print(link.text)
A: It's a real tricky one. If you click on the Change link, it's not certain how many times you need to click to get the element you are after, since this information does not always come on the first click.
Try the code below; hope this helps.
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get('https://finance.yahoo.com/gainers')
#Cookie pop up to handle if not there then ignore
driver.maximize_window()
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH,"//button[text()='I agree']"))).click()
driver.execute_script("window.scrollTo(0, 250)")
element=WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH,'//*[@id="scr-res-table"]/div[1]/table/thead/tr/th[4]/span[text()="Change"]')))
driver.execute_script("arguments[0].click();", element)
while(True):
try:
print("running...")
WebDriverWait(driver,5).until(EC.presence_of_element_located((By.XPATH, "//a[text()='TTD']")))
tablerows = WebDriverWait(driver, 10).until(EC.visibility_of_all_elements_located((By.XPATH, "//div[@id='fin-scr-res-table']//table[1]/tbody//tr")))
for row in tablerows:
if row.find_element_by_xpath("./td[1]").text in ['TTD', 'WST', 'TDG', 'ODFL', 'VMI']:
coldata = [td.text for td in row.find_elements_by_xpath(".//td") if td.text != '']
print(coldata)
break
except:
print("exception block")
driver.execute_script("arguments[0].click();", element)
continue
This will print on the console like this. You can remove the unused print statements.
running...
exception block
running...
exception block
running...
['WST', 'West Pharmaceutical Services, Inc.', '187.61', '+17.49', '+10.28%', '431,280', '509,179', '13.854B', '58.45']
['TTD', 'The Trade Desk, Inc.', '260.30', '+16.11', '+6.60%', '2.135M', '2.115M', '11.971B', '114.55']
['TDG', 'TransDigm Group Incorporated', '318.01', '+13.75', '+4.52%', '262,441', '847,228', '17.073B', '24.77']
['ODFL', 'Old Dominion Freight Line, Inc.', '138.97', '+10.72', '+8.36%', '765,680', '939,080', '16.607B', '27.21']
['VMI', 'Valmont Industries, Inc.', '115.11', '+9.64', '+9.14%', '143,588', '138,449', '2.478B', '16.30']
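The ticker-filtering step in the loop above can be sketched without a browser; the row data below is made up for illustration (each inner list mimics one table row's non-empty cell texts, with the ticker in the first cell):

```python
wanted = ['TTD', 'WST', 'TDG', 'ODFL', 'VMI']

# Sample rows standing in for the scraped table (invented values).
rows = [
    ['AAPL', 'Apple Inc.', '170.00', '+1.00', '+0.59%'],
    ['WST', 'West Pharmaceutical Services, Inc.', '187.61', '+17.49', '+10.28%'],
    ['TTD', 'The Trade Desk, Inc.', '260.30', '+16.11', '+6.60%'],
]

# Keep only the rows whose ticker is in the wanted list.
matches = [row for row in rows if row[0] in wanted]
for coldata in matches:
    print(coldata)
```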
UPDATE
To get only the first five records from the table.
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get('https://finance.yahoo.com/gainers')
#Cookie pop up to handle if not there then ignore
driver.maximize_window()
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH,"//button[text()='I agree']"))).click()
tablerows = WebDriverWait(driver, 10).until(EC.visibility_of_all_elements_located((By.XPATH, "//div[@id='fin-scr-res-table']//table[1]/tbody//tr")))
for row in tablerows[:5]:
coldata = [td.text for td in row.find_elements_by_xpath(".//td") if td.text != '']
print(coldata)
Output:
['RLLCF', 'Rolls-Royce Holdings plc', '0.0125', '+0.0050', '+66.67%', '38.796M', '3.637M', '272.509B', 'N/A']
['IMMU', 'Immunomedics, Inc.', '27.00', '+5.01', '+22.78%', '20.966M', '3.661M', '5.776B', 'N/A']
['PBI-PB', 'Pitney Bowes Inc. NT 43', '15.20', '+2.33', '+18.12%', '116,008', 'N/A', '2.833B', '11.53']
['KZMYY', 'KAZ Minerals PLC', '2.7500', '+0.3552', '+14.83%', '34,267', '44,605', '2.555B', '4.70']
['FWONK', 'Formula One Group', '29.39', '+3.59', '+13.91%', '3.22M', '2.083M', '6.742B', 'N/A']
| |
doc_23530321
|
$('body').smoothWheel();
This:
$('html').smoothWheel();
And this:
$(document).smoothWheel();
But nothing works. The page does not scroll.
A: Try this:
body{
    overflow:auto;
    -webkit-overflow-scrolling: touch;
    position:relative;
    height:100%;
}
.scroll-container{
    position: absolute;
    width: 100%;
    height: 100%;
}
| |
doc_23530322
|
This has two consequences on our end:
*
*The problematic cookie is assumed to contain a Base64-encoded object. Because the comma isn't interpreted as a separator, the next cookie in the header is tacked onto the end of the Base64 value we are trying to convert, and the conversion crashes.
*The cookie following the comma separator is "lost" as far as ASP.NET's own cookie processing is concerned.
It would be pretty easy to hack something together on our end to fix the crashing conversion. I could look for a comma, and ignore it and everything following it if it was present, and the conversion would succeed. But then I'd have to manually handle recovering the "lost" cookies, and really, I'd like to avoid these kinds of hacks if possible. Given the state of OS updates on Android phones, I'm pretty hopeless that a fix for this will ever go out. But since older RFCs suggested supporting commas as separators on the server side, I'm hoping there something configurable somewhere I can enable to get this behavior for free, without resorting to inelegant hacks in our application code.
So really, what I'm asking is:
*
*Has ASP.NET MVC ever supported commas as separators in the Cookie HTTP header?
*If so, is there any way to enable that legacy behavior on MVC 5/Framework 4.5.2?
| |
doc_23530323
| ||
doc_23530324
|
<form name="myForm" ng-submit="submit()" novalidate>
<input type="email" name="email" ng-model="email" required/>
<div ng-messages="myForm.$submitted">
<span ng-message="required">Please enter details in these field</span>
<span ng-message="email">Please enter email</span>
</div>
<button type="submit">Save</button>
</form>
There is a success message in the submit function:
$scope.submit = function(){
console.log("Update Successful");
}
Even if I haven't filled the required field and press Save, I still get the "Update Successful" message. So why doesn't the validation work, and why is the submit function called even if the validation fails?
I also found this solution of doing it this way:
<form name="myForm" ng-submit="myForm.$valid && submit()" novalidate>
<input type="email" name="email" ng-model="email" required/>
<div ng-messages="myForm.email.$error" ng-if="myForm.$submitted">
<span ng-message="required">Please enter details in these field</span>
<span ng-message="email">Please enter email</span>
</div>
<button type="submit">Save</button>
</form>
This works fine, but the problem is that it should also validate on keypress. However, it only validates on keypress after I have submitted the form at least once; before that, keypress validation doesn't work.
How should I solve this?
I was also trying myForm.$touched, but even that doesn't work when I use it as:
<div ng-messages="myForm.$touched">
...
</div>
A: Try this:
In html:
<form name="myForm" novalidate>
<input type="email" name="email" required/>
<div ng-messages="myForm.email.$error" ng-if="myForm.email.$touched || valid">
...
</div>
<button ng-click="submit(myForm.$valid)">Save</button>
</form>
In controller:
$scope.submit = function(valid)
{
    $scope.validCheck = !valid;
}
A: There is a little something that you've missed in implementing AngularJS's form validation.
From the code you've provided, your form, as it seems, is using the default HTML5 form validation and NOT AngularJS form validation.
How?
In order to be able to wire up with AngularJS form validation (technically adding it as a property to the form directive), in addition to the name attribute of the form control, ng-model attribute is also required.
Meanwhile, to disable HTML5 default validation behavior, novalidate attribute must be added to the form tag.
To be able to achieve your expected behavior from the form (i.e. validation on key press as well as on submission, if I'm right) you can implement a combination of yourForm.$dirty and yourForm.$submitted properties:
<div ng-messages="myForm.email.$error" ng-if="myForm.$dirty || myForm.$submitted">
<p ng-message="required">Please enter details in these field</p>
<p ng-message="email">Please enter email</p>
</div>
Demo
| |
doc_23530325
|
My service:
myApp.factory("MyService", function ($http) {
var service = {};
service.save = function (item) {
return $http.post('../api/Save', item)
.success(function () {
console.log("Saved items on current page");
})
.error(function () {
console.log("Error saving items on page")
});
};
return service;
});
My controller:
myApp.controller("MyCtrl", function ($scope, MyService) {
$scope.save = function () {
MyService.save($scope.data)
.success(function () {
//do something
}).error(function () {
//do something else
});
};
});
The app works exactly as expected.
My spec file:
describe('MyCtrl', function () {
var myCtrl, mySvc, scope;
beforeEach(function () {
module('MyApp');
module(function ($provide) {
$provide.service('MyService', function () {
this.save = function () { };
});
});
inject(function ($injector) {
mySvc = $injector.get('MyService');
});
spyOn(mySvc, 'save').and.callFake(function () {
return {
success: function (callback) {
callback({ /* something */ });
},
error: function (callback) {
callback({/* something else */ });
}
};
});
inject(function ($controller, $rootScope) {
scope = $rootScope.$new();
myCtrl = $controller('MyCtrl', { $scope: scope });
});
});
describe('when save button has been hit', function () {
it('should save', function () {
scope.save();
expect(mySvc.save).toHaveBeenCalled();
});
});
});
The test gives me an error that 'undefined' is not an object (near '...}).error(function () {...'). If I remove the '...}).error(function () {...') portion from the controller itself then the test works fine, but I don't want to get rid of that functionality.
A: Try using return callback()
spyOn(mySvc, 'save').and.callFake(function () {
return {
success: function (callback) {
return callback({ /* something */ });
},
error: function (callback) {
return callback({/* something else */ });
}
};
});
A: I got something working! The issue is that in my controller I am chaining the response behavior, which the mocked function doesn't know how to handle. The solution I found was to direct back to the function after each case.
$provide.service('MyService', function () {
this.save = function () {
return {
success: function (callback) {
callback();
return this;
},
error: function (callback) {
callback();
return this;
},
finally: function (callback) {
callback();
return this;
}
};
};
});
and then change the spy to just:
spyOn(mySvc, 'save').and.callThrough();
| |
doc_23530326
|
The user is created without a password but with the -D (defaults) flag:
RUN adduser -h /home/jupyter -D jupyter && \
chown -R jupyter:jupyter /home/jupyter
USER jupyter
ENV HOME /home/jupyter
When I shell into the container and create a new user jup1 with the same command, its password is blank.
However, I can also confirm that the password of jupyter is NOT blank.
So what IS the password and where is it set?
/etc/passwd
jupyter:x:1000:1000:Linux User,,,:/home/jupyter:
jup1:x:1001:1001:Linux User,,,:/home/jup1:
/etc/shadow
jupyter:!:17287:0:99999:7:::
jup1:!:18991:0:99999:7:::
Dockerfile
FROM python:2.7-alpine
RUN apk update; apk upgrade; rm -rf /var/cache/apk/*
RUN apk --update add bash alpine-sdk zeromq-dev nodejs
RUN pip install jupyter
RUN npm install -g ijavascript
RUN adduser -h /home/jupyter -D jupyter && \
chown -R jupyter:jupyter /home/jupyter
USER jupyter
ENV HOME /home/jupyter
RUN jupyter notebook --generate-config && \
echo "c.NotebookApp.ip = '*'" >> /home/jupyter/.jupyter/jupyter_notebook_config.py && \
echo "c.NotebookApp.open_browser = False" >> /home/jupyter/.jupyter/jupyter_notebook_config.py && \
echo "c.NotebookApp.password = u'sha1:37c14f4a2b90:0742999935c4297b7016ae0c31e2b16c3d919d52'" >> /home/jupyter/.jupyter/jupyter_notebook_config.py
WORKDIR /home/jupyter/work
EXPOSE 8888
ENTRYPOINT ["/usr/bin/ijs"]
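As for what the password actually is: in the /etc/shadow entries shown above, the ! in the second field means the password is locked (no hash is set and password login is disabled), not that it is blank. A small sketch parsing that field, using the exact lines from the question:

```python
def password_state(shadow_line):
    # The second colon-separated field of an /etc/shadow entry holds the
    # password hash; '!' (or '!!' / '*') marks a locked password, while
    # an empty field marks a truly blank password.
    field = shadow_line.split(":")[1]
    if field in ("!", "!!", "*"):
        return "locked"
    if field == "":
        return "blank"
    return "hashed"

print(password_state("jupyter:!:17287:0:99999:7:::"))
print(password_state("jup1:!:18991:0:99999:7:::"))
```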
| |
doc_23530327
|
I gave this webpage to a graphic designer and he asked if he can put an image behind the Java applet so he can simulate changing the background using CSS (it is a skinned app and the graphic design can change during execution).
Practically, I'm supposed to do:
<div>
<applet width="50" height="50" />
</div>
with this CSS:
div {
width:50px;
height:50px;
background: url(image.jpg) center center no-repeat;
}
But it doesn't work (the background is opaque).
Is it possible to set transparency on the applet without losing drag and drop capabilities?
I'm searching for something similar to the Flash wmode parameter.
Better solutions imply only changes to the CSS/HTML without recompiling the Java class, so the design team can change the page structure without changing the Java.
A: You might pass the background image URL into the applet as a parameter, or have the applet use Javascript to interrogate the page to determine what background image is shown.
A: It is not possible to make an applet's background transparent, but you can use the alpha parameter of a color to set transparency on components of the applet, and to get the same background as the website you could pass the color or image as an applet parameter. However, if it's an image, it will probably not be aligned like the site unless you position it fixed and pass the right part of the image.
A: You can try to put your applet code in a <td> tag of a <table>, and set background-color property of that <td> to your desired color, the applet will be displayed under that color only.
| |
doc_23530328
|
JUser Object
(
[isRoot:protected] =>
[id] => 0
[name] =>
[username] =>
[email] =>
[password] =>
[password_clear] =>
[usertype] =>
[block] =>
[sendEmail] => 0
[registerDate] =>
[lastvisitDate] =>
[activation] =>
[params] =>
[groups] => Array()
[guest] => 1
[lastResetTime] =>
[resetCount] =>
[_params:protected] => JRegistry Object
(
[data:protected] => stdClass Object ()
)
[_authGroups:protected] =>
[_authLevels:protected] =>
[_authActions:protected] =>
[_errorMsg:protected] =>
[_errors:protected] => Array()
[aid] => 0
As I have logged in before running this code, why is this still returning a 0 user id? Can anyone help me with this?
A: The response you are getting from JFactory::getUser() is either the current JUser object (related to your session) or an empty JUser object for guests.
So in your case you are either not logged in or not loading Joomla properly.
The only php files you should talk to are index.php and administrator/index.php, which load all necessary files and do what you want. (If you are using some CLI scripts you may have other options.)
You can find more information about Ajax in the Joomla Documentation.
A: I had a similar problem; it turned out to be a problem with the cookie domain. I was logged into the site using the domain example.com but accessing the AJAX URL using www.example.com. Just changing the domains so that they matched solved the problem.
In my experience
$user = JFactory::getUser();
$id = $user->id;
will always return a positve user id if the user is logged in, regardless of whether they are an admin user or not, and 0 if they are not logged in. So if you are getting 0 when you think that you are logged in, it has to be a cookie problem of some sort, or the Joomla session is not correctly started, eg by not accessing the site through index.php.
| |
doc_23530329
|
<input type = "checkbox" id = "myCheckbox"/>
<div id = "display-hide">
//Contains more elements
</div>
Now, here is the JQuery script that does the trick.
<script type = "text/javascript">
$(function () {
$("#myCheckbox").click(function(){
$("#display-hide").toggle(this.checked);
});
$("#display-hide").hide();
});
</script>
The first time I access the page, everything works fine (i.e. the div is hidden; when I check the checkbox, the div is displayed).
BUT, when I come back to the page (to edit, for instance), no matter the value of the checkbox (checked or not), the div is hidden at first. The div only becomes visible after I uncheck then check again.
I guess that's because of this statement:
$("#display-hide").hide();
Is there any way to verify the value of the checkbox and hide/show the div accordingly?
Thanks for helping
A: Instead of this:
$("#display-hide").hide();
You can use .triggerHandler(), like this:
$("#myCheckbox").triggerHandler('click');
This executes the click handler your just bound without actually executing a click on the box, so it makes the initial state match, without interfering. Overall it looks like this:
$("#myCheckbox").click(function(){
$("#display-hide").toggle(this.checked);
}).triggerHandler('click');
You can give it a try here
A: I think you should edit your code a little:
<script type="text/javascript">
$(function () {
$("#myCheckbox").click(function(){
$("#display-hide").toggle(this.checked);
});
if($("#myCheckbox").attr("checked"))
{
$("#display-hide").show();
}
else
{
$("#display-hide").hide();
}
});
</script>
| |
doc_23530330
|
This is my css code:
.ui-page, #index .ui-content{
background: url(../images/bg.jpg) no-repeat center center fixed;
-webkit-background-size: cover;
-moz-background-size: cover;
-o-background-size: cover;
background-size: cover;
background-size:100% 100%;
}
The problem I have is on other pages if the content is short, background image from .ui-page is visible. How do I assign .ui-page to index.html page only?
A: Add the page id of the page that should have the background to the CSS rule:
#page1.ui-page{
background: url(http://lorempixel.com/1800/1800/abstract/2/) no-repeat center center fixed;
-webkit-background-size: cover;
-moz-background-size: cover;
-o-background-size: cover;
background-size: cover;
background-size:100% 100%;
}
DEMO
In the demo page1 has the background and page2 does not.
| |
doc_23530331
|
1. This version 8.01 returns all user entries except the first alphabetical username and sorts by username.
$myquery = "
SELECT
name
, data
, recordtime
FROM
(
SELECT
*
FROM
{$conf->website_db}.userform_db
ORDER BY
recordtime DESC
) AS x
GROUP BY
name
";
2. This version 8.02 returns all user entries except the first alphabetical username and sorts by username,
except places the most recent entry (of all users) at the top of the list.
$myquery = "
SELECT
m1.*
FROM
{$conf->website_db}.userform_db m1
LEFT JOIN
{$conf->website_db}.userform_db m2
ON (m1.name = m2.name AND m1.recordtime < m2.recordtime)
WHERE
m2.name IS NULL
";
3. This version, 8.03, returns all user entries except the first alphabetical username and sorts by username,
except places the most recent entry (of all users) at the top of the list (same as v2).
$myquery = "
SELECT
a.name, a.data, a.recordtime
FROM
{$conf->website_db}.userform_db a,
(SELECT name, MAX(recordtime) AS Date
FROM {$conf->website_db}.userform_db
GROUP BY name) b
WHERE a.name = b.name
AND a.recordtime = b.Date
";
A: Try:
SELECT
udb.name
, data
, recordtime
FROM userform_db udb
JOIN (select name,max(recordtime) as rtime from userform_db group by name) tmp on udb.name=tmp.name and udb.recordtime=tmp.rtime
A: This SQL should work:
SELECT m.name,f.`data`,m.recordtime
FROM
{$conf->website_db}.userform_db f,
(SELECT name,max(recordtime) mrt
FROM {$conf->website_db}.userform_db
GROUP BY name) m
WHERE
f.name=m.name AND f.recordtime=m.mrt
ORDER BY
m.name
Which is close to your 8.03. The lacking first entry problem sounds more like a data consistency problem... if you run:
SELECT name,max(recordtime) mrt
FROM {$conf->website_db}.userform_db
GROUP BY name
By itself, do you get a name and max value for every row?
A: SELECT uf.*
FROM (
SELECT DISTINCT name
FROM userform_db
) ufd
JOIN userform_db uf
ON uf.id =
(
SELECT id
FROM userform_db ufi
WHERE ufi.name = ufd.name
ORDER BY
ufi.name DESC, ufi.recordtime DESC, ufi.id DESC
LIMIT 1
)
ORDER BY
uf.recordtime DESC
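The MAX-per-group pattern these answers rely on is easy to sanity-check against a toy table. A minimal Python sqlite3 sketch (the table layout and sample rows are made up for illustration):

```python
import sqlite3

# Miniature stand-in for the userform_db table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE userform_db (name TEXT, data TEXT, recordtime INTEGER);
    INSERT INTO userform_db VALUES
        ('alice', 'old', 1), ('alice', 'new', 3),
        ('bob',   'old', 2), ('bob',   'new', 4);
""")

# Join each row against the per-name MAX(recordtime) to keep only
# the most recent entry per user.
rows = conn.execute("""
    SELECT a.name, a.data, a.recordtime
    FROM userform_db a
    JOIN (SELECT name, MAX(recordtime) AS mrt
          FROM userform_db GROUP BY name) b
      ON a.name = b.name AND a.recordtime = b.mrt
    ORDER BY a.name
""").fetchall()

print(rows)  # each name appears once, with its latest row
```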
| |
doc_23530332
|
from keras.models import Model
from keras.layers import Convolution1D, Input, Dropout, GlobalMaxPooling1D, Dense, merge
input_window3 = Input(shape=(MEANLEN, W2VLEN))
input_window4 = Input(shape=(MEANLEN, W2VLEN))
conv_w3 = Convolution1D(MEANLEN*2, 3, activation='tanh', border_mode='valid')(input_window3)
drop_w3 = Dropout(0.7)(conv_w3),
pool_w3 = GlobalMaxPooling1D(name='pool_w3')(drop_w3[0])
conv_w4 = Convolution1D(MEANLEN, 5, activation='tanh', border_mode='valid')(input_window4)
drop_w4 = Dropout(0.7)(conv_w4),
pool_w4 = GlobalMaxPooling1D(name='pool_w4')(drop_w4[0])
print(conv_w4.shape)
x = merge([pool_w3, pool_w4], mode='concat', concat_axis=1)
print(x.shape)
x = Dense(MEANLEN*3, activation='relu')(x)
drop_dense = Dropout(0.5)(x)
main_output = Dense(num_categories, activation='sigmoid', name='main_output')(drop_dense)
model = Model(input=[input_window3, input_window4], output=[main_output])
model.compile(optimizer='adam', loss='mse', metrics=['accuracy', f1_score])
Predicting:
result = model.predict([X_test, X_test])
returns arrays of vectors simillar to these ones:
array([[ 0.08401331, 0.1911521 , 0.14310306, 0.07138534, 0.19428432,
0.15808958, 0.16400988, 0.27708355, 0.09983496],
[ 0.02074078, 0.08897329, 0.03244834, 0.00112842, 0.04122255,
0.03494435, 0.17535761, 0.55671334, 0.04375785],
[ 0.04897207, 0.06169643, 0.00313113, 0.002085 , 0.00275023,
0.00131959, 0.09961601, 0.56414878, 0.02338091]], dtype=float32)
Values in arrays, that I assume to be class probabilities, do not sum up to 1. How to get class probabilities?
A: Based on the array that you posted, you have 9 categories. In such case, you should replace your final activation function with softmax instead of sigmoid. In addition, if you haven't done it yet, you need to transform your labels into one-hot vectors. You can do that using the function to_categorical. Finally, as a loss function, you should use categorical_crossentropy loss, instead of mse. A tutorial on using keras for classification (using the functions mentioned above) is provided here.
A: In general, when you want to have an output similar to a categorical probability distribution you use a softmax activation function in the last layer instead of a sigmoid:
main_output = Dense(num_categories, activation='softmax', name='main_output')(drop_dense)
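The point about softmax is easy to verify outside Keras: per-class sigmoid outputs are independent and need not sum to 1, while softmax outputs always do. A minimal pure-Python sketch:

```python
import math

def softmax(scores):
    # Numerically stable softmax: shift by the max, exponentiate, normalize.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs, sum(probs))  # the probabilities sum to 1
```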
| |
doc_23530333
|
What is the difference between the memory in the tasklist ( which you run in the cmd) and that GUI task manager. I noticed for browser processes, that the memory is off by a great deal. Which is more accurate of the process's memory.
A: Task Manager has lots of memory counters; see View menu - Select Columns.
The standard one shown is private working set. This is a/ private - only bytes in memory specific to this program (so no shell32 common code is counted) and b/ working set - the amount of memory mapped and present in that process's address space.
Even memory not present in a process's address space may still be in physical memory - on the standby list, in the file cache, or in use by another program. It only requires flipping a bit to make it available to the process. Run two copies of Notepad: the code is now in the file cache and in two processes, but it is only in memory once, not three times.
If you want to make your own tasklist:
Set objWMIService = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\cimv2")
Set colItems = objWMIService.ExecQuery("Select * From Win32_Process")
For Each objItem in colItems
' If objitem.Name = "mspaint.exe" Then
wscript.echo objitem.name & " PID=" & objItem.ProcessID & " SessionID=" & objitem.sessionid
' objitem.terminate
' End If
Next
Lines starting with a ' are commented out.
To use in a command prompt
cscript //nologo c:\path\script.vbs
These are the properties
Property Type Operation
======== ==== =========
CSName N/A N/A
CommandLine N/A N/A
Description N/A N/A
ExecutablePath N/A N/A
ExecutionState N/A N/A
Handle N/A N/A
HandleCount N/A N/A
InstallDate N/A N/A
KernelModeTime N/A N/A
MaximumWorkingSetSize N/A N/A
MinimumWorkingSetSize N/A N/A
Name N/A N/A
OSName N/A N/A
OtherOperationCount N/A N/A
OtherTransferCount N/A N/A
PageFaults N/A N/A
PageFileUsage N/A N/A
ParentProcessId N/A N/A
PeakPageFileUsage N/A N/A
PeakVirtualSize N/A N/A
PeakWorkingSetSize N/A N/A
Priority N/A N/A
PrivatePageCount N/A N/A
ProcessId N/A N/A
QuotaNonPagedPoolUsage N/A N/A
QuotaPagedPoolUsage N/A N/A
QuotaPeakNonPagedPoolUsage N/A N/A
QuotaPeakPagedPoolUsage N/A N/A
ReadOperationCount N/A N/A
ReadTransferCount N/A N/A
SessionId N/A N/A
Status N/A N/A
TerminationDate N/A N/A
ThreadCount N/A N/A
UserModeTime N/A N/A
VirtualSize N/A N/A
WindowsVersion N/A N/A
WorkingSetSize N/A N/A
WriteOperationCount N/A N/A
WriteTransferCount N/A N/A
And the methods
Call [ In/Out ]Params&type Status
==== ===================== ======
AttachDebugger (null)
Create [IN ]CommandLine(STRING) (null)
[IN ]CurrentDirectory(STRING)
[IN ]ProcessStartupInformation(OBJECT)
[OUT]ProcessId(UINT32)
GetOwner [OUT]Domain(STRING) (null)
[OUT]User(STRING)
GetOwnerSid [OUT]Sid(STRING) (null)
SetPriority [IN ]Priority(SINT32) (null)
Terminate [IN ]Reason(UINT32) (null)
Which is the same as
wmic process where name='notepad.exe' get /format:list
Further reading
https://msdn.microsoft.com/en-us/library/ms810627.aspx
https://www.labri.fr/perso/betrema/winnt/ntvmm.html (this no longer appears on the MSDN)
| |
doc_23530334
|
<result cover="http://ia.mediaimdb.com/images
/M/MV5BMjMyOTM4MDMxNV5BMl5BanBnXkFtZTcwNjIyNzExOA@@._V1._SX54_
CR0,0,54,74_.jpg" title="The Amazing Spider-Man(2012)"year="2012"
director="Marc Webb" rating="7.5"
details="http://www.imdb.com/title/tt0948470"/>
<result cover="http://ia.mediaimdb.
com/images/M/MV5BMzk3MTE5MDU5NV5BMl5BanBnXkFtZTYwMjY3NTY3._V1._SX54_CR0,
0,54,74_.jpg" title="Spider-Man(2002)" year="2002"director="Sam Raimi"
rating="7.3" details="http://www.imdb.com/title/tt0145487"/>
<result cover="http://ia.mediaimdb.
com/images/M/MV5BODUwMDc5Mzc5M15BMl5BanBnXkFtZTcwNDgzOTY0MQ@@._V1._SX54_
CR0,0,54,74_.jpg" title="Spider-Man 3 (2007)" year="2007" director="Sam
Raimi" rating="6.3" details="http://www.imdb.com/title/tt0413300"/>
<result cover="http://i.mediaimdb.
com/images/SF1f0a42ee1aa08d477a576fbbf7562eed/realm/feature.gif" title="
The Amazing Spider-Man 2 (2014)" year="2014" director="Sam Raimi"
rating="6.3" details="http://www.imdb.com/title/tt1872181"/>
<result cover="http://ia.mediaimdb.
com/images/M/MV5BMjE1ODcyODYxMl5BMl5BanBnXkFtZTcwNjA1NDE3MQ@@._V1._SX54_
CR0,0,54,74_.jpg" title="Spider-Man 2 (2004)" year="2004" director="Sam
Raimi" rating="7.5" details="http://www.imdb.com/title/tt0316654"/>
</results>
I need to parse this XML file using a parser like saxp and output the result as a json string.
I am new to SAXParser. I have been googling all this while, but I simply cannot understand it.
Any ideas?
A: Let me explain with a simple example.
Input XML :
<Employees>
<employee name="vels" gender="male"/>
<employee name="steave" gender="male"/>
</Employees>
Parse xml using SAXParser
final JSONObject jsonObj = new JSONObject();
SAXParserFactory factory = SAXParserFactory.newInstance();
SAXParser saxParser = factory.newSAXParser();
DefaultHandler handler = new DefaultHandler() {
@Override
public void startElement(String uri, String localName, String qName,
Attributes attributes) throws SAXException {
try {
if (qName.equals("Employees")) {
// reject root node
return;
}
int len = attributes.getLength();
HashMap<String, String> map = new HashMap< String, String>();
for (int i = 0; i < len; i++) {
String attribName = attributes.getLocalName(i);
String attribVal = attributes.getValue(i);
map.put(attribName, attribVal);
}
jsonObj.append("employee", map);
} catch (JSONException ex) {
// handle excep
}
}
};
saxParser.parse("c:/employee.xml", handler);
String jSonOutput = jsonObj.toString();
// process this json outpout
System.out.println(jSonOutput);
Output (manually formatted) :
{"employee":
[
{"name":"vels","gender":"male"},
{"name":"steave","gender":"male"}
]
}
Here is a simple and decent example for Java SAX Parser & Java SAX DefaultHandler. Hope you can go forward.
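For comparison, the same attribute-collecting approach can be sketched with Python's built-in SAX parser and json module (element names follow the Employees example above):

```python
import json
import xml.sax

XML = b"""<Employees>
  <employee name="vels" gender="male"/>
  <employee name="steave" gender="male"/>
</Employees>"""

class EmployeeHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.records = []

    def startElement(self, name, attrs):
        if name == "Employees":  # reject the root node, as in the Java handler
            return
        self.records.append(dict(attrs.items()))

handler = EmployeeHandler()
xml.sax.parseString(XML, handler)
output = json.dumps({"employee": handler.records})
print(output)
```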
| |
doc_23530335
|
For example, I could just pass each constant down to the next recursive function, but it isn't obvious that those params are constants:
const startFoo = (myArray, isFoo, isBar) => {
console.log(isFoo, isBar);
startFoo(myArray, isFoo, isBar);
};
Alternatively, I could have 2 functions and keep constants in the closure of the first, but I'm curious if recreating the second function every time the first is called will cause unnecessary object creation & GC:
const startFoo = (myArray, isFoo, isBar) => {
const foo = myArray => {
console.log(isFoo, isBar);
foo(myArray);
};
foo(myArray);
};
Finally, I could keep it at one function, and just cache the initial values:
const startFoo = (myArray, isFoo, isBar) => {
if (!startFoo.cache) {
startFoo.cache = {
isFoo,
isBar
}
}
const {isFoo, isBar} = startFoo.cache;
console.log(isFoo, isBar);
startFoo(myArray);
};
All 3 look like they'll be good candidates for the upcoming (here-ish) TCO so I don't see that playing into the decision, but if it does that'd be good to know as well!
A:
it isn't obvious that those params are constants
Does it matter that much? If you are choosing recursion over a loop because you like the functional approach, all your variables and parameters are constants anyway. You can tell whether they stay constant or not in the recursive descent by looking at the recursive call and comparing the arguments to your function's parameters.
Alternatively, I could have 2 functions and keep constants in the closure of the first, but I'm curious if recreating the second function every time the first is called will cause unnecessary object creation & GC.
Unlikely. Afaik, no function objects are instantiated for simple inline helper functions that are never used as an object or exported as a closure. At least it's a rather trivial optimisation, and even when not done the GC pressure will not be hard or impair performance noticeably.
You should go for this approach as it is the cleanest and most maintainable.
I could keep it at one function, and just cache the initial values
You better not do that. It introduces an extra condition in the function that will impair performance because it is executed on each and every call, but most of all it complicates the code unnecessarily. This is also likely the reason for the bug you introduced - you never unset the .cache in the base case1 so that all invocations will use the same constants no matter what is passed to them. Also, you are leaking the constants into the global scope where anyone could access them.
1: Admittedly, your demo function does not have a base case, but ask yourself: if you had added one, would you have forgotten to unset the cache?
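For what it's worth, the second approach (constants captured in the closure of an inner recursive helper) looks the same in any language. A Python sketch, with a base case added since the snippets above recurse forever:

```python
def start_foo(my_array, is_foo, is_bar):
    # is_foo / is_bar are captured by the closure: the recursion never
    # has to pass them along, and they are visibly constant.
    def foo(arr):
        if not arr:  # base case the JS snippets omit
            return 0
        return 1 + foo(arr[1:])
    return foo(my_array)

calls = start_foo([1, 2, 3], True, False)
print(calls)  # 3
```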
| |
doc_23530336
|
*
*An authorization and authentication server which provide JWT (Keycloak for exemple)
*2 micro service which communicate between them through REST.
*1 micro service is a user service which create a new user in my database on each new user from the Keycloak (may be tomorrow we have Google or Github, it's important to take this in mind). When I'm creating a user I store his subject from claim in a specific field.
*1 micro service which store the creatorId, the updateById for blog post for exemple.
Is it better to store the subject in my creatorId and updatedById fields (so that I don't need to ask my user service to identify the creator), or to store the userId from my user service and, on every request to my post service, call the user service to identify the user who made the request (passing the JWT token to the user service)?
IMO, sending a REST request every time will increase the load on the user service, but a subject id for two different users can be the same across Google, Github and Keycloak.
A: I would do the following, so that you can move to a different Authorization Server (Google / Github) in future without too much impact:
*
*User Service creates a row in its Users Table for each new user, with a database surrogate key as the main user id
*This user id is saved to your creatorId / updateById fields
*Meanwhile the OAuth Id / Sub claim is a column in the Users Table, but is not the main user identifier from business logic
The Posts Service can avoid calling the User Service on every single request if you cache claims in the Posts Service. Some resources of mine might give you some ideas that you can apply to your own solution:
*
*User Management Blog Post
*Claims Caching Blog Post
*Claims Caching Code
| |
doc_23530337
|
user=> (dir (ns-name *ns*))
Execution error (ClassCastException) at user/eval2010 (REPL:1).
class clojure.lang.PersistentList cannot be cast to class clojure.lang.Symbol (clojure.lang.PersistentList and clojure.lang.Symbol are in unnamed module of loader 'bootstrap')
A: dir is a macro that expects an unquoted symbol as its argument. It should be used like:
user=> (clojure.repl/dir clojure.string)
blank?
capitalize
escape
join
lower-case
replace
...
When you call it like so:
(ns demo.core
(:require [clojure.repl :as repl]))
(println (repl/dir (ns-name *ns*)))
you are not passing an unquoted symbol (eg clojure.string), but rather a list with the symbol ns-name as its first element.
As the namespace for clojure.repl/dir implies, this command is intended to be typed into a REPL by hand, not used programmatically.
If you do want to get information programmatically, you probably want something more like one of these:
(ns-publics 'tst.demo.core)
(ns-publics (ns-name *ns*))
either of which works.
Be sure to peruse the Clojure CheatSheet and this list of documentation.
| |
doc_23530338
|
How to do it?
A: From here:-
Rectangle r = new Rectangle(0, 0, pictureBox1.Width, pictureBox1.Height);
System.Drawing.Drawing2D.GraphicsPath gp = new System.Drawing.Drawing2D.GraphicsPath();
int d = 50;
gp.AddArc(r.X, r.Y, d, d, 180, 90);
gp.AddArc(r.X + r.Width - d, r.Y, d, d, 270, 90);
gp.AddArc(r.X + r.Width - d, r.Y + r.Height - d, d, d, 0, 90);
gp.AddArc(r.X, r.Y + r.Height - d, d, d, 90, 90);
pictureBox1.Region = new Region(gp);
| |
doc_23530339
|
This is how I used to do in C++:
char buff[2048];
ssize_t data_len = recvfrom(socket, buff, sizeof(buff), 0, nullptr, nullptr);
std::vector<char> vec_buff(buff, buff + data_len);
I know Vec<T> impls From<[T; N]> and it can be created from an array by using the From::from() method, but this takes the entire size of 2048 but I want only data_len bytes.
A: A simple
let vec = buff[..data_len].to_vec();
will do if your type is Clone.
buff[..data_len] takes a slice of the first data_len elements, to_vec then turns that slice into a Vec
If it isn't, you can use a variation of @Emoun's answer:
let vec = buff.into_iter().take(data_len).collect::<Vec<_>>();
A: You could use Iterator::take:
let array: [u8;2048] = ..;
let data_vec: Vec<_> = array.iter().cloned().take(data_len).collect();
| |
doc_23530340
|
I made a copy of the whole web site folder and then opened it again in another PC (using the same version of VS2012).
Now all my breakpoints are gone.
How can I open a Visual Studio Web Site in another PC without losing all my breakpoints?
Thanks in advance.
CD
A: The project's .suo file has the breakpoint information. Try copying the .suo and replacing the other pc's .suo file.
If you're unfamiliar with it, the project solution file (e.g. "MyProject.sln") has an associated file, the Solution User Options file (e.g. "MyProject.suo"), which stores various information, including breakpoints.
| |
doc_23530341
|
export const CONTROLS = [
"section",
"text",
"richtext",
"number",
];
export const ControlType = t.union(
// What to do here? Is this even possible? This is what came to mind but it's obviously wrong.
// CONTROL_TYPES.map((type: string) => t.literal(type))
);
I don't know if this is possible but given that io-ts is just JS functions I don't see why not. I just don't know how.
The end result in this case should be (with io-ts):
export const ControlType = t.union(
t.literal("section"),
t.literal("text"),
t.literal("richtext"),
t.literal("number"),
);
A: io-ts formally recommends to use keyof for better performance with string literal unions. Thankfully, that also makes this problem much easier to solve:
export const CONTROLS = [
"section",
"text",
"richtext",
"number",
] as const;
function keyObject<T extends readonly string[]>(arr: T): { [K in T[number]]: null } {
return Object.fromEntries(arr.map(v => [v, null])) as any
}
const ControlType = t.keyof(keyObject(CONTROLS))
| |
doc_23530342
|
$scope.loadNextPage = function()
{
loadComments($scope.nextCommentId);
}
function loadComments(page){
$scope.allComments.push(d.data);
..
..
$scope.nextCommentId = nextId;
}
Here is my html
<div ng-repeat="comment in allComments">
...
</div>
My question is: is this the correct way of doing pagination in AngularJS? I am keeping all the data in an array, so as the data grows it will consume memory.
A: <div data-pagination="" data-num-pages="numPages()"
data-current-page="currentPage" data-max-size="maxSize"
data-boundary-links="true"></div>
todos.controller("TodoController", function($scope) {
$scope.filteredTodos = []
,$scope.currentPage = 1
,$scope.numPerPage = 10
,$scope.maxSize = 5;
$scope.makeTodos = function() {
$scope.todos = [];
for (i=1;i<=1000;i++) {
$scope.todos.push({ text:"todo "+i, done:false});
}
};
$scope.makeTodos();
$scope.numPages = function () {
return Math.ceil($scope.todos.length / $scope.numPerPage);
};
$scope.$watch("currentPage + numPerPage", function() {
var begin = (($scope.currentPage - 1) * $scope.numPerPage)
, end = begin + $scope.numPerPage;
$scope.filteredTodos = $scope.todos.slice(begin, end);
});
});
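The begin/end arithmetic in the $watch above is the whole trick, and it is language-agnostic. A minimal Python sketch of the same slice (1-based page numbers, as in the controller):

```python
def page_slice(items, current_page, num_per_page):
    # Same computation as the $watch handler: 1-based page -> slice bounds.
    begin = (current_page - 1) * num_per_page
    return items[begin:begin + num_per_page]

todos = ["todo %d" % i for i in range(1, 1001)]
first_page = page_slice(todos, 1, 10)
third_page = page_slice(todos, 3, 10)
print(third_page[0])  # "todo 21"
```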
| |
doc_23530343
|
SELECT *
FROM [lab occurrence form]
WHERE ((([lab occurrence form].[occurrence date]) Between [Forms]![Form1]![Text2] And [Forms]![Form1]![Text4]))
ORDER BY [lab occurrence form].[occurrence date] DESC;
i have two textboxes which contain the range of dates: text2 and text4
the report is displaying the data correctly, but it is not sorting it by date
how can i make sure that it will sort it by date?
i did a datasheet view on the query and it works fine, but when i run the report it does not sort it for some reason by date
A: Order by will sort by the field specified, but if you have not used a datetime data type, it will not sort the way you expect because it will do an alphabetical sort. The best fix for this is to stop storing dates as anything except date datatypes.
A: Use the report's Sorting and Grouping option to establish the sort order. In Access 2003, with the report open in design view, select "Sorting and Grouping" from the "View" menu. If your Access version is different, look for a similar name in the report design options.
| |
doc_23530344
|
The easiest approach seems to be the one in this tutorial. You just sqlite3_exec a single combined sequence of commands, which start with BEGIN and end with COMMIT. So, you never do a ROLLBACK, and you presumably rely on SQLite automatically rolling back if it encounters an error.
The problem is that the section on handling transaction errors in the manual is fairly complex, and it's not obvious to me that this is a good approach. The doc also suggests manually rolling back.
The next approach would be to exec a single BEGIN, and then run each statement individually, checking for errors, and then finally run a COMMIT or ROLLBACK. Is this actually any better, or is it just busywork?
A: sqlite3_exec() will abort on the first error encountered, and most errors do not result in an automatic rollback.
You should execute the BEGIN first, then do all the stuff inside the transaction, then end the transaction with either COMMIT or ROLLBACK.
In the case of ROLLBACK, you might just ignore any errors. Either the transaction was already rolled back, or there is nothing you could do anyway.
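The BEGIN / COMMIT-or-ROLLBACK flow described here can be sketched with Python's sqlite3 module (opened with isolation_level=None so the transaction statements are issued verbatim):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

conn.execute("BEGIN")
try:
    conn.execute("INSERT INTO t VALUES (1)")
    conn.execute("INSERT INTO t VALUES (1)")  # violates the PRIMARY KEY
    conn.execute("COMMIT")
except sqlite3.Error:
    conn.execute("ROLLBACK")  # undo the first insert as well

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 0 - the whole transaction was rolled back
```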
| |
doc_23530345
|
For example,
A.xml ----> has a Tag <Customer convertto = "Master"> then I should convert into Master.xml file.
If A.xml file has a <Customer convertto = "Child"> then I should convert into Child.xml file.
In both cases I want to use single XSLT file.
| |
doc_23530346
|
My only constraint is that instead of the user providing an absolute path, I need my program to search the directories specified in the path environment variable and find the location of the command's executable. I don't understand how to do this either but searching online has told me this would be best using "getenv() to access the OS PATH variable and prefix the user-supplied command appropriately". Can anyone help me out here? Thanks for your assistance in advance.
A: Try popen(), which can be found here in the manpages.
Check this out:
#include <stdio.h>
#include <stdlib.h>
void write_netstat(FILE * stream)
{
FILE * outfile;
outfile = fopen("output.txt","w");
char line[128];
if(!ferror(stream))
{
while(fgets(line, sizeof(line), stream) != NULL)
{
fputs(line, outfile);
printf("%s", line);
}
fclose(outfile);
}
else
{
fprintf(stderr, "Output to stream failed.\n");
exit(EXIT_FAILURE);
}
}
int main(void)
{
FILE * output;
output = popen("netstat", "r");
if(!output)
{
fprintf(stderr, "incorrect params or too many files.\n");
return EXIT_FAILURE;
}
write_netstat(output);
if(pclose(output) != 0)
{
fprintf(stderr, "Could not run 'netstat' or other error.\n");
}
return EXIT_SUCCESS;
}
This prints a netstat to a file. You can do this for all commands. It uses popen(). I wrote it because I needed a log of a netstat.
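The PATH lookup the question mentions ("getenv() to access the OS PATH variable") boils down to splitting PATH on the path separator and probing each directory for an executable file. A hypothetical Python sketch of that search (shutil.which does the same thing for real):

```python
import os

def find_in_path(command, path_string):
    # Probe each PATH entry for an executable file with the given name,
    # mimicking what execvp() does before running a command.
    for directory in path_string.split(os.pathsep):
        candidate = os.path.join(directory, command)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

# e.g. find_in_path("netstat", os.environ.get("PATH", ""))
```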
| |
doc_23530347
|
Here's how my Gson instance is defined and how I'm using it:
RuntimeTypeAdapterFactory<Enemy> adapter = RuntimeTypeAdapterFactory.of(Enemy.class,"enemyType")
.registerSubtype(Ranged.class, "Ranged")
.registerSubtype(Melee.class, "Melee")
.registerSubtype(Sniper.class, "Sniper")
;
gson = new GsonBuilder().registerTypeAdapterFactory(adapter).create();
...
public GameStateData loadGameState(String path) throws FileNotFoundException {
FileReader fileReader = new FileReader(path);
gameStateData = gson.fromJson(fileReader, GameStateData.class);
try {
fileReader.close();
} catch (IOException e) {
System.err.println("Error closing save file");
}
return gameStateData;
}
public void writeGameState(String path) throws IOException {
FileWriter fileWriter = new FileWriter(path);
gson.toJson(gameStateData, fileWriter);
fileWriter.close();
}
Both type and enemyType are already existing fields within Enemy, when trying to use either or without the adapter's typeFieldName specified leads to the following error when trying to serialize:
com.google.gson.JsonParseException: cannot serialize enemies.Ranged because it already defines a field named enemyType
I also tried setting an unused value for typeFieldName but that lead to the issue that my Ranged type entity was saved as an Enemy. Both type and enemyType are enums, although different ones. What type should typeFieldName be?
Edit: Ended up solving this issue by re-filling the ArrayList with new objects built with their specific constructors based on enemyType. Looking at my code more thoroughly, my error seems to have been using a different (and differently configured) Gson instance for deserializing.
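The fix described in the edit - dispatching on enemyType and rebuilding each object with its matching constructor - is the manual version of what RuntimeTypeAdapterFactory automates. A hypothetical Python sketch of that dispatch (the hp field and constructor shapes are made up):

```python
import json

class Enemy:
    def __init__(self, **fields):
        self.__dict__.update(fields)

class Ranged(Enemy): pass
class Melee(Enemy): pass
class Sniper(Enemy): pass

SUBTYPES = {"Ranged": Ranged, "Melee": Melee, "Sniper": Sniper}

def load_enemies(text):
    # Read the discriminator field and call the matching constructor.
    enemies = []
    for record in json.loads(text):
        cls = SUBTYPES[record.pop("enemyType")]
        enemies.append(cls(**record))
    return enemies

enemies = load_enemies('[{"enemyType": "Ranged", "hp": 10}]')
print(type(enemies[0]).__name__)  # Ranged
```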
| |
doc_23530348
|
How do I keep the nice formatting but show all rows?
For example if I want to output all rows of mtcars:
A: You can achieve this by setting the options pageLength of DT:
Example rmd:
---
title: "Untitled"
author: "TarJae"
date: "5 2 2021"
output: html_document
---
chunk 1
knitr::opts_chunk$set(echo = TRUE)
library(DT)
options(DT.options = list(pageLength = 100, language = list(search = 'Filter:')))
chunk 2:
datatable(mtcars)
A: To keep the same formatting but drop the tabs/paging, use the following:
```{r results='asis'}
knitr::kable(mtcars)
```
| |
doc_23530349
|
A: Try this:
Path.GetDirectoryName(Assembly.GetExecutingAssembly().GetName().CodeBase);
A: You could try this:
using System.IO;
using System.Reflection;
namespace Utilities
{
static public class DirectoryHelper
{
static public string GetCurrentDirectory ()
{
return Path.GetDirectoryName (Assembly.GetExecutingAssembly ().GetName ().CodeBase);
}
}
}
A: Public Shared Sub WriteDBStatus(ByVal strString As String)
Try
Dim FILE_NAME As String = Path.GetDirectoryName(Assembly.GetExecutingAssembly().GetName().CodeBase) + "\DBStatus"
Dim sr As IO.StreamWriter = Nothing
If Not IO.File.Exists(FILE_NAME) Then
sr = IO.File.CreateText(FILE_NAME)
sr.WriteLine(strString)
sr.Close()
End If
Catch ex As Exception
End Try
End Sub
| |
doc_23530350
|
A: They have an id, and you do everything according to that id. So whenever a user buys a non-consumable with the id of, say, "x", you extend a feature accordingly. You also need to add a "restore purchases" option exactly for that matter - to re-enable those features.
| |
doc_23530351
|
I’d like to be able to do something similar also on data that does not come from DBI.
Is there a module that does that?
| |
doc_23530352
|
here is a web.xml snippet, detailing the REST setup:
<listener>
<listener-class>org.jboss.resteasy.plugins.server.servlet.ResteasyBootstrap</listener-class>
</listener>
<listener>
<listener-class>
org.jboss.resteasy.plugins.spring.SpringContextLoaderListener
</listener-class>
</listener>
...
<servlet>
<servlet-name>Resteasy</servlet-name>
<servlet-class>
org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher
</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>Resteasy</servlet-name>
<url-pattern>/AutomatedProcessing/rest/*</url-pattern>
</servlet-mapping>
However, when I walk around the app in a browser, I keep seeing these exceptions in the log:
org.jboss.resteasy.springmvc.ResteasyHandlerMapping - Resource Not Found: Could not find resource for relative :
org.jboss.resteasy.spi.NoResourceFoundFailure: Could not find resource for relative : /AccountManagement/login.do of full path: https://dev.produceredge.com:7002/AccountManagement/login.do
It seems to me like REST is suddenly trying to handle all requests, not just requests that match the /AutomatedProcessing/rest/* URL pattern.
I have found no details on the NoResourceFoundFailure exception, or why REST would be trying to handle requests outside of its assigned URL pattern. The exception isn't fatal to the user, but I think it might be destroying something I don't know about. Plus, exceptions in the logs are never fun. I would greatly appreciate any insight on this exception!
A: The answer came from an ordering issue.
I had another UrlHandlerMapping set up in a config.xml file:
<bean class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
<property name="interceptors">
<list>
<ref bean="openPersistenceManagerInView"/>
</list>
</property>
<property name="mappings">
<value>
**/AccountManagement/login.do=flowController
**/AccountManagement/createAccount.do=flowController
**/AccountManagement/manageAccount.do=flowController
</value>
</property>
<property name="alwaysUseFullPath" value="true"/>
</bean>
This mapping did not have an "order" property, meaning that the order was set at the default value.
The ResteasyHandlerMapping, which handles mapping RESTEasy resources, was found in the JBoss included sprincmvc-resteasy.xml file. This mapping also did not have the "order" property.
This led to both mappings having the same ordering priority, and since the RESTEasy mapping came first in the XML, it tried to handle all requests.
Solution: add this property to your default url mapper:
<property name="order" value="0"/>
| |
doc_23530353
|
However, when I tried to execute a python file via these codes this is what happened when I entered Diagnosis page from previous page
*
*browser tab show loading animetion
*camera on for 10 seconds then off (noticed from little led light beside webcam)
*display Diagnosis page
Controller/Diagnosis
class Diagnosis extends CI_controller
{
public function __construct()
{
parent::__construct();
$this->load->model('diagnosis_model');
}
public function index()
{
$face = $this->diagnosis_model->get_face('female');
## execute python file
$command = escapeshellcmd('vdocapture.py');
shell_exec($command);
## render page
$this->load->view(
'diagnosis/_diagnosis',
array(
'face'=> $face
)
);
}
}
python
import cv2
import time
id = '[citizen_id]'
cap = cv2.VideoCapture(0)
## set timeout
timeout = time.time() + 10 # 10 sec
## Define the codec and create VideoWriter object
## Save video in "capture" folder
out = cv2.VideoWriter('capture/%s.wmv' % id, -1, 29, (640,480))
while(cap.isOpened()):
ret, frame = cap.read()
if ret==True:
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
## write the frame
out.write(frame)
if time.time() > timeout:
break
else:
break
## Release everything if job is finished
cap.release()
out.release()
But what I expected is to let the view render to completion, then execute the Python script and start recording video in the background for 10 seconds (while the user can interact with the form normally).
What I've tried are
*
*Move python section to the end of index() function ---> camera is not on
*Using the ajax technique from how to execute a codeigniter controller method in background and action after render a view (codeigniter) ---> not sure what I've missed, but the camera is still not on
Controller/Diagnosis
public function run_py(){
$command = escapeshellcmd('vdocapture.py');
shell_exec($command);
}
javascript (first try)
$.ajax("index.php?/Diagnosis/run_py()")
javascript (second try)
$.ajax({
url: "Diagnosis/run_py",
async: true
}).done(function() {
console.log("Capturing began")
});
*I tried to change directory of vdocapture.py and capture folder without changing anything from my original code except vdocapture.py and capture folder directory --> camera still not on after view has rendered
from
|--MY_CI_Project
|--Application
|--index.php
|-- **capture**
|-- **vdocapture.py**
to
|--MY_CI_Project
|--Application
|--controller
|--model
|--view
|-- **capture**
|-- **vdocapture.py**
|--index.php
So, how can I execute a python script after CI view is rendered?
Best regards.
A: You are trying to open the webcam of the server, not the user's device. What you can do instead is open the webcam using JavaScript on the client side (browser), record, and then send the recording to the server; after that, run your Python script to read the recorded video. Take a look at this link for recording video via webcam: https://developers.google.com/web/fundamentals/media/recording-video
| |
doc_23530354
|
I can access other sites via requests, but I know that I need shareplum or something similar to use the SharePoint site functions. When not going through a proxy, I have no issue. However, my company recently deployed a proxy and now my script no longer works. My code to use shareplum is similar to the code provided by the git project, but that project assumes straightforward communication with the web server over https or http.
I've tried changing the TLS version to the default, 1.1, and 1.2, but none of them work. I've also tried setting the protocol to http instead of https, and tried to pass through the proxy server first, but no success. I tried using a proxy manager with the proxy and then using the created session to access the SharePoint site, but that also fails: using the proxy manager causes a bad authentication with the SharePoint site.
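As a first diagnostic step (not shareplum-specific), it can help to confirm the proxy works with plain requests before involving shareplum; the proxy URL below is a placeholder, not a value from the question:

```python
import requests

# Placeholder proxy address -- replace with your real corporate proxy.
proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}

session = requests.Session()
session.proxies.update(proxies)

# A call such as session.get("https://yourcompany.sharepoint.com/...") would
# now be routed through the proxy; if that already fails, the problem is the
# proxy setup or authentication, not shareplum itself.
```

If the plain request succeeds through the proxy but shareplum still fails, the issue is more likely in how shareplum authenticates than in the network path.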
| |
doc_23530355
|
Following is the error
Problem Event Name: APPCRASH Application Name: pythonw.exe
Application Version: 3.6.5150.1013 Application Timestamp: 5abd3212
Fault Module Name: mkl_core.dll Fault Module Version: 2018.0.2.1
Fault Module Timestamp: 5a6d0bd8 Exception Code: c000001d
Exception Offset: 00000000037f2b63 OS Version: 6.1.7600.2.0.0.256.1
Locale ID: 2057 Additional Information 1: 856a Additional
Information 2: 856afabdb44596b948f949971a9bb974 Additional
Information 3: 3bf8 Additional Information
4: 3bf8aad6cc25352af56a92972390ce43
| |
doc_23530356
|
Does anyone see where I'm going wrong? Here's my code:
<div class="table col-xs-12" style="margin-top: 20px; width: 100%; ">
<h3 class="text-center col-xs-12" style="margin-top: 200px; color: #dcdcdc; background-color: #333333; height: 30px; vertical-align: middle; display: table-cell; ">
Your account is currently pending. Please check back in again later.
</h3>
</div>
Thanks in advance for any help!
EDIT: Here's a fiddle, if that helps.
A: Remove position: absolute; from your .centered class, delete the wrapper div and apply .centered directly to div with .table class.
HTML:
<div class="table col-xs-12 centered" style="margin-top: 20px; ">
<h3 class="text-center col-xs-12" style="margin-top: 200px; color: #dcdcdc; background-color: #333333; height: 30px; width: 100% !important; vertical-align: middle; display: table-cell; ">Your account is currently pending. Please check back in again later.</h3>
</div>
CSS:
.centered {
left: 50%;
transform: translate(-50%,-10%);
}
CODEPEN DEMO
| |
doc_23530357
|
To be specific, say I have X = [42, 0, 99] and Y = ["a", "b", "c"]. What is it called when I reorder Y in the same way that I have to reorder X to make X a sorted list, winding up with ["b", "a", "c"]?
What about the reordering itself, which is kind of a list - i.e. [<2nd>, <1st>, <3rd>] - does that have a common name too?
It seems like that would be the kind of operation that would have a name I should know, with its own Wikipedia page and everything (or an entry in NIST's Dictionary of Algorithms and Data Structures: http://xw2k.nist.gov/dads/). I'm probably going to feel like a dummy when someone answers this.
A: The reordering itself is called a permutation(see sidenote).
I am not aware of a special term for the situation, but you could say that you're applying the permutation that sorts the list X, to the list Y.
Sidenote: Note that the word "permutation" can refer to both a particular ordering of a group of elements, for instance with the ordered list [3, 1, 2] being a permutation of the numbers {1, 2, 3}, as well as a reordering of elements (as in, the transformation itself), for instance the one that permutes the ordered list [3, 1, 2] to the ordered list [1, 2, 3].
A: I've mostly known it as an "indexed sort". X is the index you use to sort Y.
A: As far as I know, there is no term for this specific situation, but you are applying the same transformation to lists X and Y, and you create the transformation such that it transforms list X into a sorted list.
A: You could call this a parallel key sort, since X contains the sort keys and Y contains the values. In a functional language, e.g., Scala, this could be implemented as X.zip(Y).sortWith((a,b) => a._1 < b._1).map(a => a._2)
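The "parallel key sort" from the last answer (the Scala zip/sortWith/map one-liner) can be sketched in Python, which also makes the permutation itself explicit as an "argsort":

```python
# Reorder Y by the permutation that sorts X.
X = [42, 0, 99]
Y = ["a", "b", "c"]

# The permutation itself: the indices of X in sorted order (an "argsort").
perm = sorted(range(len(X)), key=lambda i: X[i])

# Apply the same permutation to Y.
Y_sorted_by_X = [Y[i] for i in perm]
print(perm)             # [1, 0, 2], i.e. [<2nd>, <1st>, <3rd>]
print(Y_sorted_by_X)    # ['b', 'a', 'c']
```

Equivalently, `[y for _, y in sorted(zip(X, Y))]` mirrors the zip-based Scala version.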
| |
doc_23530358
|
'amplify is not recognized as an internal or external command, operable program or batch file'.
A: To solve this, simply edit the PATH variable under system Environment Variables and add a new entry pointing to amplify:
C:\Users\{UserName}\AppData\Roaming\npm\amplify.cmd
If you have globally installed amplify/cli then you should find two files named amplify and amplify.cmd in the above mentioned npm directory.
A: Please share your platform. Are you developing on Linux, Windows (Powershell), or Linux on Windows (WSL/Ubuntu)?
Did you install the CLI globally?
Try this:
npm install -g @aws-amplify/cli
And see if that works. If the global install fails, you can try running this per an Amplify developer:
npm install -g @aws-amplify/cli --unsafe-perm=true
Edit: since you're on Windows, it's possible the CLI wasn't added to your $PATH variable. You can fix it by seeing this Github issue.
A: Under the same circumstances I ran all the suggested solutions on a Windows 10 machine (64-bit). None of them seemed to do the trick.
I got a more specific error:
..... cannot be loaded because running scripts is disabled on this
system .... + CategoryInfo : SecurityError: (:) [],
PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
The issue appears due to Windows PowerShell execution policies. Eventually, I managed to amend it by applying the following:
C:\Windows\System32>powershell Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
A: The above solutions didn't work for me; I had to run this instead of 'amplify init':
C:\Users{UserName}\AppData\Roaming\npm\amplify init
A: I had the same issue, and my problem was that I was trying to install it using
yarn global add @aws-amplify/cli
Apparently it doesn't work when installed with yarn; it has to be npm. It's funny because no errors are shown. There might be a fix for it; maybe someone can look into that.
A: If you are on the Windows platform, avoid using the global (-g) flag in your npm command. Install the Amplify CLI with the npm command below.
npm install @aws-amplify/cli
It worked for me.
A: Error:
amplify : The term 'amplify' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
*
*amplify init
*
*CategoryInfo : ObjectNotFound: (amplify:String) [], CommandNotFoundException
*FullyQualifiedErrorId : CommandNotFoundException
Try this for windows:
Step 1:
npm install -g @aws-amplify/cli --unsafe-perm=true
Step 2:
npm config get prefix
Step 3:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
you must run this code on PowerShell not a cmd.
A: I had the same issue
For Windows, try the below command to install Amplify CLI
$ curl -sL https://aws-amplify.github.io/amplify-cli/install-win -o
install.cmd && install.cmd
$ amplify configure
for more info on installation follow the link
https://docs.amplify.aws/cli/start/install/
| |
doc_23530359
|
A: You might want to have a look at ABAC within AWS IAM. You can set tags on users/roles and then set tags on the AWS resources being created by them and then control the access based on those tags.
We have been using the same approach in our firm. We have various software platforms and we set up IAM groups for each platform and add users accordingly. We setup roles for each group and apply policies (using SCP) so that people cannot spin up resources without tags. Then we use ABAC to restrict each team to have control on their own resources.
https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html
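The tag-matching pattern described above is typically expressed as an IAM policy condition comparing the principal's tag to the resource's tag. A hypothetical sketch follows; the tag key `team` and the EC2 action are illustrative assumptions, not values from the question:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowActionsOnOwnTeamResources",
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/team": "${aws:PrincipalTag/team}"
        }
      }
    }
  ]
}
```

With a policy shaped like this, a user tagged `team=platform-x` can only act on resources also tagged `team=platform-x`, which is the per-platform isolation the answer describes.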
| |
doc_23530360
|
class Contador extends AsyncTask<String, String, String>{
@Override
protected void onPreExecute() {
}
@Override
protected String doInBackground(String... params) {
// TODO Auto-generated method stub
try {
httpclientContador = new DefaultHttpClient();
String sContador = "http://webmilab.com/SEDESOL/index.php/transferencias/conteo/" + resIdEncuesta;
pContador = new HttpPost(sContador);
pContador.addHeader("content-type", "application/x-www-form-urlencoded");
JSONObject jContador = new JSONObject();
jContador.put("mensaje", "Empezar a contar :)");
StringEntity eContador = new StringEntity("json="+jContador.toString(), "UTF-8");
pContador.setEntity(eContador);
httpclientContador.execute(pContador);
} catch (IOException e) {
e.printStackTrace();
Toast.makeText(TransferirDatos.this, "Hubo un error al realizar la petición del conteo", Toast.LENGTH_LONG).show();
} catch (JSONException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return null;
}
protected void onPostExecute(String file_url) {
}
}
}
This is the AsyncTask that i want to finish.
class Contador extends AsyncTask<String, String, String>{
@Override
protected void onPreExecute() {
}
@Override
protected String doInBackground(String... params) {
// TODO Auto-generated method stub
//Some Code
return null;
}
protected void onPostExecute(String file_url) {
pContador.abort();
}
}
HttpPost pContador and HttpClient httpclientContador are variables defined in the main activity.
When I try to execute this code the app always crashes. I want to know how I can abort the first AsyncTask from the second one.
Here is the Log.
04-08 09:22:48.625: E/AndroidRuntime(10950): FATAL EXCEPTION: AsyncTask #3
04-08 09:22:48.625: E/AndroidRuntime(10950): java.lang.RuntimeException: An error occured while executing doInBackground()
04-08 09:22:48.625: E/AndroidRuntime(10950): at android.os.AsyncTask$3.done(AsyncTask.java:278)
04-08 09:22:48.625: E/AndroidRuntime(10950): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273)
04-08 09:22:48.625: E/AndroidRuntime(10950): at java.util.concurrent.FutureTask.setException(FutureTask.java:124)
04-08 09:22:48.625: E/AndroidRuntime(10950): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307)
04-08 09:22:48.625: E/AndroidRuntime(10950): at java.util.concurrent.FutureTask.run(FutureTask.java:137)
04-08 09:22:48.625: E/AndroidRuntime(10950): at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:208)
04-08 09:22:48.625: E/AndroidRuntime(10950): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
04-08 09:22:48.625: E/AndroidRuntime(10950): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
04-08 09:22:48.625: E/AndroidRuntime(10950): at java.lang.Thread.run(Thread.java:856)
04-08 09:22:48.625: E/AndroidRuntime(10950): Caused by: java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare()
04-08 09:22:48.625: E/AndroidRuntime(10950): at android.os.Handler.<init>(Handler.java:121)
04-08 09:22:48.625: E/AndroidRuntime(10950): at android.widget.Toast$TN.<init>(Toast.java:317)
04-08 09:22:48.625: E/AndroidRuntime(10950): at android.widget.Toast.<init>(Toast.java:91)
04-08 09:22:48.625: E/AndroidRuntime(10950): at android.widget.Toast.makeText(Toast.java:233)
04-08 09:22:48.625: E/AndroidRuntime(10950): at tsp.consulting.TransferirDatos$Contador.doInBackground(TransferirDatos.java:184)
04-08 09:22:48.625: E/AndroidRuntime(10950): at tsp.consulting.TransferirDatos$Contador.doInBackground(TransferirDatos.java:1)
04-08 09:22:48.625: E/AndroidRuntime(10950): at android.os.AsyncTask$2.call(AsyncTask.java:264)
04-08 09:22:48.625: E/AndroidRuntime(10950): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
A: Your issue is that you are trying to print a Toast inside the doInBackground method.
This method is executed in a background thread, and thus it cannot interact with the UI.
You can read more on that here.
You can use the onPostExecute(Result) method to notify the user after the task has ended, or the onProgressUpdate(Progress...) to update while the Task is running.
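The general fix for the "Can't create handler inside thread" crash is the pattern AsyncTask itself implements: produce results on the background thread, hand them back, and touch the UI only on the UI thread. A language-agnostic sketch of that hand-off using a queue (Python, purely illustrative; the message string is the one from the question's Toast):

```python
import threading
import queue

# Stands in for the UI thread's message queue (what onPostExecute delivers to).
ui_queue = queue.Queue()

def do_in_background():
    # Work happens off the UI thread; any user-facing message is produced here...
    message = "Hubo un error al realizar la peticion del conteo"
    # ...but instead of showing it directly (the equivalent of Toast.makeText,
    # which crashes off the UI thread), hand it back via the queue.
    ui_queue.put(message)

worker = threading.Thread(target=do_in_background)
worker.start()
worker.join()

# "onPostExecute": runs on the UI thread, which may now safely show the message.
msg = ui_queue.get()
```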
| |
doc_23530361
|
Uncaught TypeError: cannot read property 'stopVideo' of undefined
It is happening because it cannot find the video.stopVideo() function from the YouTube API. This only happens when I try to load the videos by the API because when I try it with manually input videos, it works fine.
$(document).ready(function() {
$.get(
"https://youtube.googleapis.com/youtube/v3/search", {
part: 'snippet',
channelId: 'WC8s9GnPV_Ol-G9YKvxs4r-w',
maxResults: 5,
order: 'date',
key: 'HIzaSyD8Fjhg8G-Q5xfQQUxyzJc_ikNSe4hK9SD'
}, function(data) {
var output;
$.each(data.items, function(i, item) {
console.log(item);
videoTitle = item.id.videoId;
output = '<iframe class="video" src="https://www.youtube.com/embed/' + videoTitle + '?ecver=2&enablejsapi=1"></iframe>';
let currentSlot = '#slide' + (i + 1);
$(currentSlot).append(output);
})
}
);
});
$(document).ready(function() {
var pos = 0,
slides = $('.slide'),
numOfSlides = slides.length;
function nextSlide() {
slides[pos].video.stopVideo();
console.log(slides[pos]);
slides.eq(pos).animate({
left: '-100%'
}, 500);
pos = (pos >= numOfSlides - 1 ? 0 : ++pos);
slides.eq(pos).css({
left: '100%'
}).animate({
left: 0
}, 500);
}
function previousSlide() {
slides[pos].video.stopVideo();
console.log(slides[pos]);
slides.eq(pos).animate({
left: '100%'
}, 500);
pos = (pos == 0 ? numOfSlides - 1 : --pos);
slides.eq(pos).css({
left: '-100%'
}).animate({
left: 0
}, 500);
}
$('.left').click(previousSlide);
$('.right').click(nextSlide);
})
function onYouTubeIframeAPIReady() {
$('.slide').each(function(index, slide) {
// Get the `.video` element inside each `.slide`
var iframe = $(slide).find('.video')[0];
slide.video = new YT.Player(iframe);
})
}
* {
-webkit-box-sizing: border-box;
-moz-box-sizing: border-box;
-ms-box-sizing: border-box;
-o-box-sizing: border-box;
box-sizing: border-box;
margin: 0;
padding: 0;
}
html {
font-size: 16px;
background-color: #000;
}
body {
font-family: Arial, sans-serif;
color: #fff;
max-width: 600px;
margin: 10px auto;
}
/* Additionnal styles */
.video-slider {
width: 100%;
height: 20em;
background: #444;
position: relative;
overflow: hidden;
}
.slide {
position: absolute;
top: 0;
left: 100%;
height: 100%;
width: 100%;
text-align: center;
overflow: hidden;
}
.slide:first-child {
left: 0;
}
.video {
height: 100%;
width: 100%;
border: 0;
}
.slide-arrow {
position: absolute;
color: grey;
top: 0;
left: 0;
height: 45%;
width: 15%;
cursor: pointer;
opacity: .2;
}
.slide-arrow:hover {
opacity: 1;
}
.slide-arrow:after {
content: "\003c";
text-align: center;
display: block;
height: 10%;
width: 100%;
position: absolute;
bottom: 0;
left: 0;
font-size: 3em;
}
.slide-arrow.right:after {
content: "\003e";
}
.slide-arrow.right {
left: auto;
right: 0;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<div class="video-slider">
<div class="slide" id="slide1"></div>
<div class="slide" id="slide2"></div>
<div class="slide" id="slide3"></div>
<div class="slide" id="slide4"></div>
<div class="slide" id="slide5"></div>
<div class="slide-arrow left"></div>
<div class="slide-arrow right"></div>
</div>
| |
doc_23530362
|
This is my HTML:
<div class="form-group row">
<div class="col-4">
<select class="form-control" id="itemslist" name="itemName" asp-items="Model.Itemslist">
<option value="SSS"></option>
</select>
<input class="form-control" type="hidden" name="itemName" asp-for="Expiree.ItemName" />
<br />
<br />
@if (ViewData["Message"] != null)
{
<script type="text/javascript">
window.onload = function ()
{
alert("@ViewData["Message"]");
};
</script>
}
</div>
This is my script section:
@section scripts
{
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/select2@4.0.13/dist/css/select2.min.css" />
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script type="text/javascript" src="https://cdn.jsdelivr.net/npm/select2@4.0.13/dist/js/select2.min.js"></script>
<script type="text/javascript">
$(function () {
$("#itemslist").select2();
});
$("body").on("change", "#itemslist", function () {
$("input[name=itemName]").val($(this).find("option:selected").text());
});
</script>
}
This is my page model:
namespace SparkAuto.Pages.Expire
{
public class CreateeModel : PageModel
{
private readonly ApplicationDbContext _db;
public CreateeModel(ApplicationDbContext db)
{
_db = db;
}
public SelectList Itemslist { get; set; }
[BindProperty]
public Expiree Expiree { get; set; }
public IActionResult OnGet()
{
this.Itemslist = new SelectList(_db.Items, "Id", "Name");
return Page();
}
public async Task<IActionResult> OnPostAsync()
{
_db.Expiree.Add(Expiree);
await _db.SaveChangesAsync();
return RedirectToPage("Index");
}
}
}
Finally, this is my error: I get NULL in itemName in the database.
This is a screenshot from the searchable select2 dropdown list I use.
I search among 3K items in this dropdown list and need to push the item name after selection.
A: You can use the select to bind the Id of Expiree, and bind ItemName with a hidden input:
Expire:
public class Expiree {
public int ItemId { get; set; }
public string ItemName { get; set; }
}
html:
<select class="form-control" id="itemslist" asp-for="Expiree.ItemId" asp-items="Model.Itemslist">
<option value="SSS"></option>
</select>
<input class="form-control" type="hidden" asp-for="Expiree.ItemName" />
js:
@section scripts
{
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/select2@4.0.13/dist/css/select2.min.css" />
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script type="text/javascript" src="https://cdn.jsdelivr.net/npm/select2@4.0.13/dist/js/select2.min.js"></script>
<script type="text/javascript">
$(function () {
$("#itemslist").select2();
});
$("body").on("change", "#itemslist", function () {
$("#Expiree_ItemName").val($(this).find("option:selected").text());
});
</script>
}
| |
doc_23530363
|
I have three tables I want to use in the table:
users
user_events
Events
Current SQL code:
SELECT users.username, users.user_firstname, users.user_lastname,
users.user_role, user_events.Event_ID
FROM users
LEFT JOIN user_events ON users.id = user_events.id
This returns all users, their details, and the ID of the event they have signed up to, but the user's row is duplicated if they've signed up to more than one event.
If I update my query to the below how would I join on the Events table to get the event name?
SELECT users.username, users.user_firstname, users.user_lastname,
users.user_role,
GROUP_CONCAT(Events.Event_Name) AS Event_Names
FROM users
LEFT JOIN user_events ON users.id = user_events.id
My output from the first query is as follows:
| username | user_firstname | user_lastname | user_role | event_id.
| M@gmail.com | jo | mccann | employee | 8
| M@gmail.com | jo | mccann | employee | 15
Expected output:
| username | user_firstname | user_lastname | user_role | Event_Name
| M@gmail.com | jo | mccann | employee | baking,run
A: Just add one more outer join & do aggregation :
SELECT users.username, users.user_firstname, users.user_lastname,
users.user_role,
GROUP_CONCAT(Events.Event_Name) AS Event_Names,
COUNT(Events.Event_Name) AS Event_count
FROM users LEFT JOIN
user_events
ON users.id = user_events.id LEFT JOIN
Events
ON Events.id = user_events.event_id
GROUP BY users.username, users.user_firstname, users.user_lastname, users.user_role;
Use DISTINCT inside COUNT() in case of any duplicate events.
A: SELECT users.username, users.user_firstname, users.user_lastname,
users.user_role,
GROUP_CONCAT(ev.Event_Name) AS Event_Names
FROM users
LEFT JOIN user_events ON users.id = user_events.id
JOIN events ev ON ev.event_id = user_events.event_id
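The structure of the first answer's query can be checked against an in-memory SQLite database, whose GROUP_CONCAT behaves much like MySQL's. The schema below is an assumption reconstructed from the question (including its unusual `users.id = user_events.id` join):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE users (id INTEGER, username TEXT, user_firstname TEXT,
                    user_lastname TEXT, user_role TEXT);
CREATE TABLE events (id INTEGER, Event_Name TEXT);
CREATE TABLE user_events (id INTEGER, event_id INTEGER);
INSERT INTO users VALUES (1, 'M@gmail.com', 'jo', 'mccann', 'employee');
INSERT INTO events VALUES (8, 'baking'), (15, 'run');
INSERT INTO user_events VALUES (1, 8), (1, 15);
""")

rows = cur.execute("""
SELECT users.username, GROUP_CONCAT(events.Event_Name) AS Event_Names
FROM users
LEFT JOIN user_events ON users.id = user_events.id
LEFT JOIN events ON events.id = user_events.event_id
GROUP BY users.username
""").fetchall()
# One row per user, with the event names collapsed into one column.
```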
| |
doc_23530364
|
Once a link is clicked, the parameters that are passed, whatever they may be, are truncated (due to security reasons), and I need to obtain these parameters somehow after a user has logged in. I am trying to do this using a session.
I have four JSP pages say :
*
*routing.jsp
*intermediateTarget.jsp
*login.jsp
*target.jsp
and page with a link and some parameters being passed into it :
<a href="router.jsp?p1=s&p2=s123">Click here</a>
Once I click this it takes me to router.jsp where I create a session and obtain the passed parameters and also set them into the session:
<% String param1 = request.getParameter("p1");
String param2 = request.getParameter("p2");
HttpSession session1 = request.getSession(true);
session1.setAttribute("p1", param1);
session1.setAttribute("p2", param2);
response.sendRedirect("intermediateTarget.jsp");
%>
router.jsp executes on load, so the browser is basically redirected to intermediateTarget.jsp.
Here, I am obtaining the parameters using getAttribute :
HttpSession session1 = request.getSession();
String param1 = (String) request.getAttribute("p1");
String param2 = (String) request.getAttribute("p2");
String param = request.getParameter("param");
String lgn = request.getParameter("lgn");
if (null != param) {
response.sendRedirect("empLogin.jsp");
} else if (null != lgn) {
response.sendRedirect("target.jsp");
}
I have two hidden parameters being passed from the login page and the routing page, lgn and param respectively. After checking which page the request came from, I need to redirect it to target.jsp or login.jsp.
Basically, after logging in I need to be just able to obtain the same parameters from the session and display it on the target page (like a confirmation that I am in fact able to obtain the parameters that I passed in the link.)
Again, I don't need servlets because this is something that we need to include from our end using only JSP's.
Also, I have read on trying to extract the part of the URL after the ? and even went into the concept of extracting deeplinks but I'm not getting anywhere.
Sorry if the post seems too long. Just wanted to give a clear idea of my situation and also any other ideas on this would be greatly appreciated.
Thanks a lot in advance !
A: First of all I don't understand why you don't want servlets since it's much clearer and more beautiful code. They are also perfect for this situation.
This has nothing to do with the question but for making your life easier.
If you are using scriptlets, which are mostly just disturbing to read, I advise you to use the scriptlet tag instead for the readability of the code, e.g.
<jsp:scriptlet>
String param1 = request.getParameter("p1");
String param2 = request.getParameter("p2");
HttpSession session1 = request.getSession(true);
session1.setAttribute("p1", param1);
session1.setAttribute("p2", param2);
response.sendRedirect("intermediateTarget.jsp");
</jsp:scriptlet>
Now to what I believe is your problem since it wasn't very clear what you wanted.
You are actually storing your variables in the session (which you want to do), but you are accessing them from the request?
For session scoped variables you should call:
String param2 = (String)request.getSession(false).getAttribute("p2");
// or session1.getAttribute("p2");
// if you add false where you create the variable.
You want to use the false parameter in getSession because the user must already have a session if he/she is logged in, and you want to redirect them with a filter instead of creating a new session, I assume.
I would prefer to write it like this though
<c:choose>
<c:when test="${not empty param}">
<c:redirect url="empLogin.jsp"/>
</c:when>
<c:otherwise>
<c:redirect url="target.jsp"/>
</c:otherwise>
</c:choose>
Hope it answered your question, feel free to comment if I misunderstood your question :)
| |
doc_23530365
|
My data tree
|
|--Code
| |--Oscilador.cpp
| |--makefile
|
|--Resultados
| |--(Where I want the txt to be save in)
My makefile code is this:
Oscilador.x:Oscilador.cpp
g++-10 -o0 Oscilador.cpp -o Oscilador.x
Resultados.txt:Oscilador.x
./Oscilador.x > ./Resultados/
rm Oscilador.x
When I ran the makefile it said:
:Codigo Felipe$ make Resultados.txt
./Oscilador.x > Resultados/Resultados.txt
/bin/sh: Resultados/Resultados.txt: No such file or directory
make: *** [Resultados.txt] Error 1
I wonder how can I fix it.
A: Please see the below code which will fix this issue.
Oscilador.x:Oscilador.cpp
g++ Oscilador.cpp -o Oscilador.x
Resultados.txt:Oscilador.x
# Output to Resultados.txt file under Resultados directory which is present one folder behind
./Oscilador.x > ../Resultados/Resultados.txt
rm Oscilador.x
The above code runs successfully if the directory Resultados has already been created.
If the directory does not exist and you want to create it before generating Resultados.txt, then use the code below:
Oscilador.x:Oscilador.cpp
g++ Oscilador.cpp -o Oscilador.x
Resultados.txt:Oscilador.x
# create directory Resultados one folder behind if it does not exist
mkdir -p ../Resultados
./Oscilador.x > ../Resultados/Resultados.txt
rm Oscilador.x
| |
doc_23530366
|
https://docs.aws.amazon.com/devicefarm/latest/developerguide/test-types-ios-xctest-ui.html and
https://www.mobdesignapps.fr/blog/2016/9/17/running-your-test-on-aws-device-farm?utm_source=stackoverflow&utm_medium=answer&utm_term=37184633
but still no luck.
| |
doc_23530367
|
Thanks
| |
doc_23530368
| ||
doc_23530369
|
I think this is how one would do it in C++ ->
for(i=0; i<10; i++){
int vehicle[1];
}
A: So you want an array with 10 strings, each one having a different name, right? You can do something like this:
val vehicles = Array(10) { "vehicle$it" }
| |
doc_23530370
|
There will be 4+ servers. For ease I'll call them Server A, Server B, Server C, and Server D.
Server A is the command and control server. It will contain the scripts (PHP) that will get executed by Cron Tab jobs. There will also be an application on Server A that will allow the user to select scripts to run and specify which server they will run on. So that app will create the Cron tab jobs on the specified server.
So far so good - using the SSH2 lib I can create the cron entries on the target servers.
I know that I can use symlinks in a Crontab entry.
Is it possible to create a symlink that points from Server B, Server C or Server D to Server A.
So basically I want Server A to issue/maintain the Cron tab on the other servers, to store all the scripts but the when the other server jobs run, they run the scripts stored on Server A.
Is this possible?
A: A symlink only tells the OS to substitute another path and try accessing the file again. The access still occurs only on the local file system.
You need to mount server A's file system on the other servers via some means such as sshfs, NFS, or Samba.
Instead, why not just copy the script down to the target server before running it?
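The answer's point, that a symlink is only a local path substitution and never a cross-server reference, can be demonstrated directly:

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "script.php")
with open(target, "w") as f:
    f.write("<?php echo 'hi'; ?>")

link = os.path.join(d, "link.php")
os.symlink(target, link)

# Reading through the link just reads the local target file; there is no
# mechanism for a symlink to reach a file stored on another server.
with open(link) as f:
    content = f.read()
resolved = os.path.realpath(link)
```

This is why the script directory on Server A has to be made visible as a *local* path on B, C, and D (sshfs/NFS/Samba mount) before any symlink to it can work, or why copying the script over is simpler.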
| |
doc_23530371
|
I need to make sure the expressions are balanced.
I tried everything, but I don't even get errors.
int main() {
ifstream infile;
infile.open("input.txt");
string exp;
cout << "Enter an expression ";
while (getline(infile, exp)) {
cout << exp << ": ";
if (matcher(exp))
cout << "Matched ok" << endl;
else
cout << "Match error" << endl;
cout << "Enter an expression: ";
}
cout << "--- Done ---" << endl;
return 0;
}
int matcher(string expression) {
stack<char> s;
for (int i = 0; i < expression.length(); i++) {
if (isOpener(expression[i]))
s.push(expression[i]);
else if (isCloser(expression[i])) {
if (s.empty()) return 1;
char opener = s.top();
s.pop();
if (!matches(opener, expression[i])) return 1;
}
}
if (!s.empty()) return 1;
return 0;
}
A: One obvious problem -- your matcher function appears to return 1 for failure (does not match) and 0 for success, but your main prints "Matched ok" if matcher returns non-zero...
A: I will assume isOpener() and matches() work as intended, since you aren't showing them.
If so, the problem is that you're misinterpreting int -> bool conversions. A zero converts to false and a non-zero integer converts to true. You'd be better off declaring matcher() to return bool and return true or false from it directly. You'll want to return false there where you now return 1 and true there where you now return 0.
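As both answers note, the root cause is the int-vs-bool convention: matcher returns 1 on a mismatch while main treats non-zero as success. A sketch of the same stack-based matcher with explicit booleans (in Python, for brevity; your isOpener/isCloser/matches helpers are folded into the bracket table):

```python
def matcher(expression):
    """Return True when every bracket in `expression` is balanced."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in expression:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            # A closer must match the most recent unmatched opener.
            if not stack or stack.pop() != pairs[ch]:
                return False
    # Leftover openers also mean the expression is unbalanced.
    return not stack
```

The caller can then write `if matcher(exp): print("Matched ok")` with no conversion surprises; the same change in the C++ version is returning `bool` with `true` for a match.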
| |
doc_23530372
|
I want to load a log4j.xml file from a different location on the file path (not in my classpath). Is that possible in a web application using say, JBoss or Tomcat?
A: You can use PropertyConfigurator. Call configure() with the file you want.
A: Use -Dlog4j.configuration=path/to/your/file.xml startup parameter to specify where your configuration file is. It's the recommended practice anyway:
The preferred way to specify the default initialization file is
through the log4j.configuration system property.
(log4j manual)
| |
doc_23530373
|
Use NewGuid in .NET or newGuid in JavaScript to safely generate random GUIDs.
I assume they mean the static method Guid.NewGuid(), yet when I use it in my code, like this:
[FunctionName("Orchestration")]
public static async Task Orchestration([OrchestrationTrigger] IDurableOrchestrationContext context, ILogger logger) {
var guid = Guid.NewGuid();
I get a compiler warning:
Warning DF0102: 'Guid.NewGuid' violates the orchestrator deterministic code constraint. (DF0102)
and when I run the function, I see that it produces a different GUID when replaying, so it's definitely not deterministic. I know I can write an activity function to generate a GUID, but this seems a bit overkill if there's dedicated support for this. This GitHub comment mentions it has been released in v1.7.0 (I'm using v2.3.1).
A: While writing this question, I realized what my problem was. It's this assumption:
I assume they mean the static method Guid.NewGuid()
It turns out the IDurableOrchestrationContext interface has a NewGuid() method as well, which
Creates a new GUID that is safe for replay within an orchestration or operation.
[FunctionName("Orchestration")]
public static async Task Orchestration([OrchestrationTrigger] IDurableOrchestrationContext context, ILogger logger) {
var guid = context.NewGuid();
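One way to see why context.NewGuid() is replay-safe: conceptually, the orchestration records each generated GUID in its history and returns the recorded value on replay, so the code observes the same GUID every time. A toy sketch of that idea (not the real Durable Functions SDK):

```python
import uuid

class OrchestrationContext:
    """Toy model of replay-safe GUID generation (not the real Durable SDK)."""
    def __init__(self, history=None):
        self.history = list(history) if history is not None else []
        self._i = 0

    def new_guid(self):
        if self._i < len(self.history):
            g = self.history[self._i]      # replaying: return the recorded value
        else:
            g = uuid.uuid4()               # first execution: generate...
            self.history.append(g)         # ...and record it in the history
        self._i += 1
        return g

first_run = OrchestrationContext()
g1 = first_run.new_guid()

# A replay re-executes the orchestrator code with the recorded history.
replay = OrchestrationContext(history=first_run.history)
g2 = replay.new_guid()
```

A bare uuid.uuid4() (like Guid.NewGuid()) would produce a fresh value on every replay, which is exactly the non-determinism the DF0102 warning flags.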
| |
doc_23530374
|
my guess for the feature map: ((125-5)/3)+1 = 41, so 41 * 41 * 10 (no. of filters). But what is the difference between an RGB image and a greyscale image? For an RGB image should it be 41 * 41 * 30 (no. of filters * no. of channels of the input image)?
and for the total number of parameters: 5 * 5 * 3 * 10 = 750?
A:
Note: I have a well-received post on the intuition of how CNNs work on Data Science Stack Exchange. Do check it out as well.
You are right about the shape of the feature maps, with the given stride.
The total number of parameters in CNN is given by -
Params = (n*m*l+1)*k
where, l = number of input feature maps / image channels,
k = number of output feature maps / # of filters,
n*m = filter dimensions
Important! - The +1 in the formula is because each of the output feature
maps has a bias term added, which is also a trainable parameter. So, don't
forget to add that!
So for your case, the number of trainable params is: ((5*5*3+1)*10) = 760
And, for a greyscale it is: ((5*5*1+1)*10) = 260
A very good visualization that I find intuitive to show how filters work to create feature maps is this.
The input feature maps are the same as the number of channels of the input image if it's the first CNN layer. Since CNNs are usually stacked, the output channels of previous CNNs are called input feature maps to the current CNN layer.
A: No. of feature maps = no. of filters, and the size of each feature map would be 41 * 41 (you calculated it correctly, if the padding is zero). So, in the case above you would have 10 feature maps of size 41 * 41, independent of RGB or greyscale, if you have 10 filters.
For RGB vs greyscale, think of the channels as the feature maps of the input layer; a filter gets applied to all input feature maps at once. If you have an input RGB image with a 5 * 5 filter size, it is actually a filter of size 5 * 5 * 3 (no. of channels). So, independent of RGB or greyscale, if you apply 10 filters of 5 * 5 on an image of size 125 * 125 with stride 3, you will always get 10 feature maps of 41 * 41.
Whenever you define a filter, it is defined as x * y, but the z is always equal to the number of channels on which you apply the convolution, for 2D convolutions. That way the total number of parameters for each filter is x * y * n_channels + 1 (the additional 1 is for the constant bias term).
In your case of 10 filters, for an RGB image each filter is of size 5 * 5 * 3, so total parameters for RGB = 10 * (5 * 5 * 3 + 1). For greyscale each filter is 5 * 5 * 1, so total parameters for greyscale = 10 * (5 * 5 * 1 + 1).
| |
doc_23530375
|
I have tried using a undefined variable and using mongodb.
A: The basic solution for this is to create the session and check whether the user is available.
app.post('/login', function (req, res) {
var post = req.body;
if (post.user === 'user_name' && post.password === 'user_password') {
req.session.user_id = 'user_id';
res.redirect('/user');
} else {
res.send('Bad user/pass');
}
});
You will get the user ID in /user using req.session.user_id.
| |
doc_23530376
|
I'm building a product for a private school, and I found that the patterns are very close to a CMS, except that I have my own models (Course, Subject, etc.)...
There will be some learning curve to get the best result out of a CMS, of course.
What do you recommend?
A: I've not used mezzanine, but doing something like this would certainly be possible on top of django_cms.
It's quite straightforward to write custom plugins for a CMS, so you could build new widgets (assessments, polls etc) which can be dropped into cms-based pages. The menus can also be extended, with new menus built based on objects in your models (e.g. courses, modules)... one gotcha with this is that the menus get cached, so the app either needs to be restarted to rebuild menus or you would have to add a hook to rebuild them manually. There are pretty good docs on this here:
http://docs.django-cms.org/en/2.1.3/extending_cms/custom_plugins.html
and on building custom apps, which can be hooked up to CMS urls:
http://docs.django-cms.org/en/2.1.3/extending_cms/app_integration.html
Overall, I quite like django_cms, although the breakage with successive versions (and also versions of MPTT on which it depends) has been quite a pain. It looks like they are trying to clean up this sort of thing in forthcoming releases though, and contrib.staticfiles is now supported which is nice.
A: Mezzanine has its own implementation of a page tree rather than using mptt, and it's quite solid. It's also designed for you to add your own Django models to the tree. From what you've said (which granted isn't much) it sounds quite suitable. Have a read over the relevant docs section here: http://mezzanine.jupo.org/docs/
| |
doc_23530377
|
import urllib
import json
from math import log

def hits(word1, word2=""):
    query = "http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=%s"
    if word2 == "":
        results = urllib.urlopen(query % word1)
    else:
        # parenthesised: % binds tighter than +, so without the parens the
        # extra terms were appended after the already-formed URL
        results = urllib.urlopen(query % (word1 + " " + "AROUND(10)" + " " + word2))
    json_res = json.loads(results.read())
    google_hits = int(json_res['responseData']['cursor']['estimatedResultCount'])
    return google_hits

def so(phrase):
    num = hits(phrase, "excellent")
    #print num
    den = hits(phrase, "poor")
    #print den
    ratio = float(num) / den  # float() avoids Python 2 integer division
    #print ratio
    sop = log(ratio)
    return sop

print so("ugly product")
I need this code to calculate the Point wise Mutual Information which can be used to classify reviews as positive or negative. Basically I am using the technique specified by Turney(2002): http://acl.ldc.upenn.edu/P/P02/P02-1053.pdf as an example for an unsupervised classification method for sentiment analysis.
As explained in the paper, the semantic orientation of a phrase is negative if the phrase is more strongly associated with the word "poor" and positive if it is more strongly associated with the word "excellent".
The code above calculates the SO of a phrase. I use Google to get the number of hits and compute the SO (since AltaVista is no longer available).
The values computed are very erratic. They don't stick to a particular pattern.
For example SO("ugly product") turns out be 2.85462098541 while SO("beautiful product") is 1.71395061117. While the former is expected to be negative and the other positive.
Is there something wrong with the code? Is there an easier way to calculate SO of a phrase (using PMI) with any Python library,say NLTK? I tried NLTK but was not able to find any explicit method which computes the PMI.
A: The Python library DISSECT contains a few methods to compute Pointwise Mutual Information on co-occurrence matrices.
Example:
#ex03.py
#-------
from composes.utils import io_utils
from composes.transformation.scaling.ppmi_weighting import PpmiWeighting
#create a space from co-occurrence counts in sparse format
my_space = io_utils.load("./data/out/ex01.pkl")
#print the co-occurrence matrix of the space
print my_space.cooccurrence_matrix
#apply ppmi weighting
my_space = my_space.apply(PpmiWeighting())
#print the co-occurrence matrix of the transformed space
print my_space.cooccurrence_matrix
Code on GitHub for the PMI methods.
Reference: Georgiana Dinu, Nghia The Pham, and Marco Baroni.
2013. DISSECT: DIStributional SEmantics Composition
Toolkit. In Proceedings of the System Demonstrations
of ACL 2013, Sofia, Bulgaria
Related: Calculating pointwise mutual information between two strings
A: To answer why your results are erratic, it is important to know that Google Search is not a dependable source for word frequencies. Frequencies as returned by the engine are mere estimations that are particularly inaccurate and possibly contradictory when querying for multiple words. This is not to bash Google, but it is not a utility for frequency counts. Therefore, your implementation may be fine, but the results on that basis can still be nonsensical.
For a more in-depth discussion of the matter, read "Googleology is bad science" by Adam Kilgarriff.
A: Generally, calculating PMI is tricky since the formula will change depending on the size of the ngram that you want to take into consideration:
Mathematically, for bigrams, you can simply consider:
log(p(a,b) / ( p(a) * p(b) ))
Programmatically, let's say you have calculated all the frequencies of the unigrams and bigrams in your corpus, you do this:
def pmi(word1, word2, unigram_freq, bigram_freq):
prob_word1 = unigram_freq[word1] / float(sum(unigram_freq.values()))
prob_word2 = unigram_freq[word2] / float(sum(unigram_freq.values()))
prob_word1_word2 = bigram_freq[" ".join([word1, word2])] / float(sum(bigram_freq.values()))
return math.log(prob_word1_word2/float(prob_word1*prob_word2),2)
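As a toy sanity check of that function (the counts below are made up, and the function is restated so the snippet is self-contained): in a 4-token corpus where "foo" and "bar" each appear twice and always together, p(foo) = p(bar) = 0.5 and p(foo bar) = 1, so PMI(foo, bar) = log2(1 / 0.25) = 2.

```python
import math

def pmi(word1, word2, unigram_freq, bigram_freq):
    prob_word1 = unigram_freq[word1] / float(sum(unigram_freq.values()))
    prob_word2 = unigram_freq[word2] / float(sum(unigram_freq.values()))
    prob_word1_word2 = bigram_freq[" ".join([word1, word2])] / float(sum(bigram_freq.values()))
    return math.log(prob_word1_word2 / float(prob_word1 * prob_word2), 2)

unigrams = {"foo": 2, "bar": 2}  # toy counts, 4 tokens total
bigrams = {"foo bar": 2}         # "foo" is always followed by "bar"
print(pmi("foo", "bar", unigrams, bigrams))  # ~ 2.0
```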
This is a code snippet from an MWE library but it's in its pre-development stage (https://github.com/alvations/Terminator/blob/master/mwe.py). But do note that it's for parallel MWE extraction, so here's how you can "hack" it to extract monolingual MWE:
$ wget https://dl.dropboxusercontent.com/u/45771499/mwe.py
$ printf "This is a foo bar sentence .\nI need multi-word expression from this text file.\nThe text file is messed up , I know you foo bar multi-word expression thingy .\n More foo bar is needed , so that the text file is populated with some sort of foo bar bigrams to extract the multi-word expression ." > src.txt
$ printf "" > trg.txt
$ python
>>> import codecs
>>> from mwe import load_ngramfreq, extract_mwe
>>> # Calculates the unigrams and bigrams counts.
>>> # More superfluously, "Training a bigram 'language model'."
>>> unigram, bigram, _ , _ = load_ngramfreq('src.txt','trg.txt')
>>> sent = "This is another foo bar sentence not in the training corpus ."
>>> for threshold in range(-2, 4):
... print threshold, [mwe for mwe in extract_mwe(sent.strip().lower(), unigram, bigram, threshold)]
[out]:
-2 ['this is', 'is another', 'another foo', 'foo bar', 'bar sentence', 'sentence not', 'not in', 'in the', 'the training', 'training corpus', 'corpus .']
-1 ['this is', 'is another', 'another foo', 'foo bar', 'bar sentence', 'sentence not', 'not in', 'in the', 'the training', 'training corpus', 'corpus .']
0 ['this is', 'foo bar', 'bar sentence']
1 ['this is', 'foo bar', 'bar sentence']
2 ['this is', 'foo bar', 'bar sentence']
3 ['foo bar', 'bar sentence']
4 []
For further details, I find this thesis a quick and easy introduction to MWE extraction: "Extending the Log Likelihood Measure to Improve Collocation Identification", see http://goo.gl/5ebTJJ
| |
doc_23530378
|
#include <Rcpp.h>
using namespace Rcpp;
int square(int x)
{
return x*x;
}
RCPP_MODULE(mod_bar) {
    function("square", &square);
}
I am trying to use the square function by using R after the my library is loaded:
library(myLib)
require(Rcpp)
Module(mod_bar)
But I get the following error message:
Uninitialized module named "mod_bar" from package ".GlobalEnv"
A: Take an existing package with Rcpp Modules and compare.
Maybe you just need a loadModules("mod_bar"), maybe you need something else. We can't tell from here.
Every full regression test for Rcpp includes building and running the embedded testRcppModule package containing a module. I would start by comparing against that one.
A: I notice that you are missing // [[Rcpp::export]] before declaration of your function.
| |
doc_23530379
|
Right now I am using data mappers in order to move data between the models (domain objects) and a MySQL database. Each mapper receives a MySQL adapter as dependency. The injected adapter receives a PDO instance (a database connection) as dependency and runs sql queries on the database.
I also use a dependency injection container (Auryn).
I'd like to be able to simultaneously retrieve data from storages of different types (MySQL database, PostgreSQL database, XML feeds, etc).
Let's say, I want to retrieve User data from a PostgreSQL database (by using PDO data-access abstraction), to change it, and to save it into a MySQL database (by using mysqli data-access abstraction) on another server. So, there will be different data-access calls for the both database types.
My question is:
Should I create a different mapper for each storage type, like
UserMapperPgsql(PgsqlAdapter $adapter)
UserMapperMySql(MySqlAdapter $adapter)
, or should I create only one mapper with several adapters (one for each storage type) as dependencies, like below?
UserMapper(PgsqlAdapter $adapter1, MySqlAdapter $adapter2, ...)
Thank you all for your suggestions!
A: What an odd project you have there.
Anyway. I would go with two separate mappers for separate storage mediums. Because trying to juggle those adapters inside a mapper might end up quite complicated.
That said, depending on how complicated the persistence logic actually ends up, you might benefit from looking into repositories as an approach to streamline the API that gets exposed to where your "application logic" actually lives.
| |
doc_23530380
|
private void jButton2ActionPerformed(java.awt.event.ActionEvent evt) {
try {
JFileChooser ch = new JFileChooser(FileSystemView.getFileSystemView().getHomeDirectory());
int c = ch.showOpenDialog(this);
ch.setMultiSelectionEnabled(true);
ch.setFileSelectionMode(JFileChooser.FILES_AND_DIRECTORIES);
ch.setMultiSelectionEnabled(true);
if (c == JFileChooser.APPROVE_OPTION) {
File[] f = ch.getSelectedFiles();
FileInputStream in = new FileInputStream();
/// the error start from in here
byte b[] = new byte[in.available()];
in.read(b);
Data data = new Data();
lblNewLabel.setText(ch.getSelectedFile().getAbsolutePath());
data.setName(lblNewLabel.getText().trim());
data.setFile(b);
out.writeObject(data);
out.flush();
textArea.append("send 1 file ..\n");
}
} catch (Exception e) {
JOptionPane.showMessageDialog(this, e, "Error",
JOptionPane.ERROR_MESSAGE);
}
}
Can anyone fix it?
A: Looks like you are using .getSelectedFile() method, although you have selected multiple files.
You have to use .getSelectedFiles() method like in line 9 of your code sample and iterate through the File[].
A: FileInputStream in = new FileInputStream();
/// the error start from in here
byte b[] = new byte[in.available()];
in.read(b);
The instantiation of FileInputStream in = new FileInputStream() is wrong. As nvplus said you have to select ONE file and instantiate it as following
File f = ch.getSelectedFile();
FileInputStream in = new FileInputStream(f); // <---
| |
doc_23530381
|
.html
<h1> ASSET TABLE </h1>
<div class="table">
<div class = "overflow-auto">
{% render_table table_assets %}
</div>
</div>
<h1>TASK TABLE </h1>
<div class="table">
{% render_table table_tasks %}
</div>
.css
.overflow-auto {
height: 150px;
overflow-y: scroll;
overflow: auto;
}
it seems like that
A: You have to set a width for the parent div.
A block-level element always starts on a new line and takes up the full width available (stretches out to the left and right as far as it can).
Block-level elements
#container {
overflow-y: scroll;
max-width: 500px;
max-height: 200px;
}
table {
width: 100%;
}
<div id='container'>
<table>
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<td>7</td>
<td>8</td>
<td>9</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<td>7</td>
<td>8</td>
<td>9</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<td>7</td>
<td>8</td>
<td>9</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<td>7</td>
<td>8</td>
<td>9</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<td>7</td>
<td>8</td>
<td>9</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<td>7</td>
<td>8</td>
<td>9</td>
</tr>
</tbody>
</table>
</div>
| |
doc_23530382
|
import numpy as np
from scipy.integrate import solve_bvp

# Define mesh and solution array
x = np.linspace(-0.5, 0.5, 50)
y = np.zeros((2, x.size))
y2 = np.zeros((4, x.size))
y2[0] = 2.5*x + 1
y2[1] = 3*x
def fun1(x, y):
# Solve for the Magnetic Field
B, dB = y;
d2B = (alpha/(C_k**2*sigma*zeta))*B -U_0*Q*(1/(zeta*C_k))*(1/(np.cosh(Q*x))**2 - 1/(np.cosh(Q/2))**2)
return dB, d2B
def bc1(ya, yb):
#Define the boundary of the Magnetic Field
return ya[0], yb[0]
def func2(x, y2):
# Call the Magnetic Solver
sol = solve_bvp(fun1, bc1, x, y)
B = sol.y[0]
dB = sol.y[1]
U = -C_k*zeta*sol.y[1]
dU = -C_k*zeta*sol.yp[1]
# define second array
T, dT, M, dM = y2
#set out the equations
d2T = (1/gamma - 1)*(sigma*dU**2 + zeta*alpha*dB**2)#
d2M = -(dM/T)*dT + (dM/T)*theta*(m+1) - (alpha/T)*B*dB
return dT, d2T, dM, d2M
def bc2(ya, yb):
return ya[0] - 1, yb[0] - 4, ya[2], yb[2] - 1
tempdensity = solve_bvp(func2, bc2, x, y2)
A: After the definition of bc1 change to
sol1 = solve_bvp(fun1, bc1, x, y)
print(sol1.message)
def func2(x, y2):
# Call the Magnetic Solver
y = sol1.sol(x)
yp = fun1(x,y)
B = y[0]
dB = y[1]
U = -C_k*zeta*y[1]
dU = -C_k*zeta*yp[1]
Using the previous constants I twice get "The algorithm converged to the desired accuracy."
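For readers unfamiliar with the calling convention used above, here is a minimal self-contained solve_bvp example on a toy problem (not the asker's equations): y'' = -y with y(0) = 0, y(pi/2) = 1, whose exact solution is sin(x).

```python
import numpy as np
from scipy.integrate import solve_bvp

def fun(x, y):
    # y[0] = y, y[1] = y'; first-order system for y'' = -y
    return np.vstack((y[1], -y[0]))

def bc(ya, yb):
    # Residuals of the boundary conditions y(0) = 0 and y(pi/2) = 1
    return np.array([ya[0], yb[0] - 1])

x = np.linspace(0, np.pi / 2, 20)
y0 = np.zeros((2, x.size))      # initial guess
sol = solve_bvp(fun, bc, x, y0)
print(sol.status)               # 0 means converged
```

sol.sol is a callable interpolant, so sol.sol(x)[0] can be compared against np.sin(x) to confirm the solution.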
| |
doc_23530383
|
SeqId  PlateId  Target  TargetFullName
    1       11     111             111
    2       22     222            2222
However, I'd like it to look like this, with Target moved into the row names:
      SeqId  PlateId  TargetFullName
111       1       11             111
222       2       22             222
This is what I have for reshaping:
library(reshape2)
longnewData <- melt(newData)
(differentSeqs <- getSequencesWithLargestBetweenGroupVariation(
longnewData, n=10))[,.(SeqId, Target)]
Any help would be greatly appreciated -- thanks!
A: This seems to achieve what you want:
Data:
df <- data.frame(SeqId = 1:2,
PlateId = c(11,22),
Target = c(111,222),
TargetFullName = c(111, 2222))
Turning column into row:
row.names(df) <- df$Target; df$Target <- NULL
Checking outcome:
df
SeqId PlateId TargetFullName
111 1 11 111
222 2 22 2222
A: We could also use this:
library(tibble)
library(dplyr)  # for the %>% pipe
df %>% column_to_rownames("Target")
SeqId PlateId TargetFullName
111 1 11 111
222 2 22 2222
| |
doc_23530384
|
For three CPUs (one manager and two workers) expected output from the below program is something like
worker #1 waited for 1 seconds
worker #2 waited for 1 seconds
worker #1 waited for 1 seconds
worker #1 waited for 1 seconds
...
worker #2 waited for 5 seconds
worker #1 waited for 1 seconds
worker #2 waited for 1 seconds
...
However, in the current implementation, the program never gets past the first two output lines. I'm thinking this is because the workers do not correctly communicate to the manager that they have finished their tasks, and are therefore never given their next tasks.
Any ideas where it goes wrong?
#include <iostream>
#include <mpi.h>
#include <windows.h>
using namespace std;
void task(int waittime, int worldrank) {
Sleep(waittime); // use sleep for unix systems
cout << "worker #" << worldrank << " waited for " << waittime << " seconds" << endl;
}
int main()
{
int waittimes[] = { 1,1,5,1,1,1,1,1,1,1,1,1,1 };
int nwaits = sizeof(waittimes) / sizeof(int);
MPI_Init(NULL, NULL);
int worldrank, worldsize;
MPI_Comm_rank(MPI_COMM_WORLD, &worldrank);
MPI_Comm_size(MPI_COMM_WORLD, &worldsize);
MPI_Status status;
int ready = 0;
if (worldrank == 0)
{
for (int k = 0; k < nwaits; k++)
{
MPI_Recv(&ready, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
MPI_Send(&waittimes[k], 1, MPI_INT, status.MPI_SOURCE, 0, MPI_COMM_WORLD);
}
}
else
{
int waittime;
ready = 1;
MPI_Send(&ready, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
MPI_Recv(&waittime, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
task(waittime,worldrank);
}
MPI_Finalize();
return 0;
}
| |
doc_23530385
|
So far I have tried using gsutil cp with the parallel option (-m), with S3 as source and GS as destination directly. Even after tweaking the multi-processing and multi-threading parameters, I haven't been able to achieve a throughput of over 30 MB/s.
What I am now contemplating:
*
*Load the data in batches from S3 into hdfs using distcp and then finding a way of distcp-ing all the data into google storage (not supported as far as I can tell), or:
*Set up a hadoop cluster where each node runs a gsutil cp parallel job with S3 and GS as src and dst
If the first option were supported, I would really appreciate details on how to do that. However, it seems like I'm gonna have to find out how to do the second one. I'm unsure of how to pursue this avenue because I would need to keep track of the gsutil resumable transfer feature on many nodes and I'm generally inexperienced running this sort of hadoop job.
Any help on how to pursue one of these avenues (or something simpler I haven't thought of) would be greatly appreciated.
A: You could set up a Google Compute Engine (GCE) account and run gsutil from GCE to import the data. You can start up multiple GCE instances, each importing a subset of the data. That's part of one of the techniques covered in the talk we gave at Google I/O 2013 called Importing Large Data Sets into Google Cloud Storage.
One other thing you'll want to do if you use this approach is to use the gsutil cp -L and -n options. -L creates a manifest that records details about what has been transferred, and -n allows you to avoid re-copying files that were already copied (in case you restart the copy from the beginning, e.g., after an interruption). I suggest you update to gsutil version 3.30 (which will come out in the next week or so), which improves how the -L option works for this kind of copying scenario.
Mike Schwartz, Google Cloud Storage team
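Concretely, a resumable copy invocation with the flags described above might look like this (bucket names and paths are placeholders):

```shell
# -m parallelizes, -L records a manifest of what was transferred,
# -n skips files already copied, so the command is safe to re-run
# after an interruption.
gsutil -m cp -L manifest.log -n -r s3://source-bucket/data gs://dest-bucket/data
```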
A: Google has recently released the Cloud Storage Transfer Service which is designed to transfer large amounts of data from S3 to GCS:
https://cloud.google.com/storage/transfer/getting-started
(I realize this answer is a little late for the original question but it may help future visitors with the same question.)
| |
doc_23530386
|
/api/user/1/location GET --> Controller.getLocation()
/api/user/1/location POST --> Controller.setLocation()
I have tried the following url mapping rule without any luck:
"/api/$controller/$id/$property" {
action = {[GET: "get${params.property.capitalize()}", POST: "set${params.property.capitalize()}"]}
}
Has anyone tried something like this?
A: I tried and succeeded with both:
"/api/$controller/$id/$property"{
    action = {"get"+params.property.capitalize()} //put it all inside the {} brackets
}
and
"/api/$controller/$id/$property"{
    action = [GET: "getLocation", POST:"setLocation"] //remove the {} brackets
}
but failed to assign dynamic parameters into [GET:'',POST:''] map, for example:
"/api/$controller/$id/$property"{
action = {["POST": "set"+params.property.capitalize(), "GET": "get"+params.property.capitalize()]}
}
and
"/api/$controller/$id/$property"{
action = [POST: {"set"+params.property.capitalize()}, GET: {"get"+params.property.capitalize()}]
}
both produced a 404 error.
So I guess Grails only allows this kind of configuration to be static for now. Maybe somebody could dig into the source code later to find out.
| |
doc_23530387
|
GET /test
GET /{test}
when I run the code I get
wildcard segment ':test' conflicts with existing children in path '/:test'
how to solve this problem in go?
code:
r := gin.Default()
r.GET("/test", test1)
r.GET("/:test", test2)
A: Way 1:
Different handler functions (i.e. test1, test2) in different paths.
router := gin.Default()
router.GET("/test1", func(c *gin.Context) {
// test1
})
router.GET("/test2", func(c *gin.Context) {
// test2
})
Way 2:
Use one handler function with parameter in path.
router := gin.Default()
router.GET("/:test", func(c *gin.Context) {
test := c.Param("test")
if test == "test1" {
// test1
} else if test == "test2" {
// test2
}
})
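Way 2 boils down to dispatching on the captured path parameter; stripped of gin, the branch is just plain string routing (handler names are hypothetical):

```go
package main

import "fmt"

// dispatch mirrors Way 2: a single wildcard route whose handler
// branches on the captured :test parameter.
func dispatch(test string) string {
	switch test {
	case "test1":
		return "handled test1"
	case "test2":
		return "handled test2"
	default:
		return "404"
	}
}

func main() {
	fmt.Println(dispatch("test1")) // handled test1
	fmt.Println(dispatch("other")) // 404
}
```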
| |
doc_23530388
|
Facing error: Multiple aggregations are not supported in streaming DataFrames.
For a single window (per minute) I am able to do the transformations, but I have no idea or luck on how to do multiple window transformations on the same data.
df.withWatermark("timestamp", "60 seconds")
.groupBy(col("assetId"), col("organization"), col("tag"),
functions.window(col("timestamp"), "60 seconds", "60 seconds"),
functions.window(col("timestamp"), "3600 seconds", "3600 seconds"))
.mean("value");
| |
doc_23530389
|
Conditions:
*
*My App is a multi-user web app.
*There are 2 user roles in My App: Admin, and Member.
I want to:
*
*Member user clicks a button.
*Send an SMS containing a one-time token to the admin phone number.
*Admin user tells the one-time token to the member user.
*Member user fills out a form and presses submit.
*Token is sent back to the Firebase and verified.
A: What you're describing is not a built-in flow for Firebase Authentication. The closest equivalent is Firebase's phone number authentication, but in that scenario the one-time password (OTP) is sent to the user who signs in to the app.
So you can either modify your flow to use another step for involving the admin user, or you can build your own provider for Firebase Authentication. In the latter case, you won't be able to use Firebase to send the SMS messages though, but will have to use another provider for that.
| |
doc_23530390
|
I'm starting my first project with Gatsby and have run into an issue with querying data that "may" not always exist. Here is my gatsby-node.js file:
const path = require('path');
const _ = require('lodash');
// Lifecycle methods
function attachFieldsToBlogPost({ node, actions }) {
if (node.internal.type !== 'MarkdownRemark') {
return;
}
const { createNodeField } = actions;
const { slug, title } = node.frontmatter;
const postPath = slug || _.kebabCase(title);
createNodeField({
node,
name: 'slug',
getter: node => node.frontmatter.slug, // eslint-disable-line no-shadow
value: postPath,
});
createNodeField({
node,
name: 'url',
value: postPath,
});
}
exports.onCreateNode = function() { // eslint-disable-line func-names
return Promise.all([attachFieldsToBlogPost].map(fn => fn.apply(this, arguments))); // eslint-disable-line prefer-rest-params
};
// Implementations
function getMarkdownQuery({ regex } = {}) {
return `
{
allMarkdownRemark(
sort: { fields: [frontmatter___date], order: DESC }
filter: { fileAbsolutePath: { regex: "${regex}" } }
) {
totalCount
edges {
node {
fileAbsolutePath
excerpt(pruneLength: 280)
timeToRead
frontmatter {
title
date
slug
}
fields {
url
slug
}
}
}
}
}
`;
}
function createBlogPostPages({ edges, createPage }) {
const component = path.resolve('src/templates/Post.js');
edges.forEach(({ node }) => {
const { slug, title } = node.frontmatter;
const postPath = slug || _.kebabCase(title);
createPage({
path: postPath,
component,
context: {
slug: postPath,
},
});
});
}
exports.createPages = async({ actions, graphql }) => {
const results = await Promise.all([
graphql(getMarkdownQuery({ regex: '/src/posts/' })),
]);
const error = results.filter(r => r.errors);
if (error.length) {
return Promise.reject(error[0].errors);
}
const [blogPostResults] = results;
const { createPage } = actions;
const blogPostEdges = blogPostResults.data.allMarkdownRemark.edges;
createBlogPostPages({
createPage,
edges: blogPostEdges,
});
};
And my example blog post content is:
---
title: 'Hello world there'
date: '2018-08-25'
---
Here is some content.
```javascript
console.log('test')
```
When I supply a slug frontmatter component, the page is created as intended. But, I'd only like to use the slug parameter when it's available (hence the check to see if the frontmatter is available in both attachFieldsToBlogPost and createBlogPostPages). If I remove the slug frontmatter item, I get the following error:
GraphQLError: Cannot query field "slug" on type "frontmatter".
Is there a way to override the path used to create the post pages "if" the slug frontmatter is there?
Hopefully this isn't too vague as it's my first Gatsby project, but seems like a pretty useful feature. Thanks!
A: If you remove every case of slug then the field will no longer exist in Gatsby's internal schema. You're probably running into the issue described here.
So long as you have at least 1 markdown file with a slug field, then it will exist in Gatsby's automated GraphQL schema, and then your conditional logic should work.
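The slug-or-title fallback in gatsby-node is plain string logic and can be unit-tested on its own; here is a dependency-free sketch (the hand-rolled kebabCase is a simplified stand-in for lodash's and assumes titles made of words, camelCase, spaces, or underscores):

```javascript
// Simplified stand-in for lodash's _.kebabCase (assumption: no punctuation).
function kebabCase(s) {
  return s
    .trim()
    .replace(/([a-z0-9])([A-Z])/g, '$1-$2') // split camelCase boundaries
    .replace(/[\s_]+/g, '-')                // whitespace/underscores -> dashes
    .toLowerCase();
}

// Mirrors the gatsby-node rule: prefer an explicit slug, else derive one.
function postPath(frontmatter) {
  return frontmatter.slug || kebabCase(frontmatter.title);
}

console.log(postPath({ title: 'Hello world there' }));            // hello-world-there
console.log(postPath({ slug: 'custom-slug', title: 'Ignored' })); // custom-slug
```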
| |
doc_23530391
|
chai.request(server)
.post('/demo')
.set('Content-Type', 'application/vnd+companyName.v01+json')
.send({name: 'test'})
.end(function(err, res) {/* tests are here */});
In my express app's app.js, I am calling this middleware:
app.use(bodyParser.json({type: 'application/*+json'}));
When I make the type more general, like 'application/*', I can pass the request through with an 'application/json' Content-Type, but not with my custom one. When I do this, my req.body is an empty object. If bodyParser were completely failing, req.body would be undefined rather than an empty object. Looking at the docs, I feel like the options on my bodyParser call are correct, but clearly they are not. Any insight?
A: vnd+companyName.v01+json isn't a valid media type.
A valid media type should look like:
[ tree. ] subtype name [ +suffix ] [ ; parameters ]
The subtype name can't contain . or + characters, those are reserved for the (optional) tree and suffix, respectively (RFC6838).
So in your case, the mime type should look like this:
application/vnd.companyName-v01+json
However, it seems that there's an additional requirement imposed by body-parser (or rather, type-is, which is used to match content type), in that the subtype name needs to be lower case:
application/vnd.companyname-v01+json
Strangely, that requirement only applies for the body-parser configuration part, the client is allowed to use upper case in its requests.
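The naming rules above can be checked mechanically; below is a rough sketch of the media-type shape (deliberately simplified, not the full RFC 6838 grammar — the character classes are narrowed for readability):

```python
import re

# Simplified pattern: type "/" [tree "."] subtype ["+" suffix]
# Subtype names may not contain "." or "+" (reserved for tree and suffix).
MEDIA_TYPE = re.compile(
    r'^[a-z]+/'                # top-level type
    r'(?:[a-z][a-z0-9]*\.)*'   # optional tree, e.g. "vnd."
    r'[a-z][a-z0-9!#$&^_-]*'   # subtype name (no "." or "+")
    r'(?:\+[a-z0-9]+)?$'       # optional structured-syntax suffix
)

print(bool(MEDIA_TYPE.match('application/vnd.companyname-v01+json')))  # True
print(bool(MEDIA_TYPE.match('application/vnd+companyName.v01+json')))  # False
```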
| |
doc_23530392
|
https://github.com/briancherne/jquery-hoverIntent
Or are there better alternative hover functions for jQuery?
Maybe one I can use with .on()?
| |
doc_23530393
|
I tried setting it on the IIS side of things and that did not work. In fact, having the settings match seemed to throw more errors. I have removed that change to return it to the current issue. I have provided the changes I made to make this error page. Perhaps someone brighter than me can figure out what went wrong?
<customErrors mode="On" redirectMode="ResponseRewrite" defaultRedirect="~/Error.aspx" />
Any help is widely appreciated! I did check to see if the Error.aspx exists and it does so I know it is not an actual 404 page not found issue
EDIT: Tried suggested duplicate answers and it did not work
EDIT 2: This is the error that appears on the screen
HTTP Error 404.0 - Not Found
The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
Most likely causes:
*
*The directory or file specified does not exist on the Web server.
*The URL contains a typographical error.
*A custom filter or module, such as URLScan, restricts access to the file.
Things you can try:
*
*Create the content on the Web server.
*Review the browser URL.
*Check the failed request tracing log and see which module is calling SetStatus. For more information, click here.
Detailed Error Information:
Module
IIS Web Core
Notification
MapRequestHandler
Handler
StaticFile
Error Code
0x80070002
A: I recommend using a dedicated controller for this, for example:
[AllowAnonymous]
public class ErroHttpController : Controller
{
[Route("RequestError")]
public ActionResult RequestError()
{
Response.StatusCode = 400;
return View();
}
[Route("NotFound")]
public ActionResult NotFound()
{
Response.StatusCode = 404;
return View();
}
[Route("InternalError")]
public ActionResult InternalError()
{
Response.StatusCode = 500;
return View();
}
protected override void OnActionExecuting(ActionExecutingContext filterContext)
{
Response.TrySkipIisCustomErrors = true;
base.OnActionExecuting(filterContext);
}
}
And in your web.config you should do:
<customErrors mode="RemoteOnly">
<error statusCode="400" redirect="/RequestError"/>
<error statusCode="404" redirect="/NotFound"/>
<error statusCode="500" redirect="/InternalError"/>
</customErrors>
You must skip IIS custom errors, like the code above, in OnActionExecuting method.
| |
doc_23530394
|
I will try to explain in more detail. My source question was edited.
By clicking the canvas, I've must call _handleTapDown function:
void _handleTapDown (TapDownDetails details)
{
_showModeless (context);
}
In this function, need to visualize your Modeless widget:
void _showModeless (BuildContext context)
{
// How do I show Modeless Widget?
}
A: You can use Overlay to add widgets above everything else, and use them however you like.
class ModeLess extends StatefulWidget {
final Widget child;
ModeLess({this.child});
@override
_ModeLessState createState() => new _ModeLessState();
}
class _ModeLessState extends State<ModeLess> {
OverlayEntry modeless;
@override
void initState() {
super.initState();
modeless = new OverlayEntry(
opaque: false,
builder: (context) {
return new Positioned(
top: 50.0,
left: 50.0,
child: new SizedBox(
height: 50.0,
child: new Card(
child: new Text("I'm a modeless")
),
),
);
});
Future.microtask(() {
Overlay.of(context).insert(modeless);
});
}
@override
void dispose() {
modeless.remove();
super.dispose();
}
@override
Widget build(BuildContext context) {
return widget.child;
}
}
A: Rémi Rousselet, thank you very much. Your advice has helped. Below is the prototype of function that I need:
OverlayEntry _modeless = null;

void _showModeless(BuildContext context) {
  _modeless = new OverlayEntry(
    opaque: false,
    builder: (context) {
      return new Positioned(
        top: 100.0,
        left: 100.0,
        child: new Row(
          mainAxisAlignment: MainAxisAlignment.start,
          crossAxisAlignment: CrossAxisAlignment.center,
          children: <Widget>[
            new Icon(Icons.content_paste, color: Colors.blueGrey),
            new Padding(
              padding: const EdgeInsets.only(left: 16.0),
              child: new Text(
                'Modeless',
                overflow: TextOverflow.ellipsis,
                style: new TextStyle(
                  fontSize: 14.0,
                  fontWeight: FontWeight.bold,
                  color: Colors.lightBlue,
                  decoration: TextDecoration.none,
                ),
              ),
            ),
          ],
        ),
      );
    },
  );
  Overlay.of(context).insert(_modeless);
  _startWaiting();
}

static const TIMEOUT = const Duration(seconds: 8);
static Timer _timer = null;

void _startWaiting() {
  _timer = new Timer(TIMEOUT, _handleTimeout);
}

void _handleTimeout() {
  if (_modeless != null) {
    _modeless.remove();
  }
}
PS. I only added one more function that removes the modeless after 8 seconds. Once again, many thanks.
| |
doc_23530395
|
For example, the user creates 3 paragraphs, then applies LeadingMarginSpan.Standard to the second paragraph to get an indent. When the indent is applied, the first paragraph moves out of the EditText by a certain number of px while the second paragraph visibly stays at the center.
When the user taps the first paragraph, everything is OK: all paragraphs stay at the required position and the second paragraph keeps its indent. If the user taps the second paragraph, all paragraphs move to the left side.
Code:
EditText et = (EditText)findViewById(R.id.et);
et.setText("ab\nab\nab");
AlignmentSpan.Standard normal = new AlignmentSpan.Standard(Layout.Alignment.ALIGN_NORMAL);
AlignmentSpan.Standard center = new AlignmentSpan.Standard(Layout.Alignment.ALIGN_CENTER);
AlignmentSpan.Standard opposite = new AlignmentSpan.Standard(Layout.Alignment.ALIGN_OPPOSITE);
et.getEditableText().setSpan(normal, 0, 3, Spanned.SPAN_PARAGRAPH);
et.getEditableText().setSpan(center, 3, 6, Spanned.SPAN_PARAGRAPH);
et.getEditableText().setSpan(opposite, 6, et.length(), Spanned.SPAN_PARAGRAPH);
LeadingMarginSpan.Standard indent = new LeadingMarginSpan.Standard(20);
et.getEditableText().setSpan(indent, 3, 6, Spanned.SPAN_PARAGRAPH);
Select first align:
Select second align:
Edit:
https://www.youtube.com/watch?v=Ea9HJEmEeZA&feature=youtu.be
| |
doc_23530396
|
Then I was using
gem update spaceship
to update the library, but I got: Nothing to update.
I am quite sure that the latest version of spaceship is 0.39.0.
I had no choice but to update spaceship with gem install:
gem install spaceship
Then I got both 0.38.5 and 0.39.0 installed, and I had to clean up the old version of spaceship myself.
Can any expert tell me why my gem update is not working?
Thank you in advance! :)
| |
doc_23530397
|
On my edit-products page, I cannot seem to select an image to upload with the Cloudinary widget; the upload button acts like a submit button and just submits the form when I click it, without even letting me add an image.
here's my form:
<form method="GET" action="/submitEdit" style="border: 2px solid black">
<h1>Edit a product</h1>
<input type="hidden" id="ogItem" name="ogItem" value="<%= itemName %>" style="border: 1px solid black" >
<fieldset>
<legend><a href="/help-center"><span class="number">?</span></a>Enter The Following Details</legend>
<label for="productName">Product Name:</label>
<input type="text" id="item" name="item" placeholder="<%= itemName %>" value="<%= itemName %>" style="border: 1px solid black" required>
<label for="productPrice">Price:</label>
<input type="text" id="price" name="price" placeholder="<%= itemPrice %>" value="<%= itemPrice %>" style="border: 1px solid black" required>
<!-- <label for="productCategory">Category:</label>
<input type="text" id="category" name="category" placeholder="Socks" style="border: 1px solid black" required> -->
<input type="text" id="size[]" name="size[]" placeholder="CURRENT SIZES: <%= itemSize %>" value="<%= itemSize %>" style="border: 1px solid black" readonly>
<label for="productSizes">SIZES (OPTIONAL):</label>
<label for="check-1">Extra Small</label>
<input type="checkbox" name="size[]" id="extra-small" value="xs">
<label for="check-1">Small</label>
<input type="checkbox" name="size[]" id="small" value="small">
<label for="check-1">Medium</label>
<input type="checkbox" name="size[]" id="medium" value="medium">
<label for="check-1">Large</label>
<input type="checkbox" name="size[]" id="large" value="large">
<label for="check-1">Extra Large</label>
<input type="checkbox" name="size[]" id="extra-large" value="xl">
<!-- submit image -->
<button id="upload_widget" class="cloudinary-button">Upload Product Image</button>
<!-- The Modal -->
<div id="myModal" class="modal">
<span class="close">×</span>
<img class="modal-content" id="img01">
<div id="caption"></div>
</div>
<script src="https://widget.cloudinary.com/v2.0/global/all.js" type="text/javascript"></script>
<script type="text/javascript">
var myWidget = cloudinary.createUploadWidget({
  cloudName: 'piersolutions',
  uploadPreset: 'ld3l7evv'
}, (error, result) => {
  if (!error && result && result.event === "success") {
    console.log('Done! Here is the image info: ', result.info);
    console.log(result.info.secure_url);
    var result_url = result.info.secure_url;
    console.log("result url is " + result_url);
    document.getElementById("url").value = result_url;
  }
});

document.getElementById("upload_widget").addEventListener("click", function() {
  myWidget.open();
}, false);
</script>
<!-- submit image end -->
<input type="text" name="url" id="url" value="<%= ogImgUrl %>" readonly>
<input id="myImg" type="button" src="<%= ogImgUrl %>" alt="ORIGINAL PRODUCT IMAGE" value="See Current Image" width="300" height="200">
<img src="https://img.icons8.com/material-sharp/24/000000/visible.png"/>
</fieldset>
<fieldset>
<label for="bio">Description:</label>
<textarea id="bio" name="description" placeholder="" value="<%= itemDesc %>" style="border: 1px solid black"><%= itemDesc %></textarea>
</fieldset>
<button id="submitButton" type="submit">Submit</button>
</form>
<!-- help box on right -->
<br style="margin-top: 20px;">
<form class="form2" action="/" method="post" style="border: 2px solid black;">
<fieldset>
<h3 style="margin-top: 50px;">More Information!</h3>
<h5>For more information on how to edit a product on your website, you can check out our help form <a href="/helpAddpProducts" style="color: #000; font-size: 15px;">HERE</a>! You can also send us an email, or call us for more help! Find contact information on our contact info page <a style="color: #000; font-size: 15px;" href="/contactInfo">HERE!</a></h5>
</fieldset>
</form>
It is really strange, and it is frustrating because people who are using the dashboard are not able to edit their products. Please help! Thanks in advance :)
A: You should be able to achieve what you're aiming for by passing the files parameter to myWidget.open() as an array of image URL(s) that you would like the widget to upload, skipping directly to the cropping step if cropping was enabled when you initiated the widget with cloudinary.createUploadWidget().
In terms of changes to your code, you should add cropping: true to the list of cloudinary.createUploadWidget() parameters, along with any additional cropping parameters you wish from the list shown here.
In addition, you will need to modify myWidget.open() as follows:
myWidget.open({files: ["https://my.domain.com/my_example_image.jpg"]});
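As a minimal sketch of the options object this answer describes (buildWidgetOptions is a hypothetical helper, not part of the Cloudinary API; the cloudName and uploadPreset values are the ones from the question, and cropping is a documented widget parameter):

```javascript
// Hypothetical helper that assembles the upload-widget options described above.
function buildWidgetOptions(cloudName, uploadPreset) {
  return {
    cloudName: cloudName,       // from the question's existing code
    uploadPreset: uploadPreset, // from the question's existing code
    cropping: true              // enables the interactive cropping step
  };
}

const opts = buildWidgetOptions('piersolutions', 'ld3l7evv');
// opts would then be passed as the first argument of
// cloudinary.createUploadWidget(opts, callback)
```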
| |
doc_23530398
|
joinedaandb <- full_join(tbl_df(tablea), tbl_df(tableb), by = "pidp")
The unique identifier is pidp. There are a bunch of variables, such as sex, edtype, and age, that I expected to be matched up into one variable each, but instead the join has created a sex.x variable and a sex.y variable.
pidp <dbl> 280165, 541285, 541965, 665045, 956765, 987365, 1558565, 1833965, 229…
$ sex.x <fct> female, male, female, male, male, female, female, male, male, female,…
$ edtype.x <fct> proxy, proxy, proxy, inapplicable, proxy, proxy, at higher education …
$ age.x <fct> 32, 25, 23, 29, 56, 21, 18, 46, 36, 17, 29, 22, 28, 57, 20, 33, 27, 6..
A bit further down these y variables appear.
$ sex.y : Factor w/ 7 levels "missing","inapplicable",..: 7 6 NA 6 NA NA NA 6 6 NA ...
$ edtype.y : Factor w/ 10 levels "missing","inapplicable",..: 3 3 NA 2 NA NA NA 3 3 NA ...
$ age.y : Factor w/ 91 levels "missing","inapplicable",..: 23 16 NA 20 NA NA NA 37 26 NA ...
What does this mean? And how do I get it to match variables such as sex from tablea and tableb into a single variable in the new data frame?
Cheers
A: With joins (merges) you have two options for each column:
*
*(a) you join on the column (include it in the by argument), and the values in each table will be compared and must be equal for the row to be joined;
*(b) you don't join on the column (leave it out of the by argument), and no assumptions are made about whether the values are equal. The columns from each data frame will be included separately, and it's up to you to handle them.
There are a bunch of variables I was expecting to match up
If you want them to be matched, you need to tell your full_join call by putting them in its by argument. If you leave by blank, dplyr's default is to match on all columns with the same names. When you do specify a by argument, no further assumptions are made: any columns that appear in both data frames but are not in by will be kept separately, with .x and .y appended so you can tell the sources apart.
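The same behaviour can be illustrated in Python with pandas (used here purely as an analogy to dplyr; pandas' default merge suffixes are _x/_y rather than .x/.y, and the tiny frames are made up):

```python
import pandas as pd

# Toy frames sharing the key "pidp" and a non-key column "sex"
a = pd.DataFrame({"pidp": [1, 2], "sex": ["female", "male"]})
b = pd.DataFrame({"pidp": [1, 2], "sex": ["female", "male"]})

# Joining only on the key keeps both "sex" columns, suffixed _x / _y
both = a.merge(b, on="pidp", how="outer")
print(list(both.columns))  # ['pidp', 'sex_x', 'sex_y']

# Joining on the key AND the shared column collapses them into one
one = a.merge(b, on=["pidp", "sex"], how="outer")
print(list(one.columns))   # ['pidp', 'sex']
```

The fix in both libraries is the same: list the shared columns in the join keys if you want them matched rather than duplicated.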
| |
doc_23530399
|
Ideally I would like this all merged into a single table with unique columns: Name, Date, Age, Username, LoginTime, MothersName, School, University.
I know Access doesn't support COALESCE, so how can I go about creating this master table? I know the order I would like SQL to look in, i.e. if Age is not in table 3, only then look in table 2 (even if table 2 has a value), and only then look in table 1 for that given Name. However, I cannot work out how best to do the join so that I don't end up with table1.Age, table2.Age, and table3.Age across all of my common fields...
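Access has no COALESCE, but the priority rule described here is simply "first non-null wins" (in Access SQL it is usually emulated with nested Nz() or IIf(... Is Null ...) expressions). A minimal sketch of the rule itself, with made-up values:

```python
def coalesce(*values):
    """Return the first value that is not None, like SQL's COALESCE."""
    for v in values:
        if v is not None:
            return v
    return None

# Hypothetical Age values for one Name from the three tables; table 3 is
# checked first, then table 2, then table 1, as the question specifies
age_table3, age_table2, age_table1 = None, 42, 40
age = coalesce(age_table3, age_table2, age_table1)
print(age)  # 42
```

Applying this per common field to the rows of a full outer join of the three tables gives one merged value per column instead of table1.Age, table2.Age, and table3.Age.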
|