doc_23530500
|
And this is what my posts table looks like
Posts API
I am seeking a method to retrieve all posts created by followers of a specific user from the DynamoDB database. My initial plan was to first query the followers table to obtain a list of user IDs for the followers and then use those keys to query the posts table. However, I have been unable to determine a way to effectively query a table with a list of keys using DynamoDB. Additionally, I require the retrieved posts to be sorted by date and the querying process to support pagination through the use of LastEvaluatedKey, ExclusiveStartKey, and PageCount. Is it possible to accomplish this efficiently and with good performance using DynamoDB, or should I consider revising my data model?
(I am using nodejs)
A: How to create a schema for DynamoDB
* Define your access patterns; list them out in order of priority.
* View other similar models/schemas online.
* Use a tool like NoSQL Workbench to create sample models you can visualize.
* Iterate continuously; make changes to the schema whenever you spot something that will make it more efficient/performant.
Your Schema
You have given a single access pattern:
I am seeking a method to retrieve all posts created by followers of a specific user from the DynamoDB database
Users

| pk | sk | followers |
|---|---|---|
| user1 | USER#user1 | [user2, user4, user99] |
| user2 | USER#user2 | [user6, user4, user99] |
Posts

| pk | sk | post |
|---|---|---|
| user2 | COMMENT#00001#DATE | some comment |
| user2 | COMMENT#00002#DATE | some comment |
| user2 | COMMENT#00003#DATE | some comment |
| user2 | REPLY#00001#DATE | some reply |
| user2 | POST#00001#DATE | some post |
| user2 | POST#00002#DATE | some post |
| user99 | COMMENT#00001#DATE | some comment |
| user99 | COMMENT#00002#DATE | some comment |
| user99 | COMMENT#00003#DATE | some comment |
| user99 | REPLY#00001#DATE | some reply |
Your access pattern
* GetItem for user user1.
* Parse the list of followers.
* Run a Query for each userId returned above, with the condition that sk begins_with 'POST'.
* Each user's posts will be returned in order by date.
Pseudo
SELECT followers FROM Users WHERE pk = 'user1' AND sk='USER#user1'
FOR user in followers:
SELECT * FROM POSTS WHERE pk = user AND sk BEGINS_WITH 'POST'
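The pseudo-queries above can be sketched in Node.js. This is my addition, not part of the original answer: it assumes the AWS SDK v2 `DocumentClient` and the table/attribute names used in the example schema. The parameter builders are kept as plain functions, separate from the network calls.

```javascript
// Build the GetItem request that fetches a user's follower list.
function buildGetFollowersParams(userId) {
  return {
    TableName: 'Users',
    Key: { pk: userId, sk: `USER#${userId}` },
    ProjectionExpression: 'followers',
  };
}

// Build the Query request for one follower's posts (sk begins_with 'POST').
function buildPostsQueryParams(userId, exclusiveStartKey) {
  const params = {
    TableName: 'Posts',
    KeyConditionExpression: 'pk = :u AND begins_with(sk, :prefix)',
    ExpressionAttributeValues: { ':u': userId, ':prefix': 'POST' },
    ScanIndexForward: false, // descending sk order (assumes sk sorts by recency)
    Limit: 20,               // page size; page with LastEvaluatedKey/ExclusiveStartKey
  };
  if (exclusiveStartKey) params.ExclusiveStartKey = exclusiveStartKey;
  return params;
}

// Usage sketch (requires aws-sdk and credentials):
// const doc = new AWS.DynamoDB.DocumentClient();
// const { Item } = await doc.get(buildGetFollowersParams('user1')).promise();
// const pages = await Promise.all(
//   Item.followers.map((f) => doc.query(buildPostsQueryParams(f)).promise())
// );
```

Note that each follower's Query is paginated independently; merging the per-follower pages into one date-sorted, paginated feed is extra client-side work, which is part of why the answer suggests revisiting the model.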
Single Table Design
You could model both tables as a single table, but it doesn't provide many benefits other than having a single table to manage:
| pk | sk | post | followers |
|---|---|---|---|
| user1 | USER#user1 | | [user2, user4, user99] |
| user2 | COMMENT#00001#DATE | some comment | |
| user2 | COMMENT#00002#DATE | some comment | |
| user2 | COMMENT#00003#DATE | some comment | |
| user2 | REPLY#00001#DATE | some reply | |
Summary
I don't particularly like this format; if it were my application, I would model it along the lines of the DynamoDB Forum Example.
| |
doc_23530501
|
eventClick: function(info) {
$("#calendar-modal").html("");
var scheduleClick="info.event.getDate()";
var calendarEl = document.getElementById('calendar-modal');
var calendar = new FullCalendar.Calendar(calendarEl, {
plugins: [ 'interaction', 'timeGrid' ],
header: {
left: 'prev,next today',
center: 'title',
right: 'timeGridDay'
},
displayEventTime:false,
allDaySlot: false,
slotEventOverlap:false,
eventSources:[
{
url:'http://localhost/servrevo_web/booking/getBooking',
method:'POST',
color: '#87a900',
textColor: 'black'
}
],
});
calendar.changeView('timeGridDay',scheduleClick);
calendar.render();
$('#timeSlot').modal('show');
The variable scheduleClick is not getting anything; any help?
| |
doc_23530502
|
for example
import numpy as np
a=np.array([[[1,2],[3,4]]])
print(a.shape)
The output of this is (1, 2, 2).
How can I work it out without using a computer?
Thanks for any help.
A: You have 3 opening brackets at the beginning, so the shape has 3 elements.
The first shape element is 1, because the first opening bracket contains one element, i.e. "[[1,2],[3,4]]".
The second shape element is 2, because you have two elements on that level, "[1,2]" and "[3,4]".
The third shape element is 2, because again you have two elements on that level "1" and "2" (as well as "3" and "4").
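The bracket-counting rule above can be applied mechanically: descend into the first element at each level, recording the length, until you reach a non-array. Here is a small sketch of that rule (my addition, written in JavaScript rather than NumPy, since the rule itself is language-independent):

```javascript
// Compute the shape of a nested array the same way you read it by eye:
// one shape entry per level of nesting, taken from the first element.
function shapeOf(a) {
  const shape = [];
  while (Array.isArray(a)) {
    shape.push(a.length); // length at this nesting level
    a = a[0];             // descend into the first element
  }
  return shape;
}

console.log(shapeOf([[[1, 2], [3, 4]]])); // [ 1, 2, 2 ] — matches a.shape
```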
| |
doc_23530503
|
The content of the project template ZIP file is:
AssemblyInfo.cs
Class1.cs
ImsProject.ico
ImsProject.vstemplate
post_build.bat
ProjectTemplate.csproj
The ImsProject.vstemplate file contains the following TemplateContent:
<TemplateContent>
<Project File="ProjectTemplate.csproj" ReplaceParameters="true">
<ProjectItem ReplaceParameters="true" TargetFileName="Properties\AssemblyInfo.cs">AssemblyInfo.cs</ProjectItem>
<ProjectItem ReplaceParameters="true" OpenInEditor="true">Class1.cs</ProjectItem>
<ProjectItem>post_build.bat</ProjectItem>
</Project>
</TemplateContent>
Seems simple, but I must be doing something wrong.
A: Stoopid me!
I thought the post_build.bat file was not present because, when I created a new project, it did not appear in the Solution Explorer in Visual Studio.
However ... it was present in the folder containing the new project.
The problem was I hadn't created an entry in an ItemGroup in the ProjectTemplate.csproj file.
Adding
<ItemGroup>
<None Include="post_build.bat" />
</ItemGroup>
solved the problem.
| |
doc_23530504
|
<div id="container" style="height: 300px;">
</div>
$(function() {
var rawData = 100,
data = getData(rawData);
function getData(rawData) {
var data = [],
start = Math.round(Math.floor(rawData / 10) * 10);
data.push(rawData);
for (i = start; i > 0; i -= 1) {
data.push({
y: i
});
}
return data;
}
Highcharts.chart('container', {
chart: {
type: 'solidgauge',
marginTop: 0
},
title: {
text: ''
},
subtitle: {
text: rawData,
style: {
'font-size': '60px'
},
y: 200,
},
tooltip: {
enabled: false
},
pane: [{
startAngle: -90,
endAngle: 90,
background: [{ // Track for Move
outerRadius: '100%',
innerRadius: '70%',
backgroundColor: Highcharts.Color(Highcharts.getOptions().colors[0]).setOpacity(0.1).get(),
borderWidth: 0,
shape: 'arc'
}],
size: '100%',
center: ['50%', '65%']
}, {
startAngle: -180,
endAngle: 180,
size: '95%',
center: ['50%', '65%'],
background: []
}],
yAxis: [{
min: 0,
max: 100,
labels: {
enabled: true
},
stops: [
[0, '#fff'],
[0.1, '#0f0'],
[0.2, '#2d0'],
[0.3, '#4b0'],
[0.4, '#690'],
[0.5, '#870'],
[0.6, '#a50'],
[0.7, '#c30'],
[0.8, '#e10'],
[0.9, '#f03'],
[1, '#f06']
]
}, {
}],
series: [{
animation: false,
dataLabels: {
enabled: false
},
borderWidth: 0,
color: Highcharts.getOptions().colors[0],
radius: '100%',
innerRadius: '70%',
data: data
}]
});
});
http://jsfiddle.net/dt4wu39e/1/
A: The element in which the chart is located has z-index: 0, so you need to set a higher z-index on the header:
header {
z-index: 1;
...
}
Live demo: http://jsfiddle.net/BlackLabel/95jhwn06/
| |
doc_23530505
|
On my O365 developer account, it successfully retrieves the token.
The Add-in has been deployed to the client's Outlook but when they try to retrieve the token, this is the response message:
MessageText:"The token for this extension could not be retrieved.
"ResponseClass:"Error"
ResponseCode:"ErrorInvalidClientAccessTokenRequest"
Token:null
__type:"GetClientAccessTokenResponseMessage:#Exchange"
The code is exactly the same and so is the request. Is there any clues I could look into to figure out what about their environment would cause this to fail?
A: The ErrorInvalidClientAccessTokenRequest response code is applicable to clients that target Exchange Online and versions of Exchange starting with Exchange Server 2013.
| |
doc_23530506
|
Is there a way to change the style of the active button, so that the focused button (even in autoplay mode) is highlighted by a CSS style?
<amp-carousel id="carousel-with-preview" width="400" height="300" layout="responsive" type="slides" autoplay>
<amp-img src="https://unsplash.it/400/300?image=10" width="400" height="300" layout="responsive" alt="a sample image"></amp-img>
<amp-img src="https://unsplash.it/400/300?image=11" width="400" height="300" layout="responsive" alt="a sample image"></amp-img>
<amp-img src="https://unsplash.it/400/300?image=12" width="400" height="300" layout="responsive" alt="a sample image"></amp-img>
<amp-img src="https://unsplash.it/400/300?image=13" width="400" height="300" layout="responsive" alt="a sample image"></amp-img>
</amp-carousel>
<div class="carousel-preview">
<button on="tap:carousel-with-preview.goToSlide(index=0)"><amp-img src="https://unsplash.it/60/40?image=10" width="60" height="40" layout="responsive" alt="a sample image"></amp-img></button>
<button on="tap:carousel-with-preview.goToSlide(index=1)"><amp-img src="https://unsplash.it/60/40?image=11" width="60" height="40" layout="responsive" alt="a sample image"></amp-img></button>
<button on="tap:carousel-with-preview.goToSlide(index=2)"><amp-img src="https://unsplash.it/60/40?image=12" width="60" height="40" layout="responsive" alt="a sample image"></amp-img></button>
<button on="tap:carousel-with-preview.goToSlide(index=3)"><amp-img src="https://unsplash.it/60/40?image=13" width="60" height="40" layout="responsive" alt="a sample image"></amp-img></button>
</div>
A: Although it took me hours of searching before posting, I found the solution right after :)
This can be done using amp-bind. Add amp-bind to the head:
<script async custom-element="amp-bind" src="https://cdn.ampproject.org/v0/amp-bind-0.1.js"></script>
Add this before the carousel:
<amp-state id="selected"><script type="application/json"> {"slide": 0} </script></amp-state>
and add this to the amp-carousel element:
on="slideChange:AMP.setState({selected: {slide: event.index}})"
Then add an active class to each button when its slide is selected:
<div class="carousel-preview">
<button [class]="selected.slide == 0 ? 'active' : ''" class="active" on="tap:carousel-with-preview.goToSlide(index=0)">title1</button>
<button [class]="selected.slide == 1 ? 'active' : ''" on="tap:carousel-with-preview.goToSlide(index=1)">title2</button>
<button [class]="selected.slide == 2 ? 'active' : ''" on="tap:carousel-with-preview.goToSlide(index=2)">title3</button>
<button [class]="selected.slide == 3 ? 'active' : ''" on="tap:carousel-with-preview.goToSlide(index=3)">title4</button>
<button [class]="selected.slide == 4 ? 'active' : ''" on="tap:carousel-with-preview.goToSlide(index=4)">title5</button>
</div>
| |
doc_23530507
| ||
doc_23530508
|
But I'm getting this error:
Error Exception: Action not allowed
when trying to access the current active document using this function where the add-on is installed.
var doc = DocumentApp.getActiveDocument();
It works fine in development and testing, but for live users I get this error in the Executions log.
The auth scope related to this functionality is added and approved:
https://www.googleapis.com/auth/documents.currentonly
This scope should be enough to access the document, so I don't know what the issue could be.
Happy for any assistance!
| |
doc_23530509
|
Here is an example graph:
Here is an example query:
match {class: user, as: user, where: (name='tihomir')}
.both('hasA'){as: task}.both('hasA'){as: tag}
RETURN user, task.name, tag.name
The result is the expected one:
But what I really need is something like this:
[
{
user: {
name: "user_name",
tasks: [{
name: "task_name",
tags: [{
name: "tag_name"
}]
}]
}
}
]
I couldn't achieve this with the fetch API.
A: Did you try with "nested projections"?
The following should do the job:
match
{class: user, as: user, where: (name='tihomir')}
.both('hasA'){as: task}.both('hasA'){as: tag}
RETURN user:{*, tasks:{*, tags:{*}}}
Full docs here:
https://orientdb.com/docs/3.0.x/sql/SQL-Projections.html#nested-projections
A: @thinklinux, you need version 3.0 or higher (not yet suitable for production) to do this the way Luigi Dell'Aquila describes.
| |
doc_23530510
|
My input file contains:
"Opening from A Tale of Two Cities by Charles Darwin
It was the best of times, it was the worst of times. It was the age
of wisdom, it was the age of foolishness. It was the epoch of
belief, it was the epoch of incredulity. "
Here's how I tried:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#define max_story_words 1000
#define max_word_length 80
int main (int argc, char **argv)
{
char story[max_story_words][max_word_length] = {{0}};
char line[max_story_words] = {0};
char *p;
char ch = 0;
char *punct="\n ,!.:;?-";
int num_words = 1;
int i = 0;
FILE *file_story = fopen ("TwoCitiesStory.txt", "r");
if (file_story==NULL) {
printf("Unable to open story file '%s'\n","TwoCitiesStory.txt");
return (EXIT_FAILURE);
}
/* count words */
while ((ch = fgetc (file_story)) != EOF) {
if (ch == ' ' || ch == '\n')
num_words++;
}
rewind (file_story);
i = 0;
/* read each line in file */
while (fgets (line, max_word_length, file_story) != NULL)
{
/* tokenize line into words removing punctuation chars in punct */
for (p = strtok (line, punct); p != NULL; p = strtok (NULL, punct))
{
/* convert each char in p to lower-case with tolower */
char *c = p;
for (; *c; c++)
*c = tolower (*c);
/* copy token (word) to story[i] */
strncpy ((char *)story[i], p, strlen (p));
i++;
}
}
/* output array */
for(i = 0; i < num_words; i++)
printf ("story[%d]: %s\n", i, story[i]);
printf("\ntotal words: %d\n\n",num_words);
return (EXIT_SUCCESS);
}
A: Your num_words takes account of the two extra whitespace characters; that's why you get 48.
You should simply print i immediately after the fgets/strtok loop, if I'm not mistaken.
A: Something along these lines:
while ((ch = fgetc (file_story)) != EOF) {
    if (ch == ' ') {
        num_words++;
        while ((ch = fgetc (file_story)) == ' ' && ch != EOF)
            ;  /* skip the rest of the run of spaces */
    }
    if (ch == '\n') {
        num_words++;
        while ((ch = fgetc (file_story)) == '\n' && ch != EOF)
            ;  /* skip the rest of the run of newlines */
    }
}
Though I wonder why you are only counting whitespace and newline characters as word separators. Two words separated by some other punctuation mark are definitely not accounted for in your code.
A: My suggestion is to change the words counting loop as follows:
/* count words */
num_words = 0;
int flag = 0; // set 1 when word starts and 0 when word ends
while ((ch = fgetc (file_story)) != EOF) {
if ( isalpha(ch) )
{
if( 0 == flag ) // if it is a first letter of word ...
{
num_words++; // ... add to word count
flag = 1; // and set flag to skip not first letters
}
continue;
}
if ( isspace(ch) || ispunct(ch) ) // if word separator ...
{
flag = 0; // ... reset flag
}
}
| |
doc_23530511
|
A: Open the package.json file, change "jasmine-core" from 3.7.1 to 3.8 and "karma-jasmine-html-reporter" from 1.5.0 to 1.7.0 in devDependencies, and save it.
Then run
npm install
ng serve
A: If you don't have a node_modules folder, you have to run the
npm install
command first.
If you already have a node_modules folder, you can delete it and then run the npm install command again.
If neither of these works, you can delete the whole project folder and start again from ng new.
If the issue continues, you can update the Angular CLI and core. Can you send a screenshot of the command prompt after running ng --version?
| |
doc_23530512
|
I had no problem adding the JavaScript/CSS; it made a permanent after-image on the website, and I want this to apply to several pictures, not just one. With the CSS it only applies to one picture in that one spot. I tried using JavaScript to create a loop for every picture, but I am completely stuck right now. Can anyone help with some guidance?
function hideImage() {
document.getElementById("full").src = "";
}
function showImage(img) {
var src = img.src;
var largeSrc = src.replace('small', 'large');
document.getElementById('full').src = largeSrc;
}
var images = document.querySelectorAll('.thumbnail');
for(var i = 0; i < images.length; ++i) {
var image = images[i];
console.log(image.src); // output: image1.jpg, image2.jpg, image3.jpg
}
#full {
position: absolute;
width: 100px;
height: 100px;
display:block;
top: 200px; left:170px;
border: 10px solid rgb(255, 255, 255);
outline: 1px solid black;
margin: 10px
}
<td><input type="checkbox" name="index[]" value="10" /></td>
<td><img onmouseover="showImage(this)" onmouseout="hideImage()" src="images/art/thumbs/05030.jpg" class="thumbnail" /></td>
<span><img id="full" /></span>
A: I think this is what you're trying to do. You can handle it without JavaScript:
img{
width:150px;
height: 150px;
transition: .5s ease-in-out;
}
img:hover{
transform: scale(1.3)
}
<img src="https://imgs.search.brave.com/uWn5s0ly7BjMZKuhrBAPx9ribLL5QuMPt04vwwqQqak/rs:fit:759:225:1/g:ce/aHR0cHM6Ly90c2Uz/Lm1tLmJpbmcubmV0/L3RoP2lkPU9JUC51/NWpkMkliUnhZLTJY/YnFQWUM0QUFnSGFF/byZwaWQ9QXBp" alt="img1" />
<img src="https://imgs.search.brave.com/v74JGUc9jLt5bZnyFimXTlAQOJgbrxTygj8i-ZDueTc/rs:fit:759:225:1/g:ce/aHR0cHM6Ly90c2Uz/Lm1tLmJpbmcubmV0/L3RoP2lkPU9JUC5u/VTJ3THpWbjJPdlRh/ZDFCOTl1cU93SGFF/byZwaWQ9QXBp" alt="img2" />
A: Can you please try this?
First, collect all the images.
Then add event listeners for the mouseover and mouseout events to enlarge and hide the images.
const images = document.querySelectorAll('.thumbnail');
for (let i = 0; i < images.length; i++){
images[i].addEventListener('mouseover', (e) =>{
const image = e.target;
image.src = 'https://source.unsplash.com/random/400x400';
image.style.width = "100%";
})
images[i].addEventListener('mouseout', (e) =>{
const image = e.target;
image.style.visibility = "hidden";
})
}
*,
*::before,
*::after {
box-sizing: border-box;
}
body{
min-height: 100vh;
overflow: hidden;
display: grid;
place-content: center;
margin:0;
background-color: bisque;
}
.container{
display: grid;
grid-template-columns: repeat(2,1fr);
grid-template-rows: repeat(2,1fr);
gap: 5rem;
}
.thumbnail {
border: 2px solid black;
width: 75%;
display: block;
cursor: pointer;
}
<div class="container">
<img
src="https://picsum.photos/400/400?random=2"
alt=""
class="thumbnail"
/>
<img
src="https://picsum.photos/400/400?random=3"
alt=""
class="thumbnail"
/>
<img
src="https://picsum.photos/400/400?random=4"
alt=""
class="thumbnail"
/>
<img
src="https://picsum.photos/400/400?random=6"
alt=""
class="thumbnail"
/>
</div>
| |
doc_23530513
|
Is it okay to delete it?
A: Yes, it's perfectly fine to delete them.
These folders will come back with a fresh installation of Anaconda and the other mentioned packages.
| |
doc_23530514
|
var el = document.createElement('li');
el.className = "list-group-item";
el.attr({"data-content": contentForm, "data-type": contentType, "data-number": value});
The error is 'Uncaught TypeError: el.attr is not a function'
I follow another discussion here, and I cannot find the issue...
A: attr() is a jQuery function; you can't call it on a plain DOM element. Either use setAttribute, or wrap el in a jQuery object before calling attr():
el.setAttribute("data-content", contentForm);
//OR
$(el).attr({"data-content": contentForm, "data-type": contentType, "data-number": value});
If you want to add data attributes you can also use data(), e.g.:
$(el).data({content: contentForm, type: contentType, number: value});
Hope this helps.
A: If you're going to use jQuery, do the whole thing in jQuery.
var el = $('<li/>').addClass('list-group-item').attr({"data-content": contentForm, "data-type": contentType, "data-number": value});
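If you'd rather avoid jQuery entirely, plain DOM APIs cover this use case too. Here is a small sketch (my addition, not from the answers above); the variable names contentForm, contentType, and value come from the question:

```javascript
// Write every entry of `attrs` as an attribute on `el`.
// Works with anything exposing a setAttribute(name, value) method.
function setDataAttrs(el, attrs) {
  for (const [name, value] of Object.entries(attrs)) {
    el.setAttribute(name, value);
  }
  return el;
}

// In the browser:
// const el = document.createElement('li');
// el.className = 'list-group-item';
// setDataAttrs(el, { 'data-content': contentForm, 'data-type': contentType, 'data-number': value });
```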
| |
doc_23530515
|
Here is a simplified example of what I'm trying to do:
INSERT INTO Mytable
(field1
,field2
,field3
,nonKeyUniqueInt)
SELECT
field1
,field2
,field3
,(SELECT MAX(nonKeyUniqueInt)+1 FROM mytable)
FROM
mytable
WHERE
(conditions)
However, this doesn't work because the SELECT MAX subquery is evaluated only once, giving all my new rows the same value for that field. Given the following rows to copy:
field1 field2 field3 nonKeyUniqueInt
x y z 1
a b c 2
I get output of:
field1 field2 field3 nonKeyUniqueInt
x y z 1
a b c 2
x y z 3
a b c 3
Is what I'm trying to do possible?
A: The problem is that the subquery gets evaluated once for the insert, not once per row. The solution is to use row_number():
INSERT INTO Mytable(field1, field2, field3, nonKeyUniqueInt)
SELECT field1, field2, field3,
x.maxk + row_number() over (order by (select NULL))
FROM mytable CROSS JOIN
(SELECT MAX(nonKeyUniqueInt) as maxk FROM mytable) x
WHERE (conditions);
I moved the max calculation to the FROM clause to make it clear that it is evaluated only once.
| |
doc_23530516
|
var Cat = (function( cat ){
cat.speak = function(){ return 'meew'; };
return cat;
} ( Cat || {} ));
// @prepros-prepend cat.eat.js
The cat.eat.js consists of:
var Cat = (function( cat ){
cat.eat = function(){ return 'om nom nom'; };
return cat;
} ( Cat || {} ));
I use Prepros to minify and concatenate my project files.
Somehow, Prepros doesn't concatenate the two JS files. Am I missing something?
A: Please remove the space between the slashes and @:
var Cat = (function( cat ){
cat.speak = function(){ return 'meew'; };
return cat;
} ( Cat || {} ));
//@prepros-prepend cat.eat.js
| |
doc_23530517
|
Here is the function:
public function login($username, $token, $redirect)
{
$accountLogin = AccountLogin::Where('username', $username)
->Where('token', $token)
->first();
if ($accountLogin) {
try {
Auth::login($accountLogin);
print_r(Auth::check()); // prints 1
header('Location: ' . $redirect);
exit;
} catch (\Exception $e) {
return $e->getMessage();
}
}
echo(array(
'status' => 'error',
'message' => 'Error logging in'
));
}
My problem is that the user seems to be logged out after the redirect.
Within the function above I can see that the account is found, Auth::check() returns 1, and if I echo out Auth::user() I get the user's info.
We then attempt a redirect, which should take us to the route passed in as a parameter (you need to be authenticated for all these routes), but instead I am redirected to the home page and logged out. If I log in from that page it takes me to the URL I wanted to reach originally (so it's like it remembers where I want to go, but just doesn't recognise that I'm logged in).
Both sites are currently running locally via apache.
TIA.
A: Did you extend Authenticatable in your AccountLogin model?
You could try using the loginUsingId($id, $remember = false) function with the user id.
A: Instead of redirecting this way:
header('Location: ' . $redirect);
exit;
use this:
return redirect($redirect);
Laravel has its own redirect() helper method, which you should use instead. Redirecting your way and calling exit at the end bypasses Laravel's session handling, so the user id stored in the session is never persisted and it appears that the user is not logged in.
| |
doc_23530518
|
A: Here is the clear answer:
* First, you need to purchase a domain and verify it in SES (you've done this already, so you're good to go for the next step).
* You need to write a support ticket to move your SES account out of sandbox mode, as it's in sandbox mode by default (you need to provide all the info AWS requires, in detail):
moving out from sandbox mode
This might take around a day; eventually you get production SES status, which you can check in the statistics section of the SES console.
* Next, go to the AWS WorkMail service console and create the email accounts to be used as sender or receiver in your platform, under your purchased domain (i.e., if your domain is abc.com: info@abc.com or support@abc.com).
When I say creating email accounts, I mean you create an email address, username, and password for each account.
* Finally, if you need to check the inbox for the accounts created above, WorkMail provides a cool web client for it.
Here is the WorkMail web client documentation from AWS
It says this:
The web client URL looks like this: https://alias.awsapps.com/mail. Replace alias with the alias you received from your site administrator.
Here, alias is configured by you when you create your organization in WorkMail console.
The reason SES requires domain verification is akin to ID verification of the email sender; verifying third-party email addresses gives us the flexibility to work with email addresses not registered in SES, and also allows development and testing in sandbox mode before the domain is registered.
Cheers
A: The email address you want to verify must have an existing mail service before you can validate the address in SES.
From AWS docs, about receiving email
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email.html
When you receive email, Amazon SES processes it according to instructions you provide. For example, Amazon SES can deliver incoming mail to an Amazon S3 bucket, publish it to an Amazon SNS topic, or send it to Amazon WorkMail.
If you need an inbox service, use Amazon WorkMail.
Creating an IAM user doesn't create an inbox, and SES has no inbox capability at all. The point of validation is to allow sending on behalf of the service. In certain use cases you can process inbound email via Lambda, store attachments on S3, etc., but there is no POP3/IMAP inbox-like service included in SES.
Creating an IAM user is not required to validate your email. That is only for authentication purposes for accessing AWS account services.
A: AWS SES can receive emails, and mostly this is used for automated email processing.
If you have verified as an identity that you own the domain (by adding a TXT record to your domain's DNS table), then by default you have verified all email addresses that fall under that domain.
You don't have to follow the steps to verify individual emails by clicking the link received in the emails.
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-domains.html
From your example: since the domain example.com is verified, you don't have to verify emails (user@example.com) under the same domain again.
Individual email verification is for scenarios where you can't verify the domain by placing DNS records. There you will not be able to receive emails, but if you still want to send emails from an address, you can verify it by clicking the link you receive in your inbox. This can be done with Gmail or other mail providers.
| |
doc_23530519
|
First, I wanted to change the date format, but I gave up on that. Now I'm focusing on trying to insert my data in phpMyAdmin.
| |
doc_23530520
|
I searched around for fixes for a while and found a few things, but it's still not working. The JavaScript I'm using should be looking for keycode 13, which is Enter, but when I hit Enter the field clears itself without submitting the form and the keycode handler is not triggered. The keycode check only returns results for letters and not for Shift/Enter.
Here's the HTML of my current solution:
<form action="chatscreen.php" name="loginform" method="post">
<p>Please enter your name to continue:</p>
<label for="name">Name:</label>
<input type="text" name="name" id="name" onkeyup="whichButton("loginform","enter")"/>
<button type="submit" name="enter" id="enter" value="Enter">Button</button>
</form>
And here's the javascript I tried to implement as a fix:
<script type="text/javascript">
function whichButton(formname,elementname) {
alert("got a key = " + event.keyCode);
if (event.keyCode === 13) {
var followingInput = document.getElementById(elementname);
document.formname.elementname.click();
}
}
</script>
A: <script type="text/javascript">
function whichButton(formname, elementname) {
var keyID = (window.event) ? window.event.keyCode : 0; // relies on the IE-style global event object
if (keyID === 13) {
document.forms[formname].elements[elementname].click();
}
}
</script>
A: Simple; just set your defaultbutton property
<form id="Form1" defaultbutton="enter" action="chatscreen.php" name="loginform" method="post">
Hope that helps :)
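A modern alternative sketch (my addition, not one of the original answers): listen for keydown and check KeyboardEvent.key instead of inline onkeyup handlers and numeric keyCodes. The ids ('name', 'loginform') are the ones from the question's markup.

```javascript
// Pure helper so the key check can be tested without a browser.
function isSubmitKey(event) {
  return event.key === 'Enter';
}

// DOM wiring, guarded so the file also loads outside a browser.
if (typeof document !== 'undefined') {
  document.getElementById('name').addEventListener('keydown', function (event) {
    if (isSubmitKey(event)) {
      event.preventDefault();                // keep the field from clearing
      document.forms['loginform'].submit();  // submit the form directly
    }
  });
}
```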
| |
doc_23530521
|
Is this in fact creating a memory leak?
The line in question is:
var newSeasons = temp = {}; temp[yr] = data;
You can test the code here.
function parseData (yr, stat, data) {
// The way I've been taught
var oldSeasons = {};
oldSeasons[yr] = data;
console.log('The way Ive been taught\n');
console.log(oldSeasons);
console.log('\n****************************\n');
// Experimental way
var newSeasons = temp = {}; temp[yr] = data;
console.log('Experimental way');
console.log(newSeasons);
}
var data = {
Pos: '1B',
Age: '33',
G: '116',
stat:'batting',
yr: '2005',
H:'89',
R: '42',
RBI: '48'
};
parseData(data.yr,data.stat,data);
A: Since you do not declare temp with var temp, you are in fact assigning to window.temp, i.e. to a global variable (assuming we're talking about JS in the browser). The object will not be garbage collected when it's no longer needed, unless you explicitly delete the global reference or reassign it.
Edit: This is not a "memory leak" per se: every time you call the function, you're reusing the same global reference, so there is no risk of gradually filling the available space with useless data. However, it constitutes a suboptimal use of resources.
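To make the distinction concrete, here is a small sketch (my addition). The leaky assignment is wrapped in a Function constructor because its body always runs in sloppy mode, so the demo works even if the surrounding file is strict:

```javascript
// `var a = temp = {}` declares `a` but assigns to undeclared `temp`,
// which in sloppy mode creates a global property.
const leaky = new Function('var a = temp = {}; return a;');

leaky();
console.log(typeof globalThis.temp); // "object": `temp` escaped to the global scope

// The fix: declare both names, so nothing leaks.
function parseDataFixed(yr, data) {
  var temp = {};         // local now
  var newSeasons = temp; // same object, two local names
  temp[yr] = data;
  return newSeasons;
}

console.log(parseDataFixed('2005', { H: '89' })); // { '2005': { H: '89' } }
```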
| |
doc_23530522
|
I'm imagining something analogous to /usr/bin/time
A: (This is an already answered, old question... but just for the record :)
I was inspired by Yang's script and came up with this small tool, named memusg. I simply increased the sampling rate to 0.1 to handle much shorter-lived processes. Instead of monitoring a single process, I made it measure the RSS sum of the process group. (Yeah, I write lots of separate programs that work together.) It currently works on Mac OS X and Linux. The usage was made similar to that of time:
memusg ls -alR / >/dev/null
It only shows the peak for the moment, but I'm interested in slight extensions for recording other (rough) statistics.
It's good to have such a simple tool for taking a quick look before starting any serious profiling.
A: Well, if you really want to see the memory peak and some more in-depth statistics, I recommend using a profiler such as Valgrind. A nice Valgrind front-end is Alleyoop.
A: Valgrind one-liner:
valgrind --tool=massif --pages-as-heap=yes --massif-out-file=massif.out ./test.sh; grep mem_heap_B massif.out | sed -e 's/mem_heap_B=\(.*\)/\1/' | sort -g | tail -n 1
Note use of --pages-as-heap to measure all memory in a process. More info here: http://valgrind.org/docs/manual/ms-manual.html
This will slow down your command significantly.
A: On Linux:
Use /usr/bin/time -v <program> <args> and look for "Maximum resident set size".
(Not to be confused with the Bash time built-in command! So use the full path, /usr/bin/time)
For example:
> /usr/bin/time -v ./myapp
User time (seconds): 0.00
. . .
Maximum resident set size (kbytes): 2792
. . .
On BSD, MacOS:
Use /usr/bin/time -l <program> <args>, looking for "maximum resident set size":
>/usr/bin/time -l ./myapp
0.01 real 0.00 user 0.00 sys
1440 maximum resident set size
. . .
A: Here's a one-liner that doesn't require any external scripts or utilities and doesn't require you to start the process via another program like Valgrind or time, so you can use it for any process that's already running:
grep ^VmPeak /proc/$PID/status
(replace $PID with the PID of the process you're interested in)
A: [Edit: Works on Ubuntu 14.04: /usr/bin/time -v command. Make sure to use the full path.]
Looks like /usr/bin/time does give you that info, if you pass -v (this is on Ubuntu 8.10). See, e.g., Maximum resident set size below:
$ /usr/bin/time -v ls /
....
Command being timed: "ls /"
User time (seconds): 0.00
System time (seconds): 0.01
Percent of CPU this job got: 250%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 0
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 315
Voluntary context switches: 2
Involuntary context switches: 0
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
A: You can use a tool like Valgrind to do this.
A: Here is (based on the other answers) a very simple script that watches an already running process. You just run it with the pid of the process you want to watch as the argument:
#!/usr/bin/env bash
pid=$1
while ps $pid >/dev/null
do
ps -o vsz= ${pid}
sleep 1
done | sort -n | tail -n1
Example usage:
max_mem_usage.sh 23423
A: Perhaps (gnu) time(1) already does what you want. For instance:
$ /usr/bin/time -f "%P %M" command
43% 821248
But other profiling tools may give more accurate results depending on what you are looking for.
A: On MacOS Sierra use:
/usr/bin/time -l commandToMeasure
You can pipe the output through grep to extract just the line you want.
A: Heaptrack is a KDE tool with both a GUI and a text interface. I find it more suitable than Valgrind for understanding the memory usage of a process because it provides more details and flame graphs. It is also faster because it does less checking than Valgrind. And it gives you the peak memory usage.
Anyway, tracking RSS and VSZ is misleading because pages can be shared between processes; that is why memusg exists. What you should really do is track the sum of Pss in /proc/[pid]/smaps, or use pmap. GNOME System Monitor used to do so, but it became too expensive.
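As a rough illustration of the Pss approach (a sketch only: the field name follows the Linux smaps format, and the sample text below is fabricated for the demo), summing the Pss lines could look like this:

```python
def total_pss_kb(smaps_text):
    """Sum all 'Pss:' entries from the text of /proc/<pid>/smaps (values in kB)."""
    total = 0
    for line in smaps_text.splitlines():
        if line.startswith("Pss:"):
            # smaps lines look like: "Pss:                 12 kB"
            total += int(line.split()[1])
    return total

# Fabricated excerpt for demonstration; on Linux you would instead read
# the real file, e.g. open(f"/proc/{pid}/smaps").read().
sample = "Rss:      20 kB\nPss:      12 kB\nPss:       4 kB\n"
print(total_pss_kb(sample))  # 16
```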
A: /usr/bin/time may do what you want, actually. Something like:
/usr/bin/time --format='(%Xtext+%Ddata %Mmax)'
See time(1) for details...
A: time -f '%M' <run_program>
A: If the process runs for at least a couple seconds, then you can use the following bash script, which will run the given command line then print to stderr the peak RSS (substitute for rss any other attribute you're interested in). It's somewhat lightweight, and it works for me with the ps included in Ubuntu 9.04 (which I can't say for time).
#!/usr/bin/env bash
"$@" & # Run the given command line in the background.
pid=$! peak=0
while true; do
sleep 1
sample="$(ps -o rss= $pid 2> /dev/null)" || break
let peak='sample > peak ? sample : peak'
done
echo "Peak: $peak" 1>&2
A: Because /usr/bin/time is not present in many modern distributions (only the Bash built-in time), you can use the BusyBox time implementation with the -v argument:
busybox time -v uname -r
Its output is similar to GNU time output.
BusyBox is pre-installed in most Linux distros (Debian, Ubuntu, etc.). If you are using Arch Linux, you can install it with:
sudo pacman -S busybox
A: Use Massif: http://valgrind.org/docs/manual/ms-manual.html
A: Re-inventing the wheel with a hand-made Bash script. Quick and clean.
My use case: I wanted to monitor a Linux machine with limited RAM and take a snapshot of per-container usage when it comes under heavy load.
#!/usr/bin/env bash
threshold=$1
echo "$(date '+%Y-%m-%d %H:%M:%S'): Running free memory monitor with threshold $threshold%.."
while true
do
freePercent=$(free -m | grep Mem: | awk '{print ($7/$2)*100}')
if (( $(awk 'BEGIN {print ("'$freePercent'" < "'$threshold'")}') ))
then
echo "$(date '+%Y-%m-%d %H:%M:%S'): Free memory $freePercent% is less than $threshold%"
free -m
docker stats --no-stream
sleep 60
echo ""
else
echo "$(date '+%Y-%m-%d %H:%M:%S'): Sufficient free memory available: $freePercent%"
fi
sleep 30
done
Sample output:
2017-10-12 13:29:33: Running free memory monitor with threshold 30%..
2017-10-12 13:29:33: Sufficient free memory available: 69.4567%
2017-10-12 13:30:03: Sufficient free memory available: 69.4567%
2017-10-12 16:47:02: Free memory 18.9387% is less than 30%
your custom command output
A: On macOS, you can use DTrace instead. The "Instruments" app is a nice GUI for that; it comes with Xcode, afaik.
A:
Please be sure to answer the question. Provide details and share your research!
Sorry, this is my first time here and I can only ask questions…
I used the suggested:
valgrind --tool=massif --pages-as-heap=yes --massif-out-file=massif.out ./test.sh; grep mem_heap_B massif.out | sed -e 's/mem_heap_B=\(.*\)/\1/' | sort -g | tail -n 1
then:
grep mem_heap_B massif.out
...
mem_heap_B=1150976
mem_heap_B=1150976
...
this is very different from what the top command shows at a similar moment:
14673 gu27mox 20 0 3280404 468380 19176 R 100.0 2.9 6:08.84 pwanew_3pic_com
What are the units Valgrind measures in?
/usr/bin/time -v ./test.sh never produced an answer; you must feed the executable directly to /usr/bin/time, like:
/usr/bin/time -v pwanew_3pic_compass_2008florian3_dfunc.static card_0.100-0.141_31212_resubmit1.dat_1.140_1.180 1.140 1.180 31212
Command being timed: "pwanew_3pic_compass_2008florian3_dfunc.static card_0.100-0.141_31212_resubmit1.dat_1.140_1.180 1.140 1.180 31212"
User time (seconds): 1468.44
System time (seconds): 7.37
Percent of CPU this job got: 99%
Elapsed (wall clock) time (h:mm:ss or m:ss): 24:37.14
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 574844
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 74
Minor (reclaiming a frame) page faults: 468880
Voluntary context switches: 1190
Involuntary context switches: 20534
Swaps: 0
File system inputs: 81128
File system outputs: 1264
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
A: htop is the best command to see which process is using how much RAM.
For more detail:
http://manpages.ubuntu.com/manpages/precise/man1/htop.1.html
| |
doc_23530523
|
dtDate Value
2010-01-01 00:00:00.000 5.0000
2011-01-01 00:00:00.000 15.0000
2012-01-01 00:00:00.000 25.0000
2013-01-01 00:00:00.000 35.0000
2014-01-01 00:00:00.000 45.0000
Now I want to use this DataTable for a monthly process, so for all months in year 2010 I want to use Value = 5. E.g. if my date is 02/01/2010 (Feb 2010), it should return 5.
How do I use the DataTable.Select method? I want an effective filter expression.
A: The DataTable.Select(filterExpression) overload method takes a filterExpression parameter that uses the same syntax used to create calculated columns via the Expression property.
You could use this code to get required row
' (myDataTable is your DataTable instance)
Dim dt As DateTime = New DateTime(2010, 1, 1, 0, 0, 0)
Dim r As DataRow() = myDataTable.Select("dtDate = #" + dt.ToString("yyyy-MM-dd") + "#")
If your intention is to get the record for year 2010 passing any date with year = 2010 then you could use this syntax
' Supposing the date passed is 1/May/2010 and you want the record with 2010 as year.
Dim dt As DateTime = New DateTime(2010, 5, 1, 0, 0, 0)
Dim r As DataRow() = myDataTable.Select("dtDate = #01/01/" + dt.Year.ToString() + "#")
| |
doc_23530524
|
unless confirm_token.errors.empty?
raise ActionController::RoutingError.new('Not Found')
end
For this I need to check whether the GET parameter reset_password_token is the same as the one in the table column reset_password_token.
There is a method for that, confirm_by_token, but it is for validating the email URL and checks the column confirmation_token.
Is there a built-in Devise method for checking whether a reset_password_token is valid, or do I need to create it myself?
| |
doc_23530525
|
resource "aws_apigatewayv2_route" "signup_route" {
api_id = "${aws_apigatewayv2_api.signup_redirect.id}"
route_key = "POST /signup"
target = "integrations/${aws_apigatewayv2_integration.lambda_integration.id}"
}
resource "aws_apigatewayv2_stage" "staging_stage" {
api_id = "${aws_apigatewayv2_api.signup_redirect.id}"
name = "staging"
auto_deploy = true
route_settings {
route_key = "POST /signup"
logging_level = "INFO"
detailed_metrics_enabled = true
}
}
I got below error when deploying:
Error: error creating API Gateway v2 stage: NotFoundException: Unable to find Route by key POST /signup within the provided RouteSettings
It seems that the stage was deployed before the route was created. How can I add a dependency on stage to depend on route?
A: The best way to create dependencies in Terraform is to write references to the resources you want to depend on. In this case, that might look like this:
resource "aws_apigatewayv2_route" "signup_route" {
api_id = "${aws_apigatewayv2_api.signup_redirect.id}"
route_key = "POST /signup"
target = "integrations/${aws_apigatewayv2_integration.lambda_integration.id}"
}
resource "aws_apigatewayv2_stage" "staging_stage" {
api_id = aws_apigatewayv2_api.signup_redirect.id
name = "staging"
auto_deploy = true
route_settings {
route_key = aws_apigatewayv2_route.signup_route.route_key
logging_level = "INFO"
detailed_metrics_enabled = true
}
}
Because the route_key in route_settings refers to aws_apigatewayv2_route.signup_route, Terraform will see this as a dependency on that resource. Letting dependencies be implied like this is nice because it allows you to focus on describing how data propagates from one resource to another, and if you later remove this route_settings block then the dependency it implies would be automatically removed without you needing to remember to update some other declaration.
However, in some cases the design of an underlying system makes this sort of explicit data-flow dependency impossible. One example is AWS IAM roles, where the policies attached to a role are separate from the role itself: the natural dataflow-inferred relationship is that both the policy and the object that will assume the role depend on the role, while the object assuming the role doesn't naturally depend on the policy. In that case we need to add an explicit depends_on to ensure that the system doesn't try to assume the role before its policies have been applied:
resource "aws_iam_role" "for_lambda" {
name = "lambda_function"
assume_role_policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
})
}
resource "aws_iam_role_policy" "for_lambda" {
# (policy that the lambda function needs to do its work)
}
resource "aws_lambda_function" "example" {
name = "example"
# ...
# This reference makes the function depend on the role,
# but the role isn't ready to use until the associated
# policy has been attached to it too.
role = aws_iam_role.for_lambda.arn
# ...so we need to explicitly declare this hidden dependency:
depends_on = [aws_iam_role_policy.for_lambda]
}
There's more information on how dependencies work in Terraform in Resource Dependencies.
A: It doesn't look like aws_apigatewayv2_route exports any useful attributes that we could use. But does depends_on not work in this case?
resource "aws_apigatewayv2_stage" "staging_stage" {
depends_on = [aws_apigatewayv2_route.signup_route]
...
https://www.terraform.io/docs/configuration/resources.html#depends_on-explicit-resource-dependencies
=====
(EDIT here because I don't have enough rep yet to comment on the other answer) I didn't realize you could use inputs from one resource as an attribute. That's pretty nifty, and definitely the way to go.
| |
doc_23530526
|
ret = (0, 1)  # (0, 1) is the response received from the server
self.assertTrue(ret == '(0, 1)')  # is this the right way to do it?
A: Assuming the response from the server is a tuple, you could test it with a simple test case as follows:
import unittest
response = (0, 1)
class SimpleTest(unittest.TestCase):
# Returns True or False.
def test(self):
self.assertTrue((response == (0, 1)), "The response is not (0, 1)")
if __name__ == '__main__':
unittest.main()
If it is not a tuple but a string that you receive, you could change the value in the assertTrue condition from (0, 1) to "(0, 1)".
Please refer to the documentation on unittest for more details.
If you don't want to use unittest, but you do want to make sure that the response is correct, you could also use the assert statement (however, there might be better ways to check this):
response = (0, 1)
assert(response == (0, 1)) # This will do nothing
assert(response == (1, 1)) # This results in an AssertionError
Due to the AssertionError your program will stop. If you don't want this, you could use a try-except block:
response = (0, 1)
try:
assert(response == (0, 1))
except AssertionError:
print("The response is not correct.")
EDIT:
As the response you are receiving is of type MQTTMessageInfo, you want to compare against this. I didn't find much documentation on this type, but you can see what the class looks like on Github.
Here, you can see the response you are seeing is a string representation of the following:
def __str__(self):
return str((self.rc, self.mid))
The first value in (0, 1) is the self.rc and the second is self.mid. If you only want to assert that these two values are indeed correct, you can modify the test case above to something like this:
self.assertTrue(response.rc == 0 and response.mid == 1, "The MQTTMessageInfo is not rc=0 and mid=1")
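To make that concrete (using a stand-in class here, since the real type is paho-mqtt's MQTTMessageInfo and the library may not be installed), a minimal sketch:

```python
class FakeMessageInfo:
    """Stand-in mimicking paho-mqtt's MQTTMessageInfo: exposes rc and mid."""
    def __init__(self, rc, mid):
        self.rc = rc
        self.mid = mid

    def __str__(self):
        # Mirrors the MQTTMessageInfo.__str__ shown above: str((self.rc, self.mid))
        return str((self.rc, self.mid))

response = FakeMessageInfo(rc=0, mid=1)
assert response.rc == 0 and response.mid == 1
print(str(response))  # (0, 1)
```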
| |
doc_23530527
|
*
*From main package X, import packages Y and Z.
*Package M exports a go callback F.
*Packages X and Y are both built with accompanying C files, both want
to call F from C source code.
Generally speaking, I'm trying to figure out how to call a callback from accompanying C files in other modules which are used to build a final application. I couldn't figure out how to achieve this or anything similar. I'm also interested in convoluted solutions.
A: I don't see a way to call a Go function across packages, but all cgo packages are linked into the same binary and can call each other. This means that you can export M.F to a C function in package M and call that C function from packages Y and Z.
m/m.go:
package m
// void F();
import "C"
import "fmt"
//export F
func F() {
fmt.Println("m.f")
}
m/m.h:
void m_f();
m/m.c:
#include <stdio.h>
#include "_cgo_export.h"
#include "m.h"
void m_f() {
printf("m_f\n");
F();
}
y/y.go:
package y
// The LDFLAGS lines below are needed to prevent linker errors
// since not all packages are present while building intermediate
// packages. The darwin build tag is used as a proxy for clang
// versus gcc because there doesn't seem to be a better way
// to detect this.
// #cgo darwin LDFLAGS: -Wl,-undefined -Wl,dynamic_lookup
// #cgo !darwin LDFLAGS: -Wl,-unresolved-symbols=ignore-all
// #include "y.h"
import "C"
import (
"fmt"
_ "m"
)
func Y() {
fmt.Println("y.Y")
C.y()
}
y/y.h:
void y();
y/y.c:
#include <stdio.h>
#include "../m/m.h"
void y() {
printf("y.C.y\n");
m_f();
}
A: Here is an example that will accept any Go callback (not thread-safe).
b.go:
package b
// typedef void (*cbFunc) ();
// void do_run(cbFunc);
// void goCallback();
import "C"
//export goCallback
func goCallback() {
if goCallbackHolder != nil {
goCallbackHolder()
}
}
var goCallbackHolder func()
func Run(callback func()) {
goCallbackHolder = callback
C.do_run(C.cbFunc(C.goCallback))
}
b.c:
#include "_cgo_export.h"
void do_run(void (*callback)())
{
callback();
}
A: I couldn't make it work in a simple fashion, IMO.
Given main package X that imports Y and Z, both having to call (from C source code) F declared in package M,
I had to:
*
*Create a small wrapper W1 for F in Y and export it to be called from Y's C source.
*Create a small wrapper W2 for F in Z and export it to be called from Z's C source.
*In Y CGO CPPFLAGS define -DCALLBACK=W1
*In Z CGO CPPFLAGS define -DCALLBACK=W2
*From C source code, anywhere, I'm now able to refer to F as CALLBACK (yeah, internally it's all different stuff, which I refer to using a single name at one end to call a single function at the other end).
This is convoluted, but it works, although configuring such macros and producing little wrappers is not ideal. If anyone could describe a simpler procedure, I would be glad. Everything I tried ended up with duplicated symbols or non-visible declarations.
| |
doc_23530528
|
1/2
1/3
10/20
12/31
I simply need to Split or Parse this on "/". Is there a simple function that will allow me to do this?
A: The example below is for BigQuery Standard SQL
#standardSQL
WITH `project.dataset.table` AS (
SELECT 1 id, '1/2' list UNION ALL
SELECT 2, '1/3' UNION ALL
SELECT 3, '10/20' UNION ALL
SELECT 4, '15/' UNION ALL
SELECT 5, '12/31'
)
SELECT id,
SPLIT(list, '/')[SAFE_OFFSET(0)] AS first_element,
SPLIT(list, '/')[SAFE_OFFSET(1)] AS second_element
FROM `project.dataset.table`
-- ORDER BY id
with result as below
Row id first_element second_element
1 1 1 2
2 2 1 3
3 3 10 20
4 4 15
5 5 12 31
A: Check the following SQL functions:
*
*https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#split
*https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#regexp_extract_all
For example with SPLIT you can do
SELECT parts[SAFE_OFFSET(0)], parts[SAFE_OFFSET(1)]
FROM (SELECT SPLIT(field) parts FROM UNNEST(["1/2", "10/20"]) field)
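For intuition about SAFE_OFFSET (a Python sketch of the semantics, not BigQuery itself): SAFE_OFFSET returns NULL instead of raising an error when the index is out of range, roughly like this:

```python
def safe_offset(parts, i):
    """Mimic arr[SAFE_OFFSET(i)]: return None instead of erroring out of range."""
    return parts[i] if 0 <= i < len(parts) else None

for s in ["1/2", "10/20", "15/"]:
    parts = s.split("/")
    # Note: "15/".split("/") gives ["15", ""], so the second element is an
    # empty string there, matching the blank second_element in the result above.
    print(safe_offset(parts, 0), safe_offset(parts, 1))

print(safe_offset(["15"], 1))  # None: index out of range
```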
| |
doc_23530529
|
One of my functions takes a square matrix d and a scalar alpha, and performs the elementwise operation alpha/(alpha+d). Background: this function is used to test which value of alpha is 'best', so it is in a loop where d is always the same, but alpha varies.
All of the following time scales are an average of 100 instances of running the function.
In numpy, it takes around 0.27 seconds to do this, and the code is as follows:
def kfun(d,alpha):
k = alpha /(d+alpha)
return k
but xtensor takes about 0.36 seconds, and the code looks like this:
xt::xtensor<double,2> xk(xt::xtensor<double,2> d, double alpha){
return alpha/(alpha+d);
}
I've also attempted the following version using std::vector, but this is something I do not want to use in the long run, even though it only took 0.22 seconds.
std::vector<std::vector<double>> kloops(std::vector<std::vector<double>> d, double alpha, int d_size){
for (int i = 0; i<d_size; i++){
for (int j = 0; j<d_size; j++){
d[i][j] = alpha/(alpha + d[i][j]);
}
}
return d;
}
I've noticed that operator/ in xtensor uses "lazy broadcasting"; is there maybe a way to make it immediate?
EDIT:
In Python, the function is called as follows, and timed using the "time" package
t0 = time.time()
for i in range(100):
kk = k(dsquared,alpha_squared)
print(time.time()-t0)
In C++ I call the function as follows, and it is timed using chrono:
//d is saved as a 1D npy file, an artefact from old code
auto sd2 = xt::load_npy<double>("/path/to/d.npy");
shape = {7084, 7084};
xt::xtensor<double, 2> xd2(shape);
for (int i = 0; i<7084;i++){
for (int j=0; j<7084;j++){
xd2(i,j) = (sd2(i*7084+j));
}
}
auto start = std::chrono::steady_clock::now();
for (int i = 0;i<10;i++){
matrix<double> kk = kfun(xd2,4000*4000,7084);
}
auto end = std::chrono::steady_clock::now();
std::chrono::duration<double> elapsed_seconds = end-start;
std::cout << "k takes: " << elapsed_seconds.count() << "\n";
If you wish to run this code, I'd suggest using xd2 as a symmetric 7084x7084 random matrix with zeros on the diagonal.
The output of the function, a matrix called k, then goes on to be used in other functions, but I still need d to be unchanged as it will be reused later.
END EDIT
To run my C++ code I use the following line in the terminal:
cd "/path/to/src/" && g++ -mavx2 -ffast-math -DXTENSOR_USE_XSIMD -O3 ccode.cpp -o ccode -I/path/to/xtensorinclude && "/path/to/src/"ccode
Thanks in advance!
A: A problem with the C++ implementation may be that it creates one or possibly even two temporary copies that could be avoided. The first copy comes from not passing the argument by reference (or perfect forwarding). Without looking at the rest of the code it's hard to judge whether this has an impact on the performance or not. The compiler may move d into the method if it's guaranteed not to be used after the call to xk(), but it is more likely that the data is copied into d.
To pass by reference, the method could be changed to
xt::xtensor<double,2> xk(const xt::xtensor<double,2>& d, double alpha){
return alpha/(alpha+d);
}
To use perfect forwarding (and also enable other xtensor containers like xt::xarray or xt::xtensor_fixed), the method could be changed to
template<typename T>
xt::xtensor<double,2> xk(T&& d, double alpha){
return alpha/(alpha+d);
}
Furthermore, it's possible that you can save yourself from reserving memory for the return value. Again, it's hard to judge without seeing the rest of the code. But if the method is used inside a loop, and the return value always has the same shape, then it can be beneficial to create the return value outside of the loop and return by reference. To do this, the method could be changed to:
template<typename T, typename U>
void xk(T& r, U&& d, double alpha){
r = alpha/(alpha+d);
}
If it is guaranteed that d and r do not point to the same memory, you can further wrap r in xt::noalias() to avoid a temporary copy before assigning the result. The same is true for the return value of the function in case you do not return by reference.
Good luck and happy coding!
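The same "reuse the output buffer" idea can be sketched on the numpy side of the question (a sketch under the assumption that d must stay unchanged; the function name kfun_into is made up here):

```python
import numpy as np

def kfun_into(out, d, alpha):
    """Write alpha/(alpha+d) into a preallocated buffer instead of
    allocating a fresh array on every call inside the alpha loop."""
    np.add(d, alpha, out=out)        # out = d + alpha
    np.divide(alpha, out, out=out)   # out = alpha / (d + alpha)
    return out

d = np.array([[0.0, 1.0], [1.0, 0.0]])
buf = np.empty_like(d)
kfun_into(buf, d, 1.0)
print(buf)  # element-wise alpha/(alpha+d); d itself is unchanged
```

This mirrors the C++ advice above: allocate the result once outside the loop and write into it each iteration.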
| |
doc_23530530
|
Here is my OrderStatusMnemonic Configuration class that reads a txt file:
@Configuration
public class OrderStatusMnemonic {
private static final Logger log = LoggerFactory.getLogger(OrderStatusMnemonic.class);
private ResourceLoader resourceLoader;
@Autowired
public OrderStatus orderStatus;
public OrderStatusMnemonic(ResourceLoader resourceLoader) {
this.resourceLoader = resourceLoader;
}
@PostConstruct
public void init() {
try {
log.info("Loading order-status-mnemonic file ");
Resource resource = resourceLoader.getResource("classpath:order-status-mnemonic.txt");
InputStream inputStream = resource.getInputStream();
BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream, "UTF-8"));
String str;
List<String> orderStatusMnemonicList = new ArrayList<>();
while ( (str = bufferedReader.readLine()) != null) {
log.info("str = " + str);
orderStatusMnemonicList.add(str);
}
orderStatus.setValues(orderStatusMnemonicList);
log.info("orderStatusMnemonicList = " + orderStatusMnemonicList.toString());
} catch (IOException | NullPointerException e) {
log.error("Failing to Load order status mnemonic file" + e.getMessage(), e);
}
}
}
OrderStatus POJO:
@Getter
@Setter
@ToString
public class OrderStatus {
private List<String> values;
}
Since I am autowiring OrderStatus POJO class I am getting error:
Consider defining a bean of type 'com.spectrum.sci.osm.orderstatus.OrderStatus' in your configuration.
A: @Component and @Autowired should be used only for classes managed by Spring. POJOs are not managed by Spring, so you should neither add @Component nor autowire them. Since you are trying to autowire a POJO class, you get the error asking you to define a bean of type OrderStatus.
A: Your OrderStatus as it is now does not need the @Component annotation, so you should not add it. Also, you should not try to @Autowired it anywhere without @Component.
You surely can add @Component and then @Autowired it anywhere you need it, but there is no point, since you can more easily instantiate your POJO by just issuing new OrderStatus(). It might also be a waste of resources.
So when do you need those two annotations? Whenever your POJO needs to become a managed bean; in other words, whenever Spring needs to do some automagical things. Consider a POJO that has something more complex, like this (check the comments):
// Enables autowiring OrderStatus -> autowired OrderStatus is managed
// by Spring
@Component
public class OrderStatus {
private List<String> values;
// Then there is something to autowire to OrderStatus also
// Without OrderStatus being managed by Spring this would be ignored!
// But because managed, Spring autowires also this one
// Of course SomeOtherManagedBean must be a @Component, for example
@Autowired
private SomeOtherManagedBean somb;
}
A: Any plain-vanilla Java class that is used as a lightweight bean and treated/called as a POJO should not be bound to any Spring components.
A Java class should be decorated with one of the Spring annotations (i.e. @Component, @Service, @Bean, @Configuration) only if it has been enhanced with extra behaviour and is used to support other classes with features beyond getter/setter properties.
@Autowired can be used only if the class is marked with one of the Spring stereotypes, since @Autowired is a Spring annotation that scans only beans managed by Spring.
| |
doc_23530531
|
function App() {
return (
<div className="App">
<Navbar />
<Router>
<Link to="/">Home</Link>
<Link to="/aggrid">Aggrid</Link>
<Route path="/" component={GhibliModal} />
<Route path="/aggrid" component={Aggrid} />
</Router>
</div>
);
}
A: Well, first things first: you need to wrap your whole App component with <BrowserRouter>, but from what you said in the question, I assume you already know that.
Secondly, you don't need the <Router> component. Read here.
From reading the documentation, all <Route> components must be wrapped in a <Routes> (note the 's' at the end) component.
And lastly, I'm pretty sure you cannot have <Link> components inside the <Routes> component.
Also, the component prop is now called element, so
<Route path="/" component={GhibliModal} />
should become
<Route path="/" element={<GhibliModal/>} />
A: You need to add the <Outlet /> tag in the components that are loaded by the Router.
I usually put it at the end of the JSX:
return (
<div>
<yourcodehere/>
<Outlet/>
</div>
)
| |
doc_23530532
|
This is the service I'm using:
package com.oss.mail.service;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.integration.handler.LoggingHandler;
import org.springframework.stereotype.Service;
import com.oss.mail.dao.EmailReadingDao;
@Service
public class EmailReadingService {
Logger logger = LoggerFactory.getLogger(LoggingHandler.class);
EmailReadingDao emailReadingDao=new EmailReadingDao();
public void readEmails(){
logger.info("Called readEmail method from EmailReadingService");
logger.info("Calling readEmailDao() from EmailReadingDao");
emailReadingDao.readEmailDao();
}
}
This is how I defined my DAO:
@Configuration
public class EmailReadingDao {
Logger logger = LoggerFactory.getLogger(LoggingHandler.class);
@Autowired
private Environment env;
@Autowired
private GetEmails getEmailsUtil;
String emailHost;
String emailPort;
String emailUserName;
String emailPassword;
int NoOfEmails;
public void readEmailDao(){
logger.info("Called readEmailDao() from EmailReadingDao");
Map<String, String> emailsString=new HashMap<String, String>();
emailHost=env.getProperty("mail.pop3s.host");//Error at thir line.
emailPort=env.getProperty("mail.pop3s.port");
emailUserName=env.getProperty("mail.pop3s.username");
emailPassword=env.getProperty("mail.pop3s.password");
NoOfEmails=Integer.parseInt(env.getProperty("mail.NoOfEmails"));
And this is what I'm seeing in my logs:
2018-07-30 03:49:38 INFO o.s.i.handler.LoggingHandler - Called readEmailDao() from EmailReadingDao
Exception in thread "main" java.lang.NullPointerException
at com.oss.mail.dao.EmailReadingDao.readEmailDao(EmailReadingDao.java:36)
at com.oss.mail.service.EmailReadingService.readEmails(EmailReadingService.java:20)
at com.oss.ProductionIncidentAutomation.ProductionIncidentAutomationApplication.main(ProductionIncidentAutomationApplication.java:32)
I'm not sure why Spring is not wiring this class. Please help me resolve this.
A: Autowiring doesn't work if you create an object using the new keyword. It only works in container managed beans. So you have to autowire EmailReadingDao too.
Change:
EmailReadingDao emailReadingDao=new EmailReadingDao();
to:
@Autowired
EmailReadingDao emailReadingDao;
Also EmailReadingDao is not a configuration. You should annotate it with @Repository:
@Repository
public class EmailReadingDao {
A: Change EmailReadingDao emailReadingDao=new EmailReadingDao(); to
@Autowired
private EmailReadingDao emailReadingDao;
or even better use constructor injection.
Then also change in your EmailReadingDao from @Configuration to @Component
@Component
public class EmailReadingDao {
...
}
| |
doc_23530533
|
Here's my beginning code and init code:
class BusinessOwnerVC: UIViewController, MyProtocol {
let paymentContext: STPPaymentContext
init() {
let customerContext = STPCustomerContext(keyProvider: SwiftAPI())
self.paymentContext = STPPaymentContext(customerContext: customerContext)
super.init(nibName: nil, bundle: nil)
self.paymentContext.delegate = self
self.paymentContext.hostViewController = self
self.paymentContext.paymentAmount = 5000 // This is in cents, i.e. $50 USD
}
required init?(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
.....
I am using Storyboards, as I've heard that matters. When I run the code, the fatalError gets thrown and crashes the app. The Stripe example project has this exact code in there, and it works.
Why is my app crashing? What is this required init even doing? I think I understand why I need it, but if you could elaborate beyond "it's required for subclasses", that would be helpful.
A: The solution to my issue was removing the init and using only the required init, like so:
required init?(coder aDecoder: NSCoder) {
//fatalError("init(coder:) has not been implemented")
let customerContext = STPCustomerContext(keyProvider: SwiftAPI())
self.paymentContext = STPPaymentContext(customerContext: customerContext)
super.init(coder: aDecoder)
self.paymentContext.delegate = self
self.paymentContext.hostViewController = self
self.paymentContext.paymentAmount = 5000 // This is in cents, i.e. $50 USD
}
I left the commented-out fatalError portion, but as you can see it's not necessary. It's like the others said: the required init gets used by storyboards, and you must have it when you're setting up data in your storyboard class, as Stripe requires.
Just have the super.init in there and you should be all good to go.
| |
doc_23530534
|
A: Make sure you are dragging the field into the Details section of the report. If it is in the header or other section, you may get only the first row.
A: You have to first make a table on the report, then drag the desired fields into the table headers.
| |
doc_23530535
|
Any help or insight would be appreciated.
Occasionally, when playing around, I get the infinite-loop error from React.
import React, { useEffect, useState } from "react";
import { useSelector, useDispatch } from "react-redux";
import { Auth } from "aws-amplify";
import {
BrowserRouter as Router,
Switch,
Route,
Link,
Redirect,
} from "react-router-dom";
import { Provider } from "react-redux";
import ProtectedRoute from "./Utils/ProtectedRoute";
import PublicRoutes from "./Utils/PublicRoute";
import { AuthState, onAuthUIStateChange } from "@aws-amplify/ui-components";
import store from "./store";
import { userLogIn, userLogOut } from "./Actions/userActions";
import NavBar from "./Components/NavBar";
import Home from "./Screens/Home";
import AmplifySignUp from "./Components/AmplifyLogIn";
import Dashboard from "./Screens/Dashboard";
const Routes = (props) => {
const [authState, setAuthState] = useState();
const [user, setUser] = useState();
const [userName, setUserName] = useState();
const dispatch = useDispatch();
const userState = useSelector((s) => s.user);
useEffect(() => {
return onAuthUIStateChange((nextAuthState, authData) => {
setAuthState(nextAuthState);
setUser(authData);
});
});
useEffect(() => {
if (AuthState.SignedIn && user) {
dispatch(userLogIn(user.attributes.email));
}
}, [user]);
useEffect(() => {
if (authState !== AuthState.SignedIn) {
dispatch(userLogOut());
}
}, [user]);
// check auth
const isAuth = async () => {
try {
const status = await Auth.currentAuthenticatedUser();
return status.username;
} catch (err) {}
};
// Auth.currentAuthenticatedUser()
// .then((user) => {
// console.log(user.username)
// setUserName(prevState => {
// if(prevState !== user.username){
// return user.username
// }else {return prevState}
// }
// )
// })
// .catch((err) => console.log(err));
return (
<Router>
<NavBar />
<Switch>
<Route exact path="/">
{userName ? <Redirect from='/' to="/dashboard" /> : <Home />}
</Route>
<Route path="/login">
{authState === AuthState.SignedIn && user ? (
<Redirect to="/dashboard" />
) : (
<AmplifySignUp />
)}
</Route>
<ProtectedRoute path="/dashboard" user={user}>
<Dashboard />
</ProtectedRoute>
</Switch>
</Router>
);
};
export default Routes;
ProtectedRoutes.js
import React from "react";
import { Route, Redirect } from "react-router-dom";
import { useSelector, useDispatch } from "react-redux";
const ProtectedRoutes = ({user, children, ...rest }) => {
console.log(user && user)
return (
<Route
{...rest}
render={() => {
return user ? (
children
) : (
<Redirect
to='/'
/>
);
}}
/>
);
};
export default ProtectedRoutes;
A: Issue
On this line...
{userName ? <Redirect from='/' to="/dashboard" /> : <Home />}
You're not updating the username state with the currently authenticated user, hence the dashboard redirect never happens.
Possible Solution
*
*Add a function to get the currently authenticated user
*Add a useEffect hook to update the username state with the returned user
Code
const getAuthenticatedUser = async () => {
try {
const user = await Auth.currentAuthenticatedUser();
return {
user,
};
} catch (error) {
return { error };
}
};
useEffect(() => {
  // effect callbacks cannot be async themselves, so wrap the await call
  const checkUser = async () => {
    const { user, error } = await getAuthenticatedUser();
    if (!error) {
      setUserName(user.username); // matches the setUserName state setter above
    }
  };
  checkUser();
}, []);
| |
doc_23530536
|
The problem is that the 'Vector' structure is defined in the header file init.h, which is included in main.c for further use. I thought everything was cool, but an error occurred! Damn, it highlights the Vector* vStart; line (THE ERROR LINE). After some research into that error I have found that it's a very general error which occurs in either structure- or header-related cases.
Error code:
a label can only be part of a statement and a declaration is not a
statement
Example:
init.h
#ifndef INIT_H
#define INIT_H
#define vecLength 4
typedef struct Vector {
double * vector;
int N;
} Vector;
typedef struct Matrix {
double ** matrix;
int nRow;
int nCol;
} Matrix;
int matrixInit(Matrix* nMatrix);
int vectorInit(Vector* nVector);
#endif // INIT_H
main.c
#include "init.h"
main(){
...
...
switch(_char)
{
case Start:
Vector* vStart;
if(vectorInit(vStart)){
getStartPoint(vStart);
vectorPrint(vStart);
}
else{
hFe("Vector vStart is not created!");
return 1;
}
getch();
break;
case Translation:
hFe(NULL);
return 1;
case Exit:
return 0;
default:
system("cls");
goto AGAIN;
}
}
A: Assuming you are using a somewhat modern C compiler, the error is not at all related to scope as the present answers suggest.
Like the compiler says: "error: a label can only be part of a statement and a declaration is not a statement". The error merely comes from incorrect label syntax. case follows the syntax rules of labels, the syntax must be like this (6.8.1):
labeled-statement:
identifier : statement
case constant-expression : statement
default : statement
Meaning a label must be followed by a statement, not a declaration or something else that isn't regarded as a statement in C. So you get the same compiler error as you would get if attempting something like goto label; label: int x;
One way to dodge the compiler error is simply to add an empty statement:
case Start:
;
Vector* vStart;
That being said, you might still want a local scope for each case by adding braces: doing so is good practice.
Looking at the greater picture however, it doesn't seem like it even makes sense to declare vStart in a local scope anyhow. You should declare it at the beginning of main and initialize it to a safe value, for example NULL.
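A minimal sketch of the braced-scope fix, with a hypothetical helper standing in for vectorInit (the names and values here are illustrative only):

```c
/* Hypothetical stand-in for vectorInit(): fills the value, returns 1 on success. */
static int fake_vector_init(int *out) {
    *out = 42;
    return 1;
}

int handle(int selector) {
    switch (selector) {
        case 1: {              /* braces turn the case body into a compound statement */
            int value;         /* declaring right after the label is now legal */
            if (fake_vector_init(&value)) {
                return value;
            }
            return -1;
        }
        default:
            return 0;
    }
}
```

Without the braces (or a preceding empty statement), the declaration directly after `case 1:` is rejected with exactly the error quoted above.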
A: Change your switch case to
case Start:
{
Vector* vStart;
if(vectorInit(vStart))
{
getStartPoint(vStart);
vectorPrint(vStart);
}
else
{
hFe("Vector vStart is not created!");
return 1;
}
getch();
}
break;
In this way, with brackets, you create a scope inside the case where you can declare variables.
BTW you should declare it at the top of your function to make a readable code.
A: A label must be on a separate line, in general starting in the first column of the line, and have nothing else on the line. I.E.
mylabel:
| |
doc_23530537
|
Here is the code I am following:
import java.io.*;
import java.util.*;
public class ID3
{
int []array = new int[values.size()];
for (int i=0; i< array.length; i++) {
String symbol = (String)values.elementAt(i);
array = domains[attribute].indexOf(symbol);//Type Error
}
values = null;
return array;
}
public void decomposeNode(TreeNode node) {
double bestEntropy=0;
boolean selected=false;
int selectedAttribute=0;
int numdata = node.data.size();
int numinputattributes = numAttributes-1;
node.entropy = calculateEntropy(node.data);
if (node.entropy == 0) return;
for (int i=0; i< numinputattributes; i++) {
int numvalues = domains.size(); //Cannot resolve method (?)
if ( alreadyUsedToDecompose(node, i) ) continue;
double averageentropy = 0;
for (int j=0; j< numvalues; j++) {
Vector subset = getSubset(node.data, i, j);
if (subset.size() == 0) continue;
double subentropy = calculateEntropy(subset);
averageentropy += subentropy *
subset.size();
}
domains = new Vector[numAttributes];
for (int i=0; i < numAttributes; i++) domains = new Vector();//TYPE ERROR
attributeNames = new String[numAttributes];
for (int i=0; i < numAttributes; i++) {
attributeNames = tokenizer.nextToken(); //TYPE ERROR
}
.....
DataPoint point = new DataPoint(numAttributes);
for (int i=0; i < numAttributes; i++) {
point.attributes = getSymbolValue(i, tokenizer.nextToken()//TYPE ERROR
);
}
root.data.addElement(point);
}
bin.close();
return 1;
}
int numvalues = node.children.length;
for (int i=0; i < numvalues; i++) {
System.out.println(tab + "if( " +
attributeNames[node.decompositionAttribute] + " == \"" +
domains[node.decompositionAttribute].elementAt(i)
+ "\") {" );
printTree(node.children, tab + "\t"); //Incompatible types
if (i != numvalues-1) System.out.print(tab + "} else ");
else System.out.println(tab + "}");
}
}
public void createDecisionTree() {
.....
}
I am getting the following errors:
Error:(368, 57) java: incompatible types: java.util.Vector cannot be converted to java.util.Vector[]
Error:(374, 49) java: incompatible types: java.lang.String cannot be converted to java.lang.String[]
Error:(410, 50) java: incompatible types: int cannot be converted to int[]
Error:(449, 71) java: incompatible types: int[] cannot be converted to int
Error:(473, 27) java: incompatible types: ID3.TreeNode[] cannot be converted to ID3.TreeNode
Very much appreciate your help!
A: The [] symbols should only be used for arrays.
String[] array = "error";
Does not compile, because the type on the left is an array of strings, while on the right there is a single string only.
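The fix for the first loop follows the same rule: assign into an indexed slot, array[i], rather than to the whole array. A hedged sketch (names here are illustrative, not the original program's):

```java
import java.util.Vector;

public class TypeDemo {
    // Maps each symbol in `values` to its index in `domain`.
    static int[] symbolIndices(Vector<String> values, Vector<String> domain) {
        int[] array = new int[values.size()];
        for (int i = 0; i < array.length; i++) {
            String symbol = values.elementAt(i);
            array[i] = domain.indexOf(symbol); // array[i], not array
        }
        return array;
    }
}
```

The other "incompatible types" errors in the question have the same shape: `domains = new Vector()` should be `domains[i] = new Vector()`, and so on.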
| |
doc_23530538
|
[ {
    "_id": "5825a49dasdasdasd8417c1b6d5"
}, {
    "_id": "dfsdfsdf4960932218417c1b6d5"
}, {
    "_id": "23434344960932218417c1b6d5"
} ]
For that i have written in the main:
main.post('/..../add', Controller.addEvent);
In the Controller, I want to receive the request and search in MongoDB for these IDs, because I need some information about these IDs
exports.addEvent = function(req, res) {
var collection = db.get().collection('events');
My question is: if someone sends the given simple JSON file to "localhost:8080/events/add", how do I have to handle this JSON? I need the IDs and want to search with them.
Thanks for help!
----------ADDED------------
I am bit further now. In my controller i have the following function
exports.addevent = function(req, res)
{
var ids = req.body;
console.log(ids);
}
Now I'm getting all IDs, which I have posted with "Postman" from Chrome.
The output in the console is:
[ { _id: '5825a49dasdasdasd8417c1b6d5' },
{ _id: 'dfsdfsdf4960932218417c1b6d5' },
{ _id: '23434344960932218417c1b6d5' } ]
How can I get every single ID?
A: Ok, the request is an object and not a string. That was the error :-/
for(var i in ids) {
console.log(ids[i]._id);
}
and it works; now I can connect to the database and search for the ID
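Alternatively, Array.prototype.map pulls the IDs out into a plain array of strings in one step (shown here with the values from the console output above):

```javascript
// The array as it arrives in req.body:
const ids = [
  { _id: '5825a49dasdasdasd8417c1b6d5' },
  { _id: 'dfsdfsdf4960932218417c1b6d5' },
  { _id: '23434344960932218417c1b6d5' },
];

// One plain string per document, ready for a database lookup:
const idList = ids.map((doc) => doc._id);
console.log(idList);
```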
| |
doc_23530539
|
It works perfectly on my localhost machine but does not work on my webserver!
Here is my PHP:
<?php
$uploaddir = 'uploads/';
if(!is_dir($uploaddir))
{
$uploadfile = $uploaddir . basename($_FILES['fileToUpload']['name']);
echo '<pre>';
if (move_uploaded_file($_FILES['fileToUpload']['tmp_name'], $uploadfile)) {
echo "File is valid, and was successfully uploaded.\n";
} else {
echo "Possible file upload attack!\n";
}
echo 'Here is some more debugging info:';
print_r($_FILES);
}
else
{
echo 'Dir doesnt exist!';
}
?>
And here is my C#:
public async Task UploadImageTask(StorageFile file)
{
HttpClient client = new HttpClient();
client.BaseAddress = new Uri("http://mywebsite.com/");
MultipartFormDataContent form = new MultipartFormDataContent();
HttpContent content = new StringContent("fileToUpload");
form.Add(content, "fileToUpload");
var stream = await file.OpenStreamForReadAsync();
content = new StreamContent(stream);
content.Headers.ContentDisposition = new ContentDispositionHeaderValue("form-data")
{
Name = "fileToUpload",
FileName = file.Name
};
form.Add(content);
var response = await client.PostAsync("uploadanimage.php", form);
response.EnsureSuccessStatusCode();
DebugInfo = response.Content.ReadAsStringAsync().Result; //DebugInfo is a string which returns the source of my website's homepage. all 404's redirect to the homepage aswell. if I send something thats not a storage file, i get a "Possible File Upload Attack"
}
What am I doing wrong? Any help is appreciated. Thanks in advance!
| |
doc_23530540
|
Eg1: data-frame column names are: 'one', 'two', 'three'
List items are: 'one', 'four', 'two'.
I want to print only the 'one' and 'two' data-frame columns
List1=['one', 'four', 'two']
for item in List1:
if item in df.columns:
print(df.item)
Above code throws AttributeError: 'DataFrame' object has no attribute 'item' which is perfectly fine.
I am just trying to print only those data-frame columns whose names are present in the given list, if it is possible.
Tried to find the workaround from pandas docs:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.columns.html
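For reference, a minimal sketch of the selection being attempted, using the example names above: subscript-style lookup (df[...]) accepts a list of column names, unlike the attribute access that raised the AttributeError.

```python
import pandas as pd

# Hypothetical frame matching the example column names
df = pd.DataFrame({'one': [1], 'two': [2], 'three': [3]})
wanted = ['one', 'four', 'two']

# Keep only the names that actually exist as columns, then select them
present = [c for c in wanted if c in df.columns]
print(df[present])
```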
| |
doc_23530541
|
I have something like:
>DF1 <- data.frame(a=1:3, b=4:6)
>DF2 <- data.frame(c=-2:0, d=3:1)
and I want to get something like
>DF1
a b c d
1 -2 4 -2 3
2 -1 5 -1 2
3 0 6 0 1
I'd normally do it by hand, as in
DF1$c <- DF2$c
DF1$d <- DF2$d
and that's fine as long as I have few variables, but it becomes very time consuming and prone to error when dealing with several variables. Any idea on how to do this efficiently? It's probably quite simple but I swear I wasn't able to find an answer googling, thank you!
A: The result from your example is not correct, it should be:
> DF1$c <- DF2$c
> DF1$d <- DF2$d
> DF1
a b c d
1 1 4 -2 3
2 2 5 -1 2
3 3 6 0 1
Then cbind does exactly the same:
> cbind(DF1, DF2)
a b c d
1 1 4 -2 3
2 2 5 -1 2
3 3 6 0 1
A: (I was going to add this as a comment to Jilber's now deleted and then undeleted post.) Might be safer to recommend something like
DF1 <- cbind(DF1, DF2[!names(DF2) %in% names(DF1)])
| |
doc_23530542
|
Here is my code.
from keras.preprocessing import image
img_path = 'test/test_image.jpg' # This is an image I took in my kitchen.
img = image.load_img(img_path, target_size=(224, 224))
When I run the code, I get the following error.
anaconda3/lib/python3.5/site-packages/PIL/ImageFile.py in load(self)
238 if not self.map and not LOAD_TRUNCATED_IMAGES and err_code < 0:
239 # still raised if decoder fails to return anything
--> 240 raise_ioerror(err_code)
241
242 # post processing
anaconda3/lib/python3.5/site-packages/PIL/ImageFile.py in raise_ioerror(error)
57 if not message:
58 message = "decoder error %d" % error
---> 59 raise IOError(message + " when reading image file")
60
61
OSError: broken data stream when reading image file
Please note, if I convert test_image.jpg to test_image.png, then the given code works perfectly. But I have several thousand pictures and I can't convert all of them to png format. I tried several things after searching for a solution on the web but couldn't get rid of the problem.
Any help would be appreciated!
A: According to here, a Pillow upgrade via pip install Pillow --upgrade should solve this issue.
If you are still facing the problem you can use mogrify to batch convert all your images. mogrify -format png *.jpg
A: Use this at the beginning of your code:
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
I found it here. And this is working for me.
| |
doc_23530543
|
//ResetPssword Button
@IBAction func ResetPassword(sender: AnyObject) {
if validateEmail(EmailTextField.text!) == false {
print("Enter a Valid Email Address")
let VaildMessage = "Enter an Email Address"
//Empty TextField Alert Message
self.disaplayErrorMessage(VaildMessage)
}
//Reset
else {
ref.resetPasswordForUser(EmailTextField.text) { (ErrorType) -> Void in
if ErrorType != nil {
let error = ErrorType
print("There was an error processing the request \(error.description)")
let errorMessage:String = "The Email You Entered is not Exist"
//Error Alert Message
self.disaplayErrorMessage(errorMessage)
} else {
print("Password Reset Sent Successfully")
let successMessage = "Email Message was Sent to You at \(self.EmailTextField.text)"
//Success Alert Message
self.disaplayErrorMessage(successMessage)
}
} //reset
} //Big Else
} //Button
//Display Alert Message With Confirmation
func disaplayErrorMessage(theMessage:String)
{
//Display alert message with confirmation.
let myAlert = UIAlertController(title: "Alert", message: theMessage, preferredStyle: UIAlertControllerStyle.Alert);
let OkAction = UIAlertAction(title: "Ok", style: UIAlertActionStyle.Default) {
action in
self.dismissViewControllerAnimated(true, completion: nil);
}
myAlert.addAction(OkAction);
self.presentViewController(myAlert, animated: true, completion: nil)
}
//Validate Email Function
func validateEmail(candidate: String) -> Bool {
let emailRegex = "[A-Z0-9a-z._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,6}"
return NSPredicate(format: "SELF MATCHES %@", emailRegex).evaluateWithObject(candidate)
}
If the password reset is sent successfully, there will be an alert that prints out the email address. But it prints the email wrapped in Optional!
How can I print it without the Optional wrapper?
A: You have to unwrap the Optional text field.
For example with if let:
if let text = self.EmailTextField.text {
let successMessage = "Email Message was Sent to You at \(text)"
//Success Alert Message
self.disaplayErrorMessage(successMessage)
}
A: Did you try to force unwrap the value using the ! sign?
| |
doc_23530544
|
chart()
{
if (mChartView == null)
{
d = new BuildMultipleDataset();
db.open();
//code for some database query
LinearLayout layout = (LinearLayout) findViewById(R.id.chart);
mChartView = ChartFactory.getLineChartView(this, d.datasetbuilder(cursor1,cursor2), d.render());
layout.addView(mChartView, new LayoutParams(LayoutParams.FILL_PARENT, chartHeight));
db.close();
}
else
{
mChartView.repaint();
}
}
I call this method when an update is triggered from the database, and at that time I set mChartView = null;. But the problem is that it does not draw the updated chart. The update is reflected in the chart only if I switch screen orientation. What's wrong with my code?
A: I was only able to get this to work when I removed the View, set mChartView = null;, defined mChartView, and then set the View.
i.e.
layout.removeView(mChartView);
mChartView = null;
mChartView = ChartFactory //rest of mChartView code
layout.addView(mChartView);
| |
doc_23530545
|
2019-03-06T14:49:55+01:00
I thought that I could do it this way:
NSDate(timeIntervalSince1970: TimeInterval(NSDate().timeIntervalSince1970))
but I got such time:
2021-01-24 15:42:31 +0000
I thought that I had to use a decoding pattern, so I tried this:
let dateFormatterGet = DateFormatter()
dateFormatterGet.dateFormat = "yyyy-MM-dd HH:mm:ss+z"
let dateFormatterPrint = DateFormatter()
dateFormatterPrint.dateFormat = "MMM dd,yyyy"
let time = NSDate(timeIntervalSince1970: TimeInterval(NSDate().timeIntervalSince1970))
if let date = dateFormatterGet.date(from: time.description) {
print(dateFormatterPrint.string(from: date))
} else {
print("There was an error decoding the string")
}
but its output was:
There was an error decoding the string
which means that I can't decode this date this way. What did I do wrong?
A: You are creating a string from a date from a time interval from a date; three of the conversions are wasted.
The conversion failed because time.description doesn't match the format yyyy-MM-dd HH:mm:ss+z
To get an ISO8601 string with time zone the date format is yyyy-MM-dd'T'HH:mm:ssZ and you have to specify a fixed locale
let formatter = DateFormatter()
formatter.locale = Locale(identifier: "en_US_POSIX")
formatter.dateFormat = "yyyy-MM-dd'T'HH:mm:ssZ"
let isoString = formatter.string(from: Date())
There is a shorter way as suggested by Rob in the comments
let formatter = ISO8601DateFormatter()
formatter.timeZone = .current
let isoString = formatter.string(from: Date())
| |
doc_23530546
|
The problem is that the rows after the first 25 aren't changing color even though everything else is the same. How do I get around it?
Here is the code that I am using to generate the background color. It works for the first 25 rows but it wasn't able to grab the next 75 rows
var allRowData = table.getRows();
// loops the entire row
allRowData.forEach(x => {
// using the row data, grab the columns and compare with a condition
scope.columns.forEach(y => {
if (y.priority == 3) { // checks for the priority number to add a color to it
var rowCell = x.getCell(`${y.fieldName}`);
rowCell.getElement().style.backgroundColor = "#F00";
}
});
});
A: You should not be trying to directly manipulate the layout of the table from outside of Tabulator.
Tabulator uses a virtual DOM which means that it will create and destroy elements of the table as needed, which means that you can only style elements that are currently visible, when these elements are updated any previous formatting can be lost without notice.
If you wish to style cell elements you must use the formatter function in the column definition or the rowFormatter function on the table which are called when the rows/cells are redrawn.
Full details of these can be found in the Formatter Documentation
If you want to change the state of rows based on something outside of the table, then your formatters should reference this external variable, and when you update the external state you should trigger the redraw function on the table to retrigger the formatters.
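As a rough sketch of that approach (not the Tabulator API verbatim; the fake cell object below is only a stand-in so the idea can be exercised outside the browser):

```javascript
// A formatter factory: the formatter closes over the external priority
// flag and styles the cell element when the flag matches, as the
// Formatter Documentation describes for column-definition formatters.
function makePriorityFormatter(priority) {
  return function (cell) {
    if (priority === 3) {
      cell.getElement().style.backgroundColor = "#F00";
    }
    return cell.getValue(); // content Tabulator displays in the cell
  };
}

// Stand-in "cell" mimicking the two methods the formatter uses:
const fakeCell = {
  el: { style: {} },
  getElement() { return this.el; },
  getValue() { return "overdue"; },
};

const formatter = makePriorityFormatter(3);
const shown = formatter(fakeCell);
console.log(shown, fakeCell.el.style.backgroundColor);
```

In a real column definition you would pass the formatter function as the `formatter` option and call `table.redraw(true)` after changing the external state.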
| |
doc_23530547
|
EDIT:
This is what I've tried. I made two instance variables called _previousPosition and _currentPosition. In -touchesBegan, I set them both to be the current finger location in the scene. In -touchesMoved, I set _currentPosition to be the current finger location once again. Keep in mind that during -touchesMoved, when I'm updating _currentPosition, _currentPosition is being constantly updated, while _previousPosition is not. Finally, in touchesEnded, I create another variable (not global, but private) called pixelsMoved, and set that equal to _currentPosition - _previousPosition. Right after that, in -touchesEnded, I reset _previousLocation to be the current finger location. It's all very complicated, so I'm almost positive I've made some mistake somewhere. Any help would be appreciated.
A:
I'd simply like to know if there is a way to detect how many pixels the finger has moved during the -touchesMoved function?
-touchesMoved:withEvent: provides an event, and from the event you can get individual touch objects, each of which have an associated location that you get with -[UITouch locationInView:]. You don't get information about how far the touch has moved since the last time you looked, but you can keep track of the location of each touch and do the comparison yourself.
| |
doc_23530548
|
id_form id_purchase pn
PUR3-20190515022552 PUR-20190515022552 02N64073
PUR2-20190515022552 PUR-20190515022552 02N64073
PUR1-20190515022552 PUR-20190515022552 02N64073
This is my code :
SELECT COUNT(*) as Total FROM pur_supp WHERE pn = '02N64073' GROUP BY id_purchase
When I run the code, the total is still 3, but I want the total to be 1.
A: Try this
SELECT COUNT(DISTINCT id_purchase) as Total FROM pur_supp WHERE pn = '02N64073'
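The difference between the two counts can be checked against the sample rows above, for example with SQLite as a stand-in:

```python
import sqlite3

# Rebuild the sample table in an in-memory database
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pur_supp (id_form TEXT, id_purchase TEXT, pn TEXT)")
rows = [
    ("PUR3-20190515022552", "PUR-20190515022552", "02N64073"),
    ("PUR2-20190515022552", "PUR-20190515022552", "02N64073"),
    ("PUR1-20190515022552", "PUR-20190515022552", "02N64073"),
]
con.executemany("INSERT INTO pur_supp VALUES (?, ?, ?)", rows)

# COUNT(*) counts rows; COUNT(DISTINCT id_purchase) counts unique purchases
plain = con.execute(
    "SELECT COUNT(*) FROM pur_supp WHERE pn = '02N64073'"
).fetchone()[0]
total = con.execute(
    "SELECT COUNT(DISTINCT id_purchase) FROM pur_supp WHERE pn = '02N64073'"
).fetchone()[0]
print(plain, total)
```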
| |
doc_23530549
|
I'm quite stuck in the creation of the database.
I know it's wrong as I can't recover the leg stats for each player but I don't know how to solve this correctly.
A match has at least two legs, and I need to keep track of stats of each player playing the leg (as number of darts thrown for example).
So, for a game, I need to be able to get the Game played, which player have played the game and the stats of both players.
How can I link those tables to be able to get that ? Did I have to add another table between games and legs maybe ?
A: This might be what you are looking for:
Now you can have information about players in every leg, and you can insert the score information there. In the player table, insert information about players. The Game table contains information about the whole game, with one row per leg. I changed the DB schema; maybe this is more suitable for you.
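A hedged sketch of such a linking structure (table and column names are assumptions, not taken from the original design): one row per player per leg carries that player's stats, so legs link games to per-player stats.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE player (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE game   (id INTEGER PRIMARY KEY);
CREATE TABLE leg    (id INTEGER PRIMARY KEY, game_id INTEGER REFERENCES game(id));
-- the junction table: one stats row per (leg, player) pair
CREATE TABLE leg_stats (
    leg_id       INTEGER REFERENCES leg(id),
    player_id    INTEGER REFERENCES player(id),
    darts_thrown INTEGER,
    PRIMARY KEY (leg_id, player_id)
);
""")
con.execute("INSERT INTO player VALUES (1, 'Ann'), (2, 'Bob')")
con.execute("INSERT INTO game VALUES (1)")
con.execute("INSERT INTO leg VALUES (1, 1), (2, 1)")
con.execute("INSERT INTO leg_stats VALUES (1, 1, 15), (1, 2, 18), (2, 1, 12), (2, 2, 21)")

# All stats for game 1: which players played, and their numbers per leg
stats = con.execute("""
    SELECT p.name, l.id, s.darts_thrown
    FROM leg_stats s
    JOIN leg l    ON l.id = s.leg_id
    JOIN player p ON p.id = s.player_id
    WHERE l.game_id = 1
    ORDER BY l.id, p.id
""").fetchall()
print(stats)
```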
| |
doc_23530550
|
I am trying to put together a command to be used elsewhere based on these two columns. So I want to output this into another cell, perhaps C2:
replace [contents of old_name] [contents of new_name]
How can I accomplish this task?
I don't seem to be able to simply type:
replace =A2 =B2
nor
replace =CELL("contents", A2) =CELL("contents", B2)
--------------------------------------------------
| A | B |
--------------------------------------------------
| 1 | old_name | new_name |
--------------------------------------------------
| 2 | filename1.txt | filename2.txt |
--------------------------------------------------
Expected output:
replace filename1.txt filename2.txt
A: You need this formula:
="replace "&A2&" "&B2
A: If I understand your question correctly - it's all about CONCATENATE.
With CONCATENATE you can merge the contents of multiple cells' strings and add more data in between in " "; the ',' separates the values.
If A1 contains "Daddy"
and A2 contains "Great"
you can write in A3
=CONCATENATE(A1, " is ", A2)
and the result of the formula will be "Daddy is Great"
so the answer for your request is =CONCATENATE("replace ", A2, " ", B2)
Is that what you meant?
| |
doc_23530551
|
I want frmTimer to not launch frmUser when frmAdmin is open.
I'm using a named mutex to tell frmTimer if frmAdmin is open; however, the mutex appears not to be released after frmAdmin is closed.
The mutex is created in frmAdmin with code like this:
public partial class frmAdmin : Form
{
Mutex m;
protected override void OnShown(EventArgs e)
{
base.OnShown(e);
m = new Mutex(true, "frmAdmin");
}
protected override void OnClosed(EventArgs e)
{
base.OnClosed(e);
m.ReleaseMutex();
MessageBox.Show("Debug 1 -- In the frmAdmin ONCLOSED Event."); //test code
Debug.WriteLine("Debug 1 -- In the frmAdmin ONCLOSED Event."); //test code
}
public frmAdmin(string strPassedFromLogin)
{
InitializeComponent();
<<Code snipped>>
}
private void frmAdmin_FormClosing(object sender, FormClosingEventArgs e)
{
//Start _ Added
bool mutexSet = true;
try
{
Mutex.OpenExisting("frmAdmin");
MessageBox.Show("Debug 2 -- In the frmAdmin FORMCLOSING Event."); //test code
}
catch (WaitHandleCannotBeOpenedException)
{
mutexSet = false;
}
if (mutexSet)
{
base.OnClosed(e);
m.ReleaseMutex();
}
//End _ Added
Application.Exit();
}
<<Code snipped>>
}
Initially, I did not have any mutex code in the frmAdmin_FormClosing method (the method only contained the Application.Exit() line). I added the mutex code in an attempt to release the mutex, but it is still not being released.
The mutex is used in frmTimer like this:
private void tmTimer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
bool adminIsOpen = true;
try
{
Mutex.OpenExisting("frmAdmin");
MessageBox.Show("Debug 3 -- Mutex exists: frmAdmin IS open."); //test code
}
catch (WaitHandleCannotBeOpenedException)
{
adminIsOpen = false;
MessageBox.Show("Debug 4 -- Mutex doesn't exists: frmAdmin is NOT open."); //test code
}
if (adminIsOpen == false)
{
//frmAdmin is closed; go ahead and open frmUser.
<<Code snipped>>
}
}
When I run the application, the messagebox with the 'Debug 4' text appears each time the timer fires until I open frmAdmin (frmAdmin is launched from frmLogin after password verification), from then on the messagebox with the 'Debug 3' text appears each time the timer fires, even after I exit frmAdmin. When exiting frmAdmin, I see the messagebox with the 'Debug 2' text. I've never seen the messagebox (or an output window message) with the 'Debug 1' text.
It appears as though the mutex doesn't release after frmAdmin is closed and this prevents frmUser from launching.
Any help is appreciated.
This is a follow-up question to this question.
UPDATE
Here is my code after getting it to work. I got it to work because of the answers from Hans Passant and Chris Taylor and from Serhio from this post.
The mutex is now created in frmAdmin with code like this:
Mutex m;
protected override void OnShown(EventArgs e)
{
base.OnShown(e);
m = new Mutex(true, "frmAdmin");
}
//This 'OnClosed' event is skipped when this application is terminated using only Exit(); therefore, call Close() before calling Exit().
//The 'catch' code is added to insure the program keeps running in the event these exceptions occur.
protected override void OnClosed(EventArgs e)
{
if (m != null)
{
try
{
base.OnClosed(e);
m.ReleaseMutex();
m.Close();
}
catch (AbandonedMutexException)
{
//This catch is included to insure the program keeps running in the event this exception occurs.
}
catch (ApplicationException)
{
//This catch is included to insure the program keeps running in the event this exception occurs.
}
catch (SynchronizationLockException)
{
//This catch is included to insure the program keeps running in the event this exception occurs.
}
}
}
The mutex is used in frmTimer like this:
private void tmTimer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
bool adminIsOpen = false;
Mutex _muty = null;
try
{
//If the named mutex does not exist then OpenExisting will throw the 'WaitHandleCannotBeOpenedException',
//otherwise the mutex exists and Admin is open.
_muty = Mutex.OpenExisting("frmAdmin");
adminIsOpen = true;
_muty.Close();
}
catch (WaitHandleCannotBeOpenedException)
{
//This catch is thrown when Admin is not opened (keep 'adminIsOpen = false'). Do not delete this catch.
}
catch (AbandonedMutexException)
{
//This catch is included to insure the program keeps running in the event this exception occurs.
}
if (adminIsOpen == false)
{
//frmAdmin is closed; go ahead and open frmUser.
<<Code snipped>>
}
}
A: The problem is that once you run the admin application the Mutex exists and then the OpenExisting succeeds. Releasing a Mutex does not destroy the kernel object, it just releases the hold on the mutex so that other waiting threads can execute. Therefore subsequent Mutex.OpenExisting calls open the mutex successfully.
You probably want to use Mutex.WaitOne(TimeSpan) if you successfully open the Mutex, and if WaitOne returns false then you know you could not acquire the mutex, so the Admin application still holds the mutex.
A: The problem is in the Elapsed event handler, it checks if the mutex exists with Mutex.OpenExisting(). Sure it exists. You are not actually checking if it is signaled. That takes calling its WaitOne(0) method.
Also beware that creating a form in a Timer.Elapsed event is quite inappropriate. That event runs on a threadpool thread, which is not at all suitable to act as a UI thread. It has the wrong COM state ([STAThread] and Thread.SetApartmentState), a property you cannot change on a threadpool thread. Use a regular Forms.Timer instead so that the form gets created on the program's UI thread.
Edit: also beware of the inevitable race, the timer could create the User form one microsecond before the Admin form closes. In other words, you'll have a User form without an Admin form, the one condition you wrote this code to prevent. Is that appropriate? Trying to have forms in different processes affect each other is a bad idea...
| |
doc_23530552
|
In my program I need to edit queries according to user identity. For example, "show me all orders, but filter the results to just those with my own ID". I'd also like to have the user injected for me.
One pattern offered to me on advice pages is:
@Query("select o from Orders o where o.username = ?#{principal.username}")
List<Orders> findAllOrders();
(I'm not sure how the code resolves "principal"...)
I suppose that I could edit this to:
@Query("select o from Orders o where o.username = ?#{principal.localUser.id}
and o.flavor = ?#{principal.localUser.flavor}")
List<Orders> findAllFavoriteFlavorOrders();
if I had a way of adding an instance of LocalUser to Principal. To make this happen I'd need to extend the Principal class and do the instance adding during the login logic.
Is there a better way? During the login process could I also store a User object globally, so that this would work:
@Query("select o from Orders o where o.username = ?#{localUser.id}
and o.flavor = ?#{localUser.flavor}")
List<Orders> findAllFavoriteFlavorOrders();
This stackoverflow question about UserDetailsService describes one way of managing this.
This other stackoverflow question about UserDetailsService describes something else.
Are either of these still "state of the art"?
Thanks in advance,
Jerome.
| |
doc_23530553
|
Is there a straightforward/less-frowned-upon way to use a module in an arbitrary environment? Ideally such a npm2notnpm bridge would be able to interface with the complete module as forked, there is also no expectation to have it work in 100% cases :)
Why?.. the CMS engine we have to work with can execute arbitrary javascript using Spidermonkey engine (on the server); unfortunately that's the only way to build anything functional on the platform. I'd like to be able to leverage available packages as much as possible (cheerio on the wishlist) rather than re-inventing the wheel or copy-pasting code without context.
A: You can use Require.js to load many Common.js packaged modules. Or you could define exports = window and pass that to the module to get access to the module.
http://requirejs.org/docs/commonjs.html
| |
doc_23530554
|
<ListBox Name="LBox" HorizontalContentAlignment="Stretch" Grid.Column="2" SelectionMode="Single">
<ListBox.ItemTemplate>
<DataTemplate>
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition/>
<ColumnDefinition/>
</Grid.ColumnDefinitions>
<TextBlock x:Name="LTxtBox" Text="{Binding NAME}" Grid.Column="0"/>
<ProgressBar x:Name="PBarLbox" Grid.Column="1" Minimum="0" Maximum="100" Value="{Binding FORTSCHRITT}" />
</Grid>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
I can add/change/remove items in my ListBox.
Now I've tried to save the items in a txt file.
If I add an item, it works well; I can save them in my txt file.
Now I've tried to save changes in my ListBox, but how can I get access to the items in a ListBox?
Here is the code-behind for my observable list and property class.
private ObservableCollection<TodoItem> Todo =new ObservableCollection<TodoItem>();
public class TodoItem : INotifyPropertyChanged
{
public string NAME { get; set; }
public int FORTSCHRITT{ get; set; }
//###########################################
public string Name
{
get { return this.NAME; }
set
{
if (this.NAME != value)
{
this.NAME = value;
this.NotifyPropertyChanged("NAME");
}
}
}
public int fortschritt
{
get { return this.FORTSCHRITT; }
set
{
if (this.FORTSCHRITT != value)
{
this.FORTSCHRITT = value;
this.NotifyPropertyChanged("FORTSCHRITT");
}
}
}
public event PropertyChangedEventHandler PropertyChanged;
public void NotifyPropertyChanged(string propName)
{
if (this.PropertyChanged != null)
this.PropertyChanged(this, new PropertyChangedEventArgs(propName));
}
}
My thought was: when I'm going to close my window, overwrite the old list with the changed items from my ListBox.
For this I've created a window close event:
void DataWindow_Closing(object sender, CancelEventArgs e)
{ // TODO
// When the window is closed, save all the data from the ToDo window and overwrite the old file.
}
I've tried foreach to get access, I've tried for loops, but I can't get access.
With:
List<string> list = new List<string>();
string[] arr;
foreach (var x in LBox.Items)
{
list.Add(x.ToString());
}
arr= list.ToArray();
string display=String.Join(Environment.NewLine, arr);
MessageBox.Show(display);
I can see that it got access to the items, but it prints the following:
(screenshot of the output)
How can I print the right values?
A: You should not access the control to get the data items. Rather access the source collection directly:
MainViewModel.cs
class MainViewModel : INotifyPropertyChanged
{
public ObservableCollection<TodoItem> TodoItems { get; }
public MainViewModel()
{
this.TodoItems = new ObservableCollection<TodoItem>();
}
public async Task SaveDataAsync()
{
var fileContentBuilder = new StringBuilder();
foreach (TodoItem todoItem in TodoItems)
{
fileContentBuilder.AppendLine($"{todoItem.Name}, {todoItem.Fortschritt}");
}
await using var destinationFileStream = File.Open("Destination_File_Path", FileMode.Create);
await using var streamWriter = new StreamWriter(destinationFileStream);
string fileContent = fileContentBuilder.ToString();
await streamWriter.WriteAsync(fileContent);
}
}
MainWindow.xaml.cs
public partial class MainWindow : Window
{
private MainViewModel MainViewModel { get; }
public MainWindow()
{
InitializeComponent();
this.MainViewModel = new MainViewModel();
this.DataContext = this.MainViewModel;
this.Closing += SaveDataToFile_OnClosing;
}
}
private async void SaveDataToFile_OnClosing(object sender, CancelEventArgs e)
=> await this.MainViewModel.SaveDataAsync();
}
MainWindow.xaml
<Window>
<ListBox ItemsSource="{Binding TodoItems}">
...
</ListBox>
</Window>
Properties in C# must look like this (pay attention to the proper casing: use camelCase for fields and PascalCase for all other members). Also use nameof to specify the property's name:
private int fortschritt;
public int Fortschritt
{
get => this.fortschritt;
set
{
if (value != this.fortschritt)
{
this.fortschritt = value;
this.NotifyPropertyChanged(nameof(this.Fortschritt));
}
}
}
| |
doc_23530555
|
def list2Stream[A,B,F[_],S](vs: List[A],
f: A => EitherT[IO,S,Stream[IO,B]]
): EitherT[IO,S,Stream[IO,B]] = {
???
}
which would map each value from vs to a stream of values and collect all those values in a new stream.
I tried something like:
vs.map(f).sequence.flatten
but it seems there is no implicit definition for Stream.
A: The following answer was provided in the gitter channel by Michael Pilquist:
vs.traverse(f).map(s => Stream.emits(s).flatten)
| |
doc_23530556
| ||
doc_23530557
|
#include <iostream>
#include <string>
using namespace std;
int main()
{
const char* numbers[10]{"One", "Too", "Three", "Four", "Five",
"Six", "Seven", "Eight", "Nine", "Zero"};
/* This version did not work. Why?
for (const char** ptr = numbers; *ptr != nullptr; *ptr++) {
const char* pos = *ptr;
while (*pos != '\0')
cout << *(pos++) << " ";
}
*/
for(unsigned int i = 0; i < sizeof(numbers) / sizeof(numbers[0]); ++i)
{
const char* pos = numbers[i];
while(*pos != '\0')
printf("%c ", *(pos++));
printf("\n");
}
return 0;
}
I am aware that my code is a mixture of C++17 and C (in a transition from C to C++; nullptr and cout are two examples), but I'm not sure whether the first for-loop with
for (const char** ptr = numbers; *ptr != nullptr; *ptr++)
is correct or not. What's wrong with it?
Is there a "best practice" to looping thru array of string(char array , not string object yet), especially with the double pointer I'd like to catch, in this case? Thanks!
A: Two things -
First, in this loop expression, you don't need to dereference the ptr after incrementing it - *ptr++.
for (const char** ptr = numbers; *ptr != nullptr; *ptr++)
^^
*ptr++ will be grouped as - *(ptr++), which means, (post)increment the ptr and dereference the result of (post)increment. It should be just ptr++, as we need the ptr pointer to point to next element in the numbers array after executing the loop body.
Second, if your loop condition is checking for nullptr, then the array the loop iterates over should have nullptr as a marker to indicate the end of the array, and you need to increase the size of the array as well to hold the end marker:
const char* numbers[11] {"One", "Too", "Three", "Four", "Five",
"Six", "Seven", "Eight", "Nine", "Zero", nullptr};
With the above mentioned changes, the following loop should print the numbers array strings:
for (const char** ptr = numbers; *ptr != nullptr; ptr++) {
const char* pos = *ptr;
while (*pos != '\0')
cout << *(pos++) << " ";
cout << "\n";
}
Since you have added a new element to the numbers array to mark its end, be cautious: in the second loop, sizeof(numbers)/sizeof(numbers[0]) now gives 11, so the loop will end up dereferencing the nullptr, which is undefined behaviour. Either subtract 1 from the sizeof result in the loop condition, or add a check pos != nullptr before processing it.
A: In your first loop you are looking for nullptr in your array, but you didn't put it there.
You are reading uninitialized garbage after your array.
I suggest to use std::vector or std::array
A: Try these edits and see what happens
const char* numbers[11]{"One", "Too", "Three", "Four", "Five",
"Six", "Seven", "Eight", "Nine", "Zero", NULL};
and
;i < sizeof(numbers)/sizeof(numbers[0])-1;
in the second for loop
Basically *ptr points to the beginning of the respective string, but after the last one it goes out of bounds. If you really have to compare against nullptr, the above is the way to do it.
EDIT
Yeah, in addition to the above, ptr should be incremented instead of *ptr. *ptr would traverse over the string rather than the array.
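Putting both fixes together, here is a minimal, self-contained sketch (with a shortened, hypothetical array) of the sentinel-terminated loop, factored into a helper so the loop shape discussed above — ptr++, not *ptr++ — is easy to verify:

```cpp
#include <cassert>
#include <cstddef>

// Counts entries in a nullptr-terminated array of C strings, using the
// same sentinel-based loop discussed above (increment ptr, not *ptr).
std::size_t count_strings(const char** arr) {
    std::size_t n = 0;
    for (const char** ptr = arr; *ptr != nullptr; ptr++)
        ++n;
    return n;
}
```

The same helper also shows why the sentinel must be present: without the trailing nullptr, the loop condition reads past the end of the array.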
| |
doc_23530558
|
example :
order no AB123456 becomes ABC123456
A: Here's a solution using Sql Server
select stuff(col1, 3, 0, 'C') as col1
from t
Here's a solution using MySQL
select insert(col1, 3, 0, 'C') as col1
from t
col1
ABC123456
ABC231621
ABC326541
ABC965471
Fiddle
| |
doc_23530559
|
Can I access all of HKEY_CURRENT_USER (the application currently access HKEY_LOCAL_MACHINE) without Administrator privileges?
A: Yes, you should be able to write to any place under HKEY_CURRENT_USER without having Administrator privileges. But this is effectively a private store that no other user on this machine will be able to access, so you can't put any shared configuration there.
A: In general, a non-administrator user has this access to the registry:
Read/Write to:
*
*HKEY_CURRENT_USER
Read Only:
*
*HKEY_LOCAL_MACHINE
*HKEY_CLASSES_ROOT (which is just a link to HKEY_LOCAL_MACHINE\Software\Classes)
It is possible to change some of these permissions on a key-by-key basis, but it's extremely rare. You should not have to worry about that.
For your purposes, your application should be writing settings and configuration to HKEY_CURRENT_USER. The canonical place is anywhere within HKEY_CURRENT_USER\Software\YourCompany\YourProduct\
You could potentially hold settings that are global (for all users) in HKEY_LOCAL_MACHINE. It is very rare to need to do this, and you should avoid it. The problem is that any user can "read" those, but only an administrator (or by extension, your setup/install program) can "set" them.
Another common source of trouble: your application should not write to anything in the Program Files or Windows directories. If you need to write to files, there are several options at hand; describing all of them would be a longer discussion. All of the options end up writing to a subfolder somewhere under %USERPROFILE% for the user in question.
Finally, your application should stay out of HKEY_CURRENT_CONFIG. This hive holds hardware configuration, services configurations and other items that 99.9999% of applications should not need to look at (for example, it holds the current plug-and-play device list). If you need anything from there, most of the information is available through supported APIs elsewhere.
| |
doc_23530560
|
VID RS
1 A
1 B
1 B
2 C
2 A
what I want to do is to calculate the count of each RS for each VID and want to have an output as follows:
VID A B C
1 1 2 0
2 1 0 1
Is it possible to do through a query or I need to create temp table and perform insert/update on that?
Thanks
A: If your number of RS is fixed then you can do
select vid,
sum(case when RS = 'A' then 1 else 0 end) AS A,
sum(case when RS = 'B' then 1 else 0 end) AS B,
sum(case when RS = 'C' then 1 else 0 end) AS C
from your_table
group by vid
A: You can do this in two ways
One is use Pivot
SELECT *
FROM Yourtable
PIVOT (Count(rs)
FOR rs IN([A],
[B],
[C]) )piv
Note :
If your RS column values are not static then convert the pivot to Dynamic Pivot
Another way is use Conditional Aggregate
SELECT vid,
Count(CASE Rs WHEN 'A' THEN 1 END) [A],
Count(CASE Rs WHEN 'B' THEN 1 END) [B],
Count(CASE Rs WHEN 'C' THEN 1 END) [C]
FROM Yourtable
GROUP BY vid
SQLFIDDLE DEMO
A: Check out the pivot command on msdn which will do just that.
Update:
For this example:
SELECT Vid,
[A] AS 'A',
[B] AS 'B',
[C] AS 'C'
FROM your_table
PIVOT (Count(RS)
FOR RS IN( [A],
[B],
[C])) AS PivotTable;
A: If the values in RS is dynamic, you should chose dynamic pivot.
SAMPLE TABLE
SELECT * INTO #TEMP
FROM
(
SELECT 1 VID, 'A' RS
UNION ALL
SELECT 1, 'B'
UNION ALL
SELECT 1, 'B'
UNION ALL
SELECT 2, 'C'
UNION ALL
SELECT 2, 'A'
)TAB
QUERY
Declare two variables to get columns for pivot and replace NULL with zero
DECLARE @cols NVARCHAR (MAX)
SELECT @cols = COALESCE (@cols + ',[' + RS + ']','[' + RS + ']')
FROM (SELECT DISTINCT RS FROM #TEMP) PV
ORDER BY RS
DECLARE @NullToZeroCols NVARCHAR (MAX)
SET @NullToZeroCols = SUBSTRING((SELECT ',ISNULL(['+RS+'],0) AS ['+RS+']'
FROM(SELECT DISTINCT RS FROM #TEMP GROUP BY RS)TAB
ORDER BY RS FOR XML PATH('')),2,8000)
Now take the count and pivot
DECLARE @query NVARCHAR(MAX)
SET @query = '
SELECT VID,' + @NullToZeroCols + ' FROM
(
SELECT VID,RS,COUNT(RS)OVER(PARTITION BY VID,RS)CNT
FROM #TEMP
) x
PIVOT
(
MIN(CNT)
FOR RS IN (' + @cols + ')
) p;'
EXEC SP_EXECUTESQL @query
*
*SQL FIDDLE
| |
doc_23530561
|
malte.italoborg.es
If I try to access another route, like:
malte.italoborg.es/admin
I got 404 error.
My nginx app file:
server {
listen 80;
server_name malte.italoborg.es;
root /home/italo/www/malte.italoborg.es/public;
charset utf-8;
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
access_log off;
error_log /home/log/nginx/malte.italoborg.es-error.log error;
error_page 404 /index.php;
sendfile off;
# Point index to the Laravel front controller.
index index.php;
location / {
try_files $uri $uri/ /index.php;
}
location ~ \.php$ {
try_files $uri /index.php =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.ht {
#deny all;
}
}
I tried this solution from another link on StackOverflow, but it didn't work for me:
All Laravel routes “not found” on nginx
I'm using nginx in Digital Ocean.
::UPDATE::
My routes.php
<?php
/*
|--------------------------------------------------------------------------
| Application Routes
|--------------------------------------------------------------------------
|
| Here is where you can register all of the routes for an application.
| It's a breeze. Simply tell Laravel the URIs it should respond to
| and give it the controller to call when that URI is requested.
|
*/
Route::get('/',
['as' => 'site.welcome.index', 'uses' => 'Site\WelcomeController@index']
);
Route::group(['prefix' => 'admin'], function () {
// Login
Route::get('login',
['as' => 'admin.auth.login', 'uses' => 'Admin\Auth\AuthController@getLogin']
);
Route::post('login',
['as' => 'admin.auth.login', 'uses' => 'Admin\Auth\AuthController@postLogin']
);
Route::get('logout',
['as' => 'admin.auth.logout', 'uses' => 'Admin\Auth\AuthController@getLogout']
);
// Password
Route::get('password/email',
['as' => 'admin.password.email', 'uses' => 'Admin\Auth\PasswordController@getEmail']
);
Route::post('password/email',
['as' => 'admin.password.email', 'uses' => 'Admin\Auth\PasswordController@postEmail']
);
// Password reset
Route::get('password/reset/{token}',
['as' => 'admin.password.reset', 'uses' => 'Admin\Auth\PasswordController@getReset']
);
Route::post('password/reset',
['as' => 'admin.password.reset', 'uses' => 'Admin\Auth\PasswordController@postReset']
);
Route::group(['middleware' => 'auth'], function () {
// Home
Route::get('/', ['as' => 'admin.home.index', 'uses' => 'Admin\HomeController@index']);
// My Account
Route::get('profile',
['as' => 'admin.profile.edit', 'uses' => 'Admin\ProfileController@edit']
);
Route::put('profile',
['as' => 'admin.profile.update', 'uses' => 'Admin\ProfileController@update']
);
// Nail polish
Route::resource('polishes', 'Admin\NailPolishController');
// Brands
Route::resource('brands', 'Admin\BrandController');
// Types
Route::resource('types', 'Admin\TypeController');
});
});
A: Please try changing this
location / {
try_files $uri $uri/ /index.php;
}
to this
location / {
try_files $uri $uri/ /index.php?$query_string;
}
And also remove this line:
try_files $uri /index.php =404;
A: Fix for Laravel 5.1:
$ vi /etc/nginx/sites-available/default
Comment out:
# try_files $uri $uri/ =404;
Add:
try_files $uri $uri/ /index.php;
A: For Lumen and Laravel this solved my problem:
I've changed the nginx document root to the public folder of my project:
root /usr/you/laravelinstall/public;
| |
doc_23530562
|
> A<-1:5
> X<-cut(A,breaks=quantile(A,probs=c(0:3)/3),labels=1:3,include.lowest=TRUE)
> A
[1] 1 2 3 4 5
> X
[1] [1,2.33] [1,2.33] (2.33,3.67] (3.67,5] (3.67,5]
Levels: [1,2.33] (2.33,3.67] (3.67,5]
A: Here's a better example than what you provided:
> v <-1:10
> X <- cut(v, breaks=quantile(v,probs=c(0:3)/3), labels=letters[1:3], include.lowest=TRUE)
> X
[1] a a a a b b b c c c
Levels: a b c
To select values from v which correspond to level "a", just run:
> v[X=="a"]
[1] 1 2 3 4
| |
doc_23530563
| ||
doc_23530564
|
A: "Normal" methods (usually called instance methods) are invoked on an instance of the class in which they're defined. The method will always have access to its object via $this, and so it can work with data carried by that object (and indeed modify it). This is a core aspect of object oriented programming, and it's what makes a class more than just a bunch of data.
Calls to static methods, on the other hand, aren't associated with a particular object. They behave just like regular functions in this respect; indeed, the only difference is that they may be marked private and also have access to private methods and variables on instances of their own class. Static functions are really just an extension of procedural programming.
For example, an instance method is called on an object:
$object = new MyClass();
$result = $object->myInstanceMethod();
A static method is called on the class itself:
$result = MyClass::myStaticMethod();
| |
doc_23530565
|
The problem I'm having is that I can't execute certain sklearn code anymore. Whereas before I could execute these same code parts without any issue.
Every time it throws this error:
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
My python version is 3.8.8
A minimal reproducible example:
import numpy as np
X = np.array([1, 0, 2, 0, 3, 0, 4, 0]).reshape(-1, 1)
y = np.array([1, 2, 3, 4, 5, 6, 7, 8]).reshape(-1, 1)
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y) # this line fails
As you can see, even very uncomplicated code already causes problems.
What have I already done so far:
*
*found the questions related to this error message on this website and read those
*reinstalled anaconda (incl using anaconda-clean)
*reinstalled the necessary modules
*tried updating the modules
*trying to find a minimal reproducible example
One of the recommendations was to look for the line of code that causes the issue. However, it seems like a more general problem: whenever I use sklearn code this happens, whereas before it just worked.
I am starting to think that some recent general Ubuntu 18.04 updates are causing the issue, however, I cannot tell.
Someone who could help me some steps in the right direction? That would be great.
Any help would be appreciated a lot!
edit:
I am getting closer to pinpointing the problem, I think.
This piece of code also throws the same error message:
import numpy as np
X = np.array([1, 0, 2, 0, 3, 0, 4, 0]).reshape(-1, 1)
y = np.array([1, 2, 3, 4, 5, 6, 7, 8]).reshape(-1, 1)
X = np.concatenate([np.ones(shape=[8, 1]), X], axis=1)
betas = np.invert(np.transpose(X).dot(X)).dot(np.transpose(X)).dot(y) # this line fails
indicating the problem is located within my NumPy installation?
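As an aside on the edited snippet: np.invert is NumPy's element-wise bitwise NOT, not a matrix inverse, so that line would not compute regression coefficients even without the crash. A sketch of the normal equations using np.linalg.inv (this does not address the SIGSEGV itself, which points at a broken native library build rather than the code):

```python
import numpy as np

X = np.array([1, 0, 2, 0, 3, 0, 4, 0]).reshape(-1, 1)
y = np.array([1, 2, 3, 4, 5, 6, 7, 8]).reshape(-1, 1)

# Prepend an intercept column, then solve the normal equations.
# np.linalg.inv is the matrix inverse; np.invert is element-wise bitwise NOT.
X = np.concatenate([np.ones((8, 1)), X], axis=1)
betas = np.linalg.inv(X.T @ X) @ X.T @ y
```

With two columns in X, betas holds the intercept and the slope as a (2, 1) array.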
| |
doc_23530566
|
AudioInputStream clip1 = null, clip2 = null;
for (int i = 1; i < fileAudio.size(); i++){
try {
if (i == 1) {
clip1 = AudioSystem.getAudioInputStream(new File("audio/" + fileAudio.get(0)));
}else{
clip1 = AudioSystem.getAudioInputStream(new File("merged.wav"));
}
clip2 = AudioSystem.getAudioInputStream(new File("audio/" + fileAudio.get(i)));
AudioInputStream appendedFiles =
new AudioInputStream(
new SequenceInputStream(clip1, clip2),
clip1.getFormat(),
clip1.getFrameLength() + clip2.getFrameLength());
AudioSystem.write(appendedFiles,
AudioFileFormat.Type.WAVE,
new File("merged.wav"));
System.out.println("Appending audio:" + clip1.getFrameLength());
clip1.close();
clip2.close();
appendedFiles.close();
} catch (Exception e) {
e.printStackTrace();
}
}
The resulting file has the correct combined length of the audio files, but the sound is only from the last audio file, meaning it didn't truly append the audio of the file(s).
| |
doc_23530567
|
I get this error:
java.lang.NullPointerException: Attempt to invoke virtual method 'int com.rs.approbot.DatabaseHandler.getContactsCount()' on a null object reference
On every tab, I am using this to detect when it becomes visible to the user:
@Override
public void setMenuVisibility(final boolean visible) {
super.setMenuVisibility(visible);
if (visible) {
}
}
And it's always the first line of whatever I write inside that gives problems.
Thanks
| |
doc_23530568
|
var QuestionSchema = new Schema({
title: {
type: String,
default: '',
trim: true
},
body: {
type: String,
default: '',
trim: true
},
user: {
type: Schema.ObjectId,
ref: 'User'
},
category: [],
comments: [{
body: {
type: String,
default: ''
},
root: {
type: String,
default: ''
},
user: {
type: Schema.Types.ObjectId,
ref: 'User'
},
createdAt: {
type: Date,
default: Date.now
}
}],
tags: {
type: [],
get: getTags,
set: setTags
},
image: {
cdnUri: String,
files: []
},
createdAt: {
type: Date,
default: Date.now
}
});
As a result, I need to sort comments by root field, like this
I tried to sort the array of comments manually at the backend and tried to use aggregation, but I was not able to sort it. Help, please.
A: Presuming that Question is a model object in your code, and that of course you want to sort your comments by date from createdAt, then using .aggregate() you would use this:
Question.aggregate([
// Ideally match the document you want
{ "$match": { "_id": docId } },
// Unwind the array contents
{ "$unwind": "comments" },
// Then sort on the array contents per document
{ "$sort": { "_id": 1, "comments.createdAt": 1 } },
// Then group back the structure
{ "$group": {
"_id": "$_id",
"title": { "$first": "$title" },
"body": { "$first": "$body" },
"user": { "$first": "$user" },
"comments": { "$push": "$comments" },
"tags": { "$first": "$tags" },
"image": { "$first": "$image" },
"createdAt": { "$first": "$createdAt" }
}}
],
function(err,results) {
// do something with sorted results
});
But that is really overkill since you are not "aggregating" between documents. Just use the JavaScript methods instead. Such as .sort():
Question.findById(docId, function(err, doc) {
    if (err) throw err;
    var mydoc = doc.toObject();
    mydoc.comments = mydoc.comments.sort(function(a, b) {
        return a.createdAt - b.createdAt;
    });
    console.log( JSON.stringify( mydoc, undefined, 2 ) ); // Indented nicely
});
So whilst MongoDB does have the "tools" to do this on the server, it makes the most sense to do this in client code when you retrieve the data, unless you actually need to "aggregate" across documents.
But both example usages have been given now.
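For the client-side route, the comparator detail matters: Array.prototype.sort expects a number, and subtracting two Dates yields their difference in milliseconds. A minimal sketch with hypothetical data:

```javascript
// Sorting objects by a Date field: the comparator must return a number,
// and Date subtraction yields the difference in milliseconds, so this
// orders the array oldest-first.
const comments = [
  { body: "second", createdAt: new Date("2020-01-02") },
  { body: "first", createdAt: new Date("2020-01-01") },
];
comments.sort((a, b) => a.createdAt - b.createdAt);
```

Returning a boolean from the comparator (e.g. a.createdAt > b.createdAt) is unreliable, because false coerces to 0 and the sort treats those elements as equal.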
| |
doc_23530569
|
What I want (like the Mail app, no sidebar button):
My code & result (I didn't add any code about the sidebar button in my project):
struct ContentView: View {
@State private var selection: Int = 1
@State private var search: String = ""
@State private var columnVisibility = NavigationSplitViewVisibility.all
var body: some View {
NavigationSplitView(columnVisibility: $columnVisibility) {
Sidebar(selection: $selection)
} content: {
Content(selection: $selection)
} detail: {
DetailView(selection: $selection)
}
.searchable(text: $search) {
}
}
}
Is there a way to hide it? I appreciate any help you can provide.
| |
doc_23530570
|
a = np.array([[["a111","a112","a113"],
["b","b","b"],
["c","c","c"],
["d","d","d"]],
[["a211","a212","a213"],
["b","b","b"],
["c","c","c"],
["d","d","d"]],
[["a311","a312","a313"],
["b","b","b"],
["c","c","c"],
["d","d","d"]],
[["a411","a412","a413"],
["b","b","b"],
["c","c","c"],
["d","d","d"]]])
and i want to get something like this:
np.array([[["a111","a112","a113"],
["a211","a212","a213"],
["a311","a312","a313"],
["a411","a412","a413"]],
[["b","b","b"],
["b","b","b"],
["b","b","b"],
["b","b","b"]],
[["c","c","c"],
["c","c","c"],
["c","c","c"],
["c","c","c"]],
[["d","d","d"],
["d","d","d"],
["d","d","d"],
["d","d","d"]]])
Right now I'm looping through the whole array and stacking it manually.
A: Use swapaxes:
a.swapaxes(0,1)
output:
array([[['a111', 'a112', 'a113'],
['a211', 'a212', 'a213'],
['a311', 'a312', 'a313'],
['a411', 'a412', 'a413']],
[['b', 'b', 'b'],
['b', 'b', 'b'],
['b', 'b', 'b'],
['b', 'b', 'b']],
[['c', 'c', 'c'],
['c', 'c', 'c'],
['c', 'c', 'c'],
['c', 'c', 'c']],
[['d', 'd', 'd'],
['d', 'd', 'd'],
['d', 'd', 'd'],
['d', 'd', 'd']]], dtype='<U4')
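For what it's worth, swapaxes(0, 1) is the same operation as np.transpose with an explicit axis permutation; a quick sanity check on a small, hypothetical numeric array:

```python
import numpy as np

# swapaxes(0, 1) and transpose((1, 0, 2)) produce the same result:
# both exchange the first two axes and leave the last axis alone.
a = np.arange(24).reshape(4, 3, 2)
swapped = a.swapaxes(0, 1)
transposed = np.transpose(a, (1, 0, 2))
```

Both return views rather than copies, so neither approach duplicates the underlying data.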
| |
doc_23530571
|
function pick<T, K extends keyof T>(obj: T, key: K): Pick<T, K> {
return { [key]: obj[key] }
}
I get the following error: "TS2322: Type '{ [x: string]: T[K]; }' is not assignable to type 'Pick<T, K>'." I am wondering why key is generalized to string even if it is declared to be keyof T. Primarily, how could one implement pick without using any or casting like as Pick<T, K>? I also want to clarify that I do not want to use Partial<T> as a return type. I want to return a "slice" of the original type that contains exactly one field chosen by the caller.
Note: I also tried the equivalent:
function pick<T, K extends keyof T>(obj: T, key: K): { [key in K]: T[K] } {
return { [key]: obj[key] }
}
This (of course) gives essentially the same error. I am on TypeScript version 4.7.4.
A: There is currently a limitation in TypeScript where a computed property key whose type is not a single string literal type is widened all the way to string. See microsoft/TypeScript#13948 for more information. So, for now, in order to use a computed property with a type narrower than string you will need to do something a little unsafe like use a type assertion.
One reason there hasn't already been a fix for this issue is that you can't simply say that {[k]: v} is of type Record<typeof k, typeof v>. If typeof k is a union type (or if it is a generic type, which might end up being specified as a union type), then Record<typeof k, typeof v> has all the keys from the union type, whereas the true type of {[k]: v} should have just one of those keys.
You would run into the same problem with your pick() implementation. The type of pick(obj, key) is not necessarily Pick<T, K>, precisely because K might be specified with a union type. This complication makes things a bit harder to deal with.
The "right" type for pick(obj, key) is to distribute Pick<T, K> across unions in K. You could either use a distributive conditional type like K extends keyof T ? Pick<T, K> : never or a distributive object type like {[P in K]-?: Pick<T, K>}[K].
For example, if you have the following,
interface Foo {
a: number,
b: number
}
const foo: Foo = { a: 0, b: 1 }
const someKey = Math.random() < 0.5 ? "a" : "b";
// const someKey: "a" | "b"
const result = pick(foo, someKey);
You don't want result to be of type Pick<Foo, "a" | "b">, which is just Foo. Instead, we need to define pick() to return one of the distributive types above, and we need to use a type assertion to do it:
function pick<T, K extends keyof T>(obj: T, key: K) {
return { [key]: obj[key] } as K extends keyof T ? Pick<T, K> : never
}
And that results in the following result:
const result = pick(foo, someKey);
// const result: Pick<Foo, "a"> | Pick<Foo, "b">
So result is either a Pick<Foo, "a"> or a Pick<Foo, "b">, as desired.
Playground link to code
A: You can try to build the object and then return it. Something similar to this
function pick<T, K extends keyof T>(obj: T, key: K): Pick<T, K> {
let ret: any = {}
ret[key] = obj[key]
return ret
}
You can see this working here
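A quick usage check of this second version (with a hypothetical object), showing that the returned slice carries exactly the requested key:

```typescript
// Same shape as the answer above: build the result imperatively, then
// return it typed as Pick<T, K>.
function pick<T, K extends keyof T>(obj: T, key: K): Pick<T, K> {
  const ret: any = {};
  ret[key] = obj[key];
  return ret;
}

const picked = pick({ a: 1, b: 2 }, "a");
```

Note that the `ret: any` escape hatch trades away the compile-time check that the caveat in the first answer describes; the union-key subtlety still applies.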
| |
doc_23530572
|
+---------+---------+------------+-----------------------+---------------------+
| visitId | userId | locationId | comments | time |
+---------+---------+------------+-----------------------+---------------------+
| 1 | 3 | 12 | It's a good day here! | 2012-12-12 20:50:12 |
+---------+---------+------------+-----------------------+---------------------+
what I am trying to do is to count the amount of the visits group by hours, I'd like to generate results like this way:
+------------+-----+-----+-----+-----+-----+-----+-------+------+
| locationId | 0 | 1 | 2 | 3 | 4 | 5 | ... | 23 |
+------------+-----+-----+-----+-----+-----+-----+-------+------+
| 12 | 15 | 12 | 34 | 67 | 78 | 89 | ... | 34 |
+------------+-----+-----+-----+-----+-----+-----+-------+------+
How can I do that?
I want to evaluate the variance of visits in the whole day.
A: This will guive you the number of visits hour by hour for each locationid
SELECT locationId, HOUR(time), COUNT(*)
FROM table
GROUP BY locationId, HOUR(time)
A: Without testing I think this will work:
select locationid, hour(time) as hour, count(distinct userid) as usercnt
from table_1
group by locationid, hour;
It won't be transposed as you suggested, but the data is the same.
A: Try to use a query like the one below:
SELECT locationId, sum(case when HOUR(time) = 0 then 1 else 0 end) as '0',
sum(case when HOUR(time) = 1 then 1 else 0 end) as '1' -- and so on up to 23
FROM table
GROUP BY locationId
| |
doc_23530573
|
I am trying to insert multiple points into InfluxDB, but I am getting a bad timestamp error:
'ovrs,M=91091096
s=1593683375,shift="02-07-20-S2",pc=1,e=1593683479,V=1,d=104
1593660200000000000\novrs,M=91091096
s=1593678600,shift="02-07-20-S2",pc=0.208953,e=1593683375,V=0,d=4775
1593660200000000000'
Tried to insert
INSERT 'ovrs,M=91091096 s=1593683375,shift="02-07-20-S2",pc=1,e=1593683479,V=1,d=104 1593660200000000000\novrs,M=91091096 s=1593678600,shift="02-07-20-S2",pc=0.208953,e=1593683375,V=0,d=4775 1593660200000000000'
ERR: {"error":"unable to parse ''ovrs,M=91091096
s=1593683375,shift="02-07-20-S2",pc=1,e=1593683479,V=1,d=104
1593660200000000000\novrs,M=91091096
s=1593678600,shift="02-07-20-S2",pc=0.208953,e=1593683375,V=0,d=4775
1593660200000000000'': bad timestamp"}
I tried to insert without quotes:
INSERT ovrs,M=91091096 s=1593683375,shift="02-07-20-S2",pc=1,e=1593683479,V=1,d=104 1593660200000000000\novrs,M=91091096 s=1593678600,shift="02-07-20-S2",p
c=0.208953,e=1593683375,V=0,d=4775 1593660200000000000
ERR: {"error":"unable to parse 'ovrs,M=91091096
s=1593683375,shift="02-07-20-S2",pc=1,e=1593683479,V=1,d=104
1593660200000000000\novrs,M=91091096
s=1593678600,shift="02-07-20-S2",pc=0.208953,e=1593683375,V=0,d=4775
1593660200000000000': bad timestamp"}
Kindly, can someone help me?
I tried the same with a NiFi insert and I am facing the same issue.
A: You cannot use multiple points in the INSERT clause. You have to either use separate INSERTs or handle a bulk insert using a file.
| |
doc_23530574
|
obj <- {}
class(obj)
and found an object of class NULL.
class(obj)
[1] "NULL"
I'd like to know the opinions on this technique in the R community. Is there merit in it? Do the possible downsides (really, no class?) outweigh those?
A: {} is equivalent to NULL. Note identical({}, NULL) is TRUE. NULL is clearer, imo, but there are no repercussions to using {} instead---except maybe the risk of inducing momentary confusion on someone reviewing your code.
| |
doc_23530575
|
- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
I try this:
if ([[tableView cellForRowAtIndexPath:indexPath] reuseIdentifier] == @"imageCell")
I do this as I have 3 different cells set up with different identifiers in my storyboard. However, my app just crashes here with EXC_BAD_ACCESS.
Any idea why?
Thanks.
A: You're comparing a string, so you should be using isEqualToString:
if ([[[tableView cellForRowAtIndexPath:indexPath] reuseIdentifier] isEqualToString:@"imageCell"])
A:
(CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
you are supposed to calculate the height for your row, not to get the cell of your tableView.
In my opinion, in the lifecycle of a table view delegate, as a first step (before allocating each UITableViewCell), the table view calls heightForRowAtIndexPath for each row (but at this moment, the UITableViewCells are not yet allocated). Then, in a second step, the table view calls
(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
to create the UITableViewCells.
| |
doc_23530576
|
*
*JWordNet(http://sourceforge.net/projects/jwordnet/)
*MIT Java WordNet Interface (http://projects.csail.mit.edu/jwi/)
*RiTa WordNet(http://rednoise.org/rita/wordnet/documentation/index.htm)
Can I get some advice about which one is better?
A: You can refer to the paper Java Libraries for Accessing the PrincetonWordnet: Comparison and Evaluation which compares 12 Java libraries for accessing WordNet. It compares their performance and other features.
| |
doc_23530577
|
The socket connection was aborted. This could be caused by an error processing y
our message or a receive timeout being exceeded by the remote host, or an underl
ying network resource issue. Local socket timeout was '00:00:59.9843903'.
if i use basicHttpBinding the problem doesn't occur.
Does anyone know why this problem occurs?
Thanks,
Liran
A: This is expected behavior. When you close the server, the TCP connection on the server is closed and you can't call it from the client anymore. Starting the server again will not help. You have to catch the exception on the client, Abort the current proxy, and create and open a new one.
With BasicHttpBinding it works because NetTcpBinding uses a single channel for the whole life of the proxy (the channel is bound to a TCP connection), whereas BasicHttpBinding creates a new one for each call (it reuses an existing HTTP connection or creates a new one if a connection doesn't exist).
| |
doc_23530578
|
Access to XMLHttpRequest at 'http://127.0.0.1:4000/company/details/?ticker=AMD' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Firstly I am using React for frontend and express for backend
Part of the express code is shown below:
const rateLimit = require('express-rate-limit')
const helmet = require('helmet')
const cors = require('cors');
const server = require('./server')
let express = require("express");
let app = express();
app.use(limiter)
app.use(helmet())
app.disable('x-powered-by')
app.use(cors({
origin: '*',//'http://127.0.0.1:3000'
credentials:true,
exposedHeaders: ['Origin', 'X-Requested-With', 'Content-Type', 'Accept']
}));
app.get('/company/details/', (req, res) => {
const client = new MongoClient(uri);
async function run() {
try{
console.log("GETTING DETAILS", req.query)
const database = client.db('company_details')
const collection = database.collection('details')
if (!req.query.ticker) {
res.status(400)
res.send("Ticker does not exist for details")
}
const query = {"ticker": req.query.ticker}
await collection.find(query)
.toArray()
.then(items => {
console.log(`Successfully found ${items.length} documents.`)
// items.forEach(console.log)
res.send(items)
})
.catch(err => console.error(`Failed to find documents: ${err}`))
} finally {
await client.close()
}
}
run().catch(console.dir)
})
In the App.js of react:
var companyToCompare = ''
const setChoice = (choice) => {
companyToCompare = choice
}
const onSubmit = () => {
console.log('choice', companyToCompare)
getCompanyDetails(companyToCompare).then(res => {
console.log("QQQ")
console.log(res)
})
.catch(function(error) {
console.log("WWW")
})
}
<Header/>
<div style={{width: '50%'}}>
<Select
onChange={(choice) => setChoice(choice.label)}
options={listOfSP500Tickers}
/>
</div>
<Button variant="outline-success" onClick={onSubmit} className="mx-1">Add</Button>
The getCompanyDetails code:
const config = {
// I've tried uncommenting what's in the headers but by doing that all api requests result in CORS issues
headers:{
// 'Access-Control-Allow-Origin' : '*',
// 'Access-Control-Allow-Methods':'*',
// "Access-Control-Allow-Headers": '*'
// 'Access-Control-Allow-Credentials': 'true'
}
};
export function getCompanyDetails(ticker) {
return axios.get(URL + '/company/details/?ticker=' + ticker, config)
.then(res => {
return res
})
.catch(function(error) {
console.log(error)
})
}
This Header component also has an onSubmit function which also calls getCompanyDetails. The strange thing is that the Header component's call does not trigger CORS issues, but the other one does.
My question is: what am I doing wrong? Have I not set the CORS header correctly? But if that is the case, why does the API call work in the <Header/> component but not the other one? I'm out of ideas.
A couple of Stack Overflow posts I've looked at are below, but I've searched through at least 20 other similar posts:
*
*ReactJS: has been blocked by CORS policy: Response to preflight request doesn't pass access control check
*Getting error "No 'Access-Control-Allow-Origin' header is present on the requested resource" after Heroku deployment
A: Thanks to the comment from Evert, I was able to think outside the box. Even though the console showed a CORS-blocked error, the real issue actually came from express-rate-limit.
I had the rate limit set:
const limiter = rateLimit({
windowMs: 1 * 60 * 1000, // 1 minute
max: 10, // Limit each IP to 10 requests per `window`
standardHeaders: true, // Return rate limit info in the `RateLimit-*` headers
legacyHeaders: false, // Disable the `X-RateLimit-*` headers
})
I anticipated no more than 10 requests a minute across all endpoints, but the app was exceeding that. I will need to work out the optimal number of requests per minute, but when I increased the max, the CORS error went away.
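The interaction between the two middlewares explains the misleading browser message: in the code above, app.use(limiter) is registered before app.use(cors(...)), so once the limit is hit, the 429 response is sent without ever passing through the CORS middleware, and the missing Access-Control-Allow-Origin header is what the browser reports. Here is a minimal sketch of a fixed-window counter of the kind express-rate-limit implements (a hypothetical simplification, not the library's actual code):

```javascript
// Fixed-window rate limiter sketch: allow at most `max` hits per IP
// within each `windowMs`-long window.
function makeLimiter(windowMs, max) {
  const hits = new Map(); // ip -> { count, resetAt }
  return function allow(ip, now = Date.now()) {
    const entry = hits.get(ip);
    if (!entry || now >= entry.resetAt) {
      // First hit in a fresh window: reset the counter.
      hits.set(ip, { count: 1, resetAt: now + windowMs });
      return true;
    }
    entry.count += 1;
    return entry.count <= max; // false -> express-rate-limit would answer 429
  };
}

// 12 rapid requests from one IP against max = 10: the last two are rejected.
const allow = makeLimiter(60 * 1000, 10);
let allowed = 0;
for (let i = 0; i < 12; i++) {
  if (allow("127.0.0.1", 0)) allowed++;
}
console.log(allowed); // 10
```

Registering cors() before the limiter (or giving the limiter a handler that sets the CORS headers itself) would at least make the real 429 visible in the browser console instead of a misleading CORS failure.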
| |
doc_23530579
|
They are loaded like this:
List<Object> constructorArgs = new ArrayList<Object>();
List<Object> methodArgs = new ArrayList<Object>();
Constructor<?> c = (Constructor<?>)constructorArgs.get(0);
Method m = (Method)methodArgs.get(0);
Object o = c.newInstance();
Method myMethod = o.getClass().getMethod("myMethod", String.class);
However, none of the logging in those classes and methods works. I assume this is because they are not loaded at startup for some reason.
Is there a way to register this class with the logger? I am using slf4j with a log4j implementation.
| |
doc_23530580
|
I need to call a function before every page is loaded.
I did it in the _Layout.cshtml view:
$(function () {
    $('body').on('click', function (e) {
        var valor = GetSession();
        $('#hdnSessionTime').val(valor);
    });
});
I save a value in a hidden field defined in the layout page, and every time the user clicks anywhere on any page, the GetSession function executes.
The problem is that it executes after @RenderBody() and I need it before...
Is that possible?
A: Sometimes you want to perform logic either before an action method is called or after it runs. To support this, ASP.NET MVC provides filters. Filters are custom classes that provide both a declarative and a programmatic means to add pre-action and post-action behavior to controller action methods. You can use a custom filter for this.
Client Side
If you want to run code in client side, you can use this code (before @RenderBody()):
<script type="text/javascript">
$(document).ready(function () {
var valor = GetSession();
$('#hdnSessionTime').val(valor);
});
</script>
see this for similar example.
| |
doc_23530581
|
*
*declare a pointer and a Grade variable
*store the variable's address in the pointer variable
*Call the corresponding functions to print the output. (Refer to the functions using the pointer notation, e.g, ptr->function())
However, I'm a little confused. The "Grade" variable is a class variable, but how do I declare the pointer? Do I declare the pointer as an int?
So I would have something like:
int *ptr;
Grade grade;
ptr = &grade;
A: You're very close. But, you declared a pointer to an integer. That's what int * means. What you want is a pointer to Grade:
Grade* ptr;
The rest of what you wrote looks correct so far.
Once you have ptr = &grade;, you can then call methods in grade by saying ptr->foo() in place of grade.foo(). Both will call the method foo() on the variable grade.
A: You declared a pointer to int.
Declaring a pointer to a type is always:
type * ptr;
With that in mind:
Grade grade;
Grade *ptr;
ptr = &grade;
// use
ptr->function();
A: If you want a pointer to a class T, just write T* foo, just as you would expect. So all in all:
//Declare Pointer and grade Variable:
Grade grade;
Grade* ptr;
//Store variable's address in pointer:
ptr = &grade;
//Call function:
ptr->function();
| |
doc_23530582
|
It's for a script that is a simple physics engine, but rather than creating a simulation object it creates animation curves for regular objects. I want to optimize it from adding keyframes on every frame down to adding only the essential keyframes and using animation curves.
I think simple arcs could be done with animation automation, but I have no idea what kind of math would be involved.
| |
doc_23530583
|
So, I want to change the location of an iframe instead of changing the location of the top-level window.
How can I do that?
Thanks!
| |
doc_23530584
|
Changes done in one module may require (but not necessarily) changes on the other components.
I've maintained a git repository for each individual module.
So, Project
repo1 -> Component A (PHP)
repo2 -> Component B (NodeJS)
repo3 -> Component C (NodeJS)
repo4 -> Component D (JAVA)
I am now advised to use a single repo for the entire project.
However, I am paranoid that, in the future, if a module grows in size or structure, it would be better to maintain that module in its own repo than in the single repo.
I want to achieve the following :
*
*Each of these components will be on different servers
*Ease of future development and maintenance
*Hooks to auto deploy the components to their respective servers
What is the recommended git structure/workflow for this?
Notes :
The entire codebase is still local.
I tried using subtree, and I now have a single project which retains all the commits from the other components. I am not sure whether, after using subtree, splitting a module out into a separate git repo would still be possible in the future.
(I've not heard good things about submodules, so I have not tried them yet.)
A: If they are independent projects that can be updated independently, then keep them as separate projects. Don't use submodules, don't mix them into a single module. Just develop them independently.
If there are changes in one component that affect another, use some kind of API versioning to deal with that; make sure that when you make an incompatible change in one, that you update the version number, so that if someone pulls just that and not the other one, they'll know immediately what went wrong.
Beyond that, just treat them as independent projects, and deploy the latest version of each one. There's not much need to complicate it beyond that.
If they are intimately linked, and one can't exist independently from the other, and just about every significant change requires that two of them are updated together, then just develop them in one big repository. If things are that tightly linked, it doesn't make sense to split it up. Use your deploy scripts to deploy from the one big repository to the different servers.
Whichever one you choose, don't worry about it too much. One of the nice things about Git is that it's fairly easy to merge or split repositories later once you realize that you did it wrong. If you try the split approach and it winds up being too confusing or cumbersome, do a subtree merge and now you have a single unified repository. If you try the unified approach and it gets too big? Run git filter-branch over it to separate out each of the subdirectories for the subcomponents, and now you have several independent repositories (do keep the original around in this case, so you actually have the full unified history, in case that becomes relevant in the future).
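The subdirectory split mentioned above can be sketched as follows (repository and directory names here are hypothetical; newer Git releases also point to the third-party git filter-repo tool for the same job):

```shell
# Extract componentA/'s history into its own repository.
# Work on a clone so the original unified repo stays intact.
git clone project-repo componentA-repo
cd componentA-repo

# Rewrite every ref so that componentA/ becomes the repository root;
# commits that never touched componentA/ are dropped.
git filter-branch --subdirectory-filter componentA -- --all

# componentA-repo now holds only componentA's files and history.
```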
| |
doc_23530585
|
cscript %windir%\system32\Printing_Admin_Scripts\en-us\prnmngr.vbs -t -p "\\ipp://dc2.Mydomain.com\2F26P"
My Syntax is wrong, do you see my glaring failure?
REG ADD %KEY%\030 /V 1 /D "cscript %windir%\system32\Printing_Admin_Scripts\en-us\prnmngr.vbs -t -p "\\ipp://dc2.Mydomain.com\2F26P"" /f
A: I believe there is a backslash missing on "\ipp".
| |
doc_23530586
|
do_insert (IN in_x varchar(64), IN in_y varchar(64))
BEGIN
declare v_x int(20) unsigned default -1;
declare v_y int(20) unsigned default -1;
select x into v_x from xs where x_col = in_x;
if v_x = 0
then
insert into xs (x_col) values(in_x);
select x into v_x from xs where x_col = in_x;
end if;
select y into v_y from ys where y_col = in_y;
if v_y = 0
then
insert into ys (y_col) values(in_y);
select y into v_y from ys where y_col = in_y;
end if;
insert ignore into unique_table (xId, yId) values(v_x, v_y);
END
Basically I look to see if I already have the varchars defined in their respective tables, and if so I select the id. If not, I create them and get their IDs. Then I insert them into unique_table, ignoring the insert if the pair is already there. (Yes, I could probably add more logic to avoid the final insert, but that shouldn't be an issue; KISS.)
The problem I have is that when I run this in a batch JDBC statement using Google Cloud SQL I get duplicate entries inserted into my xs table. The next time this stored proc is run I get the following exception:
java.sql.SQLException: Result consisted of more than one row Query: call do_insert(:x, :y)
So what I think is happening is that two calls with the same in_x value occur in the same batch statement. These two calls run in parallel, both selects come back with 0 because it's a new entry, and then they both do an insert. The next run then fails.
Questions:
*
*How do I prevent this?
*Should I wrap my select (and possible insert) calls in a LOCK TABLE for that table to prevent this?
*I've never noticed this on a local MySQL, is this Google Cloud SQL specific? Or just a fluke that I haven't seen it on my local MySQL?
A: I haven't tested this yet, but I'm guessing the batch statement is introducing some level of parallelism. So the 'select x into v_x' statement can race with the 'insert into vx' statement allowing the latter to be executed more than once.
Try changing the 'insert into xs' statement to an 'insert ignore' and add a unique index on the column.
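The effect of that change can be illustrated with a small, self-contained sketch. SQLite syntax is used here, so INSERT OR IGNORE stands in for MySQL's INSERT IGNORE, and the table/column names follow the question: once x_col has a unique index, the "insert then re-select" sequence is idempotent, so two racing calls can no longer create duplicate rows.

```python
import sqlite3

# In-memory stand-in for the xs table, with the unique index the answer suggests.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE xs (x INTEGER PRIMARY KEY AUTOINCREMENT, x_col TEXT)")
conn.execute("CREATE UNIQUE INDEX ux_xs_x_col ON xs (x_col)")

def get_or_create(value):
    # INSERT OR IGNORE is a no-op when the unique index already holds `value`,
    # so this is safe even if two callers race past the existence check.
    conn.execute("INSERT OR IGNORE INTO xs (x_col) VALUES (?)", (value,))
    return conn.execute("SELECT x FROM xs WHERE x_col = ?", (value,)).fetchone()[0]

# Simulate the two "parallel" batch calls with the same in_x value.
first = get_or_create("some name")
second = get_or_create("some name")
assert first == second  # both resolve to the same id; no duplicate row was created
```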
| |
doc_23530587
|
The questions I have are pretty basic, but where do I start? Should I start with replacing the Spring layer, then the Spring MVC layer?
Can the two work in the same environment, so that I can start with editing just one controller/view and then expand? And how do I do this?
A: *
*Start with the view:
Spring MVC won't work without Spring anyway. And since the other layers don't depend on its API (well, shouldn't, at least), the presentation layer should be the easiest to change (although the one that will take the most effort, since it will consist of a complete reimplementation).
*Go after Spring:
a) If you don't use much of Spring's utility classes (*Template, *DaoSupport, etc.) or infrastructure (transaction management, security), migrating to Guice would probably be a matter of rewriting (XML- or annotation-based) configuration in Guice modules/annotations, since pure dependency injection, if not portable, is pretty much directly mapped between the frameworks.
b) If you do use Spring's utility classes and infrastructure (which you probably do, since that's the whole point of using Spring instead of no-value-added-yet-another-dependency-injection-containers...), you'll have to migrate them to Guice somehow. If you plan to do this incrementally, you could look for some integration between the two (probably using the Spring infrastructure from Guice), and after you migrated the depedencies, switch to Guice-native interceptor implementations (and test, test, test, since little differences in behavior could break your application). This other question may provide some tips on this.
*Then, Hibernate:
Since you'll be keeping Hibernate, its configurations shouldn't be affected by the transition. Only its bootstrap will change when you migrate your infrastructure and configuration to Guice. I don't recommend keeping two parallel SessionFactories, if you can avoid it.
A: The gradual transition from one framework to the other seems possible for development, but not for production. These are the issues you'll have to face doing a page-by-page replacement in a single environment.
*
*url mapping
The bootstrapping of Spring MVC + Spring + Hibernate and Wicket + Guice + Hibernate is done using URL patterns. You have to tell the server whether Spring MVC or Wicket will serve each request.
You should completely separate the contexts by using a root pattern. Therefore, during the migration phase, the URLs will be difficult to reconcile: the Spring version cannot reference the Wicket version's URLs.
*data synchronization
Two Hibernate mappings will operate in parallel. Do not use a cache, and reload all the information on each page request, to be sure not to have problems with data synchronization.
A disadvantage of this solution is the server's start-up time, due to loading the Hibernate mapping twice.
You should start by adding a wicket-guice application in your web.xml:
<filter>
<filter-name>guiceFilter</filter-name>
<filter-class>com.google.inject.servlet.GuiceFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>guiceFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
<listener>
<listener-class>com.app.web.guice.MyGuiceServletContextListener</listener-class>
</listener>
| |
doc_23530588
|
<!DOCTYPE html>
<html>
<head>
<title>Test</title>
<script src="js/jquery.js"></script>
<script src="js/script.js"></script>
</head>
<body>
<div id="SomeDiv"></div>
</body>
</html>
Here is the Content of 'script.js':
$(function() {
function load_text() {
$.get('test.html').done(function(data) {
// Check this condition below.
if ($.trim($("#SomeDiv").html()).length != $.trim(data).length){
$("#SomeDiv").html(data);
}
});
}
setInterval(load_text, 2000);
});
Here is the Content of 'test.html':
<div>Hello World!</div>
The above test.html has just that one line of code and nothing else. So every time the AJAX call runs, it will return the same HTML and populate the DIV with it over and over. According to the if condition, the HTML inside #SomeDiv and the loaded HTML will not be equal the first time; after that they should be equal, and the DIV should not be repopulated.
But it seems the lengths are different even after it loaded the first time. This just hit me! How can that be? I am loading the same content, then fetching the same content from the file again and comparing it with the previously loaded content, but they are not equal...
So is that normal, or is there another way to make it not populate the DIV after the first load?
What I am trying to do is: if the loaded content is different, show it; otherwise, don't.
A: Note: you were (before edit) trimming the length due to incorrect bracketing :)
You are comparing raw text string to "structured" HTML (after parsing elements can be added to the DOM and whitespace can change).
Try comparing like with like (convert it to DOM elements first):
// Need to wrap the incoming elements in a parent
var $data = $('<div>').html(data);
if ($.trim($("#SomeDiv").html()).length != $.trim($data.html()).length)
| |
doc_23530589
|
Thanks in advance,
Kathir
A: The methods in the Service (javax.xml.rpc.Service) interface or the ServiceFactory (javax.xml.rpc.ServiceFactory) class will throw ServiceException.
Some example conditions are:
*
*an exception in creating a Service
*specifying an illegal endpoint while creating a Service
*any error in creating a Call (javax.xml.rpc.Call) object
*an error when loading a service or creating a ServiceFactory instance
A: ServiceExceptions are usually thrown when the service is not accessible, or when the service is not defined properly and contains errors. I hope that answers your question.
A: We use ServiceExceptions for throwing environment, data, and business exceptions. Here are some examples:
*
*An environment setting/property that drives logic (such as file names to send to another application) does not exist or is blank. So likely what happened is that we configured this property for UAT but forgot to promote it to Prod, or someone removed it without realizing it would create a defect;
*An upstream system that called our service did not follow the business rules, and we cannot process this transaction
In both examples, we:
*
*Throw a new ServiceException with a specific message;
*In the calling logic, catch that ServiceException;
*Then throw a new ServiceException with a message: the current context string + getMessage() of the caught exception.
| |
doc_23530590
|
When I pass a Group instance to Openapi.RegisterHandlers instead of an Echo instance, I always get a 400 error with {"message":"no matching operation was found"} for any request in that group:
swagger, err := Openapi.GetSwagger()
if err != nil {
fmt.Fprintf(os.Stderr, "Error loading swagger spec\n: %s", err)
os.Exit(1)
}
// Use oapi validation middleware to check all requests against the
// OpenAPI schema.
g := e.Group("/api", middleware.OapiRequestValidator(swagger))
Openapi.RegisterHandlers(g, &MyApi{})
If I send a request to /api/foo, where foo is an API endpoint defined in the generated server code, I get a 400 error. If I request /api/<some undefined api> I also get 400. If I send a request for /baz, I get a 404 as expected, since that isn't a defined route. If I don't pass a prefix to Group(), I get a 400 error for every request. I get the same behavior if I use RegisterHandlersWithBaseURL().
A: There seems to be a bug where, if you specify a base path (either to the Group() function or to RegisterHandlersWithBaseURL()), the OapiRequestValidator middleware ignores the base path when checking the request path against the routes. It uses the routes defined in the OpenAPI spec without the base path. To work around this, I overwrote the inline.tmpl template and hacked the GetSwagger() function to include this at the bottom:
func GetSwagger(pathPrefix string) (swagger *openapi3.T, err error) {
...
var updatedPaths openapi3.Paths = make(openapi3.Paths)
for key, value := range(swagger.Paths) {
updatedPaths[pathPrefix + key] = value
}
swagger.Paths = updatedPaths
}
The key in the Path map is the route. I just append the base path to every key.
| |
doc_23530591
|
The query editor doesn't support this kind of statement, the HExecuteSQLQuery() function declares this query to the HFSQL engine and fails, and SQLExec() doesn't work because SQLConnect() isn't available in Android.
Is there any way to do this with WM21?
A: Did you try declaring your table with the HDeclareExternal function and executing SQLExec on it?
http://doc.windev.com/en-US/?3044204&name=hdeclareexternal_function
| |
doc_23530592
|
I have a small web application running on an Apache server on a machine that uses JavaScript to do some XHRs. For a long time it worked with no problems; today all the XHRs stopped working, but only on localhost. If you access it from outside, it works perfectly.
Problem:
Using mozilla firefox, firebug warns:
"Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://127.0.0.1:3581/datasnap/rest/TdssMloteamento/getLoteamento/true/. This can be fixed by moving the resource to the same domain or enabling CORS."
But I'm on localhost, accessing local content whose XHR calls go to a local DataSnap server on the same machine. In short: locally it fails, and from the web it works.
Comments:
I am accessing the Apache web page at the URL: http://127.0.0.1:3582/beeWebLoteamento/Principal.php
This is really weird to me; it just makes no sense. Why do I get a cross-domain error if I'm accessing the same domain?
Objective:
I want to know what is happening, and to solve this problem so I can continue doing my XHRs both locally and via the web (externally).
A: I found the solution/problem:
I replaced every 127.0.0.1 with 192.168.25.100 (the local machine's IP) and everything worked fine. So the request was:
http://127.0.0.1:3581/datasnap/rest/TdssMloteamento/getLoteamento/true/
and became:
http://192.168.25.100:3581/datasnap/rest/TdssMloteamento/getLoteamento/true/
And I accessed my web application (Apache) at the URL:
http://192.168.25.100:3582/beeWebLoteamento/Principal.php
In summary:
To avoid these cross-domain problems, use the machine's local IP address (which usually starts with 192.168.x.x) to access everything hosted on it, both the XHR targets and Apache, instead of using 127.0.0.1 or localhost.
A: Try sending the header Access-Control-Allow-Origin: * from your backend script.
| |
doc_23530593
|
However, on a datatype like:
data Maybe a = Nothing | Just a
There is no list (as it seems to me) to perform the fold action on.
I'm sure I have some issue with understanding the basic concepts here, and I would greatly appreciate some clearing up.
A: Foldable is a pretty confusing class, to be honest, because it doesn't have an awful lot of laws and it's quite possible to write quite a lot of different Foldable instances for almost any given type. Fortunately, it's possible to figure out what a Foldable instance should do in a purely mechanical way based on a Traversable instance for the same type—if there is one.
We have
class (Functor t, Foldable t) => Traversable t where
traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
Traversable has several different laws, but it turns out the most important one is traverse Identity = Identity. Let's see how this applies to Maybe:
traverse :: Applicative f => (a -> f b) -> Maybe a -> f (Maybe b)
traverse g Nothing = _none
traverse g (Just a) = _some
Now in the first case, you need to produce f (Maybe b), and all you have is g :: a -> f b. Since you don't have any f values, and you don't have any a values, the only thing you can produce is pure Nothing.
In the second case, you have to produce f (Maybe b) and you have g :: a -> f b and a. So the only interesting way to start is to apply g to a, getting g a :: f b. Now you have two options to consider: you could throw away this value, and just return Nothing, or you can wrap it up in Just.
By the identity law, traverse Identity (Just a) = Identity (Just a). So you're not allowed to return Nothing. The only legal definition is
traverse _ Nothing = pure Nothing
traverse g (Just a) = Just <$> g a
The Traversable instance for Maybe is completely determined by the Traversable laws and parametricity.
Now it's possible to fold using traverse:
foldMapDefault :: (Traversable t, Monoid m)
=> (a -> m) -> t a -> m
foldMapDefault f xs =
getConst (traverse (Const . f) xs)
As this applies to Maybe,
foldMapDefault f Nothing =
getConst (traverse (Const . f) Nothing)
foldMapDefault f (Just a) =
getConst (traverse (Const . f) (Just a))
Expanding our definitions,
foldMapDefault f Nothing = getConst (pure Nothing)
foldMapDefault f (Just a) = getConst (Just <$> (Const (f a)))
By the definitions of pure and <$> for Const, these are
foldMapDefault f Nothing = getConst (Const mempty)
foldMapDefault f (Just a) = getConst (Const (f a))
Unwrapping the constructor,
foldMapDefault f Nothing = mempty
foldMapDefault f (Just a) = f a
And this is indeed exactly how foldMap is defined for Maybe.
A: As "basic concepts" go, this is pretty mind-bending, so don't feel too bad.
It may help to set aside your intuition about what a fold does to a list and think about what type a specific folding function (let's use foldr) should have if applied to a Maybe. Writing List a in place of [a] to make it clearer, the standard foldr on a list has type:
foldr :: (a -> b -> b) -> b -> List a -> b
Obviously, the corresponding fold on a Maybe must have type:
foldrMaybe :: (a -> b -> b) -> b -> Maybe a -> b
Think about what definition this could possibly have, given that it must be defined for all a and b without knowing anything else about the types. As a further hint, see if there's a function already defined in Data.Maybe that has a similar type -- maybe (ha ha) that'll give you some ideas.
| |
doc_23530594
|
So far I've done the self check-in system and the random generation of room numbers.
Now I'm confused about selecting an available room.
Here is the table of the rooms provided.
dor is the date of reservation, or check-in date
dco is the check-out date
room_num roomtype dor dco
101 Single 0000-00-00 0000-00-00
102 Single 2014-05-29 2014-05-31
103 Single 0000-00-00 0000-00-00
111 Deluxe 0000-00-00 0000-00-00
112 Deluxe 0000-00-00 0000-00-00
113 Deluxe 2000-00-00 0000-00-00
114 Deluxe 2014-06-01 2014-06-06
115 Deluxe 0000-00-00 0000-00-00
116 Deluxe 2014-06-08 2014-06-11
121 Superior 0000-00-00 0000-00-00
122 Superior 0000-00-00 0000-00-00
0000-00-00 means the room has not yet been selected by the system, because room_num is selected randomly by the system.
Below is the room_booked table. All the data below comes from an SQL update trigger on the rooms table:
room_num roomtype dor dco
102 Single 2014-05-29 2014-05-31
114 Deluxe 2014-06-01 2014-06-06
116 Deluxe 2014-06-08 2014-06-11
Now, what is the SQL code to select an available room number from the room table, based on the selected roomtype, excluding rooms whose stay in room_booked overlaps the requested check-in and check-out dates?
Thanks in advance
A: As I've already said in my comment, I would prefer another database structure. So I first created the tables room and room_booked:
-- DROP TABLE IF EXISTS room_booked;
-- DROP TABLE IF EXISTS room;
CREATE TABLE room (
room_num INT NOT NULL,
roomtype ENUM('Single', 'Deluxe', 'Superior') NOT NULL,
PRIMARY KEY (room_num)
) ENGINE=InnoDB;
CREATE TABLE room_booked(
id INT NOT NULL,
room_num INT NOT NULL,
dor DATE NOT NULL,
dco DATE NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (room_num) REFERENCES room(room_num)
) ENGINE=InnoDB;
and filled them with your original data
INSERT INTO room (room_num, roomtype) VALUES (101, 'Single');
INSERT INTO room (room_num, roomtype) VALUES (102, 'Single');
INSERT INTO room (room_num, roomtype) VALUES (103, 'Single');
INSERT INTO room (room_num, roomtype) VALUES (111, 'Deluxe');
INSERT INTO room (room_num, roomtype) VALUES (112, 'Deluxe');
INSERT INTO room (room_num, roomtype) VALUES (113, 'Deluxe');
INSERT INTO room (room_num, roomtype) VALUES (114, 'Deluxe');
INSERT INTO room (room_num, roomtype) VALUES (115, 'Deluxe');
INSERT INTO room (room_num, roomtype) VALUES (116, 'Deluxe');
INSERT INTO room (room_num, roomtype) VALUES (121, 'Superior');
INSERT INTO room (room_num, roomtype) VALUES (122, 'Superior');
INSERT INTO room_booked (id, room_num, dor, dco) VALUES (1, 102, '2014-05-29', '2014-05-31');
INSERT INTO room_booked (id, room_num, dor, dco) VALUES (2, 114, '2014-06-01', '2014-06-06');
INSERT INTO room_booked (id, room_num, dor, dco) VALUES (3, 116, '2014-06-08', '2014-06-11');
Now the SELECT statement. In this example the user wants to book a Deluxe room at 2014-06-01.
SELECT
room_num, roomtype
FROM
room
WHERE
room_num NOT IN (
SELECT
room.room_num
FROM
room
LEFT OUTER JOIN
room_booked ON room_booked.room_num = room.room_num
WHERE
-- room type
roomtype != 'Deluxe'
OR (
-- wished booking date is after or at the DOR date
'2014-06-01' >= dor
-- OR wished booking date is before the DCO date
AND '2014-06-01' < dco
)
)
ORDER BY
RAND()
LIMIT 0, 1
;
If you only take the part before ORDER BY, you'll get a list of the Deluxe rooms available at 2014-06-01.
A: It sounds like you want something like:
SELECT TOP 1 ROOM_NUM
FROM TABLE_NAME
WHERE roomtype=varRoomType AND
      dor='0000-00-00'
Note that TOP 1 is SQL Server syntax, not MySQL; in MySQL you would use LIMIT 1 instead, and in Oracle you would add ROWNUM = 1 to the WHERE clause. Either way, this should get you the first open room that matches a certain room type.
A: You could try:
SELECT room_num
from room
where roomtype = $roomtype
and $room_booked
not between dor and dco
That should grab any rooms that are not currently booked between those two dates.
A: What I understand from your problem statement is that you need an SQL statement which will provide all currently available rooms for booking.
There could be two solutions for this
Solution 1
SELECT
room_num
FROM
room
WHERE
room_num NOT IN(SELECT room_num FROM room_booked)
AND roomtype = {ROOMTYPE}
ORDER BY RAND() LIMIT 1
Solution 2
SELECT
room_num
FROM
room
WHERE
dor = '0000-00-00' AND dco = '0000-00-00'
AND roomtype = {ROOMTYPE}
ORDER BY RAND() LIMIT 1
A: You have to select the room randomly, given the room type. In MySQL you can use this query:
SELECT * FROM room where roomtype='your_given_room_type' and dor='0000-00-00' and dco='0000-00-00' ORDER BY RAND() LIMIT 1
So it will pick one random room from the available rooms
| |
doc_23530595
|
SELECT * FROM CA_SAC_persons,CA_KC_persons,CA_SFC_persons,CA_SJ_persons
WHERE MATCH('@fullname("^John$" | "^Joseph$" | "^Jose$" | "^Josh$" | "^Robs$")')
ORDER BY filing_date_ts DESC LIMIT 0,1;SHOW META;
Result :
+---------------+-------------+
| Variable_name | Value |
+---------------+-------------+
| total | 1000 |
| total_found | 4813 |
| time | 0.019 |
| docs[9] | 4603 |
| hits[9] | 5312 |
+---------------+-------------+
SELECT * FROM CA_SAC_persons,CA_KC_persons,CA_SFC_persons,CA_SJ_persons
WHERE MATCH('@fullname("^John$" | "^Joseph$" | "^Jose$" | "^Josh$" | "^Robs$")')
ORDER BY filing_date_ts ASC LIMIT 0,1;SHOW META;
Result :
+---------------+-------------+
| Variable_name | Value |
+---------------+-------------+
| total | 1000 |
| total_found | 4812 |
| time | 0.019 |
| docs[9] | 4603 |
| hits[9] | 5312 |
+---------------+-------------+
Why does total_found show 1 record less in the 2nd query?
| |
doc_23530596
|
I want to compile the Java source code with debug info in order to get method parameter names at runtime.
How can I do this ?
A: Windows -> Preferences -> Installed JREs. Add a JDK (yes, you have to add the JDK), e.g. C:\Program Files\Java\jdk1.6.0_34. Set it as the default, so that you can see the Java source code.
OR
Window -> Preferences -> Java -> Compiler
Check all the boxes under the heading "Classfile Generation".
A: Run the server in debug mode, add a breakpoint in your code, and then step through it with F6.
| |
doc_23530597
|
String q = "select author from books where 1";
try{
HttpClient httpClient = new DefaultHttpClient();
HttpPost httpPost = new HttpPost("http://www2.XXXX./XXXX/X.php?qy="+q);
//"http://10.0.2.2/tut.php", http://www.XXXX/XXXX/tut.php
HttpResponse response = httpClient.execute(httpPost);
HttpEntity entity = response.getEntity();
is = entity.getContent();
}catch(Exception e){
e.printStackTrace();
System.out.println("Exception 1 caught");
}
However, the PHP file cannot get the value from Java (PHP connects to MySQL correctly).
php coding:
<?php
$con=mysql_connect("XXX.XXX","XX","XXX");
mysql_select_db("XX",$con);
$st = $_GET['qy'];
$r = mysql_query("$st");
while($row=mysql_fetch_array($r))
{
$out[]=$row;
}
print(json_encode($out));
mysql_close($con);
?>
I found that if I just pass the table name to PHP, it works, but as the passed value gets longer the request ends up in the catch block. How can I fix this? And how about passing more than one variable to PHP (i.e. mysql_query("select $_GET['col'] from $_GET['table'] where $_GET['condition']"))?
A: Use POST instead of GET; it's safer for this kind of request.
Something like this:
String col = "author";
String table = "books";
String condition = "1";
try{
List<NameValuePair> params = new ArrayList<NameValuePair>();
params.add(new BasicNameValuePair("col", col));
params.add(new BasicNameValuePair("table", table));
params.add(new BasicNameValuePair("condition", condition));
HttpClient httpClient = new DefaultHttpClient();
HttpPost httpPost = new HttpPost("http://www2.XXXX./XXXX/X.php");
httpPost.setEntity(new UrlEncodedFormEntity(params));
HttpResponse response = httpClient.execute(httpPost);
HttpEntity entity = response.getEntity();
InputStream is = entity.getContent();
}catch(Exception e){
e.printStackTrace();
System.out.println("Exception 1 caught");
}
PHP:
<?php
if (isset($_POST['col']) && isset($_POST['table']) && isset($_POST['condition'])){
$columnName= $_POST['col'];
$tableName = $_POST['table'];
$condition = $_POST['condition'];
$dbh=mysql_connect ("localhost", "username", "password") or die('Cannot connect to the database because: ' . mysql_error());
mysql_select_db ("database_name");
$sql=mysql_query("select `$columnName` from `$tableName` where $condition"); // backticks, not quotes, around identifiers
while($row=mysql_fetch_assoc($sql)) $output[]=$row;
print(json_encode($output));
mysql_close();
}
?>
It works (credit to mandi yeung).
A: Never send raw queries to the backend. Your approach is equivalent to an EditText where the user can run any query he likes on your database. Keep your queries out of your requests at any cost; query parts (as the previous user suggested) count too, since they end up doing the same thing.
Sending a select query to the backend may (and eventually will) grant an attacker full access to your database. That means he can change or simply delete the data.
I could just tamper with your web packet and send a drop table statement instead:
drop table authors;
drop table books;
A better approach would be sending JSON requests like this:
String query = "{"
    + "\"requestedData\": \"getAllBooks\","
    + "\"sessionId\": \"637euegdifoidhrgeydydihvr\""
    + "}";
new BasicNameValuePair("smth", query);
Then on your PHP side you read the input as plain text from $_POST["smth"], decode the JSON value into an object or array, and determine which query to run:
$smth = json_decode($_POST["smth"], true);
if ($smth["requestedData"] === "getAllBooks") {
    // get-books query executed here
}
Remember: this approach is still not perfect, but it is better than yours.
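A self-contained sketch of that request-building idea, so the payload can be constructed and checked in isolation (the action name and session id are just the answer's placeholders; a real app would use a JSON library instead of string concatenation):

```java
public class ActionRequest {
    // Build the small JSON "action" payload by hand.
    static String buildRequest(String requestedData, String sessionId) {
        return "{\"requestedData\":\"" + requestedData + "\","
             + "\"sessionId\":\"" + sessionId + "\"}";
    }

    public static void main(String[] args) {
        // This string is what would go into the "smth" form field instead of raw SQL.
        String query = buildRequest("getAllBooks", "637euegdifoidhrgeydydihvr");
        System.out.println(query);
    }
}
```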
| |
doc_23530598
|
<%= render 'info' %>
<%= render 'info2' %>
_info.html.erb:
<button class="btn btn-info btn-lg" type="button" data-toggle="modal" data-target="#myModal">Info</button>
<div id="myModal" class="modal fade" tabindex="-1">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-header">
<button class="close" type="button" data-dismiss="modal">×</button>
<h4 class="modal-title">Contract 1</h4>
</div>
<div class="modal-body">
...
</div>
<div class="modal-footer">
<button class="btn btn-default" type="button" data-dismiss="modal">Close</button>
</div>
</div>
</div>
</div>
_info2.html.erb:
<div id="myModal2" class="modal fade" tabindex="-1">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-header">
<button class="close" type="button" data-dismiss="modal">×</button>
<h4 class="modal-title">Contract 1</h4>
</div>
<div class="modal-body">
...
</div>
<div class="modal-footer">
<button class="btn btn-default" type="button" data-dismiss="modal">Close</button>
</div>
</div>
</div>
</div>
And I have item.js:
$('#myModal').on('hidden.bs.modal', function (e) {
$('#myModal2').modal('show');
})
And application.js :
//= require jquery
//= require jquery_ujs
//= require bootstrap
//= require cocoon
//= require angular
//= require turbolinks
//= require_tree .
When I open the page via a link and press the button to show the info, it sometimes doesn't work: just a blackout without the modal. Often, though, it works correctly.
However, when I refresh the page with F5, it always shows the first modal when I press the info button.
Please help me with that magic.
| |
doc_23530599
|
I've got a little Python app using Flask as my web framework, and I want to package it up into a jar.
I've followed the jar method tutorial here, and I've got my app components all set inside the jar. However when I attempt to execute my script inside the jar using this command:
java -jar folder/myapp.jar myapp.py runserver
I'm met with your typical Python import error:
File "folder/myapp.py", line 24, in <module>
from flask import Flask, render_template, flash, redirect, request, session
ImportError: No module named flask
I'm assuming I need to somehow package up my modules inside the jar with the rest of my code, but I'm at a loss as to how. Any advice would be appreciated!
A: The simplest solution is to add the flask module to the root of myapp.jar. You can do this with the jar utility that comes with the JDK, or use the <jar> task in Ant. I am sure Maven has a way to do the same thing, but I don't know Maven.
Something like this should get flask into the root of your *.jar file, assuming that flask is the path to your flask module:
$ jar uf folder/myapp.jar flask
I am currently using Ant, and that would look something like this if all you are doing is adding flask:
<target name="add-flask-module">
    <!-- basedir="." with includes="flask/**" keeps the flask/ directory itself in the jar;
         basedir="flask" would flatten its contents into the jar root -->
    <jar destfile="folder/myapp.jar" basedir="." includes="flask/**" update="true"/>
</target>
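As a sanity check after updating the jar, you can verify from Java that the module really sits under a flask/ directory at the jar root. This sketch builds a throwaway jar just to illustrate the check; the entry name is made up:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class JarCheck {
    public static void main(String[] args) throws Exception {
        File jar = File.createTempFile("myapp", ".jar");
        jar.deleteOnExit();
        // Build a tiny jar with a flask/ entry at the root, mimicking `jar uf myapp.jar flask`.
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new JarEntry("flask/__init__.py"));
            out.closeEntry();
        }
        // Verify the module sits at the jar root, where the jar-based importer looks.
        try (JarFile jf = new JarFile(jar)) {
            System.out.println(jf.getEntry("flask/__init__.py") != null); // true
        }
    }
}
```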
|