_id | text | title |
|---|---|---|
d15201 | I am not sure what you want, but perhaps just -auto-level in ImageMagick.
Input:
I downloaded a smaller version, ldem_4.tif (1440x720), and used IM 7 (which you need, with the HDRI compile, since your image is outside the normal range of 16-bit integers).
magick ldem_4.tif -auto-level x.png | |
d15202 | Check getParameterMap on the Request object:
http://docs.oracle.com/javaee/6/api/javax/servlet/ServletRequest.html#getParameterMap()
If it isn't there, you need to check whether your parameter is passed from the browser; you can check the Network tab in the Chrome developer tools. | |
d15203 | After various Google searches, I came across a simple solution. Since I'm still a relatively new kid on the programming block, I didn't notice that my code didn't call Chromium at all. The price I wanted to scrape is rendered by JavaScript, so I just had to render the page via r.html.render() in Chromium.
solution:
from requests_html import HTMLSession
url = 'https://www.staegerag.ch/shop/index.php?id_product=321&controller=product&search_query=level&results=2#/273-farbe-gold_tone_light_oak/233-sprachsteuerung-nein'
session = HTMLSession()
r = session.get(url)
r.html.render()
print(r.html.find('span[id="our_price_display"]', first=True).text) | |
d15204 | Use toDF() and provide the column names if you have them.
val textDF = textRDD.toDF("title", "content")
textDF: org.apache.spark.sql.DataFrame = [title: string, content: string]
or
val textDF = textRDD.toDF()
textDF: org.apache.spark.sql.DataFrame = [_1: string, _2: string]
The shell auto-imports (I am using version 1.5), but you may need import sqlContext.implicits._ in an application.
A: I usually do this like the following:
Create a case class like this:
case class DataFrameRecord(property1: String, property2: String)
Then you can use map to convert into the new structure using the case class:
rdd.map(p => DataFrameRecord(p._1, p._2)).toDF() | |
d15205 | You have several obvious errors here.
Firstly, you are posting to /ajax_demo/, but you don't have that as a URL; only /demo/. You need to either change the URL pattern or the URL you are posting to.
Secondly, you have included the app patterns wrongly; you must not use a terminal $ with include. It should be:
url(r'^$', include( 'app.urls', namespace = "mainPage")),
Thirdly, you are not including any data with your request. You need a data key:
$.ajax({
url: '/demo/',
type: 'get',
data: {"cluster": "value"}
... | |
d15206 | I have read the YouTube documentation in Japanese, but its translation is somewhat misleading; the comments on the question helped me understand it.
YouTube clearly declares that it counts only plays initiated via a native play button.
This means playback started via an API call is ignored.
(I guess a user tap action would be counted, but the play button in my app is not "native".)
Note: A playback only counts toward a video's official view count if it is initiated via a native play button in the player.
Playback controls and player settings | |
d15207 | Well, my solution was to add the -stdlib flag to build/config/Darwin-clang and configure the build with the required Darwin-clang config. | |
d15208 | If you are renaming a project within a solution, that is under source control, really all you need to do is to rename the project in Visual Studio and then submit the changes to the project file and the solution file, back into source control.
Visual Studio and TFS should handle all of the changes for you, VS will rename the project and update the references in the SLN file.
TFS will handle the rename and will maintain the history line.
The only time it should get complicated is if you are moving projects and solutions within source control. When you are carrying out that sort of task, the list above is a fair description of what needs to be done, but after step 4 I would just open the solution, remove the project that can no longer be found, and add in the newly renamed project; this automatically handles the .sln file changes. Obviously this would orphan the project's history in source control, but you would make the project name change through TFS before reopening the solution.
If you want to manually change the .sln file, then a find-and-replace operation is the simplest way to update it.
Coming back to your question:
You really should ensure the .sln file is correct, as it tells VS where to download the files from and which projects actually make up the solution. By not updating the .sln file correctly, you or other users of TFS may not get the correct files downloaded, and you may have issues opening your solution.
An example of fall out from not having these files in line can be found in this question Why missing <SccProjectName> in project file cause "The project file is not bound to source control" | |
d15209 | You can use the time() function for that.
Here is an example where it marks candles between 1200 and 1500. It shouldn't be difficult to adapt according to your needs.
//@version=5
indicator("My Script", overlay=true)
timeAllowed = input.session("1200-1500", "Allowed hours")
// Check to see if we are in allowed hours using session info on all 7 days of the week.
timeIsAllowed = time(timeframe.period, timeAllowed + ":1234567")
plotchar(timeIsAllowed, size=size.small) | |
d15210 | Hi, you can do something like this:
library(shiny)
library(shinydashboard)
CustomHeader <- dashboardHeader()
CustomHeader$children[[3]]$children <- div(style="min-width:200px;",tags$input(id="searchbox",placeholder = " Search...",type="text",class="chooser-input-search",style="width:200px;height:50px;"))
ui <- dashboardPage(
CustomHeader,
dashboardSidebar(),
dashboardBody()
)
server <- function(input, output, session) {}
shinyApp(ui, server)
A: Based on Pork Chop's answer, you can simply use selectInput (or other Shiny inputs) placed in a div with float:left so they span horizontally:
CustomHeader <- dashboardHeader()
CustomHeader$children[[3]]$children <- list(
div(style="float:left;height:50px",selectInput("select1", NULL, c("a","b","c"))),
div(style="float:left;height:50px",selectInput("select2", NULL, c("d","e","f"))))
ui <- dashboardPage(
CustomHeader,
dashboardSidebar(),
dashboardBody(textOutput("text1"),textOutput("text2"))
)
server <- function(input, output, session) {
output$text1 <- renderText({input$select1})
output$text2 <- renderText({input$select2})
}
shinyApp(ui, server) | |
d15211 | You can use mouseout and mouseover events to hide or show the scroll bar using css.
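A minimal sketch of that idea, with a stubbed element so it can run outside a browser (in a real page you would attach show/hide to the mouseover/mouseout listeners of your scrollable element):

```javascript
// The stub object stands in for a DOM node; only the style property is used.
const makeScrollToggler = (el) => ({
  show: () => { el.style.overflowY = 'auto'; },   // reveal the scrollbar
  hide: () => { el.style.overflowY = 'hidden'; }, // hide the scrollbar
});

const el = { style: {} };
const toggler = makeScrollToggler(el);
toggler.show();
console.log(el.style.overflowY); // 'auto'
toggler.hide();
console.log(el.style.overflowY); // 'hidden'
```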
A: For a cross browser solution you should use javascript/jquery. There are many successful scrollbar plugins that you can use | |
d15212 | I think you hit this gtkmm bug, apparently triggered by more recent versions of GTK+, and now fixed:
https://bugzilla.gnome.org/show_bug.cgi?id=681323
I have asked Ubuntu to update their package, but they are usually slow about that if they do it at all:
https://bugs.launchpad.net/ubuntu/+source/gtkmm3.0/+bug/1046469
A: You might want to try reinstalling libgtkmm-3.0-dev. The code compiles fine for me but I get a Seg Fault. It does work when I change Gtk::ApplicationWindow to Gtk::Window.
A: There is nothing wrong with your install; that code is bad.
Try it again, using
Gtk::Window window;
instead of the ApplicationWindow. When the GNOME documentation for a given class has a description of "TODO", that's a bad thing. | |
d15213 | No. In general these are copied into the app's bundle at compile time, letting the app access them when it needs them (by finding the files). | |
d15214 | Each merchant will need to install your app. During the installation phase, Shopify will pass you, as an argument, an access token, which you will need in order to use the Admin API.
If we're talking about an embedded app, every request made is expected to be authenticated.
Depending on which kind of app you want to create (embedded or not) and the language you want to use, using the Shopify CLI to create the first draft of the app is really recommended. It will create the base for an installable app. Here is the documentation: https://shopify.dev/apps/getting-started/create
You need to install the Shopify CLI and then run
shopify app create (node | ruby | php)
depending on your language of choice.
A: For those who might be interested, I finally found the link to the information.
Shopify - Getting started with OAuth | |
d15215 | Instead of GetItemString at the top, you would need to do one or two Finds in the DataWindow for the registration number in data. If the first Find returns 0, you are OK. If it returns the current row, you need to search from that row to the last row, unless you are on the last row, which you may well be if you are inserting. However, I wouldn't do it that way, because it won't work in a multi-user environment. Instead, you should go ahead and insert the row, and then if you get a duplicate-key error, tell the user the registration number is already in the system. If the user is going to enter a lot of data, you could try to SELECT the registration number from the database when the user enters it, but you still have to be able to handle the duplicate-key error.
A: *
*I don't understand what kind of check you are doing here for registration_number: you are getting the current value of the column in the DW into car_registration_number, then you are comparing it to the value that was modified (the code comes from the ItemChanged event?). Do you expect to enter a value that is different from the current one?
Also beware of GetItemString if the type of the column is not text (as it is called ..._number): PB may crash, return a null value, or silently fail (depending on PB's mood of the moment ;). If you get a null value, your if check will always fail.
If you want to get numerical values, use GetItemNumber instead.
*You can set the capacity column to be not null in your database. When saving the DW, you will get an error telling you that a required value was not set.
A better idea would be to iterate, in the UpdateStart event, over the columns that are members of the table's primary key and check whether they are set.
To write some dynamic code, you can get the columns from the DW at runtime by describing datawindow.column.count, then checking the PK members with #1.key (replace #1 by #2, #3, ...) to see if the check must be done on that column. | |
d15216 | A simple solution would be to initialize images.
So instead of just doing this:
ArrayList<Image> images = null;
Do this:
ArrayList<Image> images = new ArrayList<>(); | |
d15217 | As cyboashu mentioned in the comments, you'll need to put Application.Volatile in the function. But as Rik mentioned, this also can lead to performance issues. This particular post and its answers are relevant to your issue.
Just a summary of the options available to you:
Function xyz()
Application.Volatile 'or Application.Volatile True
...
End Function
Or some manual keypresses
*
*F9 Recalculates all worksheets in all open workbooks
*Shift+ F9 Recalculates the active worksheet
*Ctrl+Alt+ F9 Recalculates all worksheets in all open workbooks (Full recalculation)
*Shift + Ctrl+Alt+ F9 Rebuilds the dependency tree and does a full recalculation
Or an option that worked for me (since I only needed to have my UDFs recalculated each time I ran a specific macro) was to use SendKeys like so:
Sub xyz()
...
'Recalculates all formulas by simulating keypresses for {ctrl}{alt}{shift}{F9}
SendKeys "^%+{F9}"
...
End Sub
(Here's the MSDN page for SendKeys) | |
d15218 | The access time of heap memory versus stack memory is the same on any standard PC hardware.
Since you are not resizing the vector in your filtering algorithm, you can specify the size of the vector when you create it:
std::vector<int> coef(90);
You could also use an array directly.
int * coef = new int[90];
A: Your point is moot. I assure you, handling data for DSP within std::vector is not only perfectly possible, it's also commonly done – for example, GNU Radio and its highly optimized DSP primitive library, libVOLK, use vectors extensively.
There's a lot of very strange literature that suggests that heap and stack memory behave differently – that's absolutely not the case on any platform that I've worked with (those are limited to x86, x86_64, ARMv7, and H8300 so far) and you can safely disregard these.
Memory is memory, and the memory controller/cache controller of your CPU will keep locally what was used last/most frequently. As long as your memory model is sequential (Bjarne Stroustrup has held a nice presentation with the topic "I don't know your data structure, but I'm sure my vector will kick its ass"), your CPU cache will hold it locally if you access it. | |
d15219 | This should be possible using Nuget.exe and the install-package etc. Powershell cmdlets. See the blog post Installing NuGet Packages outside of Visual Studio for details! | |
d15220 | So it's because your <FormControl is the first element with this id and <input is the second (or vice versa).
There is a wide list of approaches:
*
*.at(0) will work, but this way you will never know if you (because of an error in the code) render multiple elements. It might happen when conditions in conditional rendering ({someFlag && <...) that are supposed to be mutually exclusive are not. So really, it's a bad way.
*Mock FormControl to be a final element, so <input will not be returned anymore by .find() (honestly, I've never used this and just assume it will work; it still looks messy and needs additional boilerplate code for each test file, so it's not really a handy way):
jest.mock('../FormControl.jsx', () => null);
*Use hostNodes() to filter so only native elements (like <span>) are returned:
const emailInput = wrapper.find("#email").hostNodes();
I vote for the 3rd option as the most reliable while still being safe for catching errors in the code's logic. | |
d15221 | Pretty sure you just need to CGColorRelease(drawColor) to prevent the leak. See how that helps your performance.
A: If you're leaking CGColor objects, the first step to solving your problem is to stop leaking them. You need to call CGColorRelease when you're done with a color object. For example, you are obviously leaking the drawColor object in your example code. You should be doing this:
CGColorRelease(drawColor);
drawColor=CGColorCreate(CGColorSpaceCreateDeviceRGB(), components);
to release the old object referenced by drawColor before you assign the new object to drawColor.
CGColor objects are immutable, so you won't be able to just modify your existing objects. | |
d15222 | In typical polar coordinates, 0 points to the East and angles go counter-clockwise. Your coordinates start at the North and probably go clockwise. The simplest way to fix your code is to first do the conversion between angles using this formula:
flippedAngle = π/2 - originalAngle
This formula is symmetrical in that it converts both ways between "your" and "standard" coordinates. So if you change your code to:
public double bearingTo(Vector target) {
return Math.PI/2 - (Math.atan2(target.getY() - y, target.getX() - x));
}
public static Vector fromPolar(double magnitude, double angle) {
double flippedAngle = Math.PI/2 - angle;
return new Vector(magnitude * Math.cos(flippedAngle),
magnitude * Math.sin(flippedAngle));
}
It starts to work as your tests suggest. You could also apply some trigonometry to avoid this Math.PI/2 - angle calculation, but I'm not sure that would really make the code clearer.
If you want your "bearing" to be in the [0, 2π) range (i.e. always non-negative), you can use this version of bearingTo (with getTheta also fixed):
public class Vector {
private final double x;
private final double y;
public Vector(double xIn, double yIn) {
x = xIn;
y = yIn;
}
public double getX() {
return x;
}
public double getY() {
return y;
}
public double getR() {
return Math.sqrt((x * x) + (y * y));
}
public double getTheta() {
return flippedAtan2(y, x);
}
public double bearingTo(Vector target) {
return flippedAtan2(target.getY() - y, target.getX() - x);
}
public static Vector fromPolar(double magnitude, double angle) {
double flippedAngle = flipAngle(angle);
return new Vector(magnitude * Math.cos(flippedAngle),
magnitude * Math.sin(flippedAngle));
}
// flip the angle between 0 is the East + counter-clockwise and 0 is the North + clockwise
// and vice versa
private static double flipAngle(double angle) {
return Math.PI / 2 - angle;
}
private static double flippedAtan2(double y, double x) {
double angle = Math.atan2(y, x);
double flippedAngle = flipAngle(angle);
// additionally put the angle into [0; 2*Pi) range from its [-pi; +pi] range
return (flippedAngle >= 0) ? flippedAngle : flippedAngle + 2 * Math.PI;
}
} | |
d15223 | Check the responsive option and set rules accordingly.
For 100 by 100 you should remove all gridlines, credits, labels and titles.
An example would be:
Highcharts.chart('container', {
chart: {
type: 'line',
width: 100,
height: 100
},
yAxis: {
gridLineWidth: 0,
minorGridLineWidth: 0,
labels: {
enabled: false,
},
title: {
text: null
}
},
xAxis: {
labels: {
enabled: false,
},
title: {
text: null
}
},
title: {
useHTML: true,
text: ''
},
subtitle: {
text: null
},
tooltip: {
enabled: false,
},
legend: {
enabled: false,
},
credits: {
enabled: false
},
series: [{
data: [29.9, 71.5, 106.4, 129.2, 144.0, 176.0, 135.6, 148.5, 216.4, 194.1, 95.6, 54.4]
}]
});
$('#container').highcharts().renderer.text('mini Highcharts', 20, 90)
.css({
fontSize: '8px',
color: '#7d7d7d'
})
.add();
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://code.highcharts.com/highcharts.js"></script>
<div id="container" style="height: 400px"></div>
You can also check this using chart.setSize():
Fiddle
Update
You could use $(elem).highcharts().setSize(100, 100);
Updated fiddle | |
d15224 | :hmac_sha256_base64($url, "rUOeZMYraAapKmXqYpxNLTOuGNmAQbGFqUEpPRlW&YePiEkSDFdYAOgscijMCazcSfBflykjsEyaaVbuJeO"));
# full URL incl sig
$furl = "https://api.twitter.com/oauth/request_token?$str&oauth_signature=$sig";
# system("curl -o /tmp/testing.txt '$furl'");
print "FURL: $furl\n";
print "STR: $str\n";
print "SIG: $sig\n";
sub urlencode {
my($str) = @_;
$str=~s/([^a-zA-Z0-9])/"%".unpack("H2",$1)/iseg;
$str=~s/ /\+/isg;
return $str;
}
Note: I realize there are many other possible reasons this is failing,
but current question is: am I sending the parameters correctly and am
I computing the signature correctly.
A: Twitter asks that you do a POST for the request token. | |
d15225 | I dislike this use of the ES6 syntax as it obscures the meaning of the code only in the interest of brevity. The best code is not always the shortest possible way to write it. Give people tools and they will sometimes use them inappropriately.
forbidden() is a function that takes one argument message that returns a middleware handler that uses that one argument. So, it's a way of making a customized middleware handler that has a parameter pre-built-in. When you call forbidden(msg), it returns a middleware handler function which you can then use as middleware.
The ES5 way of writing this (ignoring for a moment the difference in this which would be different, but is not used here) would look like this:
const forbidden = function(message) {
return function(req, res, next) {
res.status(403).send(message);
}
}
So, when you call forbidden(someMsg), you get back a function that can be used as middleware.
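A tiny standalone demo of the factory (the res object here is a hand-rolled stub, not the real Express response):

```javascript
// forbidden(msg) is a factory: it returns a middleware function with msg baked in.
const forbidden = (message) => (req, res, next) => res.status(403).send(message);

// Minimal stub mimicking the chainable Express response API.
const sent = [];
const res = {
  status(code) { this.code = code; return this; }, // chainable, like Express
  send(body) { sent.push([this.code, body]); },
};

const middleware = forbidden('only admins can list users');
middleware({}, res, () => {});
console.log(sent); // [[403, 'only admins can list users']]
```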
If so, why isn't it in the parenthesis?
With the ES6 arrow syntax, a single argument does not have to be in parentheses. Only multiple arguments require parentheses.
Another interesting thing I noticed is that the middleware forbidden is being invoked in the get request
This is because invoking it returns the actual middleware function so you have to execute to get the return value which is then passed as the middleware.
and mustBeLoggedIn is only being passed and not invoked. Why?
Because it's already a middleware function, so you just want to pass a reference to it, not invoke it yet.
FYI, this route:
.get('/', forbidden('only admins can list users'), (req, res, next) =>
User.findAll()
.then(users => res.json(users))
.catch(next))
does not make sense to me based on the code you've shown because forbidden() will return a middleware that will ALWAYS return a 403 response and will not allow the next handler to get called. This would only make sense to me if forbidden() had logic in it to check if the current user is actually an admin or not (which you don't show). | |
d15226 | All you need to escape is the single quote ', by doubling it to ''.
commandText = string.Format("CREATE LOGIN [me] WITH PASSWORD = '{0}'", pass.Replace("'", "''"));
An alternative would be to create a stored procedure with a password parameter and use it to build a CREATE LOGIN string, which you then run via sp_executesql. | |
d15227 | I doubt there is any type of documentation on what you want, but as suggested above, running your own tests shouldn't be too hard. I don't recall the APIs, but on any mobile device, there are going to be battery state objects you can access giving, at the very least, remaining battery energy. Write two test apps, each using the different paradigms. Run each, one at a time and for a long duration. Check on the energy usage at the beginning and end.
A: This is late for an answer but one aspect to remember about battery consumption is the use of the radios (Bluetooth and WiFi).
For tablet apps, try to manage your app by stepping back and analyzing what data you'll need from the database, and try to get the data in one shot so the OS can turn off the radio. If you make an SQL call each time the user presses a button, then the radio is on more and drains the battery. The OS might also leave the radio on "a little longer" in case you make another query.
For the rest of the UI of the app, you're safe to count on an 8 hr shift and then they dock it for recharge.
You can watch for the battery notifications as well so you can save the info in the app before the OS shuts you down.
Other than that, each app is unique and you'll need to run these tests during your QA cycle. | |
d15228 | Solved: jsfiddle
Based on this answer to add the third level menu with a little modification of the js code to close the third level menu on click:
$('ul.dropdown-menu [data-toggle=dropdown]').on('click', function(event) {
// Avoid following the href location when clicking
event.preventDefault();
// Avoid having the menu to close when clicking
event.stopPropagation();
// If a menu is already open we close it
if($(this).parent().hasClass('open')){
$('ul.dropdown-menu [data-toggle=dropdown]').parent().removeClass('open');
}else{
// opening the one you clicked on
$(this).parent().addClass('open');
var menu = $(this).parent().find("ul");
var menupos = menu.offset();
if ((menupos.left + menu.width()) + 30 > $(window).width()) {
var newpos = - menu.width();
} else {
var newpos = $(this).parent().width();
}
menu.css({ left:newpos });
}
});
A: <li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Dropdown <span class="caret"></span></a>
<ul class="dropdown-menu">
<li class="dropdown-submenu">
<a href="#">
Dropdown2
</a>
<ul class="dropdown-menu" role="menu">
<li><a href="#">Action sub</a></li>
</ul>
</li>
The nesting should be done properly I guess, with the correct classes. | |
d15229 | I copied this from what you commented in your question:
No i want to return the id of the updated row or complete row if possible
If you want to return the ID of the row you just updated, you can use Eloquent to accomplish this.
You can use a route like this:
Route::put('demo/{id}', 'Controller@update');
Then in your update function in your controller you can get the ID.
Find the model with $model = Model::find($id);
do things with the data you get.
Like $model->name = Input::get('name');
Then use $model->save();
After saving you can do $model->id;
Now you get back the ID about the row you just updated.
Refer back to this question:
Laravel, get last insert id using Eloquent
A: But anyway, it'll always be at least 2 queries (a SELECT and an UPDATE in MySQL, however you do it).
You can check Laravel Eloquent if you want a "cleaner" way to do this. | |
d15230 | You are trying to log myData which doesn't exist.
try this
$http.get(url).then(function(response){
// Will have proper data only if your response has the key 'records'.
// Otherwise your $scope.myData will have undefined value
console.log(response.data) //Check if Key 'records' is present.
$scope.myData = response.data.records;
console.log($scope.myData.name);
});
A: myData is not defined.
Use $scope.myData to get the output.
A: The error is you are referencing response.data.records in the success callback of then. Your json will be available on response.data directly. So what you need to do is just access that like
$http.get('data.json')
.then(function (response) {
$scope.myData = response.data; // just access data, not data.response
console.log("sadas"); //this is ok
console.log($scope.myData.name); //you need to access $scope.myData, rather than just myData
}, function(error){
console.log(error);
});
A: var app = angular.module('myAppp', []);
app.controller('customersCtrl', function($scope, $http) {
angular.element(document).ready(function(){
$scope.myData={};
});
$http.get('data.json').then(function (response) {
$scope.myData = response.data.records;
console.log($scope.myData ); //make sure this is not null/undefined
console.log($scope.myData.name); //if not null this will work
});
});
A: Try this, I think it may work for you:
myData[0].name | |
d15231 | I finally ended up with this solution:
export const duplicateCount = (text: string): number => {
let countObj: Record<string, number> = {}
let count: number = 0
if (text.length === 0) return 0
let allLetters = text.toLowerCase().split("")
allLetters.forEach((letter) => {
if (countObj[letter]) {
countObj[letter]++
} else {
countObj[letter] = 1
}
})
for (const key in countObj) {
if (countObj[key] > 1) {
count++
}
}
return count
}
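A more compact sketch of the same logic, in plain JavaScript with the types dropped:

```javascript
// One pass to count occurrences, then count the characters seen more than once.
const duplicateCount = (text) => {
  const counts = {};
  for (const ch of text.toLowerCase()) counts[ch] = (counts[ch] || 0) + 1;
  return Object.values(counts).filter((n) => n > 1).length;
};

console.log(duplicateCount('aabBcde')); // 2  ('a' and 'b' each appear twice)
```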
I'm open to any refactoring suggestions | |
d15232 | Thanks to jdharrison, it works when I add the following code in myFile.js :
Server.R
number <- 5
observe({
session$sendCustomMessage(type='myCallbackHandler', number)
})
myFile.js
var i ;
Shiny.addCustomMessageHandler("myCallbackHandler",
function(number) {
i = number;
}
);
var i now takes the value 5 in my javascript file. | |
d15233 | Remove the forward slashes: they are part of JavaScript regex literal syntax, not part of general regex syntax. You don't really need the ^ and $ either because for an input element pattern they are implied.
The value you are setting in your example, 05:15 PM won't match your regex either, because it has a space in it and because the "PM" is in capitals. To allow for that:
pattern="\d{1,2}:\d{2}( ?[apAP][mM])?"
Note that for html5 input pattern attributes you can't set the i case insensitive flag that JavaScript (and some other regex implementations) allow. According to the specification it does use the JS syntax for the pattern itself, but "it is compiled "with the global, ignoreCase, and multiline flags disabled".
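You can sanity-check the pattern as a JavaScript regex by adding the anchors the pattern attribute implies:

```javascript
// The suggested pattern, with explicit ^ and $ reproducing the implicit anchoring
// of the HTML pattern attribute (where the i flag is unavailable).
const re = /^\d{1,2}:\d{2}( ?[apAP][mM])?$/;

console.log(re.test('05:15 PM')); // true
console.log(re.test('5:15pm'));   // true
console.log(re.test('05:15'));    // true  (the AM/PM part is optional)
console.log(re.test('hello'));    // false
```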
A: Try this:
<input name="taskStartTime" type="text" value="05:15 PM" placeholder="Enter a valid start time." size="10" pattern="^\d{1,2}:\d{2}\s([AP]M)?$" title="Enter Valid Time"> | |
d15234 | Iterating through the owners object to find the owner details for each car would be highly inefficient, so what is a better approach?
O(n^2) is perfectly fine for reasonably small n. On a modern iOS device, you'd have to get to an order of 10k objects to even see a performance hit, which is likely far more than what you're being sent back in the JSON.
As others mentioned before in comments, and as the old saying goes, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil".
Just code it. If your app is slow, profile it in instruments. Only then can you really know what the bottlenecks in your application are (humans are generally very bad at guessing a priori)
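That said, if profiling ever does show the nested scan to be a bottleneck, the standard fix is to index one side first. Sketched in JavaScript for brevity (the data shapes here are invented for illustration):

```javascript
// Hypothetical shapes: each car references its owner by id.
const owners = [{ id: 1, name: 'Ann' }, { id: 2, name: 'Bo' }];
const cars = [{ model: 'A4', ownerId: 2 }, { model: 'C3', ownerId: 1 }];

// Build the index once (O(n)); each lookup is then O(1) instead of a scan.
const ownersById = new Map(owners.map((o) => [o.id, o]));
const joined = cars.map((c) => ({ ...c, owner: ownersById.get(c.ownerId) }));

console.log(joined[0].owner.name); // 'Bo'
```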
A: Use NSJSONSerialization class to parse JSON.
Use + (id)JSONObjectWithData:(NSData *)data options:(NSJSONReadingOptions)opt error:(NSError **)error
to create a Foundation object from given JSON data. | |
d15235 | Entity Framework Code First recognizes the key, by default, by name. Valid names are Id or <YourClassName>Id.
Your property should be named Id or AccountTypesId.
Another way is to use the ModelBuilder to specify the key.
Sample
public class MyDbContext : DbContext
{
public DbSet<Artists> Artists{ get; set; }
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.Entity<Artists>().HasKey(x => x.ArtistId);
base.OnModelCreating(modelBuilder);
}
}
More about it you can find here
A: If you use a custom property name, then Entity Framework will not understand that it is an ID that must be used as the primary key in the database table.
When you name it Id, it understands its meaning.
A: This depends on whether or not using Entity Framework to set up your databases. If you are Entity Framework looks for specific property names to identity as Primary Keys.
For example, let's say you have a model called Book.
public class Book
{
public string Title {get; set;}
//all other properties here
}
When Entity Framework tries to set up your database, it looks for a property that it can identify as a primary key corresponding to the specific model. In this case EF would look for "BookID" as a primary key. So if you wished to have an accessible primary key property you would set it up like this.
public class Book
{
public int BookID {get;set;}
public string Title {get; set;}
//all other properties here
}
If you wished to set up a primary key that was not called "BookID", you could use a data annotation:
public class Book
{
[Key]
public int BookIdentifier{get;set;}
public string Title {get; set;}
//all other properties here
} | |
d15236 | Most assuredly not. It is not a matter of Angular, though; it's just choosing the right tool for the right thing.
Long story short:
*
*If it is a request/response model, then use http. Why?
*
*Because it's easier to handle. Proxies, DNS and load balancers need no extra configuration to handle it; WebSockets do.
*You already have 1 set up, so it's not a problem. How will you handle caching, routing, gzipping, SEO and all the out-of-the-box stuff the HTTP protocol and REST APIs handle? Everything you build and all communications will need their own security considerations, design patterns, etc.
*How will you handle the stateful nature of WebSockets? They currently support only vertical scaling, whereas REST APIs scale both horizontally and vertically.
If you really need full-duplex communication (server push alone is possible without sockets), then you should limit WebSocket usage to the cases where you really need it.
Even in that case, go through a framework like SignalR. All modern browsers support WebSockets, but a lot of users still have older browsers that don't. SignalR falls back to long polling in those cases. If you used it for all cases, imagine what would happen with such a browser: long polling applied to every request.
I could go on, but I think you get the meaning. | |
d15237 | Try this:
Wait Until Page Contains Element ${xpathIMButton}
Mouse Over ${xpathIMButton}
Click Element ${xpathIMButton} don't wait
A: Click Element started to fail for me when I upgraded to Selenium 2.35, SeleniumLibrary 2.9.1 and Selenium2Library 1.2. My browser was Firefox 22. The Click Element was pressing a Save button. The same exact code worked 2 times and the third time said it worked but the confirm page never showed up. I solved my issue by putting a Focus keyword before my Click Element
Focus ${saveRule}
Click Element ${saveRule}
Now the code works the three times it is called. Hope this helps.
A: It may be a little late to provide an answer, but I had exactly this problem. What I did was to provide a bit waiting time for the page to fully load, then my button was successfully found.
A: It seems that your Mouse Over could cause the issue. The Mouse Over could cause the element to be hidden in the DOM.
But this was 6 years ago with the Selenium 1 library. Now we are using Selenium2Library in Robot Framework, so if you give it a try or have already done so, just let us know. | |
d15238 | One option is to have a WCF service embedded in your Windows service. With this WCF service you can control the behaviour without restarting the service.
But the best option IMO is to have this in a config file. You can add some keys, but you would have to restart the service when you update the config.
In this case you can try a workaround, as in this thread.
The config is a good place for this kind of detail, because it can be easily modified and, unlike a database, it will always be available.
A: I don't fully understand what you're trying to say, but you define what the interface to your service is when you make it. If the operations you define take in variables, then you can pass data from your application to the service.
From what you described, just create operations in the service to do those 3 things you listed, then call them from a button click in your UI code.
WCF would be fine for this here's a basic introduction to it http://msdn.microsoft.com/en-us/library/bb332338.aspx. | |
d15239 | I have a method that is used in the wrong scenario (its basic assumption doesn't hold).
That sounds like an exact match for IllegalStateException:
Signals that a method has been invoked at an illegal or inappropriate time.
Admittedly the "time" part feels a little misleading, but given the name it seems reasonable to extend the meaning to "a method has been invoked when the object is in an inappropriate state for that call". Your use is similar to that of Iterator.remove() which throws IllegalStateException if the iterator is before the first element or has already removed the "current" element.
I'd definitely go with that over InternalError which is for Virtual Machine errors, not application code errors. | |
d15240 | If you are unsure how large your file might be, you should use ReadLines, which defers execution, instead of ReadAllLines:
var lines = File.ReadLines(GlobalVars.strLogPath);
The ReadLines and ReadAllLines methods differ as follows:
When you use ReadLines, you can start enumerating the collection of strings before the whole collection is returned; when you use ReadAllLines, you must wait for the whole array of strings be returned before you can access the array. Therefore, when you are working with very large files, ReadLines can be more efficient.
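The same lazy-versus-eager distinction can be sketched in Python, where iterating a file object streams lines much like ReadLines, while readlines() materializes the whole list much like ReadAllLines (file path and search term below are made up):

```python
# Lazy: lines are produced one at a time as you iterate (like File.ReadLines)
def first_match(path, needle):
    with open(path) as f:
        for line in f:            # streams the file; never loads it whole
            if needle in line:
                return line.rstrip("\n")
    return None

# Eager: the whole file is read into memory up front (like File.ReadAllLines)
def all_lines(path):
    with open(path) as f:
        return f.readlines()
```

With the lazy reader, a match near the top of a huge log returns almost immediately, since the rest of the file is never touched.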
A: As weird as it might sound, you should take a look at Log Parser. If you are free to choose the file format, you could use one that Log Parser understands and, believe me, it will make your life a lot easier.
Once you load the file with Log Parser you can use queries to get the information you want. If you don't mind using interop in your project you can even add a COM reference and use it from any .NET project.
This sample reads a HUGE CSV file and makes a bulk copy to the DB to perform the final steps there. This is not exactly your case, but it shows how easy this is with Log Parser:
COMTSVInputContextClass logParserTsv = new COMTSVInputContextClass();
COMSQLOutputContextClass logParserSql = new COMSQLOutputContextClass();
logParserTsv.separator = ";";
logParserTsv.fixedSep = true;
logParserSql.database = _sqlDatabaseName;
logParserSql.server = _sqlServerName;
logParserSql.username = _sqlUser;
logParserSql.password = _sqlPass;
logParserSql.createTable = false;
logParserSql.ignoreIdCols = true;
// query shortened for clarity purposes
string SelectPattern = @"Select TO_STRING(UserName),TO_STRING(UserID) INTO {0} From {1}";
string query = string.Format(SelectPattern, _sqlTable, _csvPath);
logParser.ExecuteBatch(query, logParserTsv, logParserSql);
Log Parser is one of those hidden gems Microsoft has that most people don't know about. I have used it to read IIS logs, CSV files, txt files, etc. You can even generate graphics!
Just check it here http://support.microsoft.com/kb/910447/en
A: Looks like you need to create a Tokenizer. Try something like this:
Define a list of token values:
List<string> gTkList = new List<string>() {"Date:","Source Path:" }; //...etc.
Create a Token class:
public class Token
{
private readonly string _tokenText;
private string _val;
private int _begin, _end;
public Token(string tk, int beg, int end)
{
this._tokenText = tk;
this._begin = beg;
this._end = end;
this._val = String.Empty;
}
public string TokenText
{
get{ return _tokenText; }
}
public string Value
{
get { return _val; }
set { _val = value; }
}
public int IdxBegin
{
get { return _begin; }
}
public int IdxEnd
{
get { return _end; }
}
}
Create a method to Find your Tokens:
List<Token> FindTokens(string str)
{
List<Token> retVal = new List<Token>();
if (!String.IsNullOrWhiteSpace(str))
{
foreach(string cd in gTkList)
{
int fIdx = str.IndexOf(cd);
if(fIdx > -1)
retVal.Add(new Token(cd, fIdx, fIdx + cd.Length));
}
}
// order the tokens by their position in the line
retVal.Sort((a, b) => a.IdxBegin.CompareTo(b.IdxBegin));
return retVal;
}
Then just do something like this:
foreach(string ln in lines)
{
//returns ordered list of tokens
var tkns = FindTokens(ln);
for(int i=0; i < tkns.Count; i++)
{
int len = (i == tkns.Count - 1) ? ln.Length - tkns[i].IdxEnd : tkns[i+1].IdxBegin - tkns[i].IdxEnd;
tkns[i].Value = ln.Substring(tkns[i].IdxEnd, len).Trim();
}
//Do something with the gathered values
foreach(Token tk in tkns)
{
//stuff
}
} | |
d15241 | Your solution is reasonable. An element object behaves like a list of its children, and the .text attribute of the element object holds only the text (if any) that is not part of other (nested) elements, i.e. the text before the first child element.
There are things to improve in your code. In Python, repeated string concatenation is an expensive operation. It is better to build a list of substrings and join them later, like this:
output_lst = []
for child in root.find('text'):
output_lst.append(ET.tostring(child, encoding="unicode"))
output_text = ''.join(output_lst)
The list can also be built using the Python list comprehension construct, so the code changes to:
output_lst = [ET.tostring(child, encoding="unicode") for child in root.find('text')]
output_text = ''.join(output_lst)
The .join can consume any iterable that produces strings. This way the list need not be constructed in advance. Instead, a generator expression (which is what appears inside the [] of the list comprehension) can be used:
output_text = ''.join(ET.tostring(child, encoding="unicode") for child in root.find('text'))
The one-liner can be formatted to more lines to make it more readable:
output_text = ''.join(ET.tostring(child, encoding="unicode")
for child in root.find('text'))
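As a quick, runnable sanity check of this approach (using the same element names as the question):

```python
import xml.etree.ElementTree as ET

root = ET.fromstring('<root><text><p>This is some text that I want to read</p></text></root>')
# serialize each child of <text> and join the pieces into one string
output_text = ''.join(ET.tostring(child, encoding="unicode")
                      for child in root.find('text'))
print(output_text)  # <p>This is some text that I want to read</p>
```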
A: I was able to get what I wanted by appending all child elements of my text tag to a string using ET.tostring:
output_text = ""
for child in root.find('text'):
output_text += ET.tostring(child, encoding="unicode")
>>>output_text
>>>"<p>This is some text that I want to read</p>"
A: The above solutions will miss the initial part of your HTML if your content begins with text. E.g.
<root><text>This is <i>some text</i> that I want to read</text></root>
You can do that:
node = root.find('text')
output_list = [node.text] if node.text else []
output_list += [ET.tostring(child, encoding="unicode") for child in node]
output_text = ''.join(output_list) | |
d15242 | OK, I think I found the solution. I'll leave this code here in case somebody faces the same problem:
$scale = 1;
$currentScale = 1;
$position = 80; // change it to any value
$step = 5;
echo('original position : ' . $position . '<br>');
for($i = $step; $i >= 1; $i--){
echo('-----------------------------------------------------<br>');
$currentScale = ($scale / $step) * $i;
$position = $position * $currentScale;
$originalPosition = $position;
$maxLoop = $step - $i;
if($maxLoop){
for($j = $maxLoop; $j >= 0 ; $j--){
$tempScale = $scale - (($scale / $step) * $j);
$originalPosition = $originalPosition / $tempScale;
}
}
echo('max rescale loop : ' . $maxLoop . '<br>');
echo('current scale : ' . $currentScale . ' ( ' . $currentScale * 100 . '% )<br>');
echo('current position : ' . $position . '<br>');
echo('original position : ' . $originalPosition . '<br>');
}
exit;
Here is the result; the original position stays the same as the initial value. | |
d15243 | It's HelloWorld.java or HelloWorldActivity.java, depending on what you chose at this step. Whatever the name is, as long as it extends the Activity class you should be OK. | |
d15244 | Edited (May 2016):
With later Windows 10 Mobile builds it is now possible to debug a phone running Windows 10 Mobile over Wi-Fi.
Follow the guidance here: https://msdn.microsoft.com/en-us/windows/uwp/debug-test-perf/device-portal-mobile | |
d15245 | Since the issue is probably browser-related, and you can't really fix the browser (you could report a bug to Google, though), I'd suggest taking a different path.
Have a look Here:
In Node.js, given a URL, how do I check whether its a jpg/png/gif?
See the comments of the accepted answer, which suggest a method to check the file type using the file stream. I'm pretty sure this would work in browser JavaScript and not only in Node.js. | |
d15246 | If your Lambda function runs inside a VPC, then you have to create a VPC endpoint to access S3:
Create an S3 endpoint for the VPC from the VPC dashboard.
Select the S3 gateway type and attach your VPC.
Add the endpoint routes to your route table.
Now you can read your S3 objects. | |
d15247 | Shout out to @Helpinghand for helping me troubleshoot this.
"I found the solution within the link you posted. The problem is that i was explicitly sending params in its own object before assigning "content-type". The example you posted concatenates the query params to the url. After making this switch, its working perfectly".
this.$http.get(`http://localhost:8000/files/download?id=${item._id}`, {responseType: 'arraybuffer'})
.then(response => {
console.log(response.headers.get('content-type'));
console.log(response);
var blob = new Blob([response.body], {type:response.headers.get('content-type')});
var link = document.createElement('a');
link.href = window.URL.createObjectURL(blob);
link.download = item.filename_version;
link.click();
}) | |
d15248 | I use this to highlight generics in java correctly:
(setq c-recognize-<>-arglists t) | |
d15249 | Try this regular expression:
^[A-Za-z0-9]+([-_.][A-Za-z0-9]+)*$
This matches any sequence that starts with at least one letter or digit (^[A-Za-z0-9]+) that may be followed by zero or more sequences of one of -, _, or . ([-_.]) that must be followed by at least one letter or digit ([A-Za-z0-9]+).
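To sanity-check the pattern, one can run it against a few candidate strings. The original answer targets PHP's preg_match, but the pattern itself is engine-agnostic; here is a Python check (the sample usernames are made up):

```python
import re

pattern = re.compile(r'^[A-Za-z0-9]+([-_.][A-Za-z0-9]+)*$')

# valid: separators appear only between alphanumeric runs
assert pattern.match("john_doe-99.x")
assert pattern.match("a1")
# invalid: leading or trailing separator, or two separators in a row
assert not pattern.match("_john")
assert not pattern.match("john__doe")
assert not pattern.match("john-")
```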
A: Try this:
^[\p{L}\p{N}][\p{L}\p{N}_.-]*[\p{L}\p{N}]$
In PHP:
if (preg_match(
'%^ # start of string
[\p{L}\p{N}] # letter or digit
[\p{L}\p{N}_.-]* # any number of letters/digits/-_.
[\p{L}\p{N}] # letter or digit
$ # end of the string.
%xu',
$subject)) {
# Successful match
} else {
# Match attempt failed
}
Minimum string length: Two characters.
A: Well, for each of your rules:
*
*First and last letter/digit:
^[a-z0-9]
and
[a-z0-9]$
*empty space not allowed (nothing is needed, since we're doing a positive match and don't allow any whitespace anywhere):
*Only letters/digits/-/_/.
[a-z0-9_.-]*
*No neighboring symbols:
(?!.*[_.-][_.-])
So, all together:
/^[a-z0-9](?!.*[_.-][_.-])[a-z0-9_.-]*[a-z0-9]$/i
But with all regexes, there are multiple solutions, so try it out...
Edit: for your edit:
/^[a-z0-9][a-z0-9_.-]*[a-z0-9]$/i
You just remove the section for the rule you want to change/remove. It's that easy...
A: This seems to work fine for provided examples: $patt = '/^[a-zA-Z0-9]+([-._][a-zA-Z0-9]+)*$/'; | |
d15250 | The first parameter of the SharedEventManager::attach() is the identity of the event manager to target. This identity is dynamically assigned for any class that is event capable (implements Zend\EventManager\EventManagerAwareInterface) or has otherwise had it's identity set via $eventManager->setIdentity().
The question refers to the \Zend\Mvc\Controller\AbstractActionController; this itself is an identity given to any controller that extends \Zend\Mvc\AbstractActionController (among others), allowing for just one id to attach() to target all controllers.
To target just one controller (which is perfectly valid, there are many use cases), you can do so in two ways:
*
*via the SharedEventManager, external to the controller class (as you have been doing)
*directly fetching said controller's event manager and handling the events within the controller class.
via SharedEventManager
Use the fully qualified class name, as this is added as an identity to the event manager:
$sharedEventManager->attach(
'MyModule\Controller\FooController', 'dispatch', function($e){
// do some work
});
Within controller
I modify the normal attachDefaultListeners() method (which is called automatically); this is where you can attach events directly.
namespace MyModule\Controller;
use Zend\Mvc\Controller\AbstractActionController;
use Zend\EventManager\EventInterface;
class FooController extends AbstractActionController
{
protected function attachDefaultListeners()
{
// make sure you attach the defaults!
parent::attachDefaultListeners();
// Now add your own listeners
$this->getEventManager()->attach('dispatch', array($this, 'doSomeWork'));
}
public function doSomeWork(EventInterface $event) {
// do some work
}
}
A: Why do you use your own base controller? There is no real benefit in doing that unless you have a rare edge-case scenario.
Your base controller class is missing this part from AbstractController:
/**
* Set the event manager instance used by this context
*
* @param EventManagerInterface $events
* @return AbstractController
*/
public function setEventManager(EventManagerInterface $events)
{
$events->setIdentifiers(array(
'Zend\Stdlib\DispatchableInterface',
__CLASS__,
get_class($this),
$this->eventIdentifier,
substr(get_class($this), 0, strpos(get_class($this), '\\'))
));
$this->events = $events;
$this->attachDefaultListeners();
return $this;
}
See setIdentifiers call there? That is why second example works.
Also, I suspect you might not actually trigger the dispatch event in the dispatch() method of your controller.
As a side note: you should never create classes without a top-level namespace. E.g. all my classes use the Xrks\ vendor namespace. | |
d15251 | It looks as if your configuration is placed below the floor. The floor (where the shadows are) is placed exactly on coordinate 0.
Try moving your configuration up until the bottom of the product is at floor level.
You can find more information about scripting here: https://docs.roomle.com/scripting/resources/ | |
d15252 | The following code should help. If you don't already have it, install the devel module; it gives you a wonderful function called dpm(), which will print the contents of an array/object to the messages area.
// Get some nodes ids
$nids = db_query('SELECT nid FROM {node}')->fetchCol();
// Load up the node objects
$nodes = node_load_multiple($nids);
// This will print the node object out to the messages area so you can inspect it to find the specific fields you're looking for
dpm($nodes);
// I guess you'll want to do something like this:
$terms = array();
foreach ($nodes as $node) {
// Load the taxonomy term associated with this node. This will be found in a field as this is how taxonomy terms and nodes are related in D7
$term = taxonomy_term_load($node->field_vocab_name['und'][0]['tid']);
// Set up the array
if (!isset($terms[$term->name])) {
$terms[$term->name] = array();
}
// Create some markup for this node
$markup = '<h3>' . l($node->title . ' ' . $node->field_other_field['und'][0]['value'], "node/$node->nid") . '</h3>';
// Add an image
$image = theme('image', array('path' => $node->field_image['und'][0]['uri'], 'alt' => $node->title));
$markup.= $image;
// Add the markup for this node to this taxonomy group's list
$terms[$term->name][] = $markup;
}
// Make up the final page markup
$output = '';
foreach ($terms as $term_name => $node_list) {
$output .= '<h2>' . check_plain($term_name) . '</h2>';
$output .= theme('item_list', array('items' => $node_list));
}
return $output;
Hope that helps
A: You can get views to group the returned nodes by the taxonomy term for you. Assuming you are using a field view type, just add the taxonomy field and then where it says Format:Unformatted list | Settings click on Settings at the right hand side and choose your taxonomy field as the grouping field.
Note: if you are not using a field view type, or if you are not using unformatted list then the instructions will be a variation of the above. | |
d15253 | The src attribute on the img is set improperly. It should be
<img class="card-image" :src="content.image"/> | |
d15254 | Yes, top-level const will be just dropped. Error from gcc
redefinition of ‘void fm(void (*)(int))’
As you can see const is dropped.
Quote from N3376 8.3.5/5
After producing the list of parameter types, any top-level
cv-qualifiers modifying a parameter type are deleted when forming the
function type.
A: Yes, you cannot overload functions based on the const-ness of a non-pointer/non-reference argument; see:
Functions with const arguments and Overloading
This in turn implies that pointers to functions that accept a parameter by value and by const value are the same type. | |
d15255 | Ok, so first of all there is a radar already.
There is a proposed workaround. I was partially successful with doing this crap inside my UICollectionReusableView subclass.
override func apply(_ layoutAttributes: UICollectionViewLayoutAttributes) {
super.apply(layoutAttributes)
DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 0.01) {
self.layer.zPosition = 0
}
}
I am saying partially because if you have a somewhat longer UICollectionView it is still corrupted for some headers. Most of them are OK, though, so it depends on your situation. Hopefully that radar will be addressed soon.
A: You can fix this problem in a delegate method, as in the following example.
Objective-C
- (void)collectionView:(UICollectionView *)collectionView willDisplaySupplementaryView:(UICollectionReusableView *)view forElementKind:(NSString *)elementKind atIndexPath:(NSIndexPath *)indexPath
{
if (@available(iOS 11.0, *)) {
if ([elementKind isEqualToString:UICollectionElementKindSectionHeader]) {
view.layer.zPosition = 0;
}
}
}
Swift
func collectionView(_ collectionView: UICollectionView, willDisplaySupplementaryView view: UICollectionReusableView, forElementKind elementKind: String, at indexPath: IndexPath) {
if elementKind == UICollectionView.elementKindSectionHeader {
view.layer.zPosition = 0
}
}
A: Those are not multiple scroll indicators. The z index of the header views is higher than that of the scroll indicator view.
Edit: found something more interesting, if you run the view debugger the scroll indicator is positioned above the headers... something weird is going on.
A: There is a simple method to fix this problem. In your section header view class, you can override the layoutSubviews method and write:
- (void)layoutSubviews {
[super layoutSubviews];
self.layer.zPosition = 0;
}
Because before iOS 11 the section view's layer zPosition value defaulted to 0, but in iOS 11 this default value is 1. | |
d15256 | You would get the result 10 / 6 = 1 (integer division). Then that result would be converted to a double and assigned to result. The variables would remain what they are, of course.
A: Nothing changes about variable1 and variable2. They remain ints and hold their values.
Also, because both the operands of the divison are ints, the integer division will take place.
The result of the integer division is (implicitly) converted to a double and stored into result.
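The sequence of events can be mimicked in Python, whose // operator performs the integer division that Java's / does for two int operands (for positive values; Java truncates toward zero while Python floors):

```python
variable1, variable2 = 10, 6
# Both operands are ints, so integer division happens first
int_result = variable1 // variable2   # 1 -- the fractional part is discarded
# Only then is the value widened to a double for the assignment
result = float(int_result)            # 1.0
# The operands themselves are unchanged
assert (variable1, variable2) == (10, 6)
```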
A: variable1 will continue to be 10, and variable2 will continue to be 6. The only expressions that change variables are assignment and compound-assignment expressions (=, +=, etc., and as a special shorthand ++ for += 1 and -- for -= 1). | |
d15257 | You can get it to work with the following code:
TfsTeamProjectCollection tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://tfsservername:8080/tfs/DefaultCollection"));
ITestManagementTeamProject project = tfs.GetService<ITestManagementService>().GetTeamProject("projectName");
foreach (ITestPlan p in project.TestPlans.Query("Select * From TestPlan"))
{
ITestRun testRun = p.CreateTestRun(false);
var testPoints = p.QueryTestPoints("SELECT * from TestPoint");
foreach (ITestPoint testPoint in testPoints)
{
testRun.AddTestPoint(testPoint, null);
}
testRun.Save();
ITestCaseResultCollection results = testRun.QueryResults();
foreach (ITestCaseResult result in results)
{
result.Attachments.Add(result.CreateAttachment(@"C:\Users\visong\Pictures\000.jpg"));
result.Outcome = TestOutcome.Warning;
result.State = TestResultState.Completed;
results.Save(true);
}
testRun.Save();
testRun.Refresh();
}
Then you should be able to find the attachment in the test result you're working with in MTM. | |
d15258 | You want to use the function UTTypeCopyPreferredTagWithClass. Try this in a playground
import MobileCoreServices
import AVFoundation
let avTypes = AVURLAsset.audiovisualTypes()
let anAVType = avTypes[0]
if let ext = UTTypeCopyPreferredTagWithClass(anAVType as CFString,kUTTagClassFilenameExtension)?.takeRetainedValue() {
print("The AVType \(anAVType) has extension '\(ext)'")
}
else {
print("Can't find the extension for the AVType '\(anAVType)")
}
print("\n")
var avExtensions = avTypes.map({UTTypeCopyPreferredTagWithClass($0 as CFString,kUTTagClassFilenameExtension)?.takeRetainedValue() as String? ?? ""})
print("All extensions = \(avExtensions)\n")
avExtensions = avExtensions.filter { $0.isEmpty == false }
print("All extensions filtered = \(avExtensions)\n")
print("First extension = '\(avExtensions[0])'")
print("Second extension = '\(avExtensions[1])'")
print("Third extension = '\(avExtensions[2])'")
Result:
The AVType AVFileType(_rawValue: public.pls-playlist) has extension 'pls'
All extensions = ["pls", "", "aifc", "m4r", "", "", "wav", "", "", "3gp", "3g2", "", "", "", "flac", "avi", "m2a", "", "aa", "", "aac", "mpa", "", "", "", "", "", "m3u", "mov", "aiff", "ttml", "", "m4v", "", "", "amr", "caf", "m4a", "mp4", "mp1", "", "m1a", "mp4", "mp2", "mp3", "itt", "au", "eac3", "", "", "webvtt", "", "", "vtt", "ac3", "m4p", "", "", "mqv"]
All extensions filtered = ["pls", "aifc", "m4r", "wav", "3gp", "3g2", "flac", "avi", "m2a", "aa", "aac", "mpa", "m3u", "mov", "aiff", "ttml", "m4v", "amr", "caf", "m4a", "mp4", "mp1", "m1a", "mp4", "mp2", "mp3", "itt", "au", "eac3", "webvtt", "vtt", "ac3", "m4p", "mqv"]
First extension = 'pls'
Second extension = 'aifc'
Third extension = 'm4r' | |
d15259 | Edited: most of the code in the original post is not needed, so ...
With your set of files, named (in the same order) fullList.txt, list1.txt and list2.txt, this should work
@echo off
setlocal enableextensions disabledelayedexpansion
set "mainFile=.\fullList.txt"
for %%f in (list*.txt) do (
findstr /l /x /v /g:"%mainFile%" "%%~ff" >nul && (
echo %%f is not included
) || (
echo %%f is included
)
)
How does it work? It is just a findstr command for each file. It searches the "small list" file for lines not contained in the "big list" file. If any line is found, the file is not contained; otherwise it is contained.
A: @ECHO OFF
SETLOCAL
SET "sourcedir=U:\sourcedir"
PUSHD "%sourcedir%"
SET "is_subset=Y"
FOR /f "delims=" %%a IN (subset_file.txt) DO (
FIND "%%a" "superset_file.txt">NUL
IF ERRORLEVEL 1 ECHO fail ON "%%a"&SET "is_subset=N"&GOTO done
)
:done
POPD
ECHO subset IN superset : %is_subset%
GOTO :EOF
This should work - but your question is unclear. This will check that the superset contains every member of the subset, but you may only want to search for some members.
I used test files on my U: drive - you would need to modify to suit.
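Both batch solutions implement what is essentially a set-difference test; the same logic reads as follows in Python (the function name and the idea of passing open file objects are illustrative, not part of the batch answers):

```python
def is_contained(small_lines, large_lines):
    # the large list "contains" the small one when every line of the
    # small list also occurs somewhere in the large list
    small = {line.rstrip("\n") for line in small_lines}
    large = {line.rstrip("\n") for line in large_lines}
    missing = small - large   # lines of the small file absent from the large one
    return not missing, sorted(missing)

# the arguments can be open file objects, e.g.:
# with open("list1.txt") as s, open("fullList.txt") as b:
#     contained, missing = is_contained(s, b)
```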
A: The solution below solves the problem in a very fast way. As additional output, it shows the lines from the small file not contained in the large file:
@echo off
setlocal
for /F "delims=" %%a in (smallFile.txt) do set "line[%%a]=1"
for /F "delims=" %%a in (largeFile.txt) do set "line[%%a]="
set "anyLine="
for /F %%a in ('set line[ 2^>NUL') do set "anyLine=1"
if defined anyLine (
echo Large file does NOT "contain" the small file. Lines not contained:
echo/
for /F "tokens=2 delims=[]" %%a in ('set line[') do echo %%a
) else (
echo Large file "contain" the small file.
) | |
d15260 | The whole point of Optional is to avoid the situation where you return a bogus value (like null, or like returning -1 from String.indexOf in order to indicate it didn't find anything), putting the responsibility on the caller to know what is bogus and to know to check for it. Returning Optional lets the caller know it needs to check whether a valid result came back, and it takes away the burden of the caller code needing to know what return values aren't valid.
Optional was never meant as a wrapper for a bogus value.
And yes you can return anything. The api doesn't stop you from doing nonsensical things.
Also consider that flatMap doesn't know what to do with this weird thing; it will treat the contents as nonempty. So now how you can use this value is limited. You may have a multistage validation process that you could handle with operations chained together using flatMap, but this fake file has made that harder. It has limited your options.
Return Optional.empty() if you don't have a valid value to return. | |
d15261 | You need to use a combination of the SUBSTRING and PATINDEX functions:
SELECT
SUBSTRING(SUBSTRING(fielda,PATINDEX('%[^a-z]%',fielda),99),PATINDEX('%[^0-9]%',SUBSTRING(fielda,PATINDEX('%[^a-z]%',fielda),99)),99) AS youroutput
FROM yourtable
Input
yourtable
fielda
SALV3000640PIX32BLU
SALV3334470A9CARBONGRY
TP3000620PIXL128BLK
Output
youroutput
PIX32BLU
A9CARBONGRY
PIXL128BLK
SQL Fiddle:http://sqlfiddle.com/#!6/5722b6/29/0
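The nested PATINDEX calls implement a two-step rule: skip to the first digit, then capture everything from the first letter after it. The same rule is easy to verify against the sample data outside SQL, e.g. with a Python regex:

```python
import re

def extract(code):
    # skip up to the first digit, then capture from the first letter after it
    m = re.search(r'\d.*?([A-Za-z].*)', code)
    return m.group(1) if m else None

for s in ["SALV3000640PIX32BLU", "SALV3334470A9CARBONGRY", "TP3000620PIXL128BLK"]:
    print(extract(s))
# prints PIX32BLU, A9CARBONGRY, PIXL128BLK -- matching the expected output
```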
A: To do this you can use
PATINDEX('%[0-9]%',FieldName)
which will give you the position of the first number, then trim off any letters before this using SUBSTRING or other string functions. (You need to trim away the first letters before continuing with the next step because unlike CHARINDEX there is no starting point parameter in the PATINDEX function).
Then on the remaining string use
PATINDEX('%[a-z]%',FieldName)
to find the position of the first letter in the remaining string. Now trim off the numbers in front using SUBSTRING etc.
You may find this other solution helpful
SQL to find first non-numeric character in a string
A: Try this; it may help you:
;With cte (Data)
AS
(
SELECT 'SALV3000640PIX32BLU' UNION ALL
SELECT 'SALV3334470A9CARBONGRY' UNION ALL
SELECT 'SALV3334470A9CARBONGRY' UNION ALL
SELECT 'SALV3334470B9CARBONGRY' UNION ALL
SELECT 'SALV3334470D9CARBONGRY' UNION ALL
SELECT 'TP3000620PIXL128BLK'
)
SELECT * , CASE WHEN CHARINDEX('PIX',Data)>0 THEN SUBSTRING(Data,CHARINDEX('PIX',Data),LEN(Data))
WHEN CHARINDEX('A9C',Data)>0 THEN SUBSTRING(Data,CHARINDEX('A9C',Data),LEN(Data))
ELSE NULL END AS DesiredResult FROM cte
Result
Data DesiredResult
-------------------------------------
SALV3000640PIX32BLU PIX32BLU
SALV3334470A9CARBONGRY A9CARBONGRY
SALV3334470A9CARBONGRY A9CARBONGRY
SALV3334470B9CARBONGRY NULL
SALV3334470D9CARBONGRY NULL
TP3000620PIXL128BLK PIXL128BLK | |
d15262 | You are using a static method, so the class itself is never instantiated as an object, and that path is therefore never exercised by your tests.
A: I tried the Lombok @UtilityClass; it caused the class to be ignored and code coverage improved to 100%.
A: Since even a class with only static functions has a default no-argument constructor, your code coverage tool will complain. A good way to solve this is to add a private constructor:
private EmailValidator() {
} | |
d15263 | This error comes up because raise is not a primitive in F* but needs to be imported from FStar.Exn (see ulib/FStar.Exn.fst), which exposes this function -- raise. Simply opening this module should be sufficient. There is one more minor issue in the code that I have also fixed below.
Here's the version of the code that goes through:
module MinList
open FStar.Exn
exception EmptyList
val min_list: list int -> Exn int (requires True) (ensures (fun _ -> True))
let rec min_list l = match l with
| [] -> raise EmptyList
| single_el :: [] -> single_el
| hd :: tl -> min hd (min_list tl)
Notice that I have also added requires and ensures clauses. This is because the Exn effect expects these clauses in order to reason about the code in it. If your use case, however, has exactly the above clauses (i.e., True and True), then you can use the convenient synonym for this, Ex (see ulib/FStar.Pervasives.fst). Thus, the following code is also valid and behaves exactly the same as the above code.
module MinList
open FStar.Exn
exception EmptyList
val min_list: list int -> Ex int
let rec min_list l = match l with
| [] -> raise EmptyList
| single_el :: [] -> single_el
| hd :: tl -> min hd (min_list tl) | |
d15264 | Check this.
TimeZone.CurrentTimeZone.ToLocalTime(date);
https://msdn.microsoft.com/en-in/library/system.datetime.touniversaltime(v=vs.110).aspx
Convert UTC/GMT time to local time
A: You can get the timezone offset from the client's browser using JavaScript.
function returnTimeDiff(postDateTime, spanid) {
var offset =(new Date().getTimezoneOffset() / 60)
}
Convert UTC time to Client browser's timezone using JavaScript in a MVC View | |
d15265 | Some authors do suggest not "wasting" the CI on an identity column if there is an alternative that would benefit range queries.
From MSDN Clustered Index Design Guidelines the key should be chosen according to the following criteria
*
*Can be used for frequently used queries.
*Provide a high degree of uniqueness.
*Can be used in range queries.
Your CourtOrderID column meets 2. Your PersonId meets 1 and 3. As most rows will end up with the uniqueifier added anyway you might as well just declare it as unique and use PersonId,CourtOrderID as this will be the same width but be more useful as the clustered index key is added to all NCIs as the row locator and this will allow them to cover more queries.
The main issue with using PersonId,CourtOrderID as the CI is that logical fragmentation will likely ensue (and this particularly affects the range queries you are trying to help) so you would need to monitor fill factor, and fragmentation levels and perform index maintenance more often.
A: It's explained in the following link: https://msdn.microsoft.com/en-us/ms190457.aspx
Clustered
*
*Clustered indexes sort and store the data rows in the table or view based on their key values. These are the columns included in the index definition. There can be only one clustered index per table, because the data rows themselves can be sorted in only one order.
*The only time the data rows in a table are stored in sorted order is when the table contains a clustered index. When a table has a clustered index, the table is called a clustered table. If a table has no clustered index, its data rows are stored in an unordered structure called a heap.
Nonclustered
*
*Nonclustered indexes have a structure separate from the data rows. A nonclustered index contains the nonclustered index key values and each key value entry has a pointer to the data row that contains the key value.
*The pointer from an index row in a nonclustered index to a data row is called a row locator. The structure of the row locator depends on whether the data pages are stored in a heap or a clustered table. For a heap, a row locator is a pointer to the row. For a clustered table, the row locator is the clustered index key.
*You can add nonkey columns to the leaf level of the nonclustered index to by-pass existing index key limits, 900 bytes and 16 key columns, and execute fully covered, indexed, queries.
A: I am by no means a SQL expert... so take this as a developer's view rather than a DBA's view.
Inserts on clustered (physically ordered) indexes that aren't in sequential order cause extra work for inserts/updates. Also, if you have many inserts happening at once and they are all occurring in the same location, you end up with contention. Your specific performance varies based on your data and how you access it. The general rule of thumb is to build your clustered index on the most unique narrow value in your table (typically the PK)
I'm assuming your PersonId won't be changing, so Updates don't come into play here. But consider a snapshot of a few rows with PersonId of
1
2
3
3
4
5
6
7
8
8
Now insert 20 new rows for PersonId of 3. First, since this is not a unique key, the server adds some extra bytes to your value (behind the scenes) to make it unique (which also adds extra space) and then the location where these will reside has to be altered. Compare that to inserting an auto-incrementing PK where the inserts happen at the end. The non technical explanation would likely come down to this: there is less 'leaf-shuffling' work to do if it's naturally progressing higher values at the end of the table versus reworking location of the existing items at that location while inserting your items.
Now, if you are having issues with Inserts then you are likely inserting a bunch of the same (or similar) PersonId values at once which is causing this extra work in various places throughout the table and the fragmentation is killing you. The downside of switching to the PK being clustered in your case, is if you are having insert issues today on PersonIds that vary in value spread throughout the table, if you switch your clustered index to the PK and all of the inserts now happen in one location then your problem may actually get worse due to increased contention concentration. (On the flip side, if your inserts today are not spread out all over, but are all typically bunched in similar areas, then your problem will likely ease by switching your clustered index away from PersonId to your PK because you'll be minimizing the fragmentation.)
Your performance problems should be analyzed against your unique situation; take these types of answers as general guidelines only. Your best bet is to rely on a DBA who can validate exactly where your problems lie. It sounds like you have resource contention issues that may be beyond a simple index tweak. This could be a symptom of a much larger problem (likely design issues; otherwise resource limitations).
In any case, good luck!
A: The distinction between a clustered vs. non-clustered index is that the clustered index determines the physical order of the rows in the database. In other words, applying the clustered index to PersonId means that the rows will be physically sorted by PersonId in the table, allowing an index search on this to go straight to the row (rather than a non-clustered index, which would direct you to the row's location, adding an extra step).
That said, it's unusual for the primary key not to be the clustered index, but not unheard of. The issue with your scenario is actually the opposite of what you're assuming: you want unique values in a clustered index, not duplicates. Because the clustered index determines the physical order of the row, if the index is on a non-unique column, then the server has to add a background value to rows who have a duplicate key value (in your case, any rows with the same PersonId) so that the combined value (key + background value) is unique.
The only thing I would suggest is not using a surrogate key (your CourtOrderId) column as the primary key, but instead use a compound primary key of the PersonId and some other uniquely-identifying column or set of columns. If that's not possible (or not practical), though, then put the clustered index on CourtOrderId.
A: Same DB with some nasty selects and joins in a stored procedure - the only difference is the index (clustered vs nonclustered):
NONCLUSTERED: 891 rows in 10 sec
CLUSTERED: 891 rows in 14 sec | |
d15266 | It requires PECL pecl_http >= 0.1.0 | |
d15267 | Why would you want to render two forms at the same time? You simply cannot send two responses this way. Your script will end after the first return.
If I understand you correctly, you want to serve two forms at the same time (meaning you want to join them)? If yes, then look at this example.
Basically you first render both forms
formone = render.formone('')
formtwo = render.formtwo('')
And then join them and send the response
return render.index(unicode(formone), unicode(formtwo))
If you don't want to serve them at the same time, this can't be done this way. You may do it via AJAX (i.e. on some event from the webpage - clicking some element or whatever - send a request asking for the second form) or by sending another standard non-async request (submitting the first form). | |
d15268 | @RequestParam with List or array
@RequestMapping("/books")
public String books(@RequestParam List<String> authors,
Model model){
model.addAttribute("authors", authors);
return "books.jsp";
}
@Test
public void whenMultipleParameters_thenList() throws Exception {
this.mockMvc.perform(get("/books")
.param("authors", "martin")
.param("authors", "tolkien")
)
.andExpect(status().isOk())
.andExpect(model().attribute("authors", contains("martin","tolkien")));
}
A: Use Gson library to convert list into a json string and then put that string in content
Also put the @RequestBody annotation with the method parameter in the controller
public String updateActiveStatus(@RequestBody ArrayList<...
A: In case of using the parameter as a List via @RequestParam:
In my case, it was a list of enum values.
when(portService.searchPort(Collections.singletonList(TypeEnum.NETWORK))
.thenReturn(searchDto);
ResultActions ra = mockMvc.perform(get("/port/search")
.param("type", new String[]{TypeEnum.NETWORK.name()})); | |
d15269 | Normally, libraries are loaded in the controller and accessed from there, not from models. You could just return the array of data from your model to the controller and use the email library from the controller itself. If that's not feasible for you, then try doing:
public function __construct($id){
parent::__construct($id);
}
public function inviteEmailToClub($email, $club, $message){
$CI =& get_instance();
$CI->load->library('email');
.....
$CI->email->from($mainClubAdmin->User->getAccount()->email, $club->name);
....
} | |
d15270 | Define a model:
class ImageItemModel{
String imagePath;
String title;
ImageItemModel(this.imagePath, this.title);
///other things you want.
}
and use it like below
List<ImageItemModel> list = [
ImageItemModel("Url0","title0"),
ImageItemModel("Url1","title1"),
ImageItemModel("Url2","title2"),
];
and you can access each item like this
print(list[1].title) /// prints title1
so this part of your code changes like this
FadeInImage.memoryNetwork(
placeholder: kTransparentImage,
image: list[index].imagePath,
fit: BoxFit.cover),
Text(
list[index].title,
) | |
d15271 | Untested, but what if you try this? (for your start date parameter):
= iif(
DatePart(DateInterval.Day, Today()) <> "16",
DateAdd(DateInterval.Month, -1, DateSerial(Year(Date.Now), Month(Date.Now), 1)),
DateSerial(Year(Date.Now), Month(Date.Now), 1)
) | |
d15272 | I was running into the same issue and thought I'd follow up with what ended up working for me. The connection is correct but you need to make sure that the worker pods have the same environment variables:
airflow:
image:
repository: airflow-docker-local
tag: 1.10.9
executor: Kubernetes
service:
type: LoadBalancer
connections:
- id: my_aws
type: aws
extra: '{"aws_access_key_id": "xxxx", "aws_secret_access_key": "xxxx", "region_name":"us-west-2"}'
config:
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY: airflow-docker-local
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG: 1.10.9
AIRFLOW__KUBERNETES__WORKER_CONTAINER_IMAGE_PULL_POLICY: Never
AIRFLOW__KUBERNETES__WORKER_SERVICE_ACCOUNT_NAME: airflow
AIRFLOW__KUBERNETES__DAGS_VOLUME_CLAIM: airflow
AIRFLOW__KUBERNETES__NAMESPACE: airflow
AIRFLOW__CORE__REMOTE_LOGGING: True
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: s3://airflow.logs
AIRFLOW__CORE__REMOTE_LOG_CONN_ID: my_aws
AIRFLOW__CORE__ENCRYPT_S3_LOGS: False
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__REMOTE_LOGGING: True
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__REMOTE_LOG_CONN_ID: my_aws
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: s3://airflow.logs
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__ENCRYPT_S3_LOGS: False
I also had to set the fernet key for the workers (and in general) otherwise I get an invalid token error:
airflow:
fernet_key: "abcdefghijkl1234567890zxcvbnmasdfghyrewsdsddfd="
config:
AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__FERNET_KEY: "abcdefghijkl1234567890zxcvbnmasdfghyrewsdsddfd="
A: Your remote log conn id needs to be an ID of a connection in the connections form/list. Not a connection string.
https://airflow.apache.org/docs/stable/howto/write-logs.html
https://airflow.apache.org/docs/stable/howto/connection/index.html | |
d15273 | Use ng-src
<td><img ng-src="{{(x.IS_LOCKED == 1) ? './images/padlock_locked.png' : './images/padlock_unlocked.png'}}" width=20 height=20></td> | |
d15274 | Try to combine Where() and Count():
var matches = new int[] { 1, 2, 3 };
var data = new List<int[]>
{
new int[] { 3, 2, 4 },
new int[] { 2, 16, 5 }
};
var result = data.Where(x => x.Count(matches.Contains) == 2);
A: Since it's an int[] you can use .Intersect() directly. For example
from a in arrays where a.Intersect(Array1).Count() == 2 select a
//arrays contains Array2 and Array3 | |
d15275 | The API for http.begin(url) assumes the standard port of 80 or 443 is used. In order to use port 8082, you need to pass in the port as a function argument like this:
http.begin("192.168.1.54", 8082); | |
d15276 | I don't know if one can make general observations about this kind of decision, since it's really down to what you are trying to do and how high up the priority list NFRs like performance and response time are to your application.
If you have lots of users, uploading lots of binary files, with a system serving large numbers of those uploaded binary files then you have a situation where the costs of storing files in the database include:
*
*Large size binary files
*Costly queries
Benefits are
*
*Atomic commits
*Scaling comes with database (though w MySQL there are some issues w multinode etc)
*Less fiddly and complicated code to manage file systems etc
Given the same user situation where you store to the filesystem you will need to address
*
*Scaling
*File name management (user uploads same name file twice etc)
*Creating corresponding records in DB to map to the files on disk (and the code surrounding all that)
*Looking after your apache configs so they serve from the filesystem
We had a similar problem to solve as this for our Grails site where the content editors are uploading hundreds of pictures a day. We knew that driving all that demand through the application when it could be better used doing other processing was wasteful (given that the expected demand for pages was going to be in the millions per week we definitely didn't want images to cripple us).
We ended up creating upload -> file system solution. For each uploaded file a DB meta-data record was created and managed in tandem with the upload process (and conversely read that record when generating the GSP content link to the image). We served requests off disk through Apache directly based on the link requested by the browser. But, and there is always a but, remember that with things like filesystems you only have content per machine.
We had the headache of making sure images got re-synchronised onto every server, since unlike a DB which sits behind the cluster and enables the cluster behave uniformly, files are bound to physical locations on a server.
Another problem you might run up against with filesystems is folder content size. When you start having folders where there are literally tens of thousands of files in them, the folder scan at the OS level starts to really drag. To avert this problem we had to write code which managed image uploads into yyyy/MM/dd/image.name.jpg folder structures, so that no one folder accumulated hundreds of thousands of images.
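That date-sharding scheme is easy to sketch. Here is a minimal Python illustration (the function name and paths are made up for the example; the original system was Grails/Java):

```python
import os
from datetime import date

def sharded_upload_path(upload_root, filename, upload_date=None):
    """Build a yyyy/MM/dd sub-folder path so that no single folder
    accumulates hundreds of thousands of files."""
    d = upload_date or date.today()
    return os.path.join(
        upload_root,
        "%04d" % d.year, "%02d" % d.month, "%02d" % d.day,
        filename,
    )

# e.g. sharded_upload_path("/var/uploads", "image.name.jpg", date(2014, 7, 9))
# -> "/var/uploads/2014/07/09/image.name.jpg" (on POSIX)
```

The day-level split caps each folder at one day's worth of uploads, which keeps OS-level folder scans fast.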
What I'm implying is that while we got the performance we wanted by not using the DB for BLOB storage, that comes at the cost of development overhead and systems management.
A: Just as an additional suggestion: JCR (eg. Jackrabbit) - a Java Content Repository. It has several benefits when you deal with a lot of binary content. The Grails plugin isn't stable yet, but you can use Jackrabbit with the plain API.
A: Another thing to keep in mind is that if your site ever grows beyond one application server, you need to access the same files from all app servers. Now all app servers have access to the database, either because that's a single server or because you have a cluster. Now if you store things in the file system, you have to share that, too - maybe NFS.
A: Even if you upload file in filesystem, all the files get same permission, so any logged in user can access any other's file just entering the url (Since all of them get same permission). If you however plan to give each user a directory then a user permission of apache (that is what server has permission) is given to them. You should su to root, create a user and upload files to those directories. Again accessing those files could end up adding user's group to server group. If I choose to use filesystem to store binary files, is there an easier solution than this, how do you manage access to those files, corresponding to each user, and maintaining the permission? Does Spring's ACL help? Or do we have to create permission group for each user? I am totally cool with the filesystem url. My only concern is with starting a seperate process (chmod and stuff), using something like ProcessBuilder to run Operating Systems commands (or is there better solution ?). And what about permissions? | |
d15277 | The .split will divide your string into a list. Each space creates a new element, so if you have "I like food", it will become ["I","like","food"]. Now we can look at our .join. The .join does the reverse: if we were to call ','.join(["I","like","food"]), we'd get "I,like,food". In your function, we are using .join to combine a series of list elements into a string of arguments, then we invoke eval and it executes.
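A quick demonstration of that round trip in plain Python:

```python
s = "I like food"

parts = s.split()           # split on whitespace -> ["I", "like", "food"]
joined = ",".join(parts)    # join back with commas -> "I,like,food"

print(parts)
print(joined)
```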
And eval is not recommended. So, instead, we can write a chain of if and elif statements:
inpu = """12
insert 0 5
insert 1 10
insert 0 6
print
remove 6
append 9
append 1
sort
print
pop
reverse
print"""
li = []
for line in inpu.split("\n"):
args = line.split()
if args[0] == "insert":
li.insert(int(args[1]),int(args[2]))
elif args[0] == "print":
print(li)
elif args[0] == "remove":
li.remove(int(args[1]))
elif args[0] == "sort":
li.sort()
elif args[0] == "reverse":
li = li[::-1]
elif args[0] == "append":
li.append(int(args[1]))
elif args[0] == "pop":
li.pop() | |
d15278 | The answer below works great, but is hardcoded for SRID (CoordinateSystemId) 2193.
The Coordinate System Id can, however, be present in the serialised data as shown in the question, or embedded in the WellKnownText itself ("SRID=2193;POINT (0 0)"). Also, this method will only read a polygon, but the WellKnownText can be a lot of things, like geometry collections, points, linestrings, etc. To handle this, the ReadJson method can be updated to use the more generic FromText method as shown below.
Here is the class above updated with a more generic Coordinate System, and also for any Geometry Type. I have also added the Geography version for reference.
public class DbGeometryConverter : JsonConverter
{
public override bool CanConvert(Type objectType)
{
return objectType.IsAssignableFrom(typeof(string));
}
public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
{
JObject location = JObject.Load(reader);
JToken token = location["Geometry"]["WellKnownText"];
string value = token.ToString();
JToken sridToken = location["Geometry"]["CoordinateSystemId"];
int srid;
if (sridToken == null || int.TryParse(sridToken.ToString(), out srid) == false || value.Contains("SRID"))
{
//Set default coordinate system here.
srid = 0;
}
DbGeometry converted;
if (srid > 0)
converted = DbGeometry.FromText(value, srid);
else
converted = DbGeometry.FromText(value);
return converted;
}
public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
{
// Base serialization is fine
serializer.Serialize(writer, value);
}
}
public class DbGeographyConverter : JsonConverter
{
public override bool CanConvert(Type objectType)
{
return objectType.IsAssignableFrom(typeof(string));
}
public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
{
JObject location = JObject.Load(reader);
JToken token = location["Geography"]["WellKnownText"];
string value = token.ToString();
JToken sridToken = location["Geography"]["CoordinateSystemId"];
int srid;
if (sridToken == null || int.TryParse(sridToken.ToString(), out srid) == false || value.Contains("SRID"))
{
//Set default coordinate system here.
//NOTE: Geography should always have an SRID, and it has to match the data in the database else all comparisons will return NULL!
srid = 0;
}
DbGeography converted;
if (srid > 0)
converted = DbGeography.FromText(value, srid);
else
converted = DbGeography.FromText(value);
return converted;
}
public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
{
// Base serialization is fine
serializer.Serialize(writer, value);
}
}
A: System.Data.Spatial.DbGeometry does not play nicely with Newtonsoft.Json
You need to create a JsonConverter to convert the DbGeometry
public class DbGeometryConverter : JsonConverter
{
public override bool CanConvert(Type objectType)
{
return objectType.IsAssignableFrom(typeof(string));
}
public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
{
JObject location = JObject.Load(reader);
JToken token = location["Geometry"]["WellKnownText"];
string value = token.ToString();
DbGeometry converted = DbGeometry.PolygonFromText(value, 2193);
return converted;
}
public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
{
// Base serialization is fine
serializer.Serialize(writer, value);
}
}
Then on your property in your model add the attribute
[JsonConverter(typeof(DbGeometryConverter))]
public DbGeometry Shape { get; set; }
Now when you hit your BreezeController the deserialization will be handled by our new DbGeometryConverter.
Hope it helps.
A: I don't see why not. On the line with (DbGeometryWellKnownValue):
"$type": "System.Data.Entity.Spatial.DbGeometryWellKnownValue, EntityFramework",
should this be (DbGeometry.WellKnownValue)?
"$type": "System.Data.Entity.Spatial.DbGeometry.WellKnownValue, EntityFramework", | |
d15279 | You can use the tickPositioner function, but you should use timestamps as x values, not categories:
xAxis: {
tickPositioner: function() {
var positions = [],
interval = 24 * 3600 * 1000 * 173;
for (var i = this.dataMin; i <= this.dataMax; i += interval) {
positions.push(i);
}
return positions;
},
labels: {
formatter: function() {
return Highcharts.dateFormat("%b %y", this.value);
},
}
}
Live demo: http://jsfiddle.net/BlackLabel/npq3x2bc/
API Reference: https://api.highcharts.com/highcharts/xAxis.tickPositioner
A:
Highcharts.setOptions({
time: {
useUTC: false
}
});
$('#container-chart').highcharts({
chart: {
type: 'line',
alignTicks: false,
},
title: {
text: "Title 1",
},
tooltip: {
formatter: function() {
return Highcharts.dateFormat('%b %e, %Y',new Date(this.x))+ '<br/>' + this.y;
}
},
xAxis: {
categories: [1391122800000,1393542000000,1396216800000,1398808800000,1401487200000,1404079200000,1406757600000,1409436000000,1412028000000,1414710000000,1417302000000,1419980400000,1422658800000,1425078000000,1427752800000,1430344800000,1433023200000,1435615200000,1438293600000,1440972000000,1443564000000,1446246000000,1448838000000,1451516400000,1454194800000,1456700400000,1459375200000,1461967200000,1464645600000,1467237600000,1469916000000,1472594400000,1475186400000,1477868400000,1480460400000,1483138800000,1485817200000,1488236400000,1490911200000,1493503200000,1496181600000,1498773600000,1501452000000,1504130400000,1506722400000,1509404400000,1511996400000,1514674800000,1517353200000,1519772400000,1522447200000,1525039200000,1527717600000,1530309600000,1532988000000,1535666400000,1538258400000,1540940400000],
// tickInterval: (24 * 3600 * 1000 * 173),
labels: {
formatter: function () {
return Highcharts.dateFormat("%b %y", this.value);
},
}
},
yAxis: {
title: { text: ''
},
tickInterval:(maxValue/10),
min: 0,
max: maxValue,
},
legend: {
enable: true,
},
plotOptions: {
series: {
marker: {
enabled: false
},
dataLabels: {
enabled: false,
allowOverlap: true,
},
lineWidth: 2,
states: {
hover: {
lineWidth: 3
}
},
threshold: null
}
},
series: [{
data: [0.57,0.41,0.51,0.35,0.16,0.16,0.05,0.19,0.27,0.57,0.45,0.59,0.49,0.77,0.56,0.25,0.3,0.28,0.27,0.33,0.45,0.62,0.62,0.46,0.46,0.45,0.68,0.18,0.22,0.28,0.29,0.41,0.34,0.59,0.67,0.69,0.65,0.57,0.73,-0.01,0.32,0.27,0.47,0.47,0.57,0.75,0.7,0.6,0.71,0.88,0.79,-0.11,0.22,0.15,0.07,0.09,0.09,0.09],
}],
exporting: {
sourceWidth: 1500
},
});
Here I have added:
tickInterval:(maxValue/10),
min: 0,
max: maxValue,
It means you first need to get the max value, take the nearest value divisible by 10, and use that. I hope it will help you. | |
d15280 | Seems that the WebKit in WebEngine does not support it:
With the following javascript:
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia
|| navigator.mozGetUserMedia || navigator.msGetUserMedia || navigator.oGetUserMedia;
if (navigator.getUserMedia) {
alert("getUserMedia supported");
navigator.getUserMedia({video: true}, handleVideo, videoError);
} else {
alert("getUserMedia not supported");
}
and this Java code:
WebEngine webEngine = webView.getEngine();
webEngine.setOnAlert(event -> System.err.println(event.toString()));
the output shows:
WebEvent [source = javafx.scene.web.WebEngine@3dd89471, eventType = WEB_ALERT, data = getUserMedia not supported]
I tried setting a confirm handler in case that the WebEngine tries to ask for permission, but that is never called.
Tried it with Java 1.8.0_66 on OSX 10.11. | |
d15281 | You need to create a web.config file in your wwwroot folder and add the MIME config like below to serve the JSON and WOFF files on Azure. You can check this similar thread.
<?xml version="1.0" encoding="UTF-8" ?>
<configuration>
<system.webServer>
<staticContent>
<remove fileExtension=".json" />
<remove fileExtension=".woff" />
<remove fileExtension=".woff2" />
<mimeMap fileExtension=".json" mimeType="application/json" />
<mimeMap fileExtension=".woff" mimeType="application/font-woff" />
<mimeMap fileExtension=".woff2" mimeType="application/font-woff" />
</staticContent>
</system.webServer>
</configuration>
You can check the Angular deployment document for more information. | |
d15282 | The reason is that every device has different screen dimensions, so either you have to resize the ImageView based on the screen size or simply use different-sized images for your ImageView, which will be picked automatically based on the screen dimensions. To do so, check this:
A. Getting the screen dimensions programmatically and setting the appropriate size on the ImageView:
Display display = getWindowManager().getDefaultDisplay();
DisplayMetrics outMetrics = new DisplayMetrics ();
display.getMetrics(outMetrics);
float density = getResources().getDisplayMetrics().density;
float dpHeight = outMetrics.heightPixels / density;
float dpWidth = outMetrics.widthPixels / density;
B. Adding different image sizes for different screens,
res/drawable-mdpi/my_icon.png // bitmap for medium density
res/drawable-hdpi/my_icon.png // bitmap for high density
res/drawable-xhdpi/my_icon.png // bitmap for extra high density
The following code in the Manifest supports all dpis.
<supports-screens android:smallScreens="true"
android:normalScreens="true"
android:largeScreens="true"
android:xlargeScreens="true"
android:anyDensity="true" />
A: Your screen resolution output shows that you are setting the ImageView height and width for an xxhdpi-resolution device. Use dp (density-independent pixels) so the size adapts to the screen resolution - e.g. 150dp x 150dp for the ImageView. Use the following link for px-to-dp conversion: http://pixplicity.com/dp-px-converter/ | |
d15283 | Is "values" an ArrayList? If so, use
dodaj.setText( (String)values.get( arg2 ) );
A: You will have to convert the ArrayList to an array using arrayList.toArray()
A: With ArrayList you always use
arrayList.get(index);
The recommended way is to add a toString() method to your items/objects so that it returns the preferred field.
public class Student
{
    ...
    ...
    public String toString()
    {
        return firstName + " " + lastName;
    }
}
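A minimal runnable sketch of that pattern (class and field names are illustrative, not from the question):

```java
import java.util.ArrayList;

public class Main {
    static class Student {
        final String firstName;
        final String lastName;

        Student(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }

        // toString() controls what each element displays when printed
        @Override
        public String toString() {
            return firstName + " " + lastName;
        }
    }

    public static void main(String[] args) {
        ArrayList<Student> values = new ArrayList<>();
        values.add(new Student("Jane", "Doe"));
        System.out.println(values.get(0)); // prints "Jane Doe"
    }
}
```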
Now you can use arrayList.get(int index). It will print out the output of toString(). | |
d15284 | Note: FTP is not a reliable protocol to upload/download data to/from a server with code. If you need a reliable solution, FTP is not for you. Try ssh instead.
To answer your question: add a ProtocolCommandListener to your SocketClient (FTP extends this). Then you can at least print the messages in protocolCommandSent() and protocolReplyReceived(). | |
d15285 | Simple Drag and Drop has worked for me.
To update the answer: there is now also an Update option if you right-click in the Explorer view. | |
A: I was able to transfer a ton of training data from firebase to a codespace using firebase-admin.
in the codespace terminal:
pip install firebase-admin
pip install --ignore-installed <other firebase-admin dependencies>
Then run whatever script you use to download things from Firebase.
Note that there are storage limits. | |
d15286 | Try adding the following to your build.gradle:
println("HADOOP_HOME=$HADOOP_HOME")
compile files("$HADOOP_HOME/hadoop-core-0.20.203.0.jar")
println("System.env.HADOOP_HOME=$System.env.HADOOP_HOME")
compile files("$System.env.HADOOP_HOME/hadoop-core-0.20.203.0.jar")
A: It turned out I just had to add some of the jars from the $HADOOP_HOME/lib directory to the dependencies and it worked. Once I added the dependency for org/apache/commons/logging/LogFactory it cleared up the error for the hadoop Configuration class, and then I just had to add the other required jars later. | |
d15287 | Figured out how to do it:
when I do that:
$content = $_POST['form_old_data'];
$content = str_replace("\n", "", $content);
$content = str_replace("\r", "", $content);
CKEDITOR.instances.editor.insertHtml('<?=$content;?>');
it works. | |
d15288 | Middleman also provides the current_page variable. current_page.path is the source path of this resource (relative to the source directory, without template extensions) and current_page.url is the path without the directory index (so foo/index.html becomes just foo).
<%= current_page.path %>
# -> index.html
<%= current_page.url %>
# -> /
Details from Middleman's Middleman::Sitemap::Resource rubydoc.
http://rubydoc.info/github/middleman/middleman/Middleman/Sitemap/Resource
A: The solution is:
<%= request.path %> | |
d15289 | We convert the data.frame to a matrix, use melt from reshape2 to get the dimnames as two columns along with the values as the third column, then subset using na.rm = TRUE to remove the NA rows.
library(reshape2)
melt(as.matrix(df1), na.rm = TRUE)
data
df1 <- structure(list(a = c(1L, NA, NA, NA), b = c(NA, 2L, NA, NA),
c = c(3L, NA, NA, NA), d = c(4L, 4L, NA, 4L)), class = "data.frame",
row.names = c("a",
"b", "c", "d")) | |
d15290 | The XmlSerializer allows you to specify attribute overrides dynamically at runtime. Let's suppose that you have the following static class:
public class Foo
{
public string Bar { get; set; }
}
and the following XML:
<?xml version="1.0" encoding="utf-8" ?>
<foo bar="baz" />
you could dynamically add the mapping at runtime without using any static attributes on your model class. Just like this:
using System;
using System.Xml;
using System.Xml.Serialization;
public class Foo
{
public string Bar { get; set; }
}
class Program
{
static void Main()
{
var overrides = new XmlAttributeOverrides();
overrides.Add(typeof(Foo), new XmlAttributes { XmlRoot = new XmlRootAttribute("foo") });
overrides.Add(typeof(Foo), "Bar", new XmlAttributes { XmlAttribute = new XmlAttributeAttribute("bar") });
var serializer = new XmlSerializer(typeof(Foo), overrides);
using (var reader = XmlReader.Create("test.xml"))
{
var foo = (Foo)serializer.Deserialize(reader);
Console.WriteLine(foo.Bar);
}
}
}
Now all that's left for you is to write some custom code that might read an XML file containing the attribute overrides and building an instance of XmlAttributeOverrides from it at runtime that you will feed to the XmlSerializer constructor. | |
d15291 | Is there a better way to both use Angular dependency injection and strong typing in an AMD project
This was an issue with Angular 1.x dependency injection. The Angular module system was independent of the traditional browser module systems in existence (e.g. AMD).
So you do need to import it twice: once for the type, and once for the DI registration. | |
d15292 | If you want to use lodash just for this, which sounds like the case, I suggest that there may be a better way without it, only using newer vanilla JS:
...
computed: {
uniqueMemName() {
return [...new Set(this.members.map(m => m.mem_name))]
}
}
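The same dedupe step can be seen in plain JavaScript, outside any Vue computed property (the sample data is made up):

```javascript
const members = [
  { mem_name: "alice" },
  { mem_name: "bob" },
  { mem_name: "alice" },
];

// Map to the field, dedupe via Set (insertion order preserved),
// then spread back into an array.
const uniqueMemName = [...new Set(members.map(m => m.mem_name))];

console.log(uniqueMemName); // ["alice", "bob"]
```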
Sets are always unique, so mapping and converting to a set and then back to array gives us a unique array. | |
d15293 | You can adjust the code of Jekyll to your specific requirement.
Open lib/jekyll/page.rb in the Jekyll source and update the destination method:
module Jekyll
class Page
def destination(dest)
path = site.in_dest_dir(dest, URL.unescape_path(url))
path = File.join(path, "index") if url.end_with?("/")
path << output_ext unless path.end_with? output_ext
# replace index with title
path.sub! 'index', data['title'] if data['title']
path
end
end
end
Also update the destination method in lib/jekyll/document.rb with the same line before returning path.
A: Create a plugin and monkey patch those classes
Create _plugins/_my_custom_index.rb
module Jekyll
class Page
def destination(dest)
path = site.in_dest_dir(dest, URL.unescape_path(url))
path = File.join(path, "index") if url.end_with?("/")
path << output_ext unless path.end_with? output_ext
# replace index with title
path.sub! 'index', data['title'] if data['title']
path
end
end
end
module Jekyll
class Document
def destination(base_directory)
dest = site.in_dest_dir(base_directory)
path = site.in_dest_dir(dest, URL.unescape_path(url))
if url.end_with? "/"
path = File.join(path, "index.html")
else
path << output_ext unless path.end_with? output_ext
end
# replace index with title
path.sub! 'index', data['title'] if data['title']
path
end
end
end
A: You can change the permalink to /posts/:title/:title.html
However, the post will now be accessible via http://server/post/my_first_post/my_first_post.html
If you want to change the default behavior, you should modify Jekyll.Page.destination
A: You can use permalink: /posts/:title/something:output_ext in the _config.yml or in post front matter | |
d15294 | That error only occurs when something is sent back to the browser before you try to send headers.
You have multiple PHP tags at the top of your page, with whitespace between them. Also, you are sending text back in your header file.
Copying your code straight out of your post, there is a space between your 2 sets of tags on line 3, after the ?>.
1. <?php
2. require_once("inc/header.php");
3. ?>
4. <?php
You don't need to open and close PHP tags around those lines.
<?php
session_start(); // start the session here so you dont forget
if (func::checkLoginState($conn))
{
require_once("inc/header.php"); // only include your top html template if you know you are good to go
echo "Welcome" . $_SESSION["username"] . "!";
} else
{
header("location:login.php");
}
require_once("inc/footer.php");
?>
This is perfectly fine.
Edit based on your comments:
YouTube videos can be edited any way he wants. If you skip to the end, he's removed the header redirect and put in a login form, so I would bet it wasn't working for him either.
A: Why do we get this type of error?
Cannot modify header information - headers already sent by
*
*Newline characters or spaces before <?php
Solution: Remove everything before <?php
*A BOM (byte-order mark) in UTF-8 encoded PHP files
Solution: Change the file encoding to "without BOM" (e.g. using Notepad++) or remove the BOM
*An echo before header
Solution: Remove the echo before the header call
Use ob_start(); and ob_end_flush(); for buffered output
Reference
A: You can use output buffering control to prevent headers being sent out prematurely. The video you reference probably used it, or else he would have to use it to make it functional.
For example,
<?php
ob_start(); // start buffering all "echo" to an output buffer
?>
<html>
<?php
if (func::checkLoginState()) {
echo "Login";
} else {
header('Location: /login.php');
}
?>
</html>
<?php ob_end_flush(); ?>
The ob_start call can be put into header.php or a config file. The ob_end_flush call can be put into the footer, or omitted entirely.
Since all output is written to the output buffer, and not directly to the browser, you can call header after any echo, as long as it is before ob_end_flush.
A: So to sum up, we couldn't figure out why it works for the guy in the video. Thank you everyone for trying to figure it out. Theories are:
*
*He edited the video so it seemed this version of the code worked
*There is an ob_start(); and ob_end_flush(); somewhere in the code that isn't visible in the video | |
d15295 | Try calling $myvar as a superglobal:
private static $container = $GLOBALS['myvar'];
Although, as Ron Dadon pointed out, this is generally bad practice in OOP.
EDIT:
I jumped the gun here. My above solution does not actually work, at least not for me. So, a better way to achieve this would be the following:
$myvar = array_merge($plantask_global_script, $plantask_global_config);
class Env implements ArrayAccess
{
private static $container = null;
public static function init($var)
{
self::$container = $var;
}
...
}
Env::init($myvar); | |
d15296 | If you're talking about the margins surrounding each <p> tag, that comes from the user agent style sheet.
By default paragraph tags have a surrounding margin. If you do something like:
p { margin: 0; padding: 0; }
you should be able to get rid of the margin/padding.
A: Paragraphs default to 1em vertical margin (top and bottom, but they can overlap). I guess you're talking about the div having a bottom margin - but it doesn't, that's the top margin of the p below it.
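The "overlap" mentioned in the answer above is CSS margin collapsing: adjacent vertical margins merge into the larger of the two instead of adding up. A short sketch (the div selector is illustrative):

```css
/* Each p gets 1em above and below, but the gap between two
   consecutive paragraphs collapses to 1em, not 2em. */
p { margin: 1em 0; }

/* With no padding or border on the div, the first p's top margin
   collapses through the div's edge and shows up as a gap above the
   div itself. One pixel of padding (or a border) blocks the collapse. */
div { padding-top: 1px; }
```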
A: Now you added a test image, I know what you mean.
Use this CSS to fix the issue:
p { margin: 0; padding: 16px 0 }
In short, provide the same spacing between paragraphs using padding instead of margin.
A: Delete the -moz-border-radius: 0 0 10px 10px; declaration in your content body CSS and use
margin: 0; plus padding based on your content. It will work. Use Firebug in your Mozilla browser to easily find bugs like this.
A: Your question seems resolved already, but you might want to start using a CSS reset to override the user agent stylesheet for everything. It makes styling your web pages so they look the same in most browsers much easier.
html, body, div, span, applet, object, iframe,
h1, h2, h3, h4, h5, h6, p, blockquote, pre,
a, abbr, acronym, address, big, cite, code,
del, dfn, em, img, ins, kbd, q, s, samp,
small, strike, strong, sub, sup, tt, var,
b, u, i, center,
dl, dt, dd, ol, ul, li,
fieldset, form, label, legend,
table, caption, tbody, tfoot, thead, tr, th, td,
article, aside, canvas, details, embed,
figure, figcaption, footer, header, hgroup,
menu, nav, output, ruby, section, summary,
time, mark, audio, video {
margin: 0;
padding: 0;
border: 0;
font-size: 100%;
font: inherit;
vertical-align: baseline;
}
/* HTML5 display-role reset for older browsers */
article, aside, details, figcaption, figure,
footer, header, hgroup, menu, nav, section {
display: block;
}
body {
line-height: 1;
}
ol, ul {
list-style: none;
}
blockquote, q {
quotes: none;
}
blockquote:before, blockquote:after,
q:before, q:after {
content: '';
content: none;
}
table {
border-collapse: collapse;
border-spacing: 0;
} | |
d15297 | I would probably advise against using the STL in kernel development. The STL assumes some form of standard library support, of which there is none in your kernel. Also, most memory allocation operations have no bound on the time they can take and are therefore unsuitable for use in interrupt handlers. Exceptions are another thing that can cause major headaches in the kernel.
A: In order for the STL to work you have to port several things, like static initialization (for e.g. std::cin and std::cout) and stack unwinding...
You'd have to port e.g. libsupc++ and have that in your kernel.
Basically, all this stuff shouldn't be in the kernel in the first place. DON'T use vectors; use static arrays, because vectors might reallocate your data!
Also, all that stuff will bloat your kernel for nothing!
You can have a look at what L4 allows to be used in its kernel: they don't do memory allocation, they don't do exceptions (unpredictable), and they especially don't do the STL.
The links below should give you an idea of what you need to port to get C++ operating system support. libsupc++ is part of GCC; its purpose is to encapsulate all the parts where runtime code is needed.
Useful information about libsupc++
Useful information about C++ operating system support
A: I am not sure whether the STL in a kernel is actually good to have, but if you really want to try, it's very fun. I have written my own OS, and when I had memory allocation in the kernel, the first thing I did was port STLport (5.2.1). It has been working well so far, although the kernel itself is still very preliminary.
Anyway, I can share some experience from porting it.
*Porting STLport requires no building and very few prerequisites: just include the headers and let the compiler know their path (the -I option for gcc). The template classes will be compiled with your .cpp source files.
*STLport is configurable: you can disable what you can't afford and select what you want, such as iostream, debug, exceptions, RTTI and threading. Just check out the documentation and then go to the configuration headers; they're very nicely commented (e.g. stlport/stl/config/user_config.h).
*At the most basic level you'll need malloc and free, or maybe new, delete and their variants. That's enough for porting std::string, the containers and the algorithms, IIRC. But it's neither thread-safe nor optimized for memory allocation; you need to be very careful when you rely on it.
*You can do your own iostream; it's just template classes and global objects (BTW, I hacked the ELF sections and manually initialized my global objects by calling their initialization functions), but this needs more work. | |
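To make the malloc/free point above concrete, here is a minimal sketch of the global allocation hooks a ported C++ runtime sits on (illustrative only: a real kernel would route these to its own allocator rather than the C library, compile with -fno-exceptions, and deal with allocation failure, which a conforming operator new reports by throwing std::bad_alloc):

```cpp
#include <cstddef>   // std::size_t
#include <cstdlib>   // std::malloc, std::free

// Replacing the global operators is enough for new/delete expressions
// throughout the kernel (and for STL containers using the default
// allocator) to go through your memory manager.
void* operator new(std::size_t size) { return std::malloc(size); }
void* operator new[](std::size_t size) { return std::malloc(size); }
void operator delete(void* p) noexcept { std::free(p); }
void operator delete[](void* p) noexcept { std::free(p); }
// C++14 sized-deallocation variants forward to the unsized ones.
void operator delete(void* p, std::size_t) noexcept { std::free(p); }
void operator delete[](void* p, std::size_t) noexcept { std::free(p); }
```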
d15298 | I got the same error message when I used the line import jwt from 'jsonwebtoken'
With var jwt = require('jsonwebtoken'); [1] instead it works fine:
var jwt = require('jsonwebtoken');
var tokenBase64 = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiZXhwIjoiMTUxMTk1MDcwMyIsImFkbWluIjp0cnVlfQ.wFC-9ZsqA9QuvLkRSkQmVYpUmgH9R-j8M4D0GECuPHY';
const token = jwt.decode(tokenBase64);
const tokenExpirationDate = token.exp
console.log(tokenExpirationDate);
[1] see also https://github.com/auth0/node-jsonwebtoken
A: The only way I found to use import is:
import { sign, verify } from 'jsonwebtoken';
sign('Hello', 'secret');
But I think the require method is better, so that you don't have to explicitly import every single function.
A: As of jsonwebtoken 8.3, jsonwebtoken.decode() has the following type definitions:
export function decode(
token: string,
options?: DecodeOptions,
): null | { [key: string]: any } | string;
Since TypeScript cannot infer the correct type and exp is not known, the simplest way out is to cast the result to any.
import jwt from 'jsonwebtoken'
const tokenBase64 = 'ey...' /* some valid token */
const token: any = jwt.decode(tokenBase64)
const tokenExpirationDate = token.exp
A: I think import * as jwt from 'jsonwebtoken'; should work as expected.
A: import * as jwt from 'jsonwebtoken'
const { authorization } = ctx.req.headers
const token = authorization.replace('Bearer ', '')
const decoded = jwt.verify(token, 'APP_SECRET')
const userId = (decoded as any).userId
Of course you can type decoded the way you use the token instead of any
A: The return type of jwt.verify and jwt.decode is 'string | object'.
In your case, you have some additional information that TypeScript does not have about the return type. So you can add it like this:
const token = jwt.decode(tokenBase64) as {exp: number}
const tokenExpirationDate = token.exp
Of course you can add any other value in the object as well.
While it's reasonable to assume that exp is present, other keys might not be. Make sure that the token you are decoding actually includes them, or add them as optional values: ({exp: number; random?: string})
A: This is how I am using decode with TS
import jwt from 'jsonwebtoken';
export const isTokenExpired = (token: string): boolean => {
try {
const { exp } = jwt.decode(token) as {
exp: number;
};
const expirationDatetimeInSeconds = exp * 1000;
return Date.now() >= expirationDatetimeInSeconds;
} catch {
return true;
}
};
Not needed, but here is a matching Jest test as well:
import 'jest';
import jwt from 'jsonwebtoken';
import { isTokenExpired } from 'path-to-isTokenExpired/isTokenExpired';
describe('isTokenExpired', () => {
it('should return true if jwt token expired', () => {
const currentTimeInSecondsMinusThirtySeconds = Math.floor(Date.now() / 1000) - 30;
const expiredToken = jwt.sign({ foo: 'bar', exp: currentTimeInSecondsMinusThirtySeconds }, 'shhhhh');
expect(isTokenExpired(expiredToken)).toEqual(true);
});
it('should return false if jwt token not expired', () => {
const currentTimeInSecondsPlusThirtySeconds = Math.floor(Date.now() / 1000) + 30;
const notExpiredToken = jwt.sign({ foo: 'bar', exp: currentTimeInSecondsPlusThirtySeconds }, 'shhhhh');
expect(isTokenExpired(notExpiredToken)).toEqual(false);
});
it('should return true if jwt token invalid', () => {
expect(isTokenExpired('invalidtoken')).toEqual(true);
});
});
A: I found myself creating a helper for this (class-based solution; it can be used as a separate function, of course):
import { JwtPayload, verify } from "jsonwebtoken";
export class BaseController {
// ...
static decodeJWT = <T extends { [key: string]: any }>(token: string) => {
return verify(token, process.env.JWT_ACCESS_TOKEN!) as JwtPayload & T;
// this typing allows us to keep both our encoded data and JWT original properties
};
}
used in controllers like:
import { BaseController } from "./BaseController";
export class UserController extends BaseController {
static getUser = async (
// ...
) => {
// get token
// username may or may not be here - safer to check before use
const payload = this.decodeJWT<{ username?: string }>(token);
// no error here, we can extract all properties defined by us and original JWT props
const { username, exp } = payload;
// do stuff...
};
} | |
d15299 | Add to the identity transform a special template to handle QUOTE:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
<xsl:template match="QUOTE">
<xsl:value-of select="@ID"/>
</xsl:template>
</xsl:stylesheet>
The identity transformation will copy everything to the output XML, and the special QUOTE template will copy over the value of its @ID attribute in place of the QUOTE element. | |
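To illustrate, a hypothetical input fragment (the DOC and P element names are invented):

```xml
<DOC>
  <P>He said <QUOTE ID="q1">hello there</QUOTE> and left.</P>
</DOC>
```

comes out essentially as:

```xml
<DOC>
  <P>He said q1 and left.</P>
</DOC>
```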
d15300 | I guess you are a newcomer to Python.
You should read https://docs.python.org/3/tutorial/classes.html to get a better understanding of Python classes.
class CustomUserPermisions(BasePermission, allowed_groups):
means that the class CustomUserPermisions inherits from the classes BasePermission and allowed_groups. Since allowed_groups is not a class, you get an error.
You should store allowed_groups as an instance attribute instead:
class CustomUserPermisions(BasePermission):
def __init__(self, allowed_groups):
self.allowed_groups = allowed_groups
self.message = "Ooops! You do not have permissions to access this particular site"
def has_permission(self, request, view):
user1 = Employee.objects.filter(user__email=request.user).first()
user_groups = user1.user_group.all()
for group in user_groups:
if group.title in self.allowed_groups:
return True
return False
and then call the class constructor:
permissions = CustomUserPermisions(['group_hr', 'super_admin', 'employee']) |
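The difference can be demonstrated without Django or DRF at all. A minimal, self-contained sketch (BasePermission and the request object are stubbed here; in a real project they come from rest_framework):

```python
class BasePermission:
    """Stub standing in for rest_framework.permissions.BasePermission."""
    def has_permission(self, request, view):
        return True

class CustomUserPermisions(BasePermission):
    def __init__(self, allowed_groups):
        # Configuration lives on the instance; trying to "inherit"
        # from the list itself raises TypeError at class definition.
        self.allowed_groups = allowed_groups
        self.message = "Ooops! You do not have permissions to access this particular site"

    def has_permission(self, request, view):
        # request.user_groups stands in for the Employee/group lookup.
        return any(g in self.allowed_groups for g in request.user_groups)

class FakeRequest:
    def __init__(self, groups):
        self.user_groups = groups

perm = CustomUserPermisions(['group_hr', 'super_admin', 'employee'])
print(perm.has_permission(FakeRequest(['employee']), None))  # True
print(perm.has_permission(FakeRequest(['intern']), None))    # False
```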