doc_23532800
I am trying to understand OPTIMIZE, ZORDER and data skipping. I want to use ZORDER BY on the business date column request_date_id (data type integer). I understood that for ZORDER to work, the targeted column should have Delta statistics.
While my column request_date_id is field number 7, it seems that my table has not collected statistics on it.
The table does have many distinct request_date_id values. Why have no statistics been collected?
It seems to me that data skipping options were removed from Databricks 7 onwards. But as far as I understand, it is still on by default: a Delta Lake table should still be collecting statistics.
Any suggestion or guidance would be very much appreciated. Many thanks in advance !
A: What was removed in DBR 7.x is a special type of index implemented at the SQL level. Delta itself has built-in data skipping, independent of whether the data is registered as a SQL table or just saved to disk. By default statistics are collected for the first 32 columns (configurable via the table property delta.dataSkippingNumIndexedCols), so a column in position 7 should be covered. You can check this by looking into the _delta_log/NNNNNN.json files: for each added file there will be a stats field with min/max values for the indexed columns. For example:
{"add": {"path": "part-00002-d0b474c0-6b00-4bef-9fbb-de97e157e199-c000.snappy.parquet", "partitionValues": {}, "size": 134637, "modificationTime": 1628835148000, "dataChange": true, "stats": "{\"numRecords\":33333, \"minValues\":{\"id\":1}, \"maxValues\":{\"id\":99997}, \"nullCount\": {\"id\":0}}"}}
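A stats entry like the one above can be decoded with a few lines of plain Python. A minimal sketch (the file name is hypothetical; note that stats is itself a JSON-encoded string inside the JSON action):

```python
import json

def stats_for_add(log_line):
    """Return min/max statistics from a Delta log 'add' action, or None."""
    action = json.loads(log_line)
    add = action.get("add")
    if add is None or "stats" not in add:
        return None
    stats = json.loads(add["stats"])  # 'stats' is a JSON string, so parse again
    return {
        "path": add["path"],
        "numRecords": stats["numRecords"],
        "minValues": stats["minValues"],
        "maxValues": stats["maxValues"],
    }

entry = ('{"add": {"path": "part-00002.snappy.parquet", "size": 134637, '
         '"dataChange": true, "stats": "{\\"numRecords\\":33333, '
         '\\"minValues\\":{\\"id\\":1}, \\"maxValues\\":{\\"id\\":99997}, '
         '\\"nullCount\\": {\\"id\\":0}}"}}')
print(stats_for_add(entry))
```

In practice you would loop over the sorted _delta_log/*.json files and apply this to every line of each one.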
doc_23532801
I have setup a Hetzner server with docker-machine and have successfully switched to that docker server. Docker commands are now executed on my Hetzner machine.
I'd like to expand this setup to be able to run Kubernetes on this remote docker server.
Is that possible? Do I need to install Kubernetes on the VM running on Hetzner or can I run a local Kubernetes instance that simply uses the remote server?
I'd like my setup to be as close as possible to a local installation.
A: Kubernetes requires a control plane (at least one server) and agent machines (at least one). A full installation also requires an etcd database. For a minimal production configuration you would want something like 9 separate servers (e.g. 3 control-plane nodes, 3 etcd nodes and 3 workers).
Having said that, you can just run Rancher's k3d on the remote machine, which, like minikube and kind (and others), is a Kubernetes installation that runs in Docker on a single machine. Personally I think k3d is the best option, but opinions vary. Be aware, though, that the resources required to run Kubernetes are quite high, and the server you rent should have sufficient power. For a simple learning-type workload I'd recommend at least 2 cores and 8 GB of RAM.
There are many ways to configure this and access it from your local machine. You'll need to open ports on the remote control-plane to allow access from your local, but it sounds like you've already done that for docker so you should be able to do it for kubernetes. You'll also need to use kubectl, but it's best to run that in a docker container, rather than installing it locally.
A: The way I understand it, it's not as easy as that. Some Kubernetes worker must be installed on the remote machine in order for my local Kubernetes server to be able to use it.
The easiest solution for me will be to rent a complete Kubernetes cluster from Digital Ocean or Google and use that. It's more expensive, but it will be the least amount of effort to get it running.
Another solution would be to install a full Kubernetes setup (with Minikube for example) on the remote machine, and use it via SSH.
doc_23532802
I am getting this error:
>>> Case.objects.all()[0].show_pro()
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/home/john/mysite/../mysite/cases/models.py", line 23, in show_pro
    return self.objects.filter(argument__side__contains='p')
  File "/usr/lib/python2.5/site-packages/django/db/models/manager.py", line 151, in __get__
    raise AttributeError, "Manager isn't accessible via %s instances" % type.__name__
AttributeError: Manager isn't accessible via Case instances
here is the code:
from django.db import models
from django.contrib.auth.models import User
import datetime

SIDE_CHOICES = (
    ('p', 'pro'),
    ('c', 'con'),
    ('u', 'undecided'),
)

class Case(models.Model):
    question = models.CharField(max_length=200)
    owner = models.ForeignKey(User)
    pub_date = models.DateTimeField('date published')
    rating = models.IntegerField()

    def __unicode__(self):
        return self.question

    def was_published_today(self):
        return self.pub_date.date() == datetime.date.today()

    def show_pro(self):
        return self.objects.filter(argument__side__contains='p')

class Argument(models.Model):
    case = models.ForeignKey(Case)
    reason = models.CharField(max_length=200)
    rating = models.IntegerField()
    owner = models.ForeignKey(User)
    side = models.CharField(max_length=1, choices=SIDE_CHOICES)

    def __unicode__(self):
        return self.reason
A: Try:
    def show_pro(self):
        return self.argument_set.filter(side='p')
Basically, you need to do a reverse lookup on the ForeignKey relationship and then filter it to get the related Argument objects that have side='p'.
In the show_pro method, self is not a QuerySet; it refers to the Case instance itself.
A: You can't call self.objects; objects is a class attribute, not a field on the instance. Think about it this way: would it make sense to do
    c0 = Case.objects.all()[0]
    c1 = c0.objects.all()[1]
Not really. That's what using self does.
Instead, you need to access the instance's related objects. As Tyson suggested:
    class Case(models.Model):
        ...
        def show_pro(self):
            return self.argument_set.filter(side='p')
        ...
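The class-versus-instance rule behind that error can be reproduced with a plain Python descriptor. This is a stand-alone analogy, not Django's actual implementation:

```python
class ManagerDescriptor:
    """Mimic of a manager: usable on the class, rejected on instances."""
    def __get__(self, instance, owner):
        if instance is not None:
            raise AttributeError(
                "Manager isn't accessible via %s instances" % owner.__name__)
        return ["all", "the", "objects"]  # stand-in for a real manager

class Case:
    objects = ManagerDescriptor()

print(Case.objects)       # accessed via the class: works
try:
    Case().objects        # accessed via an instance: AttributeError
except AttributeError as exc:
    print(exc)
```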
doc_23532803
partitions n m l = gfunct n m l
  where
    gfunct n m l
      | n == 0 && m == 0 = 0
      | n < m || m == 0  = 1
      | otherwise        = -- here the Summation
This summation is meant to implement what one would write in maths as a sum for h from 0 to n (the original post showed the formula as an image, which is not reproduced here).
As far as I understand, Haskell should support that style of programming.
(I am a confused newbie in Haskell, and I mostly seek the solution rather than digging deep into functional programming.) I have tried implementing it with fold or scan but I always failed.
A: If you want to sum something for h from 0 to n, then you can just write that:
sum [ gfunct (n-h) (m-1) (h-1) | h <- [0..n] ]
doc_23532804
I am trying to make this type of dashboard page in my code, but it's not working. Can anybody please help me out? Check my code below.
My JS code:
return (
<div className="dashboard">
<div className="dashboard__container">
Logged in as
<div>{name}</div>
<div>{user?.email}</div>
<button className="dashboard__btn" onClick={logout}>
Logout
</button>
</div>
</div>
);
}
And my CSS code:
.dashboard {
height: 100vh;
width: 100%;
background: #3e295a40;
}
.dashboard__container {
display: flex;
flex-direction: row;
justify-content: space-between;
text-align: left;
background-color: #15b88357;
padding: 1em 20px 1em 20px;
color: white;
font-weight: bold;
}
.dashboard__btn {
margin-top: -5px;
font-size: 18px;
padding: 0.25em 0.5em;
border: none;
color: white;
background-color: #402958;
}
.center { text-align: center; }
Please help me out.
A: I generally use Bootstrap ("CSS only") to design my websites. Here is how I do it.
/public/index.html
You can see that I have linked my local Bootstrap CSS file inside the head tag.
<!DOCTYPE html>
<html lang="en">
<head>
<script>
if(window.screen.height < window.screen.width) {
window.stop()
// window.location = "https://google.com"
}
</script>
<script src="init.js" ></script>
<meta charset="utf-8" />
<link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no" />
<meta name="theme-color" content="#000000" />
<meta
name="description"
content="Web site created using create-react-app"
/>
<link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" />
<link rel="stylesheet" href="%PUBLIC_URL%/bootstrap.css" />
<link rel="stylesheet" href="%PUBLIC_URL%/style.css" />
<link rel="manifest" href="%PUBLIC_URL%/manifest.json" />
<title>React App</title>
</head>
<body style="background-color: #efefef">
<div id="root"></div>
</body>
</html>
/src/App.js
Here I demonstrate how I use Bootstrap's CSS inside my React component. (In JSX you should prefer className over class.) Also take care while adding an inline style: it must be an object.
const Main = () => {
return(
<div class="d-flex justify-content-center" style={{width:"100vw", height:"100vh"}}>
<div>1</div>
<div class="flex-grow-1">2</div>
<div>
<button class="btn-primary p-2">Abe !</button>
</div>
</div>
)
}
export default Main;
If you get any errors, search for them and try to resolve them with the help of Google. Those errors are common and can be fixed easily. You will learn a lot from them.
doc_23532805
First:
I have four lists of different variable values (hereafter referred to as elevation values), each of which has a length of 28. There is one such set of four lists for each of 5 different latitude values, and one set of 5 latitudes for each of the 24 different time values.
So 24 times... each time with 5 latitudes... each latitude with four lists... each list with 28 values.
I want to create an array with the following dimensions: (elevation, latitude, time, variable).
In words, I want to be able to specify which of the four lists I access, which index in the list, and a specific time and latitude. So an index into this array would look like this:
array(0, 1, 2, 3), where the 0 specifies the first index of the 4th list (selected by the 3), the 1 specifies the 2nd latitude, the 2 specifies the 3rd time, and the output is the value at that point.
I won't include my code for this part since literally the only things of mention are the lists
list1=[...]
list2=[...]
list3=[...]
list4=[...]
How can I do this? Is there an easier structure for the array, or is there anything else I am missing?
Second:
I have created a netCDF file with variables that have these four dimensions. I need to set those variables to the array structure made above. I have no idea how to do this, and the netCDF4 documentation only shows a 1-D array in a fairly cryptic way. If the arrays can be made directly in the netCDF file, bypassing the need to use numpy first, by all means show me how.
Thanks!
A: After talking to a few people where I work, we came up with this solution:
First we made an array of zeros with the desired shape:
array1 = np.zeros((28, 5, 24, 4))
Then we filled the array by specifying where in it we wanted the values to go:
array1[:, 0, 0, 0] = list1
This inserted the values of the list into the first entry along each of the last three dimensions.
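A self-contained sketch of that pattern (the values here are placeholders, not the asker's elevation data):

```python
import numpy as np

# Dimensions: (elevation, latitude, time, variable)
n_elev, n_lat, n_time, n_var = 28, 5, 24, 4
array1 = np.zeros((n_elev, n_lat, n_time, n_var))

# A placeholder list of 28 values, standing in for one of the four lists:
list1 = [float(i) for i in range(n_elev)]

# Insert list1 as variable 0 at latitude 0 and time 0:
array1[:, 0, 0, 0] = list1

# Element [2, 0, 0, 0] is now the 3rd elevation value of that list:
print(array1[2, 0, 0, 0])
```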
Next, to write the array to a netCDF file, I created the netCDF file in the same program that made the array, created a single variable with those four dimensions, and gave it values like this:
netcdfvariable[:] = array1
Hope that helps anyone who finds this.
doc_23532806
Thanks!
A: Simply cast the string to bytea:
SELECT CAST('astring' AS bytea);
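The shorthand cast syntax does the same thing; with the default bytea_output = 'hex' setting the result is displayed as hex bytes. A sketch:

```sql
SELECT 'astring'::bytea;  -- \x61737472696e67
```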
doc_23532807
Points with the same gid (group ID) are part of a unique flight sequence. I would like each shared gid to become a separate linestring.
(The original post showed the head and tail of the data as an image, which is not reproduced here.)
Thanks in advance for any help.
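In PostGIS, this kind of per-group line building is typically done with the ST_MakeLine aggregate. A sketch, assuming the points live in a table such as points(gid, seq, geom), where seq orders the points within a flight (the table and column names are assumptions):

```sql
-- Build one linestring per flight by aggregating each gid's points in order.
SELECT gid,
       ST_MakeLine(geom ORDER BY seq) AS flight_path
FROM points
GROUP BY gid;
```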
doc_23532808
The css
ol.printplanitemlist
{
list-style-type:decimal;
margin-top:1em;
margin-bottom:1em;
padding-left:2.5em;
border:0;
line-height:100%;
}
ol.printplanitemlist li
{
/* margin-bottom:1em;*/
margin-bottom:1em;
position:relative;
}
.ActivityPromptText {
}
ul.PlanItemDisplay li
{
list-style-type:none;
margin:0;
}
ul.PlanItemDisplay
{
display:block;
margin:0;
padding:0;
list-style:none;
}
.MedicationTitle, .MedicationDescription, .MedicationName, .MedicationClass, .MedicationStrength, .MedicationForm, .MedicationHowOften, .MedicationMoreInfo {
top:-17px;
}
ul, li
{
padding:0;
margin:0;
border:0;
}
.printLbl {
font-size:1em;
/*line-height:1.375em;*/
font-weight:bold;
vertical-align:baseline;
margin:0;
border:0;
margin-bottom:0.188em;
}
li.ActivityTitle, li.AnnouncementTitle, li.MeasurementTitle, li.MedicationTitle,
li.QuestionTitle
{
color:#999999; font-size:0.75em; font-weight:bold; line-height:1.125em; margin-bottom:-0.188em; margin-top:0.5em; font-style:normal; display:block;
}
.AnnouncementPlanItemDisplay, .MeasurementPlanItemDisplay, .QuestionPlanItemDisplay{
top:-0.969em;
position:relative;
line-height:1.375em;
}
ul#ActivityDisplay {
position:relative;
top:-1.188em;
}
<FORM id=form1 method=post name=form1 action=PrintCarePlan.aspx>
<DIV><INPUT id=__EVENTTARGET type=hidden name=__EVENTTARGET> <INPUT id=__EVENTARGUMENT type=hidden name=__EVENTARGUMENT> <INPUT id=__VIEWSTATE value="..." type=hidden name=__VIEWSTATE> </DIV>
<SCRIPT type=text/javascript>
//<![CDATA[
var theForm = document.forms['form1'];
if (!theForm) {
theForm = document.form1;
}
function __doPostBack(eventTarget, eventArgument) {
if (!theForm.onsubmit || (theForm.onsubmit() != false)) {
theForm.__EVENTTARGET.value = eventTarget;
theForm.__EVENTARGUMENT.value = eventArgument;
theForm.submit();
}
}
//]]>
</SCRIPT>
<SCRIPT type=text/javascript src="/WebResource.axd?d=1HwPIkddnYckUN2xUQU95T2VKatY6mt9Dg990zejInCszK3pN-A9sNz55sulwawon9MvfVMYNaagWXGXXyUS4KFjvzU1&t=634208670757546466"></SCRIPT>
<DIV><INPUT id=__PREVIOUSPAGE value="..." type=hidden name=__PREVIOUSPAGE> <INPUT id=__EVENTVALIDATION value="..." type=hidden name=__EVENTVALIDATION> </DIV>
<DIV class=container>
<DIV class=header><INPUT style="BORDER-RIGHT-WIDTH: 0px; BORDER-TOP-WIDTH: 0px; BORDER-BOTTOM-WIDTH: 0px; BORDER-LEFT-WIDTH: 0px" id=logo onclick='javascript:WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions("logo", "", false, "", "Home.aspx", false, false))' src="../Images/logo.png" type=image name=logo> </DIV>
<DIV>
<P></P>
<DIV class=masterHeading>Care Plan For on 1/13/2011<BR></DIV><BR><BR>
<P></P><BR></DIV>
<DIV>
<TABLE style="MARGIN-LEFT: auto; MARGIN-RIGHT: auto" border=0 cellSpacing=5 cellPadding=0 width="90%">
<TBODY>
<TR vAlign=top>
<TD><LABEL class=printLbl>Additional information for today:</LABEL><BR>
<OL class=printplanitemlist>
<LI>
<UL class="PlanItemDisplay AnnouncementPlanItemDisplay">
<LI id=rItemHeaders_ctl01_rPlanItems_ctl00_ccAnnouncement_liTitle class=AnnouncementTitle><SPAN id=rItemHeaders_ctl01_rPlanItems_ctl00_ccAnnouncement_Title>TED stockings</SPAN>
<LI class=AnnouncementMsg><SPAN id=rItemHeaders_ctl01_rPlanItems_ctl00_ccAnnouncement_AnnouncementMsg>Please where your TED (Thrombo Embolic Deterrent) stockings throughout the day. You may remove them at night.</SPAN> </LI></UL>
<LI>
<UL class="PlanItemDisplay AnnouncementPlanItemDisplay">
<LI id=rItemHeaders_ctl01_rPlanItems_ctl01_ccAnnouncement_liTitle class=AnnouncementTitle><SPAN id=rItemHeaders_ctl01_rPlanItems_ctl01_ccAnnouncement_Title>pain medication before exercise</SPAN>
<LI class=AnnouncementMsg><SPAN id=rItemHeaders_ctl01_rPlanItems_ctl01_ccAnnouncement_AnnouncementMsg>Please take your pain medication 30 minutes before you start your exercise</SPAN> </LI></UL>
<LI>
<UL class="PlanItemDisplay AnnouncementPlanItemDisplay">
<LI id=rItemHeaders_ctl01_rPlanItems_ctl02_ccAnnouncement_liTitle class=AnnouncementTitle><SPAN id=rItemHeaders_ctl01_rPlanItems_ctl02_ccAnnouncement_Title>towel under ankle</SPAN>
<LI class=AnnouncementMsg><SPAN id=rItemHeaders_ctl01_rPlanItems_ctl02_ccAnnouncement_AnnouncementMsg>It is important that you place a towel roll under your ankle when you are lying in bed</SPAN> </LI></UL>
<LI>
<UL class="PlanItemDisplay AnnouncementPlanItemDisplay">
<LI id=rItemHeaders_ctl01_rPlanItems_ctl03_ccAnnouncement_liTitle class=AnnouncementTitle><SPAN id=rItemHeaders_ctl01_rPlanItems_ctl03_ccAnnouncement_Title>blood clot</SPAN>
<LI class=AnnouncementMsg><SPAN id=rItemHeaders_ctl01_rPlanItems_ctl03_ccAnnouncement_AnnouncementMsg>If you have any of these symptoms, please contact the office immediately. * Changes in skin color (redness) in one leg * Increased warmth in one leg * Leg pain in one leg * Leg tenderness in one leg * Swelling (edema) of one leg</SPAN> </LI></UL></LI></OL><BR></TD></TR>
<TR vAlign=top>
<TD><LABEL class=printLbl>My Medications:</LABEL><BR>
<OL class=printplanitemlist>
<LI>
<UL class="PlanItemDisplay MedicationPlanItemDisplay">
<LI id=rItemHeaders_ctl02_rPlanItems_ctl00_ccMedication_liTitle class=MedicationTitle><SPAN id=rItemHeaders_ctl02_rPlanItems_ctl00_ccMedication_Title>pain reliever</SPAN>
<LI class=MedicationName><SPAN id=rItemHeaders_ctl02_rPlanItems_ctl00_ccMedication_Name>ibuprofen</SPAN>
<LI class=MedicationClass>(<SPAN id=rItemHeaders_ctl02_rPlanItems_ctl00_ccMedication_MedicationClass>caplets</SPAN>)
<LI class=MedicationStrength><SPAN id=rItemHeaders_ctl02_rPlanItems_ctl00_ccMedication_Strength>800 mg</SPAN>
<LI class=MedicationForm><SPAN id=rItemHeaders_ctl02_rPlanItems_ctl00_ccMedication_Form></SPAN>,
<LI class=MedicationHowOften><SPAN id=rItemHeaders_ctl02_rPlanItems_ctl00_ccMedication_HowOften>Take 1 caplet every 4-6 hours as needed</SPAN> </LI></UL></LI></OL><BR></TD></TR>
<TR vAlign=top>
<TD><LABEL class=printLbl>My Activities:</LABEL><BR>
<OL class=printplanitemlist>
<LI>
<UL style="TEXT-ALIGN: left; MARGIN-LEFT: 0px; CLEAR: left; LEFT: 0px" id=ActivityDisplay class="PlanItemDisplay MedicationPlanItemDisplay">
<LI class=ActivityPromptText><SPAN id=rItemHeaders_ctl03_rPlanItems_ctl00_ccActivity_ActivityText>Please walk with your assistive device (walker/cane or crutches) on a flat surfaces within your house for 15 minutes 5 times per day</SPAN> </LI></UL>
<LI>
<UL style="TEXT-ALIGN: left; MARGIN-LEFT: 0px; CLEAR: left; LEFT: 0px" id=ActivityDisplay class="PlanItemDisplay MedicationPlanItemDisplay">
<LI class=ActivityPromptText><SPAN id=rItemHeaders_ctl03_rPlanItems_ctl01_ccActivity_ActivityText>Please walk up and down stairs 3 times today WITH ASSISTANCE ONLY. Always use a device and a rail. If you do not have stairs, disregard this</SPAN> </LI></UL>
<LI>
<UL style="TEXT-ALIGN: left; MARGIN-LEFT: 0px; CLEAR: left; LEFT: 0px" id=ActivityDisplay class="PlanItemDisplay MedicationPlanItemDisplay">
<LI class=ActivityPromptText><SPAN id=rItemHeaders_ctl03_rPlanItems_ctl02_ccActivity_ActivityText>Please use ice as needed today. Be certain to use it after exercising as well</SPAN> </LI></UL>
<LI>
<UL style="TEXT-ALIGN: left; MARGIN-LEFT: 0px; CLEAR: left; LEFT: 0px" id=ActivityDisplay class="PlanItemDisplay MedicationPlanItemDisplay">
<LI class=ActivityPromptText><SPAN id=rItemHeaders_ctl03_rPlanItems_ctl03_ccActivity_ActivityText>Please massage the area around your incision several times today</SPAN> </LI></UL>
<LI>
<UL style="TEXT-ALIGN: left; MARGIN-LEFT: 0px; CLEAR: left; LEFT: 0px" id=ActivityDisplay class="PlanItemDisplay MedicationPlanItemDisplay">
<LI class=ActivityPromptText><SPAN id=rItemHeaders_ctl03_rPlanItems_ctl04_ccActivity_ActivityText>Please use the stationary recombent bike today for 10 minutes. Start with partial revolutions and progress to full revolutions as tolerated.</SPAN> </LI></UL>
<LI>
<UL style="TEXT-ALIGN: left; MARGIN-LEFT: 0px; CLEAR: left; LEFT: 0px" id=ActivityDisplay class="PlanItemDisplay MedicationPlanItemDisplay">
<LI id=rItemHeaders_ctl03_rPlanItems_ctl05_ccActivity_liEquipment class=ActivityEquipment><SPAN id=rItemHeaders_ctl03_rPlanItems_ctl05_ccActivity_Equipment>CAMOped</SPAN>
<LI class=ActivityPromptText><SPAN id=rItemHeaders_ctl03_rPlanItems_ctl05_ccActivity_ActivityText>Use CAMOped twice a day for 30 minutes.</SPAN> </LI></UL></LI></OL><BR></TD></TR>
<TR vAlign=top>
<TD><LABEL class=printLbl>My Questions:</LABEL><BR>
<OL class=printplanitemlist>
<LI>
<UL class="PlanItemDisplay QuestionPlanItemDisplay">
<LI id=rItemHeaders_ctl04_rPlanItems_ctl00_ccQuestion_liTitle class=QuestionTitle><SPAN id=rItemHeaders_ctl04_rPlanItems_ctl00_ccQuestion_Title>How are you feeling today?</SPAN>
<LI class=QuestionPromptText><SPAN id=rItemHeaders_ctl04_rPlanItems_ctl00_ccQuestion_PromptText>How are you feeling today?</SPAN> </LI></UL></LI></OL><BR></TD></TR>
<TR vAlign=top>
<TD><LABEL class=printLbl>My Measurements:</LABEL><BR>
<OL class=printplanitemlist>
<LI>
<UL class="PlanItemDisplay MeasurementPlanItemDisplay">
<LI id=rItemHeaders_ctl05_rPlanItems_ctl00_ccMeasurement_liTitle class=MeasurementTitle><SPAN id=rItemHeaders_ctl05_rPlanItems_ctl00_ccMeasurement_Title>Knee Motion</SPAN>
<LI class=MeasurementPromptText><SPAN id=rItemHeaders_ctl05_rPlanItems_ctl00_ccMeasurement_PromptText>How much can you bend your knee?</SPAN> </LI></UL>
<LI>
<UL class="PlanItemDisplay MeasurementPlanItemDisplay">
<LI id=rItemHeaders_ctl05_rPlanItems_ctl01_ccMeasurement_liTitle class=MeasurementTitle><SPAN id=rItemHeaders_ctl05_rPlanItems_ctl01_ccMeasurement_Title>pain level</SPAN>
<LI class=MeasurementPromptText><SPAN id=rItemHeaders_ctl05_rPlanItems_ctl01_ccMeasurement_PromptText>Please log your pain level today</SPAN> </LI></UL>
<LI>
<UL class="PlanItemDisplay MeasurementPlanItemDisplay">
<LI id=rItemHeaders_ctl05_rPlanItems_ctl02_ccMeasurement_liTitle class=MeasurementTitle><SPAN id=rItemHeaders_ctl05_rPlanItems_ctl02_ccMeasurement_Title>range of motion</SPAN>
<LI class=MeasurementPromptText><SPAN id=rItemHeaders_ctl05_rPlanItems_ctl02_ccMeasurement_PromptText>Please measure your ability to bend and straighten your operative knee</SPAN> </LI></UL></LI></OL><BR></TD></TR></TBODY></TABLE><!--- indicates the date that the plan is currently displaying ---><INPUT id=hidDisplayedDate type=hidden name=hidDisplayedDate> </DIV>
<DIV class=footer>Copyright © 2008 - <SPAN id=lblCopyrightEndYear>2011</SPAN> iGetBetter, Inc. All rights reserved. </DIV></DIV></FORM>
A: Maybe I misunderstood your post, but if the issue only happens when you print the page, then you could try creating a print-only stylesheet:
<link rel="stylesheet" type="text/css" media="print" href="print.css">
You may be able to solve the issue that way, by providing different styling to the element (or to the element to its left or above it that may be masking the number).
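For instance, a print.css along these lines (a sketch; the exact fix depends on what is clipping the numbers):

```css
/* print.css -- applied only when printing */
ol.printplanitemlist {
    list-style-type: decimal;
    list-style-position: inside; /* keep the numbers inside the list box so they cannot be clipped */
}
ol.printplanitemlist li {
    position: static; /* drop the relative offsets that can overlap the markers */
}
```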
A: I tried it on jsFiddle, and the ordered lists are displayed correctly - http://jsfiddle.net/LwtkW/
A: I removed all the attached CSS and started rebuilding the styles from scratch. Apparently a few styles were overriding other styles and adding additional properties to the page (darn coders!), but everything is sorted now. Thanks to all who responded; it was very much appreciated :D
doc_23532809
[('Jun-07', 10),
('Jun-08', 15),
('Jun-09', 16),
('Nov-07', 17),
('Nov-08', 16),
('Nov-09', 14),
('May-11', 16),
('May-10', 18),
('May-13', 14),
('May-12', 14),
('May-14', 12),
('Jun-14', 10),
('Jun-11', 14),
('Jun-10', 19),
('Jun-13', 13),
('Jun-12', 14),
('Feb-09', 10),
('Nov-14', 10),
('Nov-13', 12),
('Nov-12', 13)]
A: If you want the histogram relative to the months, you should do something like :
import calendar
dmonths = dict((v,k) for k,v in enumerate(calendar.month_abbr))
import numpy as np
from matplotlib import pyplot as plt
list1 = [('Jun-07', 10),
('Jun-08', 15),
('Jun-09', 16),
('Nov-07', 17),
('Nov-08', 16),
('Nov-09', 14),
('May-11', 16),
('May-10', 18),
('May-13', 14),
('May-12', 14),
('May-14', 12),
('Jun-14', 10),
('Jun-11', 14),
('Jun-10', 19),
('Jun-13', 13),
('Jun-12', 14),
('Feb-09', 10),
('Nov-14', 10),
('Nov-13', 12),
('Nov-12', 13)]
list2 = [dmonths[x[0][:3]] for x in list1]
list3 = [x[1] for x in list1]
plt.hist(np.array(list2), bins=np.arange(1, 14), weights=np.array(list3))  # bin edges 1..13 cover months 1-12
The first two lines give a lookup table from months to integers. Afterwards, you only need to extract the names of the months, convert them into integers, and plot the histogram with the values as weights.
A: import matplotlib.pyplot as plt
from datetime import datetime

def histogram():
    # link_list is assumed to be the raw list of 'Mon-YY' labels
    set_list = set(link_list)
    return [(idx, link_list.count(idx)) for idx in set_list]

# his_sorted is assumed to be the (label, count) pairs, sorted by date
xx = [datetime.strptime(idx[0], '%b-%y') for idx in his_sorted]
yy = [idx[1] for idx in his_sorted]
plt.bar(xx, yy, width=50)
doc_23532810
Fiddle: ng-repeat filter
I have given conditions for both the table and the message so that they are displayed as the clicks happen.
<div ng-show="name===null">No results</div>
The above message should be displayed when there is no matching data in the table for the clicked link, and the table should be hidden at the same time.
I tried to give conditions based on the property name, but it's not working.
A: Here you go! Updated fiddle
You had a few issues in there:
* your ng-controller was on a div, but you were setting name outside that controller
* in <div ng-show="name===null">No results</div> you were comparing name with null, but the clear-filter link sets it to an empty string
Hope it helps!
Edit: On clear filter it was not showing all the items. Fixed and updated fiddle
A: Try to use this code
Demo
<body ng-app="app" ng-controller="main">
<a ng-click="name = 'Fruit'">Fruit</a>
<a ng-click="name = 'Nut'">Nut</a>
<a ng-click="name = 'Seed'">Seed</a>
<a ng-click="name = ''">clear filter</a>
<br> <br> <br>
<div ng-show="name ==''">No results</div>
<table class="table" ng-show="name!=''">
<thead>
<tr>
<th>Target</th>
<th>Level</th>
</tr>
<tbody>
<tr ng-repeat="link in links | filter:name">
<td>
{{link.name}}
</td>
<td>
{{link.category}}
</td>
</tr>
</tbody>
</table>
var app = angular.module('app', []);
app.controller('main', function($scope) {
$scope.filters = { };
$scope.name = '';
$scope.links = [
{name: 'Apple', category: 'Fruit'},
{name: 'Pear', category: 'Fruit'},
{name: 'Almond', category: 'Nut'},
{name: 'Mango', category: 'Fruit'},
{name: 'Cashew', category: 'Nut'}
];
});
A: Check your fiddle: http://jsfiddle.net/w0L4o8jm/6/
By default I am filtering by Fruit; you can change it in the controller.
Coming to the answer: calculate the filtered items' length according to the filter. If the length is 0 or name is '', show "No results"; otherwise show the results in the table. Just copy-paste the code below into your fiddle and check it out.
<html ng-app="app">
<body ng-controller="main">
<a ng-click="name = 'Fruit'">Fruit</a>
<a ng-click="name = 'Nut'">Nut</a>
<a ng-click="name = 'Seed'">Seed</a>
<a ng-click="name = ''">clear filter</a>
<br> <br> <br>
<div ng-show="(name=='' || !filtered.length)">No results</div>
<div ng-repeat="link in filtered = (links|filter:name)"></div>
<table class="table" ng-show="(filtered.length != 0 && name!='')">
<thead>
<tr>
<th>Target</th>
<th>Level</th>
</tr>
<tbody>
<tr ng-repeat="link in links|filter:name">
<td>
{{link.name}}
</td>
<td>
{{link.category}}
</td>
</tr>
</tbody>
</table>
</body>
</html>
Controller code
var app = angular.module('app', []);
app.controller('main', function($scope) {
$scope.filters = { };
$scope.name='Fruit';
$scope.links = [
{name: 'Apple', category: 'Fruit'},
{name: 'Pear', category: 'Fruit'},
{name: 'Almond', category: 'Nut'},
{name: 'Mango', category: 'Fruit'},
{name: 'Cashew', category: 'Nut'}
];
});
For Angular prior to 1.3
Assign the results to a new variable (e.g. filtered) and access it:
<div ng-repeat="link in filtered = (links|filter:name)"></div>
For Angular 1.3+
Use an alias expression (Docs: Angular 1.3.0: ngRepeat, scroll down to the Arguments section):
<div ng-repeat="link in links|filter:name as filtered"></div>
| |
doc_23532811
|
I am trying to write a method that reads values from the first element in each row of a two-dimensional array, determines if the element is a numeric type, and sets the element of the numericArray equal to 0 if it is not a numeric type, and equal to 1 if it is a numeric type.
The code works correctly in determining whether it is a numeric type or not, but does not assign the correct values to the numericArray elements.
As you can see, the 0, 1, and 5 elements of the numericArray should be 0, while the 2, 3, and 4 elements should be 1. But that is not what I am getting.
void DataFrame::isNumeric() {
string str11;
for (int aa = 0; aa < noCols ; aa ++) {
str11 = data[0][aa];
for (int ab = 0; ab < 1; ab++) {
if (isdigit(str11[ab]) == 0) {
cout << "Is digit: " << isdigit(str11[ab]) << endl;
numericArray[ab] = 0;
}
else {
cout << "Is digit: " << isdigit(str11[ab]) << endl;
numericArray[ab] = 1;
}
}
}
for (int i = 0; i < noCols; i++) {
cout << "numeric[" <<i<< "] " << numericArray[i] << endl;
}
}
The output I get is:
Is digit: 0
Is digit: 0
Is digit: 4
Is digit: 4
Is digit: 4
Is digit: 0
So the numericArray should be [0,0,1,1,1,0], but am getting
[0,-572662307,-1707333723, 41882, 14172912, 14142640]
A: for (int ab = 0; ab < 1; ab++) {
This loop iterates exactly once, with ab initialized to 0. On the next iteration ab gets incremented exactly once, and since ab < 1 is now false, that's it. End result: ab is always 0.
numericArray[ab] = ...
The shown code only initializes numericArray[ab], and nothing else. As we've just discovered, ab is always 0.
Therefore, after all is said and done, the shown code only initializes numericArray[0]. All other values in numericArray are left as uninitialized random garbage. Perhaps both assignments should be to numericArray[aa] instead, but this can't be stated authoritatively; additionally, the inner loop's purpose is unclear, since it doesn't really accomplish anything by iterating in this fashion. In any case, this is the reason for the observed results.
| |
doc_23532812
|
Scenario
I have a trusted federation server that is used for single-sign-on for a number of microsites. Each microsite has a specific purpose. E.g. a Product Portal (used for the creation and management of products) and a Billing Portal (used for the creation, viewing and payment of invoices).
The system is used by many people: Administrators that have full reign over the whole system. The internal finance department is only concerned with the billing portal. Likewise the internal product team is only interested in the product portal. And finally a customer, that needs to access both portals but without any backend permissions.
Federated authentication
As I understand it, when the user successfully authenticates with the federation server, the federation server provides the requesting microsite with claims about that user's identity. Such claims might include :
*
*The user email address : bob@example.com
*The user name : Bob
*The user type : Product team
*Anything else that relates to the users identity
It does not provide anything that relates to what the person can do (this might be my first misconception). Note that the user type is effectively a claim to a role. Is this terminology correct? Is a claim in this context different from a permission claim?
Microsite authorisation
Once the user is authenticated, the microsite needs to know what that user is allowed to do. Whilst the site is handed a user type (which is a claim to a role), I would prefer to use a claims based approach. This would give finer granularity on permissions. E.g.:
*
*Bob should be able to edit prices but not create products.
*Bill (also in the products team) should only be able to add products.
*Ben should only be allowed to delete products.
*Alice (finance) should have no permissions.
Using claims, as opposed to roles, gives the microsites extra flexibility to grant specific permissions.
The question(s)
*
*Where should these claims to allowed actions be stored?
*Should each microsite provide its own federation-identity-to-microsite claim mapping service?
*Should each microsite cache these claims? What if Bob moves from the Products team to the Finance team?
The reason I'm asking is because we've had a debate in our development department and our most senior developer is of the opinion that the federation server should provide all the claims that each microsite needs. This seems to me like it would tightly couple the federation service to each microsite.
A: So... there are several pieces at work here. Let's get to work.
First, a Central Authentication Service is a great way to authenticate Users in a distributed system scenario, but might not be the best place to handle authorization for all its connected microsites and/or microservices.
Claims-based authentication
To make it clear, in the CAS (Central Authentication Service) is where you add all relevant claims, be them roles, flags, and any other information you might need later on.
But here is where I would advice you to get familiar with Resource based authorization too (if you haven't already).
Resource-based authorization
The concept here is to determine access to parts of your systems (microsites/microservices) and to specific functionality by asking the Claims Principal whether the User has access to a Resource instead of a Role.
Now, here is where people usually get lost in translation. Roles are represented as Claims; Resources are also represented as Claims.
To figure out access to the Product Portal, you can check whether the ProductPortal Claim exists (it could be named whatever you like). The important part is that you're not checking whether the User has an Admin Role or not, but rather checking against the presence of a Resource Claim. So, when you decide down the road that not only Users with an Admin Role should access the portal, you could add the ProductPortal Resource Claim to any other users you wish to bestow it upon, based on any criteria you need, e.g.: Roles, Flags, et al.
But, this represents a problem down the road. If you add a claim for every resource you'll need in every microsite/microservice in your distributed system, you'll end up with a huge load of claims. Not only that, but since some resources would be relevant on some microsite/microservice yet not in others, you're carrying around a loaded bag where only a handful of claims are relevant to a given microsite/microservice at any given time.
Fret not however, as there is an elegant solution for that too.
Claims Transformation
This is a process which you would usually register as a middleware or filter to intercept every request, read the claims (and hence its roles and flags) and enrich the incoming Claims Principal with the Resource claims relevant for the microsite/microservice you're entering.
This way, you're not carrying a huge bag of claims around, and on top of that you get to decouple Authentication from Authorization, pushing the logic that translates normal claims into Resource claims to the system that knows best about the Resources it would need to control access and permissions for sections and functionalities down to a granular level if you wish it so.
To the point
*
*Where should these claims to allowed actions be stored ?
In the Claims Principal, which you can be passing around as an Access Token, a Session Variable or what not
*
*Should each microsite provide its own federation-identity-to-microsite claim mapping service?
The Claims Transformation (or Claims Mapping) would reside in each microsite; the place that knows the most about the Resources that would map its functionality surface best.
*
*Should each microsite cache these claims? What if Bob moves from the Products team to the Finance team?
No, no caching. The Access Token (or session) you pass around would hold the bare minimums in the form of claims. Since the enrichment of claims happens in the Claims Transformation process in the microsite, the moment Bob moves from Products team to the Finance team, he would lose some Role Claims and get others, which would properly be translated to the Resource Claims relevant for the Finance microsite at the time of accessing it and give him the access he needs to the sections and functionality you deem appropriate.
Hope I didn't make it too confusing.
| |
doc_23532813
|
Here is my Table Structure,
SQL Fiddle
I want to display as
*alpha london
*alpha newyork
*beta delhi
*beta sydney
I mean, the second column (name) should be in ascending order and the third column (place) should be in descending order.
What I have tried so far is:
<?php
include ('conn.php');
$sql="SELECT * FROM test";
$result=mysql_query($sql);
$rows=mysql_fetch_array($result);
while($rows=mysql_fetch_array($result))
{
foreach($result as $k=>$v)
{
echo $k;
}
}
?>
It displays the error "Invalid argument supplied for foreach()". What is the mistake I am doing, and how can I achieve my output?
A: To fix the sorting, just add ORDER BY name, place
Then there's several more issues preventing this from working:
*
*You shouldn't call mysql_fetch_array outside the loop (this would discard the first row, alpha london in your example).
*You need to iterate over $rows, not $result (this is where the "invalid argument" error is coming from).
*You're not echoing the value; only the key. So you wouldn't be displaying the name and place at all; only the words "name" and "place".
You might want to do something like this to fix these:
<?php
include('conn.php');
$sql = "SELECT * FROM test ORDER BY name, place";
$result = mysql_query($sql);
while ($rows = mysql_fetch_array($result)) {
foreach ($rows as $k => $v) {
echo "$k is $v. ";
}
echo "<br/>";
}
?>
A: So why don't you do an order in your query itself like
SELECT * FROM test order by name;
A: You can do this only modifying query.
Use this query:
SELECT * FROM test ORDER BY name ASC, place DESC
A: For your error, you are using the wrong variable in your foreach.
Replace this -
foreach($result as $k=>$v)
{
echo $k;
}
with this -
foreach($rows as $k=>$v)
{
echo $k;
}
But why are you using a foreach inside the while loop? You can get your values like -
while($rows = mysql_fetch_array($result)) {
echo $rows['name'].' '.$rows['place'].'<br/>';
}
And don't use the mysql_* functions; use mysqli_* instead.
And for your expected output, try using following query -
SELECT * FROM `test` order by name asc, place desc
A: This is the kind of thing you can fix using array_multisort.
Try the below example (which I have not tested yet, but ought to work)
<?php
include ('conn.php');
$sql="SELECT * FROM test";
$result=mysql_query($sql);
$totals = array();
$names = array();
$places = array();
while($row=mysql_fetch_array($result)){
    $totals[] = $row;
    $names[] = $row['name'];
    $places[] = $row['place'];
}
array_multisort( $names, SORT_ASC, $places, SORT_DESC, $totals );
// now you can use $totals, which is sorted as you want
print_r( $totals );
| |
doc_23532814
|
On my website, I have a logo centered at the top of the site. I want to add some text to the left side of the website INLINE with the image without moving the image position.
LOGO
LOGO
LOGO
Text here LOGO
|------------------------------| (Page width)
The text CANNOT be above or below the logo line because it messes up the rest of the page.
EDIT: I decided to remove "What I have now" because it is confusing out of context.
So how would you tackle this scenario?
A: Sorry, but is this what you want? :) http://jsfiddle.net/ex65xetj/
<div class="wide">
<img src="path/to/image">
<span>Text here</span>
</div>
Simply wrap the logo section in a wrapper.
Make it position: relative (so children's absolute positioning is relative to this element and not the page).
Then position the text absolutely.
Set the left and bottom position values of the text to 0, so it sticks to the bottom left.
A: <div class="logo-div">
<img src="" alt="logo"/>
<div class="text-block">Your text goes here</div>
</div>
css
.logo-div{
border:1px solid #ccc;
padding:40px;
text-align:center;
background-color:#f6f6f6;
}
.text-block{
top:70px;
left:190px;
position:absolute;
}
working fiddle: http://jsfiddle.net/raghuchandrasorab/jy9n4hp2/
| |
doc_23532815
|
I have a lot of issues with the front-end cache.
Here is the list of all the modules and/or caching systems used in the server :
*
*Varnish : Now disabled,
*Redis : Only in module, not in server,
*Cache Expiration Module,
*OpCache seems activated, but only for 1-minute caching according to the sysadmin,
*MemCache activated also,
What I did try: clearing all Drupal cache using drush cc all.
I even tried verbose, and it just returned 2 lines saying some temp folder was being rewritten, and of course a success response.
I asked the sysadmin to reload/reboot Apache to see if anything was on the server side, but the more I dig the more I think it is the Drupal cache which is not being cleared like it should be.
Result of php drush.php -v cc all:
Executing mysql --defaults-extra-file=/tmp/drush_HSxa1g --database=[DB]--host=[HOST] --port=[PORT]--silent < /tmp/drush_jis8o8
Executing: mysql --defaults-extra-file=/tmp/drush_BFZzky --database=[DB]--host=[HOST] --port=[PORT]--silent < /tmp/drush_99YRFp
'all' cache was cleared. [success]
Command dispatch complete [notice]
I update any template file, let's say node--[content-type].tpl.php,
I deploy, and clear the cache, either using the back-office button or the drush cc, (usually both).
The page displays with the new code, but whenever I just refresh it, it goes into a random display, sometimes the old one, sometimes the new one,
Some pages display node template version that are one or two week old.
For example, everything that is not being visited by any anonymous user seems to not be caching; XML feeds, for instance, do get renewed and are not cached.
| |
doc_23532816
|
Each object in cart array have (id, item_id, quantity, name, price).
But when my map function finds that specific cart item, instead of changing only the quantity of that object, it replaces the whole object with the quantity value.
Please tell me what I am doing wrong.
action.payload
{
id: "a6e1868f-e1bc-4180-abc6-328fdd8e922f",
quantity: 7
}
state.cart.map(cartItem => cartItem.id === action.payload.id ? cartItem.quantity = action.payload.quantity : cartItem)
A:
state.cart.map(cartItem => cartItem.id === action.payload.id ?
cartItem.quantity = action.payload.quantity : cartItem)
When you return some element, say x, at some index from the map callback, the corresponding element that was being mapped is replaced with this x; that is how map works.
You are returning this:
cartItem.quantity = action.payload.quantity // btw don't mutate state items like this
let x;
console.log(x=9); // logs 9
console.log([1,2].map(y=>y=9)) // logs [9,9]
You want immutable approach since you are iterating state items:
let newVal=99;
let res = [{
a: 1,
b: 2
}].map(cartItem =>
cartItem.a === 1 ? {
...cartItem,
quantity: newVal
} : cartItem
);
console.log(res)
A: You can use spread operator to copy all the properties of item object, and then, you can replace quantity with action.payload.quantity
let cartItem = {
id: "a6e1868f-e1bc-4180-abc6-328fdd8e922f",
quantity: 5
}
let action = {
payload: {
id: "a6e1868f-e1bc-4180-abc6-328fdd8e922f",
quantity: 7
}
}
let cart = [cartItem]
let result = cart.map(item=>{
if(item.id === action.payload.id){
return {...item, quantity: action.payload.quantity};
} else{
return item;
}
})
console.log(result)
A: Let's look at how the map function works:
map takes an array and goes through it, using the function you define to map every element of the 1st array to the 2nd (new) array.
You return just a number if the ids are the same, so the new element of the new array will be just a number:
|are id's same| ? YEP return number : NOPE return <object>
It should be:
|are id's same| ? YEP return <object> : NOPE return <object>
Fix of original code:
state.cart.map(cartItem => cartItem.id === action.payload.id ? { ...cartItem, quantity: action.payload.quantity} : cartItem)
A: The main issue is that you are using arrow functions "without a body" (in other words, without using {}). This means that your arrow function is implicitly returning the resulting value of any expression succeeding the => symbol. In this case it will be action.payload.quantity or cartItem depending if the item ID is matched or not.
Additionally, I assume that you are using the result of the map operation (a new array with the returned items) to update your state.
If you are using Redux Toolkit or something else to ensure immutability (I assume you do because your code however is not respecting immutability as inner items are being returned as references and not as new values), you could replace your map function with a for loop or other loop structure (like Array.prototype.find) that allows you to interrupt the loop after finding the item you was looking for.
If you want to fix your code and leave it as it is (not recommended), you can do as follows:
state.cart.map(cartItem => {
if (cartItem.id === action.payload.id) {
cartItem.quantity = action.payload.quantity;
}
return cartItem;
});
A: There are some problems in the previous code:
previous: state.cart.map(cartItem => cartItem.id === action.payload.id ? cartItem.quantity = action.payload.quantity : cartItem)
You can check this corrected version:
correct: state.cart.map(cartItem => cartItem.id === action.payload.id ? {...cartItem,quantity:action.payload.quantity} : cartItem)
| |
doc_23532817
|
define('charts/bar', ['charts/base','constants'], (BaseModel, Constants) ->
class BarChart extends BaseModel
...
We are looking to move to TypeScript so we can make use of refactoring and all the other goodness that comes with it. The problem that we are running into is that we have not been able to figure out how to compile all of our TypeScript files into a single file.
For example, assume the following files:
*
*charts/
*
*bar.ts
*line.ts
*base.ts
*users/
*
*profile.ts
*security.ts
*friends.ts
*data/
*
*geo.ts
*population.ts
*voting.ts
With all of these files, we want to put them into a single file called utils.js and reuse this js file across multiple pages. One page might only use the users/security class, while the next page might use the data/geo class along with some of the various chart classes.
How can we construct this using requirejs?
Note: One thing to keep in mind is that we don't have a single entry point to this class. It's purpose is solely to be used for helper classes and base implementations that we can extend in other classes.
A:
How can we construct this using requirejs?
Use import/require and compile with option --module amd.
More
https://basarat.gitbooks.io/typescript/content/docs/project/modules.html
PS: highly recommend moving to commonjs. But that is opinionated
| |
doc_23532818
|
using var channel = Grpc.Net.Client.GrpcChannel.ForAddress("path to service");
//...do stuff...
await channel.ShutdownAsync();
I'm wrapping this in a .NET dependency injected service using IServiceCollection, should this be...
*
*Per-method - create and dispose the channel inside each method call.
*AddTransient - a new channel every time my service is requested, but only as long as it's needed.
*AddScoped - a new channel for each request, but keeping the channel open until the request is done.
*AddSingleton - a single new channel for the app.
I think AddSingleton is out as I'm not sure how one GrpcChannel will handle lots of parallel requests at the same time and I'd like to pass the CancellationToken for the current request.
Which puts the choice between AddScoped vs AddTransient vs per-method. Without a load of testing (and stumbling into all the pitfalls) I'm not sure what the best practice here is (I'm new to gRPC). Should I be closing the channel as soon as possible, or keeping it open and sharing it between calls?
A: According to Microsoft
https://learn.microsoft.com/en-us/aspnet/core/grpc/client?view=aspnetcore-5.0#client-performance :
A channel represents a long-lived connection to a gRPC service.
and
Channel and client performance and usage:
*
*Creating a channel can be an expensive operation. Reusing a channel for gRPC calls provides performance benefits.
*gRPC clients are created with channels. gRPC clients are lightweight objects and don't need to be cached or reused.
*Multiple gRPC clients can be created from a channel, including different types of clients.
*A channel and clients created from the channel can safely be used by multiple threads.
*Clients created from the channel can make multiple simultaneous calls.
According to grpc https://grpc.github.io/grpc/csharp-dotnet/api/Grpc.Net.Client.GrpcChannel.html :
Class GrpcChannel Represents a gRPC channel. Channels are an
abstraction of long-lived connections to remote servers. Client
objects can reuse the same channel. Creating a channel is an expensive
operation compared to invoking a remote call so in general you should
reuse a single channel for as many calls as possible.
As always, it depends on what you are trying to do, but probably only single channel should be created by a Singleton. Also, if you really need to handle a heavy load, try using SocketsHttpHandler.EnableMultipleHttp2Connections (https://learn.microsoft.com/en-us/aspnet/core/grpc/performance?view=aspnetcore-5.0#connection-concurrency)
A: gRPC clients should be registered with the GrpcClientFactory
builder.Services.AddGrpcClient<Greeter.GreeterClient>(o =>
{
o.Address = new Uri("https://localhost:5001");
});
(Surprisingly, this registers them as transient)
See:
https://learn.microsoft.com/en-us/aspnet/core/grpc/clientfactory?view=aspnetcore-6.0
| |
doc_23532819
|
Then I referenced this dll I made in another project and received this error message:
error C1083: Cannot open include file: 'openssl\ssl.h': No such file or directory
this .h file is used inside the dll, I would think that by referencing the dll I should not have to include this file directly...
Shouldn't a dll have all the files needed for its purpose "inside it"?
A:
Shouldn't a dll have all the files needed for its purpose "inside it"?
No. A DLL contains machine code.
The main difference between .c and .h files is that .c files contain Code and .h files contain Headers (i.e. that's what they're supposed to contain, although they're not bound to). You need those header files in order for the compiler to know what to look for in the DLL. After your program has been compiled and linked, the header files are not required anymore.
That's why the authors of libraries written in C or C++ which are not open source usually provide precompiled binaries as well as header files.
A file format containing machine code and headers would be possible, but to my knowledge, no such format exists, and it would be really bad if it did, because for a lot of programs that would mean huge executable files.
A: No, because:
*
*A .dll is a compiled, binary file that can be dynamically loaded at runtime by .exe programs.
*A .h (or .hpp) file contains source code definitions of function prototypes or data structures for your C/C++ program, which are used during compilation.
To compile your source code, you'd need to:
*
*#include the header file(s) so that the rest of your code knows what the data structures and function signatures stored in the DLL look like.
*Link with the .lib or .a file equivalent of the .dll file.
If all goes well, then the .exe file generated by the compilation process will be able to dynamically load in and use the (already compiled) functions stored in the .dll file.
| |
doc_23532820
|
*
*words(word_id, value);
*word_map(sno(auto_inc), wm_id, service_id, word_id, base_id, root_id);
*
*in which sno is auto incremented just for indexing.
*wm_id is the actual id which are unique for each service like
(serviceid, wm_id together form a unique key).
*base_id and root_id are referenced to wm_id i.e., I store the values of respective wm_id of new word being inserted.
My Requirement now is I want to delete the records from this table where, a words's base_id or root_id does not exists in the table
For example,
A new word with tr_id = 4, its base_id = 2 and root_id = 1 then There must two other records with tr_id s 2 and 1 if not we can call it as an orphan and that record with wm_id = 4 must be deleted, then records with other wm_ids having this 4 as base_id or root_id must also be deleted as they r also now orphans if 4 gets deleted and so on.
Can anybody suggest me the solution for the problem.
What I tried:
I tried write a procedure using while in which it has a query like,
delete from words_map where base_id not in (select wm_id from words_map) or root_id not in (select wm_id from words_map)
But deleting/ or updating on same table using this kind of nested queries is not possible, So I am searching for an alternate way.
What I doubt is :
*
*I thought of reading these wm_ids into an array, then reading them one by one and deleting based on that, but I don't think we have arrays in stored procedures.
*Is a cursor an alternative for this situation?
*or any other best solution for this problem.
EDIT 1: Please go through this http://sqlfiddle.com/#!2/a4b6f/15 for clear experimental data
Any and early help would be appreciated
| |
doc_23532821
|
incorrect code :
YAML::Parser parser(ifstream("items9.yml"));
correct code :
ifstream ifstr("items9.yml");
YAML::Parser parser(ifstr);
The person told me it should not have compiled; I'm using Visual C++ 10. Is this normal behaviour that I should be aware of, or is the library wrongly designed, or is Visual C++ wrongly accepting the code?
A: This is a known issue in VS, that (unlike the standard) allows binding of non-const references to rvalues. The same can be tested with this code:
struct test {};
test f() { return test(); }
int main() {
test & r = f(); // Should be an error
}
| |
doc_23532822
|
adb shell cat /sys/class/thermal/thermal_zone0/temp
adb shell cat /sys/class/thermal/thermal_zone1/temp
adb shell cat /sys/class/thermal/thermal_zone2/temp
...
adb shell cat /sys/class/thermal/thermal_zone20/temp
| |
doc_23532823
|
{
"access_token" : "Abca09zzzza2o2kelmzlli3ijlka",
"token_type" : "bearer",
"refresh_token" : "lkmLIAmooa898nm20jannnnnxaww",
  "expires_in" : 3600
}
But this is what the server actually gives:
{
"access_token": "uArVqRgpGKv98aNJpziSmTQiFaX2Ebrz",
"token_type": "bearer",
"expires_in": 86399
}
There is no refresh_token.
Here is what my controller looks like:
class DemoChannel extends ApplicationChannel {
ManagedContext context;
AuthServer authServer;
@override
Future prepare() async {
logger.onRecord.listen((rec) => print("$rec ${rec.error ?? ""} ${rec.stackTrace ?? ""}"));
final config = WordConfig(options.configurationFilePath);
final dataModel = ManagedDataModel.fromCurrentMirrorSystem();
final persistentStore = PostgreSQLPersistentStore.fromConnectionInfo(
config.database.username,
config.database.password,
config.database.host,
config.database.port,
config.database.databaseName);
context = ManagedContext(dataModel, persistentStore);
final authStorage = ManagedAuthDelegate<User>(context);
authServer = AuthServer(authStorage);
}
@override
Controller get entryPoint {
final router = Router();
router
.route('/register')
.link(() => RegisterController(context, authServer));
router
.route('/auth/token')
.link(() => AuthController(authServer));
router
.route('/words/[:id]')
.link(() => Authorizer.bearer(authServer))
.link(() => WordsController(context));
return router;
}
}
My AuthController is just the standard one that comes with Aqueduct. I didn't even see any parameters to adjust in the source code.
How do I make the server send back a refresh token?
A: It sounds like you are authenticating a public OAuth 2 client. By rule, a public client cannot have a refresh token. You must use a confidential client. A client is confidential when it has a secret. Use the --secret option when creating your client.
| |
doc_23532824
|
For example, if I make a jmeter test file as...
and the resulting .jtl file looks like this:
My question is
*
*As you can see from the jtl file, the thread number starts from 6, not 1. If the thread number is 6, why is the counted number of rows for thread '6' only 1? For all I know, each thread represents a virtual user, so I expect that if there are 6 threads, 6 requests will be sent.
*What's the difference between grpThreads and allThreads? I'm going to plot response time over active threads (virtual users), for example; which value should I use?
A: Both are numbers of currently active users
*
*grpThreads - for current Thread Group
*allThreads - for all Thread Groups (not applicable for your case because you have only one Thread Group)
the number of active threads starts from 6 due to your ramp-up settings, according to your configuration JMeter starts 20 threads in 5 seconds
Looking into the .jtl file doesn't give you the full picture. I'd recommend loading the file into a Listener, for example Active Threads Over Time, which is a good choice for tracking concurrency (it can be installed using the JMeter Plugins Manager)
Also you can generate HTML Reporting Dashboard which provides nice apdex tables and charts
| |
doc_23532825
|
Upload Image Code (Android)
mStorageRef = FirebaseStorage.getInstance().getReference();
mStorageRef.child("thumbnails")
.child("thumbnail")
.putFile(file)
.addOnSuccessListener(new OnSuccessListener<UploadTask.TaskSnapshot>() {
@Override
public void onSuccess(UploadTask.TaskSnapshot snapshot) {
//Successfully Got a Thumbnail URL
mThumbnailURL = snapshot.getMetadata().getDownloadUrl().toString();
//Upload link to firebase
FirebaseDatabase.getInstance().getReference()
.child("thumnail_links")
.push()
                         .setValue(mThumbnailURL);
}
});
How Firebase suggests downloading the Image
mStorageRef.child("thumbnails")
.child("thumbnail")
           .getDownloadUrl()
.addOnSuccessListener(new OnSuccessListener<Uri>() {
@Override
public void onSuccess(Uri uri) {
Picasso.with(MainActivity.this)
.load(uri.toString())
.into(mMyImageView);
}
});
How I want to download the image
FirebaseDatabase.getInstance().getReference()
.child("thumbnials")
.child(pushKey).addValueEventListener(new ValueEventListener() {
@Override
public void onDataChange(DataSnapshot dataSnapshot) {
if(dataSnapshot.exists()){
String url = (String) dataSnapshot.getValue();
Picasso.with(MainActivity.this)
.load(url)
.into(mMyImageView);
}
}
//...
});
Is there anything wrong with the approach I am using? For example, do the download URLs change frequently, etc.?
| |
doc_23532826
|
Maximum update depth exceeded. This can happen when a component
repeatedly calls setState inside componentWillUpdate or
componentDidUpdate. React limits the number of nested updates to
prevent infinite loops.
My code is:
class Login extends React.Component {
constructor(props){
super(props);
this.state = {
firstName: '',
password: '',
isLogged: false
}
this.handleSubmit = this.handleSubmit.bind(this);
this.handleChangeFirstName = this.handleChangeFirstName.bind(this);
this.handleChangePassword = this.handleChangePassword.bind(this);
}
handleChangeFirstName(event) {
this.setState({firstName: event.target.value})
}
handleChangePassword(event) {
this.setState({password: event.target.value})
}
handleSubmit(event) {
console.log('login');
console.log('first name:' ,this.state.firstName);
console.log('password:' ,this.state.password);
this.props.loginRequest(this.state.firstName, this.state.password);
// console.log('login',this.props)
}
componentDidUpdate() {
if(this.props.request.message === 'ok'){
console.log('ok');
this.setState({isLogged: true});
this.props.history.push('/');
//path is ok!!!
console.log('path from history: ', this.props.history);
}
}
render() {
return(
<div>
{this.state.isLogged ? <App />
:
<div className="container">
<div className="row">
<div className="col-sm-5 log-style">
<div className="log-title">
<h2>Admin Login</h2>
<p>Please enter you login details</p>
</div>
<div className="row p-2">
<input
type="text"
id="fname"
onChange={this.handleChangeFirstName}
className="form-control input-sm"
placeholder="First name"
style={{'border':'none'}} required/>
</div>
<div className="row p-2">
<input
type="password"
id="pass"
onChange={this.handleChangePassword}
className="form-control input-sm"
placeholder="Password"
style={{'border':'none'}} required/>
</div>
<div className="row" style={{'marginTop':'40px'}}>
<div className="col-sm-6" style={{'padding': '2px'}}>
<input type="checkbox"
style={{
'float': 'left',
'marginTop': '10px',
'marginLeft': '13px'
}} />
<label
style={{
'marginTop': '7px',
'marginLeft': '9px'
}}>Remember me </label>
</div>
<div className="col-sm-6"
style={{
'paddingTop': '7px',
'textAlign': 'right'
}}>
<a href="#"
style={{
'color': 'purple',
'fontSize': '13px'
}}>
Forgot password?</a>
</div>
</div>
<div className="row" style={{'justifyContent': 'center'}}>
<div className="btn btn-sm borderBtn"
style={{
'backgroundColor':'purple'}}
onClick={() => this.handleSubmit(event)}>
<span className="fi-account-login">Login</span>
</div>
</div>
</div>
</div>
</div>
}
</div>
)
}
}
const mapStateToProps = state => (
{ user: state.userReducer.user, request: state.userReducer.request }
);
const mapDispatchToProps = dispatch =>
bindActionCreators({loginRequest}, dispatch);
export default withRouter(connect(mapStateToProps, mapDispatchToProps)(Login));
I have another component which uses Router to switch pages, and a mail component in which includes Login page.
A: The problem is that every time your component updates, you are updating it again without any kind of break condition. This creates an infinite loop. You should check that this.state.isLogged is NOT true before updating it here:
componentDidUpdate() {
if(this.props.request.message === 'ok'){
console.log('ok');
if (!this.state.isLogged) {
this.setState({
isLogged: true
});
}
this.props.history.push('/');
//path is ok!!!
console.log('path from history: ', this.props.history);
}
}
This prevents updating the isLogged state when it's already true.
| |
doc_23532827
|
Unfortunately, over the years people went a little mad with it all, and now we have Sass with 15 levels of specificity. This has meant our compiled, compressed CSS is enormous.
As we are about to embark on a complete rebuild of our site in redux, management don't want to spend the resources to clean up our CSS which is understandable.
What I was wondering as a quick fix is there a plug in for gulp/webpack that could programmatically clean up the specificity where not needed as part of the build pipeline?
A: I've written something that should help with that problem :].
https://github.com/felixmosh/postcss-decrease-specificity
This PostCSS plugin reduces the number of descendant class selectors that are nested more than options.depth (defaults to 3).
This plugin supports several scenarios:
*
*.a .b .c .d -> .b .c .d
*tag .a .b .c .d -> tag .b .c .d
*#id .a .b .c .d -> #id .b .c .d
*.a .b > .c -> .a .b > .c
*.a .b .c .d .e > .f -> .c .d .e > .f
For more supported cases checkout the tests.
I know that this plugin is not covering all the cases, and it is not perfect, but with PR's we can make it more suitable.
⚠️ Use this plugin with caution, it may break your design.
| |
doc_23532828
|
[tag1=val1] [tag2=val2] [tag3=val3]
to an array:
tag1=val1
tag2=val2
tag3=val3
...
n
Here is the regex, which works on web regex-testing services:
\[([^\[\]]*)\]
but when I try to use it in Java I get an empty result:
"[tag1=val1] [tag2=val2] [tag3=val3]".split("\\[([^\\[\\]]*)\\]"); // empty
A: Your current regex works well if you match the strings using Matcher#find():
String s = "[tag1=val1] [tag2=val2] [tag3=val3]";
Pattern p = Pattern.compile("\\[([^\\[\\]]*)]");
Matcher m = p.matcher(s);
List<String> results = new ArrayList<>();
while(m.find()) {
results.add(m.group(1)); // Get Group 1 value only (note you may trim it here)
}
System.out.println(results); // => [tag1=val1, tag2=val2, tag3=val3]
See the online Java demo.
The pattern works indeed:
*
*\[ - matches a [
*([^\[\]]*) - is a capturing group with ID 1 and matches 0+ chars other than [ and ] - note that [ and ] inside a character class in Java regex must be escaped
*] - a literal ] char (does not have to be escaped outside a character class).
A: If you want to split you'd need to split on the ] [ between tags (that's your delimiter) and strip the leading and trailing bracket. Assuming your list always is in the form you posted, you can try this:
String input = "[tag1=val1] [tag2=val2] [tag3=val3]";
String[] tags = input.substring( 1, input.length() - 1 ).split( "\\]\\s*\\[" );
Here substring() is used to strip the first and last character on the assumption that those are [ and ]. Then the result is split on ] [ with any amount of whitespace between the brackets.
However, it might be better to just use your regex and look for the matches between brackets in a loop, as the others have already suggested. Assuming you might want to get tag name and values separately you could use the following expression: \[(.*?)=(.*?)\]. That would match as little as possible and collect the tag name into group 1 of the match and the value into group 2.
If you want to make that expression even safer, you could disallow brackets and the equal sign similar your expression, i.e. \[([^\[\]=]*)=([^\[\]=]*)\]
Short breakdown:
*
*[^\[\]=] is a negative character class matching anything but [ , ] and =
*([^\[\]=]*) captures a match of arbitrary length of the class above into a group
*\[([^\[\]=]*)=([^\[\]=]*)\] matches two groups of your class (see above) separated by = and enclosed by [ and ]
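To make the two-group idea above concrete, here is a self-contained sketch (the class and method names are my own, not from the question) that collects the tag names and values into a map:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TagParser {
    // Matches one "[name=value]" pair; group 1 is the name, group 2 the value.
    private static final Pattern TAG = Pattern.compile("\\[([^\\[\\]=]*)=([^\\[\\]=]*)]");

    public static Map<String, String> parse(String input) {
        Map<String, String> tags = new LinkedHashMap<>(); // preserves tag order
        Matcher m = TAG.matcher(input);
        while (m.find()) {
            tags.put(m.group(1), m.group(2));
        }
        return tags;
    }

    public static void main(String[] args) {
        System.out.println(parse("[tag1=val1] [tag2=val2] [tag3=val3]"));
        // {tag1=val1, tag2=val2, tag3=val3}
    }
}
```

Using a map instead of an array gives you direct lookup by tag name, which is usually what you want such key/value pairs for anyway.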
A: As you want to get the result as a list of Strings the use of String.split seems to be a good solution. Try the following regex:
"[tag1=val1] [tag2=val2] [tag3=val3]".split("(?:\\] *)?\\[|\\]$");
| |
doc_23532829
|
Thanks in advance.
A: An extremely easy way to do this is by setting an id and tag on the parent layout; in your onCreate(), you can then findViewById() and getTag().
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:id="@+id/my_activity_view"
android:tag="big_screen" >
// misc views~
</RelativeLayout>
And then, in your onCreate method,
if(findViewById(R.id.my_activity_view).getTag().equals("big_screen")) {
// do stuff
}
A: The following code will help you. I just printed the corresponding screen-size or density category; you can do whatever you want!
//Determine screen size
if ((getResources().getConfiguration().screenLayout &Configuration.SCREENLAYOUT_SIZE_MASK) == Configuration.SCREENLAYOUT_SIZE_LARGE)
{
Log.d("Screen Size: ", "LARGE");
}
else if ((getResources().getConfiguration().screenLayout & Configuration.SCREENLAYOUT_SIZE_MASK) == Configuration.SCREENLAYOUT_SIZE_NORMAL) {
Log.d("Screen Size: ", "NORMAL");
}
else if ((getResources().getConfiguration().screenLayout & Configuration.SCREENLAYOUT_SIZE_MASK) == Configuration.SCREENLAYOUT_SIZE_SMALL) {
Log.d("Screen Size: ", "SMALL");
}
else if ((getResources().getConfiguration().screenLayout & Configuration.SCREENLAYOUT_SIZE_MASK) == Configuration.SCREENLAYOUT_SIZE_XLARGE) {
Log.d("Screen Size: ", "XLARGE");
}
else {
Log.d("Screen Size: ","UNKNOWN_CATEGORY_SCREEN_SIZE");
}
//Determine density
DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(metrics);
int density = metrics.densityDpi;
if (density==DisplayMetrics.DENSITY_HIGH) {
Log.d("Screen Density: ","HIGH");
}
else if (density==DisplayMetrics.DENSITY_MEDIUM) {
Log.d("Screen Density: ","MEDIUM");
}
else if (density==DisplayMetrics.DENSITY_LOW) {
Log.d("Screen Density: ","LOW");
}
else if (density==DisplayMetrics.DENSITY_XHIGH) {
Log.d("Screen Density: ","XHIGH");
}
else if (density==DisplayMetrics.DENSITY_XXHIGH) {
Log.d("Screen Density: ","XXHIGH");
}
else {
Log.d("Screen Density: ","UNKNOWN_CATEGORY");
}
A: I had the same problem. If you're using layout/layout-sw600dp/layout-sw720dp, the following ended up working for me:
Configuration config = activity.getResources().getConfiguration();
if (config.smallestScreenWidthDp >= 720) {
// sw720dp code goes here
}
else if (config.smallestScreenWidthDp >= 600) {
// sw600dp code goes here
}
else {
// fall-back code goes here
}
A: I was searching for the same thing! I found 2 similar questions/answers:
Getting ScreenLayout
Using ScreenLayout Bitmask
While all three questions are essentially the same, you really need both answers to get your result.
I used this code to get the layout size inside my Activity:
int layoutSize = getResources().getConfiguration().screenLayout;
layoutSize = layoutSize & Configuration.SCREENLAYOUT_SIZE_MASK;
The first line returns the screenLayout bit field, which contains more than just the size bits.
The second line ANDs the layout value with the given size mask to extract only the screen-size bits. The mask has a value of 15 in decimal, or F in hex.
The value returned is between 0 and 3. The values corresponding to screen sizes, as well as the value of the size mask, are defined as constants in the android.content.res.Configuration class: Android Configuration Documentation.
All you need to do after the previous 2 lines is to have some kind of a switch statement comparing the returned layoutSize with Configuration's pre-defined values for SCREENLAYOUT_SIZE_XXX.
Hope this is not too late for you to use it. My problem was that I did not know how to ask the question in exact Android language. That is probably why the same question was, and probably should be, asked so many different ways.
A: That's an old question, but I would like to share my way of solving this issue.
Create a values-* variant matching each setup you wish to confirm programmatically, inside which you'll define the proper values describing the current configuration:
As you can see, the boolean value is_landscape exists both in values and values-land, therefore it is possible to safely access it via R.bool.is_landscape.
The only difference, though, is the value:
getResources().getBoolean(R.bool.is_landscape); evaluates to true in landscape mode, and false otherwise.
Note: of course, it is possible to use getResources().getConfiguration().orientation in order to get the current device orientation, though I chose this simple, familiar case to make my example clear and straightforward.
It is important to understand that this technique is versatile: one can define a more complex values resource directory (e.g. values-ldrtl-sw600dp-v23) to support a more specific configuration.
A: ViewGroup view = (ViewGroup)getWindow().getDecorView();
LinearLayout content = (LinearLayout)view.getChildAt(0);
int id = content.getId();
The id tells you which layout you're using (assuming you set the id tag in the XML).
| |
doc_23532830
|
However I can't get my code to work for any / all namespaces.
Listening to a defined namespace works with the following code:
@Operator(
informer = Informer(
apiType = V1Pod::class,
apiListType = V1PodList::class,
resourcePlural = "pods",
namespace = "my_namespace"
)
) //
class NetpolReconciler : ResourceReconciler<V1Pod> {
override fun reconcile(
request: Request,
lister: OperatorResourceLister<V1Pod>,
): Result {
...
Setting namespace to Informer#ALL_NAMESPACES or "" results in a NullPointerException when receiving a request:
09:30:43.942 [OperatorV1Pod-controller-12] [traceId:,spanId:] ERROR i.k.c.e.controller.DefaultController - Reconciler aborted unexpectedly
java.lang.NullPointerException: Cannot invoke "io.kubernetes.client.informer.SharedIndexInformer.getIndexer()" because "sharedIndexInformer" is null
at io.micronaut.kubernetes.client.operator.OperatorResourceLister.get(OperatorResourceLister.java:66)
at operator.NetpolReconciler.reconcile(NetpolReconciler.kt:31)
at io.micronaut.kubernetes.client.operator.controller.DefaultControllerBuilder.lambda$build$1(DefaultControllerBuilder.java:108)
at io.kubernetes.client.extended.controller.DefaultController.worker(DefaultController.java:207)
at io.kubernetes.client.extended.controller.DefaultController.lambda$run$1(DefaultController.java:154)
Can this even be achieved with micronaut-kubernetes-operator?
| |
doc_23532831
|
ReferenceError: DataTypes is not defined
Here's my code
var Sequelize = require('sequelize');
var sequelize = new Sequelize('uppersphere', '****', '***', {
logging: false
});
...
var Peak = sequelize.define('peak', {
id: {
type: DataTypes.UUID,
defaultValue: DataTypes.UUIDV1,
primaryKey: true
},
Here's the documentation
sequelize.define('model', {
uuid: {
type: DataTypes.UUID,
defaultValue: DataTypes.UUIDV1,
primaryKey: true
}
})
The obvious answer is that it's not my code, but there's some require() that I need. However, I don't see any documentation on what to require to get DataTypes.
A: The Data Types can be accessed via the Sequelize object:
var Sequelize = require('sequelize');
var Peak = sequelize.define('peak', {
id: {
type: Sequelize.UUID,
defaultValue: Sequelize.UUIDV1,
primaryKey: true
},
DataTypes is just a convenience class which you can import directly if needed:
var DataTypes = require('sequelize/lib/data-types');
Also, the model files are imported with DataTypes as a second argument
| |
doc_23532832
|
This works for all classes that extend the Archiveable class directly. Classes that instead extend an entity subclass of Archiveable (rather than the mapped superclass directly) do not inherit the customizer. See my example below:
For what it's worth, I am using EclipseLink 2.5.1.
/**
* Base Abstract class that implements archiving functionality
*/
@MappedSuperclass
@Customizer(ArchiveableCustomizer.class)
public abstract class Archiveable {
@NotNull
@Column(name = "DELETED")
private Boolean archived = false;
// omitted: getters/setters
}
/**
* The EclipseLink customizer implementation (followed their docs)
*/
public class ArchiveableCustomizer extends DescriptorEventAdapter
implements DescriptorCustomizer {
@Override
public void customize(ClassDescriptor descriptor) throws Exception {
// omitted: override delete string
// this is the piece not working
descriptor.getQueryManager().setAdditionalCriteria("this.archived = false");
}
}
/**
* Base class for entities that are stored in the Catalog table. All are archived
*/
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "TYPE", discriminatorType = DiscriminatorType.STRING)
public abstract class Catalog extends Archiveable {
// omitted: shared columns, not relevant.
}
/**
* This class does <b>NOT</b> work
*/
@Entity
@DiscriminatorValue("CAR_TYPE")
public class CarType extends Catalog {
// omitted: class details, not relevant
}
/**
* Class that does not extend the Catalog class but the Archiveable
* class directly. This class can be queried correctly with the
* additional criteria
*/
@Entity
public class Car extends Archiveable {
// omitted: class details, not relevant
}
In summary, the class CarType does not pick up the ArchiveableCustomizer. The class Car does pick up the ArchiveableCustomizer.
A: Within the internals of EclipseLink, @MappedSuperclass classes do not have descriptors themselves, so all settings are pulled from them and applied to subclass entity descriptors - including customizers.
With inheritance, each entity in the hierarchy has its own descriptor, and while mappings and other settings are inherited, descriptor customizers are only executed on the descriptor they are assigned to - the root in this case. Also note that some changes in the root entity will be seen in subclasses; inheritance graphs all use the same cache, for instance, and changes to common mappings might be visible on child descriptors, though I'm unsure of that point.
| |
doc_23532833
|
The Hugging Face datasets package advises using map() to process data in batches. In their example code on pretraining a masked language model, they use map() to tokenize all the data in one stroke before the training loop.
The corresponding code:
with accelerator.main_process_first():
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not args.overwrite_cache,
desc="Running tokenizer on every text in dataset"
)
2. The problem
Thus, when I try the same pretraining code with a much larger corpus, it takes quite a long time to tokenize.
Alternatively, we can choose to tokenize the data in the data collator. That way, the program only tokenizes one batch per training step and avoids getting stuck in up-front tokenization.
3. My question
As described above, my questions are:
*
*Which is better? Process in map() or in data-collator
*Why huggingface advises map() function? There should be some advantages to using map()
Thanks for your answers!
| |
doc_23532834
|
std::string s(size,0);
but it's just slightly wasteful; it's basically like using calloc() when all I need is malloc(). So the question is, how do I construct a string of X uninitialized bytes?
(Using reserve()+push is not an option because I'm giving the string to a C API taking (char*, size) to do the actual initialization.)
edit: this thread seems to about the same issue/related (but with vectors instead of strings): Value-Initialized Objects in C++11 and std::vector constructor
A: You can't do it with std::string. But it can be achieved different way, like using std::unique_ptr<char[]>.
auto pseudo_string = std::unique_ptr<char[]>(new char[size]);
Or if your compiler supports C++20
auto pseudo_string = std::make_unique_for_overwrite<char[]>(size);
| |
doc_23532835
|
OkHttpClient baseClient = unidentifiedAccess.isPresent() ? connectionHolder.getUnidentifiedClient() : connectionHolder.getClient();
OkHttpClient okHttpClient = baseClient.newBuilder()
.connectionSpecs(Util.immutableList(ConnectionSpec.MODERN_TLS, ConnectionSpec.CLEARTEXT//CLEARTEXT
))
.build();
The server is located on my macOS machine.
It runs a simple jar using the Dropwizard framework.
Running on:
http://asylzat.com:8080/v1/directory/test
What I am trying to do is serve it over https instead of http, because OkHttp won't work any other way.
I bought an SSL certificate from Reg.ru; they sent me two files:
*
*www.asylzat.com.key
*www.asylzat.com.csr
And there are instructions only for their hosting server and Apache.
What am i supposed to do?
This SSL certificate has turned into a huge, seemingly unsolvable problem for me. I have been trying to figure it out and solve it for nearly four days, trying to learn more about certificates.
dropwizard ssl
generating ssl certificate
No result. There is a lot of stuff - CSR, CRT, keystore, PKCS#12, JKS - and it goes on forever. I don't know if it is just that hard to configure; it is as if you have to learn all about it and be a kind of system administrator, but for SSL.
Anyone who knows, please help. I need to be able to run the Dropwizard jar on https (it is only a demo), just because Android's OkHttp accepts only https. Such a small thing became a huge problem for me. PLEASE HELP!!!
Reg.ru just sent me an email:
_globalsign-domain-verification=mcd5zjYtpb7PT...
I don't know what to do with it, I asked reg.ru but they answered that "your hosting server is not from ours, so that we can't help you".
In the .yml file a .keystore or .jks is used - how are they made?
A: Note: too long for a comment
Working with certificates and the related stuff requires a little bit of learning. Quote from the Jetty's documentation:
Configuring SSL can be a confusing experience of keys, certificates,
protocols and formats, thus it helps to have a reasonable
understanding of the basics.
Jetty's documentation tries to be helpful with this: Start from Understanding Cerificates and keys http://www.eclipse.org/jetty/documentation/current/configuring-ssl.html#understanding-certificates-and-keys
Next check Requesting a trusted certificate http://www.eclipse.org/jetty/documentation/current/configuring-ssl.html#requesting-trusted-certificate There you will learn how the private certificate key helps you to create a CSR (certificate signing request).
This CSR is processed by your certificate provider to create a certificate (file).
In Loading keys and certificates you will learn the basic steps and commands to process the above files into a format suitable for deployment with your Java application. Just read carefully.
A: Did you try to start the server with a self-signed certificate?
keytool -keystore keystore -alias jetty -genkey -keyalg RSA -sigalg SHA256withRSA
.yml file
server:
# softNofileLimit: 1000
# hardNofileLimit: 1000
applicationConnectors:
- type: http
port: 8080
- type: https
port: 8443
keyStorePath: example.keystore
keyStorePassword: example
| |
doc_23532836
|
https://codesandbox.io/s/silly-dubinsky-of2mu
Basically I have two routes with a sub-route each:
first/first and second/first. In the FirstFirst component there's a link to the second route:
<Link to="/second/first">go to second first</Link>
The problem is when clicking that link the url changes, but the correspondent component won't load. I can only see it after refreshing.
How can I fix this?
A: You only need one Router component in your application.
You also likely had infinite redirects due to the redirect being applied every render.
I removed the extra Router components, and then wrapped your First and Second components with a Switch component so that only one part of it would match. The Route would try to match, and if it failed, the redirect would occur.
https://codesandbox.io/s/trusting-wood-nbsg2?file=/src/Second.js
export default function Second() {
return (
<div>
<Switch>
<Route path="/second/first" component={SecondFirst} />
<Redirect
to={{
pathname: "/second/first"
}}
/>
</Switch>
</div>
);
}
A: I followed the link you shared, but it wasn't rendering anything on the browser. Anyways, I am guessing you are running this app on your local machine and using localhost:3000 or 127.0.0.1:3000 or any some another port.
If you configured your app with localhost, make sure you use localhost in the browser to view your app. If you used 127.0.0.1 in the configuration, you should use that URL in the browser. I know they both point to the local host, but somehow this can cause the problem you are having, especially with server-side rendered apps.
| |
doc_23532837
|
UnityAds.init(this, "xxxxxxx", null);
The initialization was successful and the log shows that the ad was downloaded.
Initializing Unity Ads version 1508 with gameId xxxxxxx
Requesting Unity Ads ad plan from https://xxxxxxx
Unity Ads initialized with 3 campaigns and 2 zones
Unity Ads cache: File /storage/xxxxxxx/yyyyyyy.mp4 of 1445875 bytes downloaded in 9102ms
I try to show the ad:
if (UnityAds.canShow()) {
UnityAds.show();
}
Then this error message appears:
Unity Ads cannot show ads: webapp not initialized
What am I missing?
A: The error is that the IUnityAdsListener (third initialization parameter) is required and cannot be null.
The fix is to add the listener to the init method like below:
UnityAds.init(this, "xxxxxxx", new IUnityAdsListener() {
@Override
public void onHide() {
}
@Override
public void onShow() {
}
@Override
public void onVideoStarted() {
}
@Override
public void onVideoCompleted(String s, boolean b) {
}
@Override
public void onFetchCompleted() {
}
@Override
public void onFetchFailed() {
}
});
| |
doc_23532838
|
I am thinking of something like http://err404.example.com/ resulting in an HTTP 404 error.
This could be quite handy for creating test cases for test-driven development of a download tool.
A: There are plenty of tools available to do this as part of your local development cycle, e.g. wiremock.
The benefit you'll get from this is that it's now under your control and not somebody else's.
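If you'd rather not depend on an external site at all, a stub of this kind is only a few lines with the JDK's built-in com.sun.net.httpserver (a sketch; the port number 8404 is an arbitrary choice of mine):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;

public class Err404Server {
    // Starts a local server whose every response is a 404, for exercising
    // a download tool's error handling.
    static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "Not Found".getBytes();
            exchange.sendResponseHeaders(404, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        start(8404); // every URL under http://localhost:8404/ now answers 404
    }
}
```

The same pattern extends to any other status code you want a test case for (500, 503, redirects, etc.) by adding more contexts or ports.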
| |
doc_23532839
|
my __init__.py file:
from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ('celery_app',)
my celery.py file:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'new_todo_app.settings')
app = Celery('new_todo_app')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
print('Request: {0!r}'.format(self.request))
my tasks.py file:
from celery import Celery
from celery import shared_task
app = Celery('tasks', broker='pyamqp://guest@localhost//')
@shared_task
def progress_bar():
print("Executed every minute")
and my settings.py file
CELERY_BROKER_URL = 'amqp://localhost'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TIMEZONE = 'Asia/Baku'
CELERY_ENABLE_UTC = True
CELERY_BEAT_SCHEDULE = {
'progress-bar': {
'task': 'app1.tasks.progress_bar',
'schedule': 5.0,
},
}
I run celery beat worker by writing:
#celery -A new_todo_app beat -l info
Celery beat starts, but the tasks don't execute. I tried DEBUG logging mode and I get:
Configuration ->
. broker -> amqp://guest:**@localhost:5672//
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]@%DEBUG
. maxinterval -> 5.00 minutes (300s)
[2019-12-04 19:35:24,937: DEBUG/MainProcess] Setting default socket timeout to 30
[2019-12-04 19:35:24,938: INFO/MainProcess] beat: Starting...
[2019-12-04 19:35:24,975: DEBUG/MainProcess] Current schedule:
<ScheduleEntry: progress-bar app1.tasks.progress_bar() <freq: 5.00 seconds>
<ScheduleEntry: celery.backend_cleanup celery.backend_cleanup() <crontab: 0 4 * * * (m/h/d/dM/MY)>
[2019-12-04 19:35:24,975: DEBUG/MainProcess] beat: Ticking with max interval->5.00 minutes
[2019-12-04 19:35:24,977: DEBUG/MainProcess] beat: Waking up in 5.00 minutes.
I just started learning celery, and feel like maybe something is wrong with my configurations.
Thanks beforehand
| |
doc_23532840
|
if (!(marketVo.getAbsoluteUrl() != null && marketVo.getAbsoluteUrl().equals(absoluteUrlToRedirect))) {
logger.info("---WILL REDIRECT TO ABS URL: " + absoluteUrlToRedirect);
final FacesContext context = FacesContext.getCurrentInstance();
context.responseComplete();
try {
final HttpServletResponse response = (HttpServletResponse) context.getExternalContext().getResponse();
if (context.getViewRoot() != null) {
// this step will clear any queued events
context.getViewRoot().processDecodes(context);
}
response.sendRedirect(absoluteUrlToRedirect);
} catch (final Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
Well, it throws an exception:
14:24:35,579 INFO [CmwSessionHelperBean] ---WILL REDIRECT TO ABS URL: http://hitachi.mygravitant.com
14:24:35,580 ERROR [STDERR] java.lang.IllegalStateException
14:24:35,582 ERROR [STDERR] at org.apache.catalina.connector.ResponseFacade.sendRedirect(ResponseFacade.java:435)
14:24:35,590 ERROR [STDERR] at com.example.cloud.common.jsf.core.beans.CmwSessionHelperBean.createCmwUserSession(CmwSessionHelperBean.java:269)
Can you please give me a suggestion on how to avoid this exception? Please note that the redirect is done, but because of this exception, when I come back to my portal it is no longer working properly...
A: You should be using ExternalContext#redirect() to perform a redirect in a JSF-safe way.
public void createCmwUserSession() throws IOException {
if (!(marketVo.getAbsoluteUrl() != null && marketVo.getAbsoluteUrl().equals(absoluteUrlToRedirect))) {
logger.info("---WILL REDIRECT TO ABS URL: " + absoluteUrlToRedirect);
FacesContext.getCurrentInstance().getExternalContext().redirect(absoluteUrlToRedirect);
}
}
This method will also implicitly call FacesContext#responseComplete(), you don't need to do it yourself.
Further you need to make sure that you are not calling the redirect method multiple times on the same response, or are performing a navigation afterwards.
| |
doc_23532841
|
Thanks!
A: To record the max value of a variable over a run, you should create a global variable to store it. Initialise it in the setup step (or set it in the first timestep to the current value). Within each time step (eg during the go procedure) you simply compare the current value to the existing stored max value and replace it if the current value is higher.
If you want to compare a value across runs, you need to use the BehaviorSpace tool to run the model multiple times and store output. You can have it report the max value calculated as above, or if it is a variable that never decreases, you don't need to calculate the max, you can simply report the value it has at the end of the run (one of the BehaviorSpace settings - every tick or end of run). Then analyse in your data package of choice.
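The bookkeeping described above is language-agnostic; sketched in Java (the names are mine, not NetLogo's), it is just a stored value seeded in the setup step and updated once per tick:

```java
// Running-max pattern: initialise the stored maximum in "setup", then
// compare-and-replace it on every "go"/tick step.
public class RunningMax {
    private double maxSoFar;

    public RunningMax(double initialValue) {
        this.maxSoFar = initialValue;       // the "setup" step
    }

    public void tick(double currentValue) { // called once per tick
        if (currentValue > maxSoFar) {
            maxSoFar = currentValue;
        }
    }

    public double max() {
        return maxSoFar;
    }
}
```

In NetLogo itself this is simply a global variable set in `setup` and updated inside `go`.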
| |
doc_23532842
|
Basically, there are around 40 achievements that can be awarded while using the app (e.g. while playing a game, and even while pressing some menu buttons).
I'm not sure how should I implement it, so I'll let you know what are the options I thought of so far :
*
*Writing an AM (Achievements Manager) class which will have an addAchievement() function.
Inside every function in my app that can grant an achievement, I can allocate an Achievement object and call addAchievement() with that object.
What I don't like about this approach is that, first, you have to add achievement code to many, many parts of the app (and also check not to add the same achievement more than once).
One way to improve it would be to call addAchievement() with a certain enum, and then inside the addAchievement() implementation check each enum and allocate the appropriate achievement object - however, a function with a switch of 40 cases doesn't sound good either.
2.
For every class that can report achievements I can write a function per achievement which returns whether that achievement should be granted.
For example, if class A can report 2 achievements, I can write 2 functions:
-(BOOL) shouldGrantA1
-(BOOL) shouldGrantA2
When I init class A, I call the achievements manager and add those 2 functions to an array of functions that the AM will hold.
Every time I want to check if I should grant achievements, I just call the AM's CheckAchievements(), and what it will do is run through all the functions and add achievements where the function returns TRUE.
Problem with this approach - let's say in class A I reach a place where I change a value that I know can grant an achievement. I can call the AM's CheckAchievements(), but that will go through all the achievement functions, even though probably only class A's achievement would currently be granted. That seems like a bit of overhead.
Any way to solve that?
I would love to hear other suggestions as well.
Thanks!!
A: I would not add any achievement like code to your existing game classes. No booleans or whatsoever because this creates too tight a coupling between your game classes and your achievement system. Better to create a separate "AchievementManager" that manages several AchievementListeners, these listen to the state of objects and when a relevant state changes the unlock condition is checked. I think this idea is best illustrated in code.
For example, if you have the achievement "Player walks 100 kilometers", the PlayerWalksAchievementListener would look like this:
private AchievementManager manager;
private Player player;
private Vector2 previousPlayerPosition;
private float distanceWalked;
Update()
{
float dist = Vector2.Distance(player.Position, previousPlayerPosition);
if(dist > 0)
{
distanceWalked += dist;
CheckUnlockCondition();
}
}
CheckUnlockCondition()
{
if(distanceWalked * conversionFactor > 100) { manager.UnlockAchivement(achievementID); }
}
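To make the approach concrete end to end, here is the same listener idea transliterated into a small runnable Java sketch (AchievementDemo, PlayerWalksListener and the 100-unit threshold are illustrative stand-ins for your own classes, not the code above):

```java
import java.util.ArrayList;
import java.util.List;

public class AchievementDemo {
    // Minimal stand-in manager: records each unlocked achievement once.
    static class AchievementManager {
        final List<String> unlocked = new ArrayList<>();
        void unlockAchievement(String id) {
            if (!unlocked.contains(id)) unlocked.add(id); // never grant twice
        }
    }

    // Listener that observes the player's position instead of the player
    // reporting achievements itself.
    static class PlayerWalksListener {
        private final AchievementManager manager;
        private double previousX, previousY;
        private double distanceWalked;

        PlayerWalksListener(AchievementManager manager, double x, double y) {
            this.manager = manager;
            this.previousX = x;
            this.previousY = y;
        }

        // Call once per frame with the player's current position.
        void update(double x, double y) {
            double dist = Math.hypot(x - previousX, y - previousY);
            previousX = x;
            previousY = y;
            if (dist > 0) {
                distanceWalked += dist;
                if (distanceWalked >= 100) manager.unlockAchievement("WALK_100");
            }
        }
    }

    public static void main(String[] args) {
        AchievementManager am = new AchievementManager();
        PlayerWalksListener listener = new PlayerWalksListener(am, 0, 0);
        for (int i = 1; i <= 120; i++) listener.update(i, 0); // walk 120 units
        System.out.println(am.unlocked); // [WALK_100]
    }
}
```

The point of the pattern: the game classes never mention achievements, only the listener knows the unlock rule, so adding or removing achievements never touches gameplay code.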
| |
doc_23532843
|
So far, the approach I took uses a switch statement.
private void addDataAList(AuthorList[] aL, String iN) {
char nD = Character.toUpperCase(iN.charAt(0));
switch(nD){
case 'A':
aL[0] = iN;
break;
case 'B':
aL[1] = iN;
break;
//and so on
}
}//addData
Is there a more efficient way to do this?
A: Assuming that the AuthorList class looks something like this:
private class AuthorList{
private LinkedList<String> nameList;
public AuthorList() {
}
public AuthorList(LinkedList<String> nameList) {
this.nameList = nameList;
}
public LinkedList<String> getNameList() {
return nameList;
}
public void setNameList(LinkedList<String> nameList) {
this.nameList = nameList;
}
@Override
public String toString() {
final StringBuilder sb = new StringBuilder("AuthorList{");
sb.append("nameList=").append(nameList);
sb.append('}');
return sb.toString();
}
}
I'd make it like this:
private static void addDataAList(AuthorList[] aL, String iN) {
int index = Character.toUpperCase(iN.trim().charAt(0)) - 'A';
try {
AuthorList tmpAuthorList = aL[index];
if(tmpAuthorList == null) aL[index] = tmpAuthorList = new AuthorList(new LinkedList<>());
if(tmpAuthorList.getNameList() == null) tmpAuthorList.setNameList(new LinkedList<>());
tmpAuthorList.getNameList().add(iN);
} catch (ArrayIndexOutOfBoundsException aioobe){
throw new IllegalArgumentException("Name should start with character A - Z");
}
}
And an additional main method for test purposes:
public static void main (String[] args){
AuthorList[] aL = new AuthorList[26];
addDataAList(aL, " dudeman");
for (AuthorList list : aL) System.out.println(list);
}
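The part of the answer doing the real work is `Character.toUpperCase(iN.trim().charAt(0)) - 'A'`: because 'A' through 'Z' are consecutive code points, subtracting 'A' maps the first letter directly onto an array index 0–25, which is what replaces the 26-case switch. A standalone sketch of just that mapping (class and method names here are illustrative):

```java
public class FirstLetterIndex {
    // Maps a name to its bucket index: 0 for A/a ... 25 for Z/z.
    static int indexFor(String name) {
        int index = Character.toUpperCase(name.trim().charAt(0)) - 'A';
        if (index < 0 || index > 25)
            throw new IllegalArgumentException("Name should start with A-Z");
        return index;
    }

    public static void main(String[] args) {
        System.out.println(indexFor("Adams"));    // 0
        System.out.println(indexFor(" dudeman")); // 3
        System.out.println(indexFor("zola"));     // 25
    }
}
```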
| |
doc_23532844
|
It give callback url issue.
A: We have released a new version of our Meteor Package which implements redirect mode and uses the latest version of Lock. You can see the source code and an example here https://github.com/auth0/meteor-auth0 and you can download it from Atmosphere https://atmospherejs.com/auth0/lock
A: You should be using redirect mode on mobile apps per:
https://github.com/meteor/meteor/wiki/OAuth-for-mobile-Meteor-clients
I've submitted a pull request to update meteor auth0 package in order to work with latest meteor oauth library and support redirect mode.
It was merged but no new release was done. So you could manually add it to your project for now:
https://github.com/auth0/meteor-auth0
| |
doc_23532845
|
-Xmx2048m -ea -XX:+HeapDumpOnOutOfMemoryError -Xverify:none -Xbootclasspath/a:../lib/boot.jar
When I open up IDEA, I still see the maximum memory as 711m.
jps -v shows my VM options have been loaded, but they are overridden by the following options.
29388 **-Xmx2048m** -ea -XX:+HeapDumpOnOutOfMemoryError -Xverify:none -Xbootclasspath/a:../lib/boot.jar -Xms128m **-Xmx800m** -XX:MaxPermSize=350m -XX:ReservedCodeCacheSize=64m -XX:+UseCodeCacheFlushing -XX:+UseCompressedOops -Didea.paths.selector=IdeaIC12 -Dsun.java2d.noddraw=true -Didea.max.intellisense.filesize=2500 -Didea.dynamic.classpath=false -Didea.jars.nocopy=false -Dsun.java2d.d3d=false -Dapple.awt.fullscreencapturealldisplays=false -Dapple.laf.useScreenMenuBar=true -Djava.endorsed.dirs= -Dswing.bufferPerWindow=false -Didea.fatal.error.notification=enabled -Didea.cycle.buffer.size=1024 -Didea.popup.weight=heavy -Didea.xdebug.key=-Xdebug -Dapple.awt.graphics.UseQuartz=true -Dsun.java2d.pmoffscreen=false -Didea.no.launcher=false -DCVS_PASSFILE=~/.cvspass -Didea.use.default.antialiasing.in.editor=false -Dcom.apple.mrj.application.live-resize=false -Didea.smooth.progress=false
29392 Jps -Dapplication.home=/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home -Xms8m
Where does -Xmx800m come from? I need to remove it.
A: Current version: Help | Change Memory Settings:
Since IntelliJ IDEA 15.0.4 you can also use: Help | Edit Custom VM Options...:
This will automatically create a copy of the .vmoptions file in the config folder and open a dialog to edit it.
Older versions:
IntelliJ IDEA 12 is a signed application, therefore changing options in Info.plist is no longer recommended, as the signature will not match and you will get issues depending on your system security settings (app will either not run, or firewall will complain on every start, or the app will not be able to use the system keystore to save passwords).
As a result of addressing IDEA-94050 a new way to supply JVM options was introduced in IDEA 12:
Now it can take VM options from
~/Library/Preferences/<appFolder>/idea.vmoptions and system properties
from ~/Library/Preferences/<appFolder>/idea.properties.
For example, to use -Xmx2048m option you should copy the original .vmoptions file from /Applications/IntelliJ IDEA.app/bin/idea.vmoptions to ~/Library/Preferences/IntelliJIdea12/idea.vmoptions, then modify the -Xmx setting.
The final file should look like:
-Xms128m
-Xmx2048m
-XX:MaxPermSize=350m
-XX:ReservedCodeCacheSize=64m
-XX:+UseCodeCacheFlushing
-XX:+UseCompressedOops
Copying the original file is important, as options are not added, they are replaced.
This way your custom options will be preserved between updates and application files will remain unmodified making signature checker happy.
Community Edition: ~/Library/Preferences/IdeaIC12/idea.vmoptions file is used instead.
A: go to that path "C:\Program Files (x86)\JetBrains\IntelliJ IDEA 12.1.4\bin\idea.exe.vmoptions"
and change size to -Xmx512m
-Xms128m
-Xmx512m
-XX:MaxPermSize=250m
-XX:ReservedCodeCacheSize=64m
-XX:+UseCodeCacheFlushing
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
Hope this will work.
A: As for the intellij2018 version I am using the following configuration for better performance
-server
-Xms1024m
-Xmx4096m
-XX:MaxPermSize=1024m
-XX:ReservedCodeCacheSize=512m
-XX:+UseCompressedOops
-Dfile.encoding=UTF-8
-XX:+UseConcMarkSweepGC
-XX:+AggressiveOpts
-XX:+CMSClassUnloadingEnabled
-XX:+CMSIncrementalMode
-XX:+CMSIncrementalPacing
-XX:CMSIncrementalDutyCycleMin=0
-XX:-TraceClassUnloading
-XX:+TieredCompilation
-XX:SoftRefLRUPolicyMSPerMB=100
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-Djdk.http.auth.tunneling.disabledSchemes=""
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-Xverify:none
-XX:ErrorFile=$USER_HOME/java_error_in_idea_%p.log
-XX:HeapDumpPath=$USER_HOME/java_error_in_idea.hprof
A: More recent versions of IntelliJ (certainly WebStorm and PhpStorm) have made this change even easier by adding a Help >> Change Memory Settings menu item that opens a dialog where the memory limit can be set.
A: On OS X 10.9, if you don't care about breaking the application signature you might just change
/Applications/IntelliJ\ IDEA\ 12\ CE.app/bin/idea.vmoptions
A: On my machine this only works in bin/idea.vmoptions; adding the setting in ~/Library/Preferences/IntelliJIdea12/idea.vmoptions causes IDEA to hang during startup.
A: It looks like IDEA solves this for you (like everything else). When loading a large project and letting it thrash, it will open a dialog to up the memory settings. Entering 2048 for Xmx and clicking "Shutdown", then restarting IDEA makes IDEA start up with more memory. This seems to work well for Mac OS, though it never seems to persist for me on Windows (not sure about IDEA 12).
A: Some addition to the top answer here https://stackoverflow.com/posts/13581526/revisions
*
*Change memory as you wish in .vmoptions
*Enable memory view as told here https://stackoverflow.com/a/39563251/5515861
And you'll have something like this in the bottom right
A: For IDEA 13 and OS X 10.9 Mavericks, the correct paths are:
Original: /Applications/IntelliJ IDEA 13.app/Contents/bin/idea.vmoptions
Copy to: ~/Library/Preferences/IntelliJIdea13/idea.vmoptions
A: [Updated Aug 2021 since the JetBrains UI has changed]
Helpful trick I thought I'd share on this old thread.
You can see how much memory is being used and adjust things accordingly using the Memory Indicator
Right-click in the bottommost taskbar area and select the Memory Indicator item
It shows up in the lower right of the window.
A: Here is a link to the latest documentation as of today http://www.jetbrains.com/idea/webhelp/increasing-memory-heap.html
A: I use Mac and Idea 14.1.7. Found idea.vmoptions file here: /Applications/IntelliJ IDEA 14.app/Contents/bin
A: I edited the config file from the editor and on the next reboot IntelliJ would not open even after updating to the newest version. After opening IntelliJ manually using /Applications/IntelliJ\ IDEA.app/Contents/MacOS/idea in terminal the output gave me additional insight on where the .vmoptions copy is stored.
➜ ~ /Applications/IntelliJ\ IDEA.app/Contents/MacOS/idea
2022-04-21 13:01:55.189 idea[1288:14841] allVms required 1.8*,1.8+
2022-04-21 13:01:55.192 idea[1288:14845] Current Directory: /Users/richardmiles
2022-04-21 13:01:55.192 idea[1288:14845] parseVMOptions: IDEA_VM_OPTIONS = (null)
2022-04-21 13:01:55.192 idea[1288:14845] fullFileName is: /Applications/IntelliJ IDEA.app/Contents/bin/idea.vmoptions
2022-04-21 13:01:55.192 idea[1288:14845] fullFileName exists: /Applications/IntelliJ IDEA.app/Contents/bin/idea.vmoptions
2022-04-21 13:01:55.192 idea[1288:14845] parseVMOptions: /Applications/IntelliJ IDEA.app/Contents/bin/idea.vmoptions
2022-04-21 13:01:55.192 idea[1288:14845] parseVMOptions: /Applications/IntelliJ IDEA.app.vmoptions
2022-04-21 13:01:55.195 idea[1288:14845] parseVMOptions: /Users/richardmiles/Library/Application Support/JetBrains/IntelliJIdea2022.1/idea.vmoptions
2022-04-21 13:01:55.195 idea[1288:14845] parseVMOptions: platform=17 user=1 file=/Users/richardmiles/Library/Application Support/JetBrains/IntelliJIdea2022.1/idea.vmoptions
Invalid maximum heap size: -Xmx2048m -Drun.processes.with.pty=true
Invalid maximum heap size: -Xmx2048m -Drun.processes.with.pty=true
2022-04-21 13:01:55.266 idea[1288:14845] JNI_CreateJavaVM (/Applications/IntelliJ IDEA.app/Contents/jbr) failed: -6
Note when you work with the path remember to quote properly like so!
vim "/Users/richardmiles/Library/Application Support/JetBrains/IntelliJIdea2022.1/idea.vmoptions"
| |
doc_23532846
|
A: Node.js is event-based, in comparison to most other systems used to build web applications. This led to it being very controversial (some are very fond of it, others hate it…).
Aside of that controversy it can very well be used to build all kinds of web applications, and even big companies have done that so far (LinkedIn IIRC).
Felix, (former) node.js core contributor, wrote this guide, which shows quite well what node can/should (not) be used for.
| |
doc_23532847
|
*
*Production - com.domain.standard - Points to the production server.
*Development - com.domain.evv - Points to the development server.
We are finding that users can only have one version installed on their phone at a time. For example:
*
*I attempt to install the development version via Google Play. All is good.
*I attempt to install the production version via Google Play. I get error code "-505".
*I uninstall the development version.
*I attempt to install the production version via Google Play. All is good.
I've done all I can to rule out device-specific causes; we are seeing this on multiple devices. To my knowledge, there is no device currently running both versions.
For reference, the full error is:
"APP NAME" can't be installed. Try again, and if the problem continues, get help troubleshooting. (Error code: -505)
We have gone through all the troubleshooting tips and none appear to remedy the issue.
A: Error code -505 usually means a signature mismatch between APK which is already on device and the one being installed.
However, if the package names are different, this can't be the issue. My guess would be PackageManager is giving a STATUS_FAILURE_CONFLICT - PackageManager actually uses this code for lots of things.
*
*Already exists (obviously), but also
*INSTALL_FAILED_UPDATE_INCOMPATIBLE
*INSTALL_FAILED_SHARED_USER_INCOMPATIBLE
*INSTALL_FAILED_REPLACE_COULDNT_DELETE
*INSTALL_FAILED_CONFLICTING_PROVIDER
*INSTALL_FAILED_DUPLICATE_PERMISSION
I don't know exactly what all of these mean (but I could carry on looking in the source code to find out), but is it possible one of them applies to your APK? My top guess would be this code, I wonder if your debug package and your release package have conflicting providers?
A: Make sure any other versions / development versions are uninstalled from the phone for All Users. Do this by going into Settings -> Apps and making sure the app is uninstalled for all users.
If you delete the app from the home screen, you will likely delete it only for the current user.
| |
doc_23532848
|
On page load, I append 3 hidden fields to the form and set up the event listener:
setupMonerisHiddenFormElements()
{
$('<input type="hidden">').attr({
id: 'responseCode',
name: 'responseCode',
}).appendTo('form');
$('<input type="hidden">').attr({
id: 'dataKey',
name: 'dataKey',
}).appendTo('form');
$('<input type="hidden">').attr({
id: 'errorMessage',
name: 'errorMessage',
}).appendTo('form');
}
window.onload = function()
{
is_production = Drupal.settings.moneris_payment_processor.is_production;
qa_gateway_url = Drupal.settings.moneris_payment_processor.qa_gateway_url;
prod_gateway_url = Drupal.settings.moneris_payment_processor.prod_gateway_url;
gateway_url = ((is_production) ? prod_gateway_url : qa_gateway_url);
if (window.addEventListener)
{
window.addEventListener("message", processMonerisResponse, false);
}
else
{
if (window.attachEvent)
{
window.attachEvent("onmessage", processMonerisResponse);
}
}
setupMonerisHiddenFormElements();
}
I can see these fields are being added on the form. This is not the problem.
When the form gets submitted the function submitMonerisIframe() is called by this Drupal form attribute:
$form['#attributes'] = array('OnSubmit' => 'submitMonerisIframe();');
Here I am posting the message and can see that my processMonerisResponse() function is being called (see below).
function submitMonerisIframe()
{
var monFrameRef = document.getElementById('monerisFrame').contentWindow;
monFrameRef.postMessage('',gateway_url);
}
I have added my debugging. When I submit the form, I can see that the hidden values are being set correctly.
function processMonerisResponse(e)
{
console.log(e.data);
this.respData = eval("(" + e.data + ")");
$('#responseCode').attr('value',respData.responseCode);
$('#errorMessage').attr('value',respData.errorMessage);
$('#dataKey').attr('value',respData.dataKey);
console.log("---------- RESPONSE -----------------");
console.log("responseCode: " + $('#responseCode').attr('value'));
console.log("errorMessage: " + $('#errorMessage').attr('value'));
console.log("dataKey: " + $('#dataKey').attr('value'));
console.log("-------------------------------------");
e.preventDefault();
}
Result from the console:
{"responseCode":"001","dataKey":"ot-QQHTTg4EDGgBa4mkc0KrCGkPJ","bin":"378734"}
---------- RESPONSE -----------------
responseCode: 001
errorMessage:
dataKey: ot-QQHTTg4EDGgBa4mkc0KrCGkPJ
But when I inspect the $_POST[] data the hidden fields are there but are empty.
I spent hours trying different things but nothing seems to work.
Thanks in advance!
A: Setting attribute value is not the same as setting the property value.
When you submit the form it is the value property that gets submitted.
Use val() instead of attr() to set or get the value, e.g. $('#responseCode').val(respData.responseCode);
A: Since your code is very fragmented, I built up a testfile.php:
<?php
if(isset($_POST)) {
print_r($_POST);
}
?>
<script src="js/jquery.min.js"></script>
<script>
function setupMonerisHiddenFormElements()
{
$('<input type="hidden">').attr({
id: 'responseCode',
name: 'responseCode',
}).appendTo('form');
$('<input type="hidden">').attr({
id: 'dataKey',
name: 'dataKey',
}).appendTo('form');
$('<input type="hidden">').attr({
id: 'errorMessage',
name: 'errorMessage',
}).appendTo('form');
}
$(document).ready(function() {
setupMonerisHiddenFormElements();
$('#responseCode').val('response for responseCode');
$('#dataKey').val('response for dataKey');
$('#errorMessage').val('response for errorMessage');
});
</script>
<form action="testfile.php" method="POST">
<input type="submit" name="submitbutton" value="submit"/>
</form>
Result of the $_POST array:
Array (
[submitbutton] => submit
[responseCode] => response for responseCode
[dataKey] => response for dataKey
[errorMessage] => response for errorMessage
)
Can you show us the code for your post handling?
| |
doc_23532849
|
Excerpt from the link:
> more ex12.txt
[displays file here]
>
When i try to use the 'more' command, i get the error:
the term 'more.com' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again.
I'm not sure what is going on and would appreciate any help!
A: There must be something wrong with your PC's configuration because more.com should work just fine in PowerShell e.g.:
PS C:\> more.com .\foo.ini
foo=this is foo section
bar=this is the bar section
In fact, PowerShell does use more.com. Take a look at the definition of the more function (which the help function pipes into). In the more function you will see these two lines:
Get-Content $file | more.com
...
$input | more.com
So when you execute "more" you are executing a PowerShell function that invokes more.com. Have you tried more.com ex12.txt?
Also, more.com comes in 32-bit and 64-bit flavors:
PS C:\> Get-PEHeader c:\windows\system32\more.com
Type : PE64
LinkerVersion : 11.0
Get-PEHeader is from PSCX.
A: PowerShell doesn't use more.com; it's a 16-bit binary which I believe is no longer included in x64-based Windows (on 32-bit, yes, we may find one). However, in PowerShell you can pipe the output to the Out-Host -Paging cmdlet and it should do the job:
get-content .\pylongcode.py | out-host -Paging
A: You probably need to set your path variable.
to check the path variable you can type PS>> echo $env:path
If you do not see a very long string of C:\Windows.... separated by ";" that is a problem I would recommend you resolve with MS support if you are not able to get help from here or your computer manufacturer.
A simple way to set your path is
PS>> ${NewFileName.txt} = $env:path
PS>> NewFileName.txt
The file should open in Notepad; add the new paths:
[notepad] C:\WINDOWS\system32;C:\WINDOWS;
Save
PS>> $env:path = ${NewFileName.txt}
PS>> $env:path
Double Check that the path now has the contents of that file. You can then delete the file as it is only used to aid in editing the path variable. There are other ways to do this I prefer this way because I prefer using a text editor for editing over a terminal. Having those two entries will get less and more working.
| |
doc_23532850
|
glStencilFunc(GL_EQUAL, 1, 0xFF);
and then nothing will be rendered.
If I set the line of code as
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
my objects will be rendered correctly.
I wonder why? Is it that the original stencil value is zero, not one?
Some code below:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_EQUAL, 1, 0xFF);
//glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
//glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
while (!glfwWindowShouldClose(window))
{
glfwPollEvents();
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
shader.use();
glm::mat4 model = glm::mat4(1.0f);
glm::mat4 view = g_camera.GetViewMatrix();
glm::mat4 projection = glm::perspective(g_camera.Zoom, (float)AppGlobal::getInstance()->windowWidth / (float)AppGlobal::getInstance()->windowHeight, 0.1f, 100.0f);
glUniformMatrix4fv(glGetUniformLocation(shader.m_program, "view"), 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(glGetUniformLocation(shader.m_program, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
// Floor
glBindVertexArray(planeVAO);
glBindTexture(GL_TEXTURE_2D, floorTexture);
model = glm::mat4(1.0f);
glUniformMatrix4fv(glGetUniformLocation(shader.m_program, "model"), 1, GL_FALSE, glm::value_ptr(model));
glDrawArrays(GL_TRIANGLES, 0, 6);
// some other objects
glBindVertexArray(0);
glfwSwapBuffers(window);
}
A:
[...] Is it that the original stencil value is zero not one?
By default the clear value for the stencil buffer is 0.
The stencil buffer is cleared when glClear(GL_STENCIL_BUFFER_BIT) is called. The clear value for the stencil buffer can be specified by glClearStencil. The initial value is 0.
| |
doc_23532851
|
Also, it won't react to an onClick event.
Here's the java code for my Activity
public class MainActivity extends Activity {
public static final String TAG = "MainActivity";
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
BattleShipsGameBoard gb = (BattleShipsGameBoard) findViewById(R.id.gameboard);
Tile tile = new Tile(this);
tile.setImageResource(R.drawable.tile_hit);
tile.setGameObjectType(BattleShipsGameBoard.LayoutParams.LAYOUT_TYPE_TILE);
tile.setPosition(new Point(50, 50));
tile.setWidth(90);
tile.setHeight(90);
gb.addView(tile);
}
}
and my custom view
public class Tile extends ImageView {
@SuppressWarnings("unused")
private static final String TAG = "Tile";
public int tag;
public int gameObjectType;
public Point position = new Point(0, 0);
public int mWidth = 1;
public int mHeight = 1;
public boolean isSelected = false;
public Tile(Context context) {
super(context);
setLayoutParams(new BattleShipsGameBoard.LayoutParams(
BattleShipsGameBoard.LayoutParams.WRAP_CONTENT,
BattleShipsGameBoard.LayoutParams.WRAP_CONTENT));
}
public Tile(Context context, AttributeSet attrs) {
super(context, attrs);
}
public Tile(Context context, AttributeSet attrs, int defStyle) {
super(context, attrs, defStyle);
}
public void confirmChangesInLayout() {
BattleShipsGameBoard.LayoutParams lp = (BattleShipsGameBoard.LayoutParams) this
.getLayoutParams();
lp.setPosition(this.position);
lp.setWidth(this.mWidth);
lp.setHeight(this.mHeight);
setLayoutParams(lp);
invalidate();
requestLayout();
}
//... getters and setters, the setters all call confirmChangesInLayout()
}
my simple custom ViewGroup:
public class BattleShipsGameBoard extends ViewGroup {
public static class LayoutParams extends MarginLayoutParams {
public LayoutParams(Context c, AttributeSet attrs) {
super(c, attrs);
}
public LayoutParams(int width, int height) {
super(width, height);
}
public Point position = new Point(0, 0);
public int type = 0;
public int height = 0;
public int width = 0;
//getters and setters
}
public BattleShipsGameBoard(Context context) {
super(context);
}
public BattleShipsGameBoard(Context context, AttributeSet attrs) {
super(context, attrs);
}
public BattleShipsGameBoard(Context context, AttributeSet attrs, int defStyle) {
super(context, attrs, defStyle);
}
private float unitWidth;
private float unitHeight;
private int parentWidth;
private int parentHeight;
/**
* count of units the screen estate is divided by
*/
public static int unitCount = 100;
/**
* Rectangle in which the size of a child is temporarily stored
*/
private Rect mTmpChildRect = new Rect();
/**
* lays out children
*/
@Override
protected void onLayout(boolean changed, int left, int top, int right, int bottom) {
Log.d(TAG, "-------------STARTING LAYOUT, " + getChildCount() + " children -------------");
int count = getChildCount();
for (int i = 0; i < count; i++) {
final View child = getChildAt(i);
if (child.getVisibility() != GONE) {
LayoutParams lp = (LayoutParams) child.getLayoutParams();
Point pos = lp.getPosition();
int height = lp.getHeight();
int width = lp.getWidth();
measureChild(child, parentWidth, parentHeight);
mTmpChildRect.left = (int) ((pos.x - (width / 2)) * unitWidth);
mTmpChildRect.right = (int) ((pos.x + (width / 2)) * unitWidth);
mTmpChildRect.top = (int) ((pos.y + (height / 2)) * unitHeight);
mTmpChildRect.bottom = (int) ((pos.y - (height / 2)) * unitHeight);
child.layout(mTmpChildRect.left, mTmpChildRect.top, mTmpChildRect.right, mTmpChildRect.bottom);
Log.d(TAG, "child " + i + " laid out at " + mTmpChildRect);
}
}
}
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
super.onMeasure(widthMeasureSpec, heightMeasureSpec);
parentHeight = MeasureSpec.getSize(heightMeasureSpec);
parentWidth = MeasureSpec.getSize(widthMeasureSpec);
unitHeight = parentHeight / unitCount;
unitWidth = parentWidth / unitCount;
for (int i = 0; i < getChildCount(); i++) {
View child = getChildAt(i);
if (child.getVisibility() != View.GONE) {
child.measure(widthMeasureSpec, heightMeasureSpec);
}
}
setMeasuredDimension(parentWidth, parentHeight);
}
/**
* Any layout manager that doesn't scroll will want this.
*/
@Override
public boolean shouldDelayChildPressedState() {
return false;
}
}
A: I just found the problem.
In the onLayout() method I mixed up mTmpChildRect.top and mTmpChildRect.bottom which is why it looked like it was laid out correctly but nothing could be drawn.
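For reference, in Android's view coordinate system y grows downward, so a child's top must be the smaller value and layout(left, top, right, bottom) expects top < bottom. A plain-Java sketch of the corrected computation (the 10f unit sizes and the 50/90 values are just illustrative numbers):

```java
public class LayoutRectDemo {
    // Computes {left, top, right, bottom} for a child centered at (x, y).
    static int[] childRect(int x, int y, int width, int height,
                           float unitWidth, float unitHeight) {
        int left   = (int) ((x - width  / 2) * unitWidth);
        int right  = (int) ((x + width  / 2) * unitWidth);
        int top    = (int) ((y - height / 2) * unitHeight); // smaller y = top
        int bottom = (int) ((y + height / 2) * unitHeight);
        return new int[] { left, top, right, bottom };
    }

    public static void main(String[] args) {
        int[] r = childRect(50, 50, 90, 90, 10f, 10f);
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]); // 50,50,950,950
    }
}
```

With the original swapped assignments, top ends up greater than bottom, so the child's rect has negative height and nothing can be drawn even though the math "looks" symmetric in the log output.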
| |
doc_23532852
|
emcc -o3 app.cpp -o app.js
to compile into JavaScript. I'm sure I'm missing something obvious, but I've been through the docs and can't seem to spot what I'm passing over. This is kind of silly to me, but I guess I just need to ask.
I appreciate any help.
A: Yup, it was obvious. The flag is case sensitive; it needs to be a capital 'O', i.e. emcc -O3 app.cpp -o app.js. I don't have much experience with the command line. This is what happens.
| |
doc_23532853
|
When I run the reports separately and print them, they both come out as expected: fitting into the width constraint of the page.
However, when I include one report as a subreport in the other and then run and print the "master" report I start to experience problems. Even though both reports appear I get extra blank pages appearing every other page in the output.
I'm sure I'm missing a simple trick - probably with the page sizes of the two reports but I can't figure it out - any pointers?
I don't mind changing the setup of the subreport as it will never be run as a separate report in the wild; I only included that step to prove that it did indeed fit within the page!
A: One thing that might help is to set your right margin to 0 in your master report. I've noticed this myself with a subreport that appears correctly in a master report, and adding it as a subreport in another master causes it to insert extra page breaks. I agree with TheTXI, its very frustrating, I wish they had a debug/inspection mode like firebug does for web pages.
Edit:
Also you should check your fields to make sure they aren't set to expand automatically (unintended). Right click, properties, set .CanGrow = False
A: There is probably an overhang somewhere with your page size or margins that is causing it to spill over into blank pages. They can be infuriating to find sometimes.
A: Try setting the report's ConsumeContainerWhitespace property to true for all reports used in the master report!
A: See page 12 of this document (.doc download).
A: I realize this is an old thread, but I recently had this problem and found another reason why you might end up with blank pages: if there is any whitespace in the master report, the PDF renderer creates a blank page.
So, if you have not found the solution in the above measures, make sure that your master report has no white space. The size of each doesn't seem to matter. What matters is that your subreport is the same size as your master report and that the subreport starts at coordinates 0,0.
A: I was having this issue where every second page was blank on my report. I tried all the suggestions I could find with no relief. I reduced the column widths until the report only took up half the page and still had blank pages, which also told me it wasn't column widths causing my problems.
Finally, by setting the right margin to 0, the problem is now resolved. Thanks Snives! It makes no sense, but that's not important right now!
| |
doc_23532854
|
In the original program, they defined a small function which allocates aligned memory. If they want to free it, they just call free(). I used _aligned_malloc() to allocate but when I want to free it, I also need to use _aligned_free(). But I have to find all the calls of the function free() which correspond to the aligned allocations. Not all of the allocations are aligned, so I cannot simply replace all the free()'s with _aligned_free().
My question is: is there any tool in Visual Studio which can find the malloc() / free() pairs?
Any advice?
I am also new to Visual Studio.
A: In C, malloc() is guaranteed to return memory suitably aligned for any object with fundamental alignment (i.e. up to alignof(max_align_t)).
If none of your allocations require stricter alignment than that (e.g. 16- or 32-byte alignment for SIMD types), you can replace all _aligned_malloc() calls with malloc() ones. Just drop the alignment parameter ...
#define _aligned_malloc(size, alignment) malloc(size) /* ignore alignment */
| |
doc_23532855
|
A: Well you should definitely go through some tutorials before continuing since you don't seem to know even very basic stuff, but to at least point you in the right direction, you would do something like this (assuming C#, not UnityScript):
void OnCollisionEnter(Collision collision)
{
int numberOfCollidedObject = collision.gameObject.GetComponent<objectsScriptNameHere>().variableNameHere;
Debug.Log(numberOfCollidedObject);
}
How did I know how to do that? I looked at the documentation. I can see that when OnCollisionEnter is called it's passed a variable of type Collision. It's hyperlinked in the documentation, so I clicked on Collision and found that it contains a variable called gameObject that contains a reference to the game object of the collider we just hit. I happen to know that to get into another script, you called GetComponent<scriptName>(), and from there any public variables and functions can be accessed.
A: If you have two colliders (the player and the object that collides with the player), you can make the collider convex and set isTrigger to true.
Then implement the function OnTriggerEnter():
void OnTriggerEnter(Collider other) {
Debug.Log(other.name);
}
| |
doc_23532856
|
QPushButton *testFewCommands = new QPushButton("&start testing few commands", this);
and now I want to add an OnClick function.
This button doesn't appear in the designer,
so how can I do it?
| |
doc_23532857
|
I'm on Manjaro Linux; after some failed attempts, it seems I have successfully installed PyQt4 and sip. But now, when trying to import, I can't.
When I run: import PyQt4
Works perfectly.
When I run: from PyQt4 import *
Same, but I can't use the classes
But, when I run: from PyQt4 import QtGui, QtCore
This doesn't work.
Message error:
Traceback (most recent call last):
File "ejemplo_GUI.pyw", line 2, in <module>
from PyQt4 import QtGui
ImportError: cannot import name 'QtGui' from 'PyQt4' (unknown location)
I attempted to do the installation process again, but got the same error.
Now, I downloaded the data in site-packages/PyQt4 from another source and pasted it into the folder. It raises this error:
>>> from PyQt4 import QtGui
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'QtGui' from 'PyQt4' (/usr/lib/python3.7/site-packages/PyQt4/__init__.py)
| |
doc_23532858
|
Can this be done?
A: No. Reflection works against static assembly metadata. Dynamic parameters in PowerShell are added at runtime by the command or function itself.
A: Perhaps this helps:
1: Definition of the dynamic parameters
#===================================================================================
# DEFINITION OF FREE FIELDS USED BY THE CUSTOMER
#-----------------------------------------------------------------------------------
# SYNTAX: @{ <FF-Name>=@(<FF-Number>,<isMandatory_CREATE>,<isMandatory_UPDATE>); }
$usedFFs = @{
"defaultSMTP"=@(2,1,0); `
"allowedSMTP"=@(3,1,0); `
"secondName"=@(100,1,0); `
"orgID"=@(30001,1,0); `
"allowedSubjectTypeIDs"=@(30002,1,0); `
}
# FF-HelpMessage for input
$usedFFs_HelpMSG = @{ 2="the default smtp domain used by the organization. Sample:'algacom.ch'"; `
3="comma separated list of allowed smtp domains. Sample:'algacom.ch,basel.algacom.ch'"; `
100="an additional organization name. Sample:'algaCom AG'"; `
30001="an unique ID (integer) identifying the organization entry"; `
30002="comma separated list of allowed subject types. Sample:'1,2,1003,10040'"; `
}
2: definition of function that builds the dynamic parameters
#-------------------------------------------------------------------------------------------------------
# Build-DynParams : Used to build the dynamic input parameters based on $usedFFs / $usedFFs_HelpMSG
#-------------------------------------------------------------------------------------------------------
function Build-DynParams($type) {
$paramDictionary = New-Object -Type System.Management.Automation.RuntimeDefinedParameterDictionary
foreach($ffName in $usedFFs.Keys) {
$ffID = $usedFFs.Item($ffName)[0]
$dynAttribCol = New-Object -Type System.Collections.ObjectModel.Collection[System.Attribute]
$dynAttrib = New-Object System.Management.Automation.ParameterAttribute
$dynAttrib.ParameterSetName = "__AllParameterSets"
$dynAttrib.HelpMessage = $usedFFs_HelpMSG.Item($ffID)
switch($type) {
"CREATE" { $dynAttrib.Mandatory = [bool]($usedFFs.Item($ffName)[1]) }
"UPDATE" { $dynAttrib.Mandatory = [bool]($usedFFs.Item($ffName)[2]) }
}
$dynAttribCol.Add($dynAttrib)
$dynParam = New-Object -Type System.Management.Automation.RuntimeDefinedParameter($ffName, [string], $dynAttribCol)
$paramDictionary.Add($ffName, $dynParam)
}
return $paramDictionary
}
3. Function that makes use of the dynamic params
#-------------------------------------------------------------------------------------------------------
# aAPS-OrganizationAdd : This will add a new organization entry
#-------------------------------------------------------------------------------------------------------
Function aAPS-OrganizationAdd {
[CmdletBinding()]
Param(
[Parameter(Mandatory=$true,HelpMessage="The name of the new organization")]
[String]$Descr,
[Parameter(Mandatory=$false,HelpMessage="The name of the parent organization")]
[String]$ParentDescr=$null,
[Parameter(Mandatory=$false,HelpMessage="The status of the new organization [1=Active|2=Inactive]")]
[int]$Status = 1,
[Parameter(Mandatory=$false,HelpMessage="If you want to see the data of the deactivated object")]
[switch]$ShowResult
)
DynamicParam { Build-DynParams "CREATE" }
Begin {}
Process {
# do what you want here
}
End {}
}
| |
doc_23532859
|
Through this adjacency matrix I wish to find all the closed paths of length 4 that exist in the graph. Can anyone please provide pseudocode? Thank you.
A: This sounds like homework, so I won't give the whole thing away. But here's a hint: since you are interested in finding cycles of length 4, take the 4th power of the adjacency matrix and scan along the diagonal. If any entry M[i,i] is nonzero, there is a closed walk of length 4 through vertex i (you still need to filter out degenerate walks that aren't simple cycles).
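This hint can be sketched in Python with NumPy (an illustration, not part of the original answer; note that the diagonal of M^4 counts closed walks of length 4, which include back-and-forth walks as well as simple cycles):

```python
import numpy as np

# Adjacency matrix of a 4-cycle: 0-1-2-3-0
M = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])

M4 = np.linalg.matrix_power(M, 4)

# A nonzero diagonal entry M4[i, i] means there is a closed walk
# of length 4 through vertex i (which may or may not be a simple cycle).
for i in range(len(M)):
    if M4[i, i] != 0:
        print(f"vertex {i} lies on a closed walk of length 4")
```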
A: Perhaps it is not the optimal way to compute it (it's O(n^4)), but a very straightforward way is to scan through all the vertices
a, b, c, d such that b > a, c > b, d > c
You can check then check for each of the following cycles:
1. ([a,b] && [b,c] && [c,d] && [d,a])
2. ([a,b] && [b,d] && [d,c] && [c,a])
3. ([a,d] && [d,b] && [b,c] && [c,a])
1: 2: 3:
A---B A---B A B
| | \ / |\ /|
| | X | X |
| | / \ |/ \|
D---C D---C C D
You're basically checking every ordered set of vertices (a,b,c,d) for the 3 ways that they could form a cycle.
So the pseudo code would be:
for a = 0 to <lastVertex>
for b = a + 1 to <lastVertex>
for c = b + 1 to <lastVertex>
for d = c + 1 to <lastVertex>
if(IsCycle(a,b,c,d)) AddToList([a,b,c,d])
if(IsCycle(a,b,d,c)) AddToList([a,b,d,c])
if(IsCycle(a,c,b,d)) AddToList([a,c,b,d])
next d
next c
next b
next a
A: Apply a depth-limited depth-first-search to every node and record nodes where the DFS finds the starting node.
For the search, see pseudo-code here: http://en.wikipedia.org/wiki/Depth-limited_search. You just need to add something like
if(node' == node && node'.depth==4) memorize(node)
to the beginning of the loop.
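A minimal Python sketch in the spirit of this answer (the helper name is hypothetical; it counts closed walks of length 4 from a start vertex via a depth-limited DFS, which subsume the simple cycles):

```python
def closed_walks_of_length(adj, start, length):
    """Count closed walks of the given length from `start` back to itself
    using a depth-limited DFS over an adjacency-list graph."""
    def dfs(node, depth):
        if depth == length:
            return 1 if node == start else 0
        return sum(dfs(nxt, depth + 1) for nxt in adj[node])
    return dfs(start, 0)

# Square graph 0-1-2-3-0: every vertex lies on a 4-cycle.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(closed_walks_of_length(adj, 0, 4))  # prints 8 (walks, not just cycles)
```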
| |
doc_23532860
|
I have three C projects in Azure Repos Git and I've configured a Linux self-hosted agent.
C_project_3 depends on .h and .a files of C_project_2, which in turn depends on .h and .a files of C_project_1.
C_project_1 needs, in order to build, a non-versioned file stored on the agent.
Is it possible to configure the YAML file of each project to start the build process in cascade, resolving the dependencies on the .h, .a and external files?
A: I have found the solution to the question "The C_project_1 needs to build a not versioned file stored on the agent".
The sources of project are loaded on the agent in folder _work/1/s.
The not versioned file must be stored there.
I found the answer here: https://learn.microsoft.com/en-us/azure/devops/pipelines/repos/multi-repo-checkout?view=azure-devops
In detail:
Single repository: Your source code is checked out into a directory called s located as a subfolder of (Agent.BuildDirectory).
If (Agent.BuildDirectory) is C:\agent_work\1 then your code is
checked out to C:\agent_work\1\s.
| |
doc_23532861
|
import javax.sound.sampled.*;
public class Main {
public static void main(String[] args) throws Exception {
AudioFormat af_src = AudioSystem.getAudioInputStream(new java.io.File("myfile.wav")).getFormat();
System.out.println(af_src.getEncoding() + " " + af_src.getSampleRate());
AudioFormat af_tgt = new AudioFormat(8000, 8, 1, true, false);
System.out.println(AudioSystem.isConversionSupported(af_tgt, af_src));
}
}
Output below,
PCM_SIGNED 44100.0
false
Could you help tell what kinds of data format conversion in JDK 6 are supported by Sun's default implementation according to your experience?
@EDIT
According to @Andrew Thompson's suggestion, I made the test below:
import javax.sound.sampled.*;
public class Main {
public static void main(String[] args) {
float sourceSampleRate = 44100;
int sourceSampleSizeInBits = 16;
int sourceChannels = 2;
AudioFormat sourceAudioFormat = new AudioFormat(sourceSampleRate, sourceSampleSizeInBits, sourceChannels, true, false);
float targetSampleRate = sourceSampleRate / 4;
int targetSampleSizeInBits = sourceSampleSizeInBits / 2;
int targetChannels = sourceChannels / 2;
AudioFormat targetAudioFormat = new AudioFormat(targetSampleRate, targetSampleSizeInBits, targetChannels, true, false);
System.out.println(AudioSystem.isConversionSupported(targetAudioFormat, sourceAudioFormat));
}
}
Output remains:
false
Any idea?
@EDIT 2
JavaSound implements virtually nothing for data format conversion. It falls on the API user to do it, since third-party plugins are hard to find.
A: A quick investigation of JavaSound conversion abilities suggests to me it is mostly unimplemented.
OTOH note that there is a way to convert:
*
*44100 sampling rate to 11025. Ignore 3 out of 4 samples.
*16 bit sampling to 8 bit. Ignore the 'low' byte.
*Stereo to mono. Average the value for the left & right channel.
Not perfect, but I think the sound would still be easily recognizable.
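The three manual conversions listed above can be illustrated on raw 16-bit PCM samples in Python (a rough sketch, not a JavaSound replacement; real resampling should low-pass filter first to avoid aliasing):

```python
# Interleaved stereo 16-bit samples: [L0, R0, L1, R1, ...]
samples = [1000, 2000, -3000, -1000, 500, 700, 32000, 31000,
           -200, -400, 100, 300, 900, 1100, -500, -700]

# Stereo -> mono: average the left/right pairs.
mono = [(samples[i] + samples[i + 1]) // 2 for i in range(0, len(samples), 2)]

# 44100 Hz -> 11025 Hz: keep every 4th sample (ignore 3 out of 4).
downsampled = mono[::4]

# 16-bit -> 8-bit: keep the high byte (arithmetic shift by 8).
eight_bit = [s >> 8 for s in downsampled]

print(mono)
print(downsampled)
print(eight_bit)
```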
| |
doc_23532862
|
Here is my code so you can see what i mean:
module.exports = {
'test-range-filter': function(browser)
{
// array to hold the input (checkbox) element ids
let checkboxElementIds = [];
browser.url(browser.launch_url + '/e-liquids/flavour-types/apple');
// loop the li elements
browser.elements('css selector','#narrow-by-list > div:nth-child(3) > div.filter-options-content > div > ol > li',function(foundElements){
// loop each of the li elements
foundElements.value.forEach(function(liElement){
// get the object keys
let liElementKeys = Object.keys(liElement);
// get the li element id using the keys[0]
let liElementId = liElement[liElementKeys[0]];
// use the li element id, to find the input (checkbox) inside of it
browser.elementIdElement(liElementId, 'css selector', 'input', function(inputElement) {
// store the element id in the array
checkboxElementIds.push(inputElement.elementId);
})
})
});
console.log("CHECKBOX ELEMENTS",checkboxElementIds)
}
};
Because the calls before console.log("CHECKBOX ELEMENTS", checkboxElementIds) run asynchronously, my checkboxElementIds array is always empty by the time that part of the code executes. The array does eventually get populated, which I can see by adding this:
setTimeout(function() {
console.log("CHECKBOX ELEMENTS", checkboxElementIds)
},5000);
Which gives this output:
CHECKBOX ELEMENTS [
'f37d1d8f-9575-4f97-b359-db7d357b4ff0',
'dcc155ca-19ea-4cf0-be6c-413302e55888',
'fe0c4ce8-0b07-4418-bd2f-3de720bcbfbb',
'ae77658f-be3c-47af-ad3d-23db4a327d3b',
'34001645-b33c-4a27-ba62-259c5561ee09',
'50241232-5799-4921-9e5d-5181e47e05f3',
'f26a2208-08b3-4ed4-a2ed-3d6d850d0553',
'4653ad6d-514a-4ca8-b55d-9a613d88a296',
'f4c820b6-8d60-4fd7-943c-465080feddcc'
]
I don't want to use a timer, as it's unknown how long the process will take each time, but I'm not sure how to make the code run synchronously so I can continue down and select one at random.
A: To answer my own question, in case it helps anyone, I figured out I can do it as follows:
'test-range-filter': async function(browser)
{
// array to hold the input (checkbox) element ids
let checkboxElementIds = [];
browser.url(browser.launch_url + '/e-liquids/flavour-types/apple');
let foundElements = await browser.elements('css selector','#narrow-by-list > div:nth-child(3) > div.filter-options-content > div > ol > li');
// loop each of the li elements
for (let e in foundElements){
// get the li element
let liElement = foundElements[e];
// get the object keys
let liElementKeys = Object.keys(liElement);
// get the li element id using the keys[0]
let liElementId = liElement[liElementKeys[0]];
// use the li element id, to find the input (checkbox) inside of it
let inputElement = await browser.elementIdElement(liElementId, 'css selector', 'input');
// get the object keys of the input element
let inputElementKeys = Object.keys(inputElement);
// get the input element id using keys[0]
let inputElementId = inputElement[inputElementKeys[0]];
// store the element id in the array
checkboxElementIds.push(inputElementId);
}
console.log("IDS",checkboxElementIds)
},
| |
doc_23532863
|
var a;
var b;
BABYLON.SceneLoader.ImportMesh("", "./public/Models/", "model1.stl", scene, function (newMeshes) {
// Set the target of the camera to the first imported mesh
camera.target = newMeshes[0];
a = newMeshes[0];
});
BABYLON.SceneLoader.ImportMesh("", "./public/Models/", "model2.stl", scene, function (newMeshes) {
// Set the target of the camera to the first imported mesh
//camera.target = newMeshes[0];
b = newMeshes[0];
});
var aCSG = BABYLON.CSG.FromMesh(a);
var bCSG = BABYLON.CSG.FromMesh(b);
"var a" and "var b" are undefined, and the debugger told me:
"BABYLON.CSG: Wrong Mesh type, must be BABYLON.Mesh"
Is there any method to convert an imported mesh to BABYLON.Mesh?
Thank you very much
A: This is because ImportMesh is async; you have to move your code into the callback section:
BABYLON.SceneLoader.ImportMesh("", "./public/Models/", "model1.stl", scene, function (newMeshes) {
// Set the target of the camera to the first imported mesh
camera.target = newMeshes[0];
a = newMeshes[0];
BABYLON.SceneLoader.ImportMesh("", "./public/Models/", "model2.stl", scene, function (newMeshes) {
// Set the target of the camera to the first imported mesh
//camera.target = newMeshes[0];
b = newMeshes[0];
var aCSG = BABYLON.CSG.FromMesh(a);
var bCSG = BABYLON.CSG.FromMesh(b);
});
});
| |
doc_23532864
|
I am trying to create a nonclustered index on my table. I want to check if there exists a way to create this without giving a name to the index.
For e.g.
CREATE TABLE #mytable (Date_ datetime NOT NULL, ID_ varchar(10) NOT NULL, Value_)
When I add a PK to this table, I do not specify the name of that key. For e.g.
ALTER TABLE #mytable ADD PRIMARY KEY CLUSTERED (Date_ ASC, ID_ ASC)
Is it possible to do something similar to create a nonclustered index without specifying a name?
For e.g.
ALTER TABLE #mytable ADD NONCLUSTERED INDEX (Date_, Value_) -- FAILS!!!
The only command I know is
CREATE NONCLUSTERED INDEX *keyname* ON #mytable (Date_, Value_)
A: After creating the temp table, execute dynamic SQL with a GUID as the index name:
DECLARE @NewId VARCHAR(64) = REPLACE(NEWID(),'-','');
EXEC('CREATE INDEX IX_'+@NewId+' ON #Table (ColA,ColB) INCLUDE (ColZ)');
A: No, it is not possible to create a non-clustered index without a name, the syntax is quite clear:
CREATE [ UNIQUE ] [ CLUSTERED | NONCLUSTERED ] INDEX index_name
index_name
Is the name of the index. Index names must be unique within a table or
view but do not have to be unique within a database. Index names must
follow the rules of identifiers.
CREATE INDEX (Transact-SQL)
The database object name is referred to as its identifier. Everything
in Microsoft SQL Server can have an identifier. Servers, databases,
and database objects, such as tables, views, columns, indexes,
triggers, procedures, constraints, and rules, can have identifiers.
Identifiers are required for most objects, but are optional for some
objects such as constraints.
Database Identifiers
| |
doc_23532865
|
However, I don't understand what is preventing the compilers (clang as well as gcc) from inlining the sqrt-function without the -ffast-math.
Compiling
#include <cmath>
double my_sqrt(double val){
return sqrt(val);
}
with clang and -O2 yields (similar with gcc, even if gcc's result doesn't look optimal to me):
my_sqrt(double): # @my_sqrt(double)
sqrtsd %xmm0, %xmm1
ucomisd %xmm1, %xmm1
jp .LBB0_2
movapd %xmm1, %xmm0
retq
.LBB0_2:
jmp sqrt # TAILCALL
As I understand it, a built-in sqrt CPU instruction (sqrtsd) is used, and if the result is NaN (the PF flag is set by ucomisd) the library version is called.
Maybe I'm mistaken, but NaN is the result if the argument is negative, so what could the library sqrt function do other than return NaN? Why call it then?
| |
doc_23532866
|
struct Caste {
var arr = [1,2]
}
let siri = [Caste(), Caste(), Caste()]
Now I want a single array in which all elements of each objects array consist as shown below:
let re1 = siri.compactMap { $0.arr }
print("COMPACT: \(re1)")
let re2 = siri.flatMap { $0.arr }
print("FLAT: \(re2)")
Result:
COMPACT: [[1, 2], [1, 2], [1, 2]]
FLAT: [1, 2, 1, 2, 1, 2]
As flatMap is deprecated in Swift 4.1 I tried with compactMap but it is giving array of array not a single array.
How can I achieve via compactMap as I'm getting via flatMap.
A: flatMap was split into flatMap and compactMap. flatMap still flattens a sequence of sequences (only the nil-removing overload was deprecated), while the purpose of compactMap is to take an array of [T?] and remove all nil elements, making an array of [T]. This has a count <= the original count, depending on the number of nils.
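The distinction can be mirrored in Python terms (a loose analogy only, not Swift):

```python
nested = [[1, 2], [1, 2], [1, 2]]
with_nils = [1, None, 2, None, 3]

# Swift's flatMap (flattening): concatenate the inner arrays.
flattened = [x for sub in nested for x in sub]

# Swift's compactMap: drop the nil (None) entries.
compacted = [x for x in with_nils if x is not None]

print(flattened)  # [1, 2, 1, 2, 1, 2]
print(compacted)  # [1, 2, 3]
```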
| |
doc_23532867
|
var canvas = document.getElementById('myCanvas');
var context = canvas.getContext('2d');
var centerX = canvas.width / 2;
var centerY = canvas.height / 2;
var radius = 70;
context.beginPath();
context.arc(centerX, centerY, radius, 0, 2 * Math.PI, false);
context.fillStyle = 'pink';
context.fill();
context.lineWidth = 5;
context.strokeStyle = '#f0505f';
context.stroke();
body {
margin: 0px;
padding: 0px;
}
<canvas id="myCanvas" width="578" height="200"></canvas>
A: A lens flare effect overlays many smaller effects on top of your image to create the lens flare.
Here's a tutorial of which effects you will need for your lens-flare effect:
http://library.creativecow.net/articles/mylenium/lens_flare.php
And here are the HTML5 canvas techniques needed to create each effect.
I've been wanting to do a lens flare effect, but haven't had time to accomplish it.
So give it a go...if you have difficulties just post a question and I'd be glad to help.
Good luck with your project!
These are radial gradient fills (with & without a blur)
Html5 canvas techniques needed:
*
*createRadialGradient
*shadowBlur
These are stars (thick and thin) with radial gradient fills & blur
Html5 canvas techniques needed:
*
*star path created with a regular polygon
*createRadialGradient
*shadowBlur
This is a radial gradient fill with a blur that has been "flattened" using Y-scaling
Html5 canvas techniques needed:
*
*createRadialGradient
*shadowBlur
*scale transform (scaling Y will "flatten" the circle into a sliver)
This is an arc
Html5 canvas techniques needed:
*
*arc path command
This is an arc with a gradient that runs with the stroke
Html5 canvas techniques needed:
*
*arc path command
*image slicing (example: Gradient Stroke Along Curve in Canvas)
This is a series of rectangles drawn along a circle
Html5 canvas techniques needed:
*
*fillRect
*trigonometry (calculate x/y points along a circle)
*trigonometry (extend the radius of a circle)
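The "calculate x/y points along a circle" step above reduces to basic trigonometry; a sketch (the helper name is mine, not part of the canvas API):

```python
import math

def points_on_circle(cx, cy, radius, count):
    """Return `count` evenly spaced (x, y) points along a circle."""
    return [
        (cx + radius * math.cos(2 * math.pi * i / count),
         cy + radius * math.sin(2 * math.pi * i / count))
        for i in range(count)
    ]

# Four points on a circle of radius 10 centered at the origin.
for x, y in points_on_circle(0, 0, 10, 4):
    print(round(x, 6), round(y, 6))
```

Extending the radius before computing the point gives the "rectangles drawn along a circle" placement.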
| |
doc_23532868
|
But I have to wait until it finishes.
Is there any way to add a progress bar to the code below, so that it shows the progress of the extraction?
import zipfile
with zipfile.ZipFile('/content/drive/My Drive/Thesis/Dataset/dataset.zip','r') as zip_file:
zip_file.extractall('df')
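One way to get a progress indicator is to extract the archive member by member instead of calling extractall in one go (a sketch; pass your own archive path and destination):

```python
import zipfile

def extract_with_progress(zip_path, dest):
    """Extract a zip archive one member at a time, printing progress."""
    with zipfile.ZipFile(zip_path, 'r') as zf:
        members = zf.namelist()
        total = len(members)
        for i, member in enumerate(members, start=1):
            zf.extract(member, dest)
            print(f"\rExtracting: {i}/{total} ({100 * i // total}%)", end="")
    print()
```

With tqdm installed, the same loop can simply become `for member in tqdm(members): zf.extract(member, dest)`.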
| |
doc_23532869
|
model_str = """
log(units)~
log(price_usd) + (log(price_usd)|sku/country)
"""
model = lmerTest.lmer(model_str, data = df)
In this question Replace lmer coefficients in R the same question is solved, but in this case I'm using rpy2. So, I would like to know how to change the coefficients of a lmer model when using rpy2.
In order to change the coefficients with R:
library(lme4)
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
summary(fm1)$coef
# Estimate Std. Error t value
#(Intercept) 251.40510 6.823773 36.842535
#Days 10.46729 1.545958 6.770744
fm1@beta[names(fixef(fm1)) == "Days"] <- 0
summary(fm1)$coef
# Estimate Std. Error t value
#(Intercept) 251.4051 6.823773 36.84253
#Days 0.0000 1.545958 0.00000
A: lmer returns a so-called RS4 class object, and its class "properties" (in Python terms) are called "slots"; you access them with the do_slot and do_slot_assign methods.
Here is a small example based on lme4's example:
from rpy2.robjects.packages import importr, isinstalled, PackageData
from rpy2.rinterface import FloatSexpVector
utils = importr('utils')
lme4 = importr('lme4')
sleepstudy = next(PackageData('lme4').fetch("sleepstudy").values())
formula = "Reaction ~ Days + (Days | Subject)"
fm1 = lme4.lmer(formula=formula, data=sleepstudy)
# get names of the "slots"
list(fm1.slotnames())
# prints: ['resp', 'Gp', 'call', 'frame', 'flist', 'cnms', 'lower', 'theta', 'beta', 'u', 'devcomp', 'pp', 'optinfo']
# get value of a slot
fm1.do_slot('beta')
# prints: [251.405105, 10.467286]
# to set a slot value (beta to a zero vector)
fm1.do_slot_assign('beta',FloatSexpVector([0,0]))
# check the new value
fm1.do_slot('beta')
# prints: [251.405105, 10.467286]
| |
doc_23532870
|
I took a copy of the source code from the server, updated it and compiled it locally. My local machine is running Windows 7 x64. I copied the .exe file back to the server and when I tried to run it I received runtime error 429: ActiveX component can't create object. I know this error occurs when the application tries to open a connection to the database using RDO.
I can run the .exe fine from my local machine and my virtual pc which is running windows xp.
This application was previously working on the server and the changes I made were to the contents of a file it outputs so no new references would be needed.
These are the lines it is falling over on:
rdoEnvironments(0).CursorDriver = rdUseNone
Set conDB = rdoEnvironments(0).OpenConnection("MRA", rdDriverNoPrompt, True)
A: I recently resolved the ActiveX component can't create object error as follows:
*
*Open your .vbp file for your VB6 project in a text editor.
*At the top of the file will be all the activex objects the project uses. In my case, these were:
Reference=*\G{00020430-0000-0000-C000-000000000046}#2.0#0#C:\WINDOWS\system32\stdole2.tlb#OLE Automation
Object={22D6F304-B0F6-11D0-94AB-0080C74C7E95}#1.0#0; msdxm.ocx
Reference=*\G{3F4DACA7-160D-11D2-A8E9-00104B365C9F}#5.5#0#C:\WINDOWS\system32\vbscript.dll\3#Microsoft VBScript Regular Expressions 5.5
Reference=*\G{3D0758FA-4171-11D0-A747-00A0C91110C3}#a.0#0#C:\WINDOWS\system32\dbgwproc.dll#Debug Object for AddressOf Subclassing
Object={248DD890-BB45-11CF-9ABC-0080C7E7B78D}#1.0#0; MSWINSCK.OCX
Object={831FDD16-0C5C-11D2-A9FC-0000F8754DA1}#2.0#0; mscomctl.ocx
Object={F9043C88-F6F2-101A-A3C9-08002B2F49FB}#1.2#0; COMDLG32.OCX
Object={3B7C8863-D78F-101B-B9B5-04021C009402}#1.2#0; RICHTX32.OCX
*Open regedit app.
*Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Classes
*Ctrl+F, then search for each class id above, such as {00020430-0000-0000-C000-000000000046}
*Expect to find Reference= entries in HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Interface and Object= entries in HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID
*After each entry you'll notice there's a version like #1.2. In my case I found the same version number listed in a Version key near where I found a match in the registry. If the versions don't match, it may be worth registering the OCX or DLL file of the correct version.
*After finding each entry, you can click in the regedit tree and hit left arrow till you get back to the Classes branch, then search for the next entry.
*Most importantly, if you don't find an entry for the class id you searched for, that is most likely causing the ActiveX component can't create object error.
In my case, the missing class was Reference=*\G{3D0758FA-4171-11D0-A747-00A0C91110C3}#a.0#0#C:\WINDOWS\system32\dbgwproc.dll#Debug Object for AddressOf Subclassing. This is a special class used when running VB6 apps in the debugger but it should not be distributed with the app or referenced in apps that are distributed. I got VB to stop referencing dbgwproc.dll by opening Project > <app name> Properties... > 'Make' tab and deleting DEBUGWINDOWPROC = 1 from Conditional Compilation Arguments:. After rebuilding, no more error occurred.
A: You might be missing some DLLs for RDO to work on the server, try this:
http://support.microsoft.com/kb/195474 - How To Determine RDO Files Needed for Distribution of App
| |
doc_23532871
|
For example:
(void)insertInTableOnAttributes:(id)fieldsNames, ... Values:(id)fieldsValues, ...;
sadly, a compiler error appears after the first (...) saying:
"Expected ':' after method prototype"
And in the implementation it says:
"Expected method body" in the same position (just after the first ...).
PD: I'm using Xcode 4.2.1.
A: You can't do that. How would the generated code possibly know where one argument list ends and the next begins? Try to think of the C equivalent
void insertInTableOnAtributes(id fieldNames, ..., id fieldValues, ...);
The compiler will reject that for the same reason.
You have two reasonable options. The first is to provide a method that takes NSArrays instead.
- (void)insertInTableOnAttributes:(NSArray *)fieldNames values:(NSArray *)fieldValues;
The second is to use one varargs with a name-value pair, similar to +[NSDictionary dictionaryWithObjectsAndKeys:]
- (void)insertInTableOnAttributes:(id)fieldName, ...;
This one would be used like
[obj insertInTableOnAttributes:@"firstName", @"firstValue", @"secondName", @"secondValue", nil];
The C analogy is actually quite accurate. An Obj-C method is basically syntactic sugar on top of a C method, so
- (void)foo:(int)x bar:(NSString *)y;
is backed by a C method that looks like
void foobar(id self, SEL _cmd, int x, NSString *y);
except it doesn't actually have a real name. This C function is called the IMP of the method, and you can retrieve it using obj-c runtime methods.
In your case of having arguments after a varargs, your
- (void)someMethodWithArgs:(id)anArg, ... andMore:(id)somethingElse;
would be backed by an IMP that looks like
void someMethodWithArgsAndMore(id anArg, ..., id somethingElse);
and since you cannot have any arguments after a varargs, this simply won't work.
A: - (void)insertInTableOnAttributes:(NSArray *)fieldsNames values:(NSArray *)fieldsValues;
// Use
[self insertInTableOnAttributes:[NSArray arrayWithObjects:@"name", nil] values:[NSArray arrayWithObjects:@"value", nil]];
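The nil-terminated name/value-pair convention can be mimicked in Python with a plain varargs function that splits alternating arguments (an analogy only; Python needs no sentinel because `*args` carries its own length, and the function name is hypothetical):

```python
def insert_on_attributes(*pairs):
    """Accept alternating name, value arguments, like the Obj-C idiom
    [obj insertInTableOnAttributes:@"firstName", @"firstValue", ..., nil]."""
    if len(pairs) % 2 != 0:
        raise ValueError("expected alternating name/value pairs")
    return dict(zip(pairs[::2], pairs[1::2]))

print(insert_on_attributes("firstName", "firstValue", "secondName", "secondValue"))
```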
| |
doc_23532872
|
I have a separate google SpreadsheetB that uses =IMPORTRANGE to import the data from SpreadsheetA and create a pivot table and corresponding chart (both located on SpreadsheetB).
If I were to manually adjust any data in SpreadsheetA (e.g., alter value of any cell, add a value to an empty cell, etc), then the data in SpreadsheetB—with its corresponding pivot table and chart—update dynamically with the new data from SpreadsheetA.
However, when SpreadsheetA is updated with new data programmatically via Python, IMPORTRANGE in SpreadsheetB does not capture the new data.
Any ideas as to why this happens and how I might be able to fix?
A: Both Sheet A and B show the same number of rows. I am a bit confused with your IMPORTRANGE() formula though, why the ampersand?
=IMPORTRANGE("https://docs.google.com/spreadsheets/d/16DyWC8rsQB1ThpLiQh0p5xH9CYK2cPqbPH547ybw2Fo/edit#gid=1875728384",""&"TestgAPI!A:J")
I changed to this:
=IMPORTRANGE("https://docs.google.com/spreadsheets/d/16DyWC8rsQB1ThpLiQh0p5xH9CYK2cPqbPH547ybw2Fo/edit#gid=1875728384","TestgAPI!A:J")
A: Although probably not ideal, my solution was to use gspread to add a new worksheet to SpreadsheetA, which somehow manages to kickstart IMPORTRANGE() in SpreadsheetB.
I would still love to see a cleaner solution if anyone knows of one, but this has continued to work since I implemented it a week ago.
| |
doc_23532873
|
Here is some code
#include <stdio.h>
void func1(double x);
void below_five(void);
void above_five(void);
void other(void);
void sort(double *p[], int n);
void print_doubles(double *p[], int n);
int main(void){
double *numbers[9];
int nbr;
printf("\nEnter 10 doubles that are less than 5 or greater than 5, type 0 to exit");
for(int i = 0; i < 10 ; i++)
{
scanf("%d", &nbr);
func1(nbr);
numbers[i] = nbr;
if(nbr == 0)
break;
}
sort(numbers, 10);
print_doubles(numbers, 10);
return 0;
}
void func1(double val)
{
double (*ptr)(void);
if(val <= 5.00){
ptr = below_five;
}else if((val > 5.00) && (val <= 10.00)){
ptr = above_five;
}else
ptr = other;
}
void below_five(void){
puts("You entered a number below or equal to five");
}
void above_five(void){
puts("You entered a number above five");
}
void other(void){
puts("You entered a number well above five.");
}
void sort(double *p[], int n)
{
double *tmp;
for(int i = 0; i < n; i++)
{
if(p[i] > p[i+1]){
tmp = p[i];
p[i] = p[i+1];
p[i + 1] = tmp;
}
}
}
void print_doubles(double *p[], int n)
{
int count;
for(count = 0; count < n; count++)
printf("%d\n", p[count]);
}
Like I said, what I expect it to do is collect doubles via scanf and then print the numbers after sorting them, but the for loop seems to collect doubles forever in this case.
What have I done wrong, exactly?
A: See my comments in your updated code.
There are other modifications required, but the minimum updates needed to work your code is below
#include <stdio.h>
void func1(double x);
void below_five(void);
void above_five(void);
void other(void);
void sort(double p[], int n); /* Simply use a array notation,
arrays passed to functions decays to pointer */
void print_doubles(double p[], int n); /* Simply use a array notation,
arrays passed to functions decays to pointer */
int main(void){
double numbers[10];
double nbr; // Change type to double, as you're reading doubles
printf("\nEnter 10 doubles that are less than 5 or greater than 5, type 0 to exit\n");
for(int i = 0; i < 10 ; i++)
{
scanf("%lf", &nbr); // Use correct format specifier to read doubles
func1(nbr);
numbers[i] = nbr;
if(nbr == 0)
break;
}
sort(numbers, 10);
print_doubles(numbers, 10);
return 0;
}
void func1(double val)
{
double (*ptr)(void);
if(val <= 5.00){
ptr = below_five;
}else if((val > 5.00) && (val <= 10.00)){
ptr = above_five;
}else
ptr = other;
/* Why you set the pointer to function if you don't call it,
so call it here*/
(*ptr)();
}
void below_five(void){
puts("You entered a number below or equal to five");
}
void above_five(void){
puts("You entered a number above five");
}
void other(void){
puts("You entered a number well above five.");
}
void sort(double p[], int n) /* Your sorting routine is wrong ,
see the modified code */
{
double tmp;
for(int j = 0; j < n-1; j++)
for(int i = 0; i < n-j-1; i++)
{
if(p[i] > p[i+1]){
tmp = p[i];
p[i] = p[i+1];
p[i + 1] = tmp;
}
}
}
void print_doubles(double p[], int n)
{
int count;
for(count = 0; count < n; count++)
printf("%lf\n", p[count]); // Use correct format specifier
}
Demo Here
| |
doc_23532874
|
What have I tried?
Adding the image as the background of a div and also as an img tag, and applying a gradient on the div, but no luck.
How can this be managed? Please guide.
Thanks
Thanks
A: Can't you just save the text of your logo as a .png with a transparent background? If you've added the gradient to your body tag, it will change when the browser window expands and contracts.
A: You have to save your logo image with a transparent background: open the PSD file of the logo, disable the visibility of the background layer, and then save it as a PNG image.
| |
doc_23532875
|
retrieveToken(): JwtToken {
return JSON.parse(localStorage.getItem(_key));
}
I need to map the data stored in local storage to my interface
export interface JwtToken {
access_token: string;
expires_in: number;
token_type: string;
refresh_token: string;
}
I've tried:
return JSON.parse(localStorage.getItem(_key) as JwtToken);
return JSON.parse<JwtToken>(localStorage.getItem(_key));
Any help would be appreciated
A: You're performing your cast on the string value retrieved from storage instead of casting the result of JSON.parse().
It should be:
return JSON.parse(localStorage.getItem(_key)) as JwtToken;
To get rid of the error caused by the fact that localStorage.getItem() potentially returns null, you can add a non-null assertion to convince the compiler that the value for _key will definitely be present in localStorage:
return JSON.parse(localStorage.getItem(_key)!) as JwtToken;
Or, better yet, perform an explicit null check:
const jwtTokenString = localStorage.getItem(_key);
if (jwtTokenString) {
return JSON.parse(jwtTokenString) as JwtToken;
}
| |
doc_23532876
|
select * from table
The output looks like so.
| Name | HostName | Week |
| ------------- |------------| -------|
| java | Hosta | 1 |
| java | Hostb | 1 |
| java | Hostb | 2 |
| Ansible | Hosta | 1 |
| Ansible | Hosta | 2 |
| Ansible | Hosta | 3 |
| Ansible | Hostb | 3 |
My aim is to generate an output that pivots the weeks into column tables, with the values being a count of Hosts for a given vulnerability in that week.
| Vulnerability | Week 1 | Week 2 | Week 3 |
| ------------- |--------| -------| -------|
| java | 2 | 1 | 0 |
| Ansible | 1 | 1 | 2 |
My initial attempt was to do
select * from table
PIVOT(
count(HostName)
For week in ([1],[2],[3])
) AS OUT
But the output was the correct layout, but incorrect data as if it was only counting the first occurrence.
Is an amendment to the count term required or is my approach the wrong one?
A: Conditional aggregation is simpler:
select vulnerability,
sum(case when week = 1 then 1 else 0 end) as week_1,
sum(case when week = 2 then 1 else 0 end) as week_2,
sum(case when week = 3 then 1 else 0 end) as week_3
from t
group by vulnerability;
Not only is pivot bespoke syntax, but it is also sensitive to which columns are in the table. Extra columns are interpreted as "group by" criteria, affecting the results of the query.
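The conditional-aggregation query can be exercised end to end with Python's built-in sqlite3, using the sample data from the question (a sketch; table and column names follow the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (name TEXT, hostname TEXT, week INTEGER);
    INSERT INTO t VALUES
        ('java', 'Hosta', 1), ('java', 'Hostb', 1), ('java', 'Hostb', 2),
        ('Ansible', 'Hosta', 1), ('Ansible', 'Hosta', 2),
        ('Ansible', 'Hosta', 3), ('Ansible', 'Hostb', 3);
""")
rows = conn.execute("""
    SELECT name,
           SUM(CASE WHEN week = 1 THEN 1 ELSE 0 END) AS week_1,
           SUM(CASE WHEN week = 2 THEN 1 ELSE 0 END) AS week_2,
           SUM(CASE WHEN week = 3 THEN 1 ELSE 0 END) AS week_3
    FROM t
    GROUP BY name
""").fetchall()
for row in rows:
    print(row)
```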
| |
doc_23532877
|
Couldn't find a Data Structure set for table/row "pages:x". Please
select a Data Structure and Template Object first.
If I looked at a template mapping, I got the further message that there is no mapping available. In the tx_templavoila_tmplobj table, the mapping is stored as a BLOB in the templatemapping column. After converting to UTF-8, everything is gone; because it's binary, I can't access and convert it in an easy way.
How can I keep the mapping? I don't want to map everything anew. What can I do?
Here two solutions are proposed, but I want to know if there are better ones. With the solution from Michael, do I also have to map everything again?
What is the fastest way to restore the mapping?
A: I can't say if you'll be able to recover the data now that it's been converted, but if you're willing to run your conversion script I have had some success with the following approach:
*
*Unserialize the data in the templatemapping field of the tx_templavoila_tmplobj table.
*Convert the unserialized data array to your target encoding (there is a helper method t3lib_cs::convArray which you might be able to use for this purpose).
*Serialize the converted data and save it back to the templatemapping field.
A: Fastest way: just change the field manually back to mediumtext. All mappings should be fine again. I know, it's quick and dirty, but worked for me...
| |
doc_23532878
|
I have created a jsfiddle to illustrate the problem:
https://jsfiddle.net/sregorcinimod/7x4vuwLr/8/
When you click on the blue rectangle a popover should appear.
The problem is that I have position:absolute on the svg:
#spacing svg{
max-width:100%;
position:absolute; /* this is the line that is causing issues */
bottom:0px;
}
If I remove that line the popover appears but I need that in for other things.
The constraints are:
*
*I need to have position:absolute on the svg due to other more complex things that aren't in the jsfiddle i.e. responsive positioning of multiple layered svgs.
*I need the trigger to be focus and not click because I need the popover to be dismissed when the user either clicks on the x in the title or anywhere in the browser window.
Things I have tried:
*
*wrapping the svg in a div.
*changing the container.
A: Add a tabindex attribute to the rect e.g. tabindex="0"
| |
doc_23532879
|
<h:inputText id="nom" value="#{InscriptionBean.nom}" placeholder="test">
</h:inputText>
but that's not working. I also tried
<h:inputText id="nom" value="#{InscriptionBean.nom}" h:placeholder="test">
</h:inputText>
Hope you can help me.
A: Use the following attribute to add a placeholder to your tag:
p:placeholder="test"
But what are p and h?
You need to declare the right tag library at the start of your view file:
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:h="http://xmlns.jcp.org/jsf/html"
xmlns:f="http://xmlns.jcp.org/jsf/core"
xmlns:p="http://xmlns.jcp.org/jsf/passthrough">
The generated output would be:
<input type="text" id="nom" name="nom" placeholder="test">
But double-check that you are using JSF 2.2 or later (this may not work in lower versions)
A: Add xmlns to page header :
xmlns:t="http://xmlns.jcp.org/jsf/passthrough"
and use it like following :
<h:inputText id="nom" value="#{InscriptionBean.nom}" t:placeholder="test" />
| |
doc_23532880
|
Sample session:
(gdb) p xtalFreq
Cannot access memory at address 0xffd3dd38
(gdb) p *0xffd3dd38
$9 = 27000
A: Got the answer - this was a bug in GDB 6.3 itself, which got fixed in the latest version (GDB 7.1)
| |
doc_23532881
|
Versions Tried 2.1.5 and 2.0.8
Exception 1 and Exception 2
When looking at the code, we found that in ColumnFamily.java, cellInternal is of type BufferedCounterUpdateCell, whose diff operation is not supported. The reconcile method is also called on the wrong type.
Just to confirm the issue, we did use the sstable2json utility to inspect the data. And as we guessed
Cells of a properly dumped sstable had valid counters
(":1:c05:desktop/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/","00010000db5ad3b00d4711e5b52dab7bf928868d00000000000000590000000000000059",1433835586985,"c",-9223372036854775808),
(":1:c05:*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/","00010000db5ad3b00d4711e5b52dab7bf928868d00000000000000590000000000000059",1433835586985,"c",-9223372036854775808)
whereas the faulty ones had buffered counters and hence the sstable2json failed
("*:0:c01:*/direct/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/","0000000000000001",1433924262793),
("*:0:c01:*/*/singapore/*/*/*/*/*/*/*/*/*/*/*/*/*/*/*/","0000000000000002",1433924262793),
Basically, sstable2json doesn't support dumping BufferedCounterUpdateCells; hence it assumes such cells are of the normal type and dumps them as such.
It is evident from the error logs and sstable2json output that, instead of dumping CounterColumns, CQLSSTableWriter dumped CounterUpdateColumns (counter types with a different mask), which resulted in an error when Cassandra tried to load SSTables with such columns.
We see this issue happening every time an SSTable is created via CQLSSTableWriter.
While going through issues reported on the same note, we stumbled on this. It is something related to switching CFs for writing; we guess the problem could be along the same lines. Any inputs are welcome.
----Update
On further debugging, we figured out that CQLSSTableWriter fails to convert CounterUpdateColumns to CounterColumns, as is usually done for all other row mutations. This might need a patch.
A: Counters are not supported on CQLSStableWriter. See https://issues.apache.org/jira/browse/CASSANDRA-10258 for more information.
| |
doc_23532882
|
However, how do I reverse this reactivity chain, so that the value presented in input1 changes to reflect what was entered into input2? In the MWE below, I tried doing this in the line marked << comment/uncomment this line to see, but when that line is uncommented (allowing my attempted solution to run), the 2nd input box is hidden even though the "Show" checkbox for that 2nd input remains checked. That 2nd input box should continue showing until unchecked.
See image below too.
MWE code:
library(shiny)
library(shinyjs)
library(shinyMatrix)
### Checkbox matrix ###
f <- function(action,i){as.character(checkboxInput(paste0(action,i),label=NULL))}
actions <- c("show", "reset")
tbl <- t(outer(actions, c(1,2), FUN = Vectorize(f)))
colnames(tbl) <- c("Show", "Reset")
rownames(tbl) <- c("2nd input", "3rd input")
### Checkbox matrix ###
xDflt <- 5
userInput <- function(inputId,x,y){
matrixInput(inputId,
value = matrix(c(x), 1, 1, dimnames = list(c(y),NULL)),
rows = list(extend = FALSE, names = TRUE),
cols = list(extend = FALSE, names = FALSE, editableNames = FALSE),
class = "numeric")}
ui <- fluidPage(
tags$head(
tags$style(HTML(
"td .checkbox {margin-top: 0; margin-bottom: 0;}
td .form-group {margin-bottom: 0;}"
))
),
br(),
sidebarLayout(
sidebarPanel(
uiOutput("panel"),
hidden(uiOutput("secondInput")),
),
mainPanel(plotOutput("plot1"))
)
)
server <- function(input, output){
input1 <- reactive(input$input1)
input2 <- reactive(input$input2)
output$panel <- renderUI({
tagList(
useShinyjs(),
userInput("input1",
# if(isTruthy(input2())){input$input2[1,1]} else # << comment/uncomment this line to see
{xDflt},
"1st input"),
strong(helpText("Generate curves (Y|X):")),
tableOutput("checkboxes")
)
})
### Begin checkbox matrix ###
output[["checkboxes"]] <-
renderTable({tbl},
rownames = TRUE, align = "c",
sanitize.text.function = function(x) x
)
observeEvent(input[["show1"]], {
if(input[["show1"]] ){
shinyjs::show("secondInput")
} else {
shinyjs::hide("secondInput")
}
})
### End checkbox matrix ###
output$secondInput <- renderUI({
req(input1())
userInput("input2",input$input1[1,1],"2nd input")
})
outputOptions(output,"secondInput",suspendWhenHidden = FALSE)
output$plot1 <-renderPlot({
req(input2())
plot(rep(input2(),times=5))
})
}
shinyApp(ui, server)
A: Your checkboxes become unchecked because you render and re-render them along with the output$panel. You forgot to add uiOutput("checkboxes") to the UI. Here are the steps:
*
*Remove tableOutput("checkboxes") from output$panel <- renderUI().
*Add uiOutput("checkboxes"), to the UI.
*Uncomment your line if(isTruthy(input2() ....
That should do the trick.
Some remarks about the code. I find it unnecessarily complex. E.g. you use c(x) in the userInput function which is unnecessary. You wrap input$input1 in a reactive expression which does not add anything (except readability maybe) but adds complexity to the reactivity chain. There are several ways to make the code easier to understand and read, which would probably help avoid simple mistakes like a missing uiOutput.
| |
doc_23532883
|
Screenshot: MYSQL command Line
Screenshot: WebPlatform Message
I've tried all the suggested workarounds, but to no avail:
*
*deleting mysql_pwd from registry (HKCU/Software/Microsoft/WebPlatformInstaller/mysql_pwd)
*installing latest mysql connector (6.9.9)
The other suggested fix is to delete the MySQL folder on my drive, which I don't want to do, as I have existing data in my current database (MySQL 5.7).
A: Have you created a non-root user and tested with it? I would do that anyway for INFOSEC reasons. My two cents: get the site onto a Linux VM. https://roots.io/trellis
| |
doc_23532884
|
chart.Series.Clear();
chart.ChartAreas[0].AxisX.Title = "Version";
chart.ChartAreas[0].AxisX.TitleFont = new System.Drawing.Font("Arial", 12, FontStyle.Regular);
chart.ChartAreas[0].AxisY.Title = "Time";
chart.ChartAreas[0].AxisY.TitleFont = new System.Drawing.Font("Arial", 12, FontStyle.Regular);
Series MIN = chart.Series.Add("Minimum");
MIN.Points.DataBindXY(listVersion, MIN_list[j]);
MIN.ChartType = SeriesChartType.Line;
MIN.Color = Color.Red;
MIN.BorderWidth = 3;
I am looking to achieve something like this.
If it is possible, how can I do it?
Thank you.
A: Yes you can do that:
Here are the steps I took:
First we disable the original Legend as it can't be manipulated the way we need to..:
chart1.Legends[0].Enabled = false;
Now we create a new one and a shortcut reference to it:
chart1.Legends.Add(new Legend("customLegend"));
Legend L = chart1.Legends[1];
Next we do some positioning:
L.DockedToChartArea = chart1.ChartAreas[0].Name; // the ca it refers to
L.Docking = Docking.Bottom;
L.IsDockedInsideChartArea = false;
L.Alignment = StringAlignment.Center;
Now we want to fill in one line for headers and one line per series.
I use a common function for both and pass in a flag to indicate whether the headers (x-values) or the cell data (y-values) should be filled in. Here is how I call the function:
addValuesToLegend(L, chart1.Series[0], false);
foreach (Series s in chart1.Series) addValuesToLegend(L, s, true);
Note that for this to work we need a few preparations in our Series:
*
*We need to set the Series.Colors explicitly or else we can't refer to them.
*I have added a format string to the Tag of each series; but maybe you find a better solution that avoids hard-coding the format for the header..
So here is the function that does all the filling and some styling:
void addValuesToLegend(Legend L, Series S, bool addYValues)
{
// create a new row for the legend
LegendItem newItem = new LegendItem();
// if the series has a markerstyle we show it:
newItem.MarkerStyle = S.MarkerStyle ;
newItem.MarkerColor = S.Color;
newItem.MarkerSize *= 2; // bump up the size
if (S.MarkerStyle == MarkerStyle.None)
{
// no markerstyle so we just show a colored rectangle
// you could add code to show a line for other chart types..
newItem.ImageStyle = LegendImageStyle.Rectangle;
newItem.BorderColor = Color.Transparent;
newItem.Color = S.Color;
}
else newItem.ImageStyle = LegendImageStyle.Marker;
// the rowheader shows the marker or the series color
newItem.Cells.Add(LegendCellType.SeriesSymbol, "", ContentAlignment.MiddleCenter);
// add series name
newItem.Cells.Add(LegendCellType.Text, addYValues ? S.Name : "",
ContentAlignment.MiddleLeft);
// combine the 1st two cells:
newItem.Cells[1].CellSpan = 2;
// we hide the first cell of the header row
if (!addYValues)
{
newItem.ImageStyle = LegendImageStyle.Line;
newItem.Color = Color.Transparent;
newItem.Cells[0].Tag = "*"; // we mark the 1st cell for not painting it
}
// now we loop over the points:
foreach (DataPoint dp in S.Points)
{
// we format the y-value
string t = dp.YValues[0].ToString(S.Tag.ToString());
// or maybe the x-value. it is a datatime so we need to convert it!
// note the escaping to work around my european locale!
if (!addYValues) t = DateTime.FromOADate(dp.XValue).ToString("M\\/d\\/yyyy");
newItem.Cells.Add(LegendCellType.Text, t, ContentAlignment.MiddleCenter);
}
// we can create some white space around the data:
foreach (var cell in newItem.Cells) cell.Margins = new Margins(25, 20, 25, 20);
// finally add the row of cells:
L.CustomItems.Add(newItem);
}
To draw the borders around the cells of our legend table we need to code the PrePaint event:
private void chart1_PrePaint(object sender, ChartPaintEventArgs e)
{
LegendCell cell = e.ChartElement as LegendCell;
if (cell != null && cell.Tag == null)
{
RectangleF r = e.ChartGraphics.GetAbsoluteRectangle(e.Position.ToRectangleF());
e.ChartGraphics.Graphics.DrawRectangle(Pens.DimGray,Rectangle.Round(r));
// Let's hide the left border when there is a cell span!
if (cell.CellSpan != 1)
e.ChartGraphics.Graphics.DrawLine(Pens.White,
r.Left, r.Top+1, r.Left, r.Bottom-1);
}
}
You can add more styling although I'm not sure if you can match the example perfectly..
| |
doc_23532885
|
To trigger the archive odds page in the Developer Tools, I need to hover over the odds.
Code
url = "https://www.betexplorer.com/archive-odds/4l4ubxv464x0xc78lr/14/"
headers = {
"Referer": "https://www.betexplorer.com",
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36'
}
Json = requests.get(url, headers=headers).json()
A: As the site is loaded by JavaScript, requests doesn't work. I have used Selenium to load the page and extract the complete source code after everything is loaded.
Then I used BeautifulSoup to create a soup object to get the required data.
From the source code you can see that the data-bid attributes of the <tr> elements are what get passed to fetch the odds data.
I extracted all the data-bid values and passed them, one by one, to the URL you've provided at the very end of your question.
This code will get all the odds data in JSON format
import time
from bs4 import BeautifulSoup
import requests
from selenium import webdriver
base_url = 'https://www.betexplorer.com/soccer/estonia/esiliiga/elva-flora-tallinn/Q9KlbwaJ/'
driver = webdriver.Chrome()
driver.get(base_url)
time.sleep(5)
soup = BeautifulSoup(driver.page_source, 'html.parser')
t = soup.find('table', attrs= {'id': 'sortable-1'})
trs = t.find('tbody').findAll('tr')
for i in trs:
data_bid = i['data-bid']
url = f"https://www.betexplorer.com/archive-odds/4l4ubxv464x0xc78lr/{data_bid}/"
headers = {"Referer": "https://www.betexplorer.com",'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36'}
Json = requests.get(url, headers=headers).json()
# Do what you wish to do withe JSON data here....
| |
doc_23532886
|
I have a parent package, called department and under that I have english and science as packages.
I have few components in English which I have exposed using the export array in the English.module. Similarly, I have done the same for Science.module.
The issue is that when I try to build using 'npm run build', compiling the TypeScript sources through ngc throws the error below.
BUILD ERROR
error TS6059: File 'C:/project/package/english/english.component.ts' is not under 'rootDir' 'C:/project/package/science/'. 'rootDir' is expected to contain all source files.
The component in science is using a component under english.
This is how my tsconfig is configured,
"baseUrl": "./package,
"rootDir": "./package
I am not sure what is missing but not able to get rid of this error. Any input will be helpful
This is my project structure
*
*package
*
*science
*english
Here is my tsconfig.json
{
"compilerOptions": {
"baseUrl": "./package",
"experimentalDecorators": true,
"target": "es5",
"moduleResolution": "node",
"rootDir": "./package",
"skipLibCheck": true,
"types": ["jasmine"]
},
"include": [
"packages/**"
]
}
| |
doc_23532887
|
What can I do to prevent this side effect?
A: This is most likely the operating system that is doing this. The question is how you are running the camel routes. If it is in an application server or something like ServiceMix, run the server as a "Windows Service" rather than in a user account. This will allow it to run in the background just fine. For most application server platforms like tomcat, jboss, glassfish, and servicemix how to run as a service is documented in their documentation.
| |
doc_23532888
|
Cat= "I cost £3,000"
And some are like this:
AnotherCat= "that other cat is just £10 more expensive than mine £2990
I want to parse out just 3000 from cat and 2990 from Another Cat
A: Your question is much more complicated than just parsing a number from a string. If you only want to extract all numbers from a string, you can use the following regex:
var result = Regex.Matches(stringValue, @"\d+");
But if you want to understand where the price of something is (as opposed to, say, a count of cats), that is a problem for Artificial Intelligence and Machine Learning.
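For illustration, here is the same number-extraction idea in Python, with a slightly extended pattern that also tolerates thousands separators such as the comma in £3,000 (the pattern and function name are my own, not from the answer above — a plain \d+ would split "3,000" into 3 and 000):

```python
import re

def extract_numbers(text):
    """Return all numbers in the string, tolerating comma thousands separators."""
    # \d[\d,]* = a digit followed by more digits or commas; strip commas before int().
    return [int(m.replace(",", "")) for m in re.findall(r"\d[\d,]*", text)]

print(extract_numbers("I cost £3,000"))                                        # [3000]
print(extract_numbers("that other cat is just £10 more expensive ... £2990"))  # [10, 2990]
```

Deciding which of the extracted numbers is the price, however, is the hard part the answer alludes to.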
| |
doc_23532889
|
anchor_dates <- as.Date(c("2015-07-20","2015-07-21","2015-07-22"))
set.seed(3)
groups <- data.frame(Test.Date = as.Date(c(rep("2015-07-18", 3), rep("2015-07-19", 3), rep("2015-07-20", 3), rep("2015-07-21", 3))),
Group = rep(c("AAA","BBB","CCC"), 4), Var1 = round(runif(12,0,10), ), Var2 = round(runif(12,0,7)))
> head(groups)
Test.Date Group Var1 Var2
1 2015-07-18 AAA 2 4
2 2015-07-18 BBB 8 4
3 2015-07-18 CCC 4 6
4 2015-07-19 AAA 3 6
5 2015-07-19 BBB 6 1
6 2015-07-19 CCC 6 5
I need to use the dates in the "anchor_dates" list as anchor points in the "groups" set and aggregate variables by Group over the two Test Dates preceding each Anchor Date. There may not always be a result on each Test Date for a given Group, so I can't use a subset() subtracting 1 and 2 from the Anchor Date. I need to be able to pull the last two Test Dates for each Group before the Anchor Date regardless of how far back they are, even if non-sequential.
The following gets me close, however when I try to
unsplit(temp, groups$Group)
after the aggregation, the return is a flattened set that wrongly repeats the same Var sums and doesn't let me use Map() on the set afterwards to add in the anchor date from the "anchor_dates" list.
f <- lapply(anchor_dates, function(x) {
lapply(split(groups, groups$Group), function(y) {
temp <- tail(y[order(y$Date == x), ], 2)
temp <- aggregate(cbind(Var1, Var2) ~ Group, data = temp, FUN = sum)
})
})
[[1]]
[[1]]$AAA
Group Var1 Var2
1 AAA 7 6
[[1]]$BBB
Group Var1 Var2
1 BBB 8 3
[[1]]$CCC
Group Var1 Var2
1 CCC 11 3
..............
The final result should instead be returned like below (or comparable solution)
[[1]]
Group Var1 Var2
1 AAA 5 10
2 BBB 14 5
3 CCC 10 11
[[2]]
Group Var1 Var2
1 AAA 4 12
2 BBB 9 3
3 CCC 12 7
[[3]]
Group Var1 Var2
1 AAA 7 6
2 BBB 8 3
3 CCC 11 3
Which allows me to end up with the following
f1 <- Map(cbind, f, anchor_dates)
do.call(rbind, f1)
Group Var1 Var2 Anchor.Date
1 AAA 5 10 2015-07-20
2 BBB 14 5 2015-07-20
3 CCC 10 11 2015-07-20
4 AAA 4 12 2015-07-21
5 BBB 9 3 2015-07-21
6 CCC 12 7 2015-07-21
7 AAA 7 6 2015-07-22
8 BBB 8 3 2015-07-22
9 CCC 11 3 2015-07-22
A: I did this using a function with another function inside it. The outside function is suitable to be called using by(), with subsetted data frames, while the internal one lets us examine multiple anchor dates.
func.get_agg_values <- function(df.groupdata,list_of_anchor_dates) {
df.returndata <- lapply(X = list_of_anchor_dates,
active.group.df = df.groupdata,
FUN = function(anchor.date,active.group.df) {
# Get order of the data frame in a proper order
active.group.df <- active.group.df[order(active.group.df$Test.Date,decreasing = TRUE),]
# Next, we subset active.group.df to those rows that are before the anchor date
# Since it was ordered, we can just take 1 and 2 as the last two dates before the anchor date
active.group.df <- active.group.df[as.numeric(active.group.df$Test.Date - anchor.date) < 0,][1:2,]
# Finally, get the sums and return a data frame
returned.row.df <- data.frame(Group = unique(active.group.df$Group),
Var1 = sum(active.group.df$Var1),
Var2 = sum(active.group.df$Var2),
Anchor.Date = anchor.date)
return(returned.row.df)
})
return(do.call(what = rbind.data.frame,
args = df.returndata))
}
f1 <- do.call(what = rbind.data.frame,
args = by(data = groups,
INDICES = groups$Group,
FUN = func.get_agg_values,
list_of_anchor_dates = anchor_dates))
> f1
Group Var1 Var2 Anchor.Date
AAA.1 AAA 5 10 2015-07-20
AAA.2 AAA 4 12 2015-07-21
AAA.3 AAA 7 6 2015-07-22
BBB.1 BBB 14 5 2015-07-20
BBB.2 BBB 9 3 2015-07-21
BBB.3 BBB 8 3 2015-07-22
CCC.1 CCC 10 11 2015-07-20
CCC.2 CCC 12 7 2015-07-21
CCC.3 CCC 11 3 2015-07-22
A: `rownames<-`(do.call(rbind,by(groups,groups$Group,function(g)
do.call(rbind,lapply(anchor_dates,function(anc) {
befores <- which(g$Test.Date<anc);
twobefore <- befores[order(anc-g$Test.Date[befores])[1:2]];
cbind(aggregate(.~Group,g[twobefore,names(g)!='Test.Date'],sum),Anchor.Date=anc);
}))
)),NULL);
## Group Var1 Var2 Anchor.Date
## 1 AAA 5 10 2015-07-20
## 2 AAA 4 12 2015-07-21
## 3 AAA 7 6 2015-07-22
## 4 BBB 14 5 2015-07-20
## 5 BBB 9 3 2015-07-21
## 6 BBB 8 3 2015-07-22
## 7 CCC 10 11 2015-07-20
## 8 CCC 12 7 2015-07-21
## 9 CCC 11 3 2015-07-22
| |
doc_23532890
|
vec <- c(1,2,3,4,5)
What is the best way to retrieve all subvectors of a given length? For example, all subvectors with length 3 would be:
1,2,3
2,3,4
3,4,5
Thank you for your help!
A: Try this:
k <- 3
embed(vec, k)[, k:1]
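For readers without R at hand: embed() places the most recent element of each window in the first column, which is why [, k:1] reverses the columns to restore ascending order. The same sliding-window extraction written as a plain loop, sketched here in Python for comparison:

```python
def subvectors(vec, k):
    """All contiguous windows of length k, in chronological order."""
    return [vec[i:i + k] for i in range(len(vec) - k + 1)]

print(subvectors([1, 2, 3, 4, 5], 3))  # [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
```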
| |
doc_23532891
|
var chart = AmCharts.makeChart("chartdiv", {
"type": "xy",
"theme": "light",
"dataDateFormat": "DD-MM-YYYY",
"graphs": [
{
"id":"g8",
"balloon":{
"drop":true,
"adjustBorderColor":false,
"color":"#ffffff"
},
"bullet":"round",
"bulletBorderAlpha":1,
"bulletColor":"#FFFFFF",
"bulletSize":5,
"dashLength":0,
"hideBulletsCount":50,
"lineThickness":2,
"lineColor":"#67b7dc",
"title":"Store 8",
"useLineColorForBulletBorder":true,
"xField":"d1",
"yField":"p1",
"xAxis":"g8",
"balloonText":"<span style='font-size:18px;'>$[[d2]]</span><br>07/1/2017-12/31/2017"
}
],
"valueAxes": [
{
"id": "g8",
"axisAlpha": 1,
"gridAlpha": 1,
"axisColor": "#b0de09",
"color": "#b0de09",
"dashLength": 5,
"centerLabelOnFullp": true,
"position": "bottom",
"type": "date",
"minp": "DD-MM-YYYY",
"markPeriodChange": false,
}
],
"dataProvider": [
{
"d1":"01/01/2017",
"p1":"5353.9000"
},{
"d1":"02/01/2017",
"p1":"5353.9000"
},{
"d1":"01/02/2017",
"p1":"5288.9500"
},{
"d1":"01/03/2017",
"p1":"6850.9900"
},{
"d1":"01/04/2017",
"p1":"5543.1900"
},{
"d1":"01/05/2017",
"p1":"5519.0100"
},{
"d1":"01/06/2017",
"p1":"6191.7500"
}
]
});
https://jsfiddle.net/noroots/xru967ha/
I don't know why, but the X-axis labels are missing June, and all labels appear to have a left offset. How can I fix the offset and show the missing month?
A: You could add data items before and after without drawing points:
"dataProvider": [{
"d1":"01/12/2016"
}, {
"d1":"01/01/2017",
"p1":"5353.9000"
}, ...
Please check the example here: https://jsfiddle.net/xru967ha/5/
Old Answer
Please check the example below. It's using AmSerialChart and then the datePadding plugin to set 15 extra days at the beginning and the end of your data.
"categoryAxis": {
"parseDates": true,
"minPeriod": "MM",
"prependPeriods": 0.5, // add 15 days start
"appendPeriods": 0.5 // add 15 days to end
}
var chart = AmCharts.makeChart("chartdiv", {
"type": "serial",
"theme": "light",
"marginRight": 40,
"marginLeft": 60,
"dataDateFormat": "DD-MM-YYYY",
"valueAxes": [{
"id": "v1",
"axisAlpha": 0,
"position": "left",
"ignoreAxisWidth": true,
"dashLength": 5
}],
"graphs": [{
"id": "g1",
"balloon":{
"drop":true,
"adjustBorderColor":false,
"color":"#ffffff"
},
"bullet": "round",
"bulletBorderAlpha": 1,
"bulletColor": "#FFFFFF",
"bulletSize": 5,
"hideBulletsCount": 50,
"lineThickness": 2,
"lineColor":"#67b7dc",
"title": "red line",
"useLineColorForBulletBorder": true,
"valueField": "p1",
"balloonText": "<span style='font-size:18px;'>[[value]]</span>"
}],
"chartCursor": {
"pan": true,
"valueLineEnabled": true,
"valueLineBalloonEnabled": true,
"cursorAlpha":1,
"cursorColor":"#258cbb",
"limitToGraph":"g1",
"valueLineAlpha":0.2,
"valueZoomable":true
},
"categoryField": "d1",
"categoryAxis": {
"parseDates": true,
"minorGridEnabled": true,
"axisColor": "#b0de09",
"color": "#b0de09",
"dashLength": 5,
"boldPeriodBeginning": false,
"markPeriodChange": false,
"minPeriod": "MM",
"prependPeriods": 0.5,
"appendPeriods": 0.5
},
"export": {
"enabled": true
},
"dataProvider": [
{
"d1":"01/01/2017",
"p1":"5353.9000"
},{
"d1":"02/01/2017",
"p1":"5353.9000"
},{
"d1":"01/02/2017",
"p1":"5288.9500"
},{
"d1":"01/03/2017",
"p1":"6850.9900"
},{
"d1":"01/04/2017",
"p1":"5543.1900"
},{
"d1":"01/05/2017",
"p1":"5519.0100"
},{
"d1":"01/06/2017",
"p1":"6191.7500"
}
]
});
#chartdiv {
width : 800px;
height : 500px;
}
<script src="https://www.amcharts.com/lib/3/amcharts.js"></script>
<script src="https://www.amcharts.com/lib/3/serial.js"></script>
<script src="//www.amcharts.com/lib/3/plugins/tools/datePadding/datePadding.min.js"></script>
<script src="https://www.amcharts.com/lib/3/plugins/export/export.min.js"></script>
<link rel="stylesheet" href="https://www.amcharts.com/lib/3/plugins/export/export.css" type="text/css" media="all" />
<script src="https://www.amcharts.com/lib/3/themes/light.js"></script>
<div id="chartdiv"></div>
| |
doc_23532892
|
I've tried to give the script tag an id and then display:none; and show it if it has that id but apparently you can't hide script tags with css? Looking for another solution.
I've seen this: <![if lt IE 7]> for Internet Explorer, but I am unfamiliar with it and with how to do something similar for Chrome. I've just started to learn to code again after 5 years, and the things I try to accomplish seem difficult for an amateur coder, lol.
A: You can actually use conditional comments to hide things from Internet Explorer contrary to the answer from deceze. (These are different from comments used to show things to internet explorer which are more common, those are known as 'Downlevel hidden conditional comments')
<!--[if lte IE 6]><![if gte IE 7]><![endif]-->
<!-- This is a bit mad, but code inside here is served to everything
except browsers less than IE7, so all browsers will see this -->
<!--[if lte IE 6]><![endif]><![endif]-->
A: Not exactly what you ask for, but in order to execute scripts in all webbrowsers but Google Chrome you can also do the following:
<script>
var is_chrome = navigator.userAgent.toLowerCase().indexOf('chrome') > -1;
if(!is_chrome)
{
//Your Code goes here (won't be executed if it's Google Chrome)
}
</script>
A: "conditional comments" work only at older IE browser.
If you want to prevent to use some version or browser use:
https://github.com/mikemaccana/outdated-browser-rework or use function like this https://stackoverflow.com/a/5918791/8770040. Then when user use a specific version of browser remove every thing form page or show some overlay element.
| |
doc_23532893
|
*
*The number of entries in the B-Tree can grow indefinitely (or at least to > 1000GB)
*Disk-level copying operations are minimized
*The values can have arbitrary size (i.e. no fixed schema)
If anyone could provide insight into layouting B-Tree structures on disk level, I'd be very grateful. Especially the last bullet point gives me a lot of headache. I would also appreciate pointers to books, but most database literature I've seen explains only the high-level structure (i.e. "this is how you do it in memory"), but skips the nitty gritty details on the disk layout.
A: UPDATE(archived version of oracle index internals): http://web.archive.org/web/20161221112438/http://www.toadworld.com/platforms/oracle/w/wiki/11001.oracle-b-tree-index-from-the-concept-to-internals
OLD (the original link does not exist anymore):
some info about oracle index internals: http://www.toadworld.com/platforms/oracle/w/wiki/11001.oracle-b-tree-index-from-the-concept-to-internals
Notes:
Databases do not directly implement indexes based on B-tree but on a variant called B+ tree. Which according to wikipedia:
A B+ tree can be viewed as a B-tree in which each node contains only keys (not key-value pairs), and to which an additional level is added at the bottom with linked leaves.
Databases work, in general, with block-oriented storage, and a B+ tree is better suited to this than a B-tree.
The blocks are fixed size and are left with some free space to accommodate future changes in value or key size.
A block can be either a leaf(holds actual data) or branch(holds the pointers to the leaf nodes)
A toy model of how writing to disk can be implemented (using a 10k block size for arithmetic simplicity):
*
*a file of 10G is created on disk (with a 10k block size it holds 1,000,000 blocks)
*first block is assigned as root and the next free one as a leaf and a list of leaf addresses is put in root
*new data inserted, the current leaf node is filled with values until a threshold is reached
*data continue to be inserted, the next free ones are allocated as leaf blocks and the list of leaf nodes is updated
*after many inserts, the root node needs children, so the next free block is allocated as a branch node; it copies the list from the root, and the root now maintains only a list of intermediary nodes
*if a branch node needs to be split, the next free block is allocated as a branch node, added to the root's list, and the list of leaf nodes is split between the initial and the new branch node
Reading information from a big index can go as follows:
*
*read the first/root block (seek(0), read(10k)), which points to a child located in block 900
*read block 900 (seek(900*10k), read(10k)), which points to a child located in block 5000
*read block 5000 (seek(5000*10k), read(10k)), which points to the leaf node located in block 190
*read block 190 (seek(190*10k), read(10k)) and extract the value of interest from it
a really large index can be split over multiple files; then the address of a block will be something like (filename_id, address_relative_to_this_file)
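A minimal sketch of the fixed-size block addressing described above: every block lives at byte offset block_no * BLOCK_SIZE, so a lookup is one seek followed by one read (Python is used purely for illustration; the file name and block contents are made up):

```python
import os
import tempfile

BLOCK_SIZE = 10 * 1024  # 10 KiB blocks, matching the toy model above

def read_block(f, block_no):
    """Fetch one fixed-size block by computing its byte offset."""
    f.seek(block_no * BLOCK_SIZE)
    return f.read(BLOCK_SIZE)

# Demo: a tiny "index file" of 4 blocks, each filled with its own block number.
path = os.path.join(tempfile.mkdtemp(), "btree.dat")
with open(path, "wb") as f:
    for i in range(4):
        f.write(bytes([i]) * BLOCK_SIZE)

with open(path, "rb") as f:
    block = read_block(f, 2)

print(block[0], len(block))  # 2 10240
```

A real B+ tree would parse the fetched block into keys and child block numbers, repeating the seek/read until it reaches a leaf.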
A: Read this; it will definitely help:
http://www.geeksforgeeks.org/b-tree-set-1-introduction-2/
| |
doc_23532894
|
In my main page I declare the user control twice; each instance has a different id.
When I use both user controls, the code within the user control applies only to the first instance and not to the second one declared on the page. How can I solve this?
| |
doc_23532895
|
<a *ngFor="let demo of demos" [routerLink]="['demo', demo.name]">example</a>
Currently I am getting demo.name as "example.net/demo/A%20%demo%20%test". I want to format this as "example.net/demo/a-demo-test" to show in the browser address bar.
Any help would be appreciated.
A: You can use functions inside routerLink. So you can use a function in your component:
hyphenateUrlParams(str: string) {
  // Use a global regex: replace(' ', '-') would only replace the first space.
  return str.toLowerCase().replace(/ /g, '-');
}
And use it in your routerLink:
[routerLink]="['/demo', hyphenateUrlParams(demo.name)]"
This provides much more re-usability than mutating variables directly inside the routerLink.
A: Create a pipe that converts spaces to hyphens.
@Pipe({
name: 'kebabCase'
})
export class KebabCasePipe implements PipeTransform {
  transform(value: string): string {
    // Use a global regex: value.replace(' ', '-') would only replace the first space.
    return value.toLowerCase().replace(/ /g, '-');
  }
}
Use the pipe in your links:
<a *ngFor="let demo of demos" [routerLink]="['demo', demo.name | kebabCase]">example</a>
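To sanity-check the space-to-hyphen conversion outside of Angular, here is the same logic as a plain function (the function name is illustrative; note that a global regex is needed, because String.replace with a string argument only replaces the first occurrence, and toLowerCase() is what turns "A demo test" into the desired "a-demo-test"):

```javascript
// Standalone check of the slug conversion (plain Node, no Angular needed).
function toKebab(value) {
  return value.toLowerCase().replace(/ /g, "-");
}

console.log(toKebab("A demo test")); // a-demo-test
```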
| |
doc_23532896
|
I need speed and I just need a fixed size so I was thinking that a circular array would be the best.
But what I want to do is at each step to:
*
*first, overwrite the latest information in the array with the newest that just arrived
*next, using the all array starting from the oldest to the newest
*repeat
I have difficulty seeing how to handle the second step efficiently in C++. Or maybe something other than a circular array would be better? Any advice or point of view is welcome.
To have something more graphic:
for step in steps:
(current writing position = 2)
current buffer = [8, 9, 3, 4, 5, 6, 7]
new info = 10
overwrite buffer(new info)
new buffer = [8, 9, 10, 4, 5, 6, 7]
current writing position += 1 //(3)
array to use = [4, 5, 6, 7, 8, 9, 10]
function(array to use)
(I used integer following each other to see the chronology of each information in the buffer)
What I am thinking about is to copy the last part and first part and then concatenate them:
std::vector<int> buffer{8, 9, 10, 4, 5, 6, 7};
std::vector<int> oldest(buffer.begin() + 3, buffer.end());      // {4, 5, 6, 7}
std::vector<int> youngest(buffer.begin(), buffer.begin() + 3);  // {8, 9, 10}
oldest.insert(oldest.end(), youngest.begin(), youngest.end());
function(oldest);
If you know something that would be quicker please tell me.
A: If you really need speed, you should not copy elements; use the index information you already have to access them in the right order.
The handling function then just needs a pointer to the array (or a reference to the std::vector), its size, and the current writing position.
// process from the writing position to the end of the buffer (the oldest part)
for (int i = current_pos; i < buffer_size; ++i) {
    processElement(new_buffer[i]);
}
// then process from the beginning up to the writing position (the newest part)
for (int i = 0; i < current_pos; ++i) {
    processElement(new_buffer[i]);
}
This should not be too hard to implement, as your working position marks both the begin and the end of the data to process.
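The two-loop traversal can be wrapped in a small class so the caller never copies anything. This is a minimal sketch; the names (RingBuffer, overwrite, for_each_chronological) are illustrative, not from the answer:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Fixed-size ring buffer: overwrite() replaces the oldest element,
// for_each_chronological() visits elements oldest-to-newest without copying.
class RingBuffer {
public:
    explicit RingBuffer(std::vector<int> initial)
        : buf_(std::move(initial)), pos_(0) {}

    // Overwrite the oldest element with v; the write position then
    // points at the element that is now the oldest.
    void overwrite(int v) {
        buf_[pos_] = v;
        pos_ = (pos_ + 1) % buf_.size();
    }

    // Visit oldest..newest in order: [pos_, end) first, then [0, pos_).
    void for_each_chronological(const std::function<void(int)>& f) const {
        for (std::size_t i = pos_; i < buf_.size(); ++i) f(buf_[i]);
        for (std::size_t i = 0; i < pos_; ++i) f(buf_[i]);
    }

private:
    std::vector<int> buf_;
    std::size_t pos_;  // index of the oldest element (= next write position)
};
```

Starting from {1, 2, 3} with the write position at 0, calling overwrite(4) yields the chronological order 2, 3, 4, exactly the "array to use" from the question, with no copy made.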
A: This approach reduces the copy overhead n-fold where n is the number of extra array elements + 1 used.
Example: array with 2 extra elements
Note: in this case the oldest value is on the left, and the function has been called with a pointer to arr[0] (start_pos = 0).
arr == [3, 4, 5, 6, 7, 8, 9, x, x]
now, let's insert the new value 10
arr == [3, 4, 5, 6, 7, 8, 9, 10, x]
start_pos += 1
call function with pointer to the second element (the old 3 won't be used)
function(arr + start_pos)
and now add the 11 and increment the working position (the old 4 won't be used)
arr == [3, 4, 5, 6, 7, 8, 9, 10, 11]
start_pos += 1
function(arr + start_pos)
Now, the array is full.
Only now do you need to copy the newest elements (everything after start_pos) to the beginning of the array and reset the working position.
Depending on the number of extra elements, this needs to be done only every 10th, 100th, or even 1000th iteration!
result of copying would be:
arr == [6, 7, 8, 9, 10, 11, 9*, 10, 11]   (* marks the slot the next value will overwrite)
start_pos = -1 // prepare for the +1 in the regular iteration
The next added value (12) overwrites the marked 9:
arr == [6, 7, 8, 9, 10, 11, 12, 10, 11]
start_pos += 1 // is 0 now
function(arr + start_pos)
Of course, you need a variable to track the position at which to insert the new element behind the others, or you can derive it from start_pos + nElemsToProcess.
If your function() only accepts std containers, this approach is probably not the right choice to meet your need for speed.
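A minimal sketch of this amortised-copy scheme; the class and member names (SlackWindow, push, data) are my own, not from the answer:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sliding window of the n newest values, backed by a buffer with extra slack.
// New values are appended contiguously, so the window is always the
// contiguous range [start_, start_ + n). Only when the buffer runs out of
// room are the newest n-1 values copied back to the front, so the copy
// cost is amortised over the slack size.
class SlackWindow {
public:
    SlackWindow(std::size_t n, std::size_t slack)
        : buf_(n + slack), n_(n), start_(0), end_(0) {}

    void push(int v) {
        if (end_ == buf_.size()) {
            // Buffer full: move the newest n-1 values to the front.
            // (std::copy is safe here: the destination precedes the source.)
            std::copy(buf_.end() - (n_ - 1), buf_.end(), buf_.begin());
            start_ = 0;
            end_ = n_ - 1;
        }
        buf_[end_++] = v;
        if (end_ - start_ > n_) ++start_;  // drop the oldest element
    }

    // Contiguous view of the window, oldest first.
    const int* data() const { return buf_.data() + start_; }
    std::size_t size() const { return end_ - start_; }

private:
    std::vector<int> buf_;
    std::size_t n_, start_, end_;
};
```

With n = 3 and slack = 2, pushing 1..6 triggers exactly one copy-back (at the 6th push), and `data()` always points at a plain contiguous array, oldest first.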
| |
doc_23532897
|
<nav class="navbar navbar-expand" id="mainNav">
<div class="container">
<a class="navbar-brand" href="#"><img src="assets/new_images/logo1.png" width="210px"></a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarResponsive" aria-controls="navbarResponsive" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarResponsive">
<ul class="navbar-nav ml-auto">
<li class="nav-item">
<a class="nav-link" href="help.php">help</a>
</li>
<li class="nav-item">
<a class="nav-link" href="status.php">STATUS</a>
</li>
<li class="nav-item">
<a class="nav-link" href="logout.php">LOGOUT</a>
</li>
</ul>
</div>
</div>
</nav>
A: Check out flexbox with Bootstrap 4. You can set the default (smallest screen size) to flex-column, then switch to flex-sm-row (or the md/lg/xl variant) from that breakpoint upwards. This example uses the small breakpoint:
<head>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css">
</head>
<body>
<nav class="navbar navbar-expand-sm bg-dark navbar-dark flex-column flex-sm-row">
<!-- Brand/logo -->
<a class="navbar-brand" href="#">Logo</a>
<!-- Links -->
<ul class="navbar-nav">
<li class="nav-item">
<a class="nav-link" href="#">Link 1</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#">Link 2</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#">Link 3</a>
</li>
</ul>
</nav>
</body>
| |
doc_23532898
|
[
{
uuid: 1,
emailNotifications: [
{
targetEmailAddress: "w@w.pl"
},
{
targetEmailAddress: "w2@w.pl"
}
]
},
{
uuid: 2,
emailNotifications: [
{
targetEmailAddress: "xxxw@w.pl"
},
{
targetEmailAddress: "xxxw2@w.pl"
},
]
},
]
I want query those with emailNotifications.targetEmailAddress equals to "w@w.pl":
db.collection.find({
emailNotifications: {
$elemMatch: {
targetEmailAddress: "w@wp.pl"
}
}
})
but it finds nothing. Where is the error?
A: Your value of targetEmailAddress in the query is wrong. You search for w@wp.pl, but your documents contain w@w.pl.
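For reference, once the address is corrected to match the documents, the original $elemMatch query works; for a single condition on one field, MongoDB's dot notation is an equivalent, shorter form (query fragments only, assuming the collection shown above):

```js
// $elemMatch with the corrected address matches the first document:
db.collection.find({
  emailNotifications: { $elemMatch: { targetEmailAddress: "w@w.pl" } }
})

// Equivalent for a single-field condition: dot notation
db.collection.find({ "emailNotifications.targetEmailAddress": "w@w.pl" })
```

$elemMatch only becomes necessary when several conditions must hold on the same array element.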
| |
doc_23532899
|
I already tried required="required" as well:
<span class="channel">
<select class="selectpicker" name="channel_id" data-style="btn btn-link" id="exampleFormControlSelect1" required="true">
<option>Channel</option>
@foreach($channels as $channel)
<option value="{{ $channel->id }}">{{ $channel->title }}
</option>
@endforeach
</select>
</span>
If there is any other option in Bootstrap 4 please let me know.
|