Example fails to work - Invalid argument: model does not implement getClient()
Example in README at https://github.com/oauthjs/express-oauth-server fails to work with error:
{"error":"invalid_argument","error_description":"Invalid argument: model does not implement `getClient()`"}
If this isn't a complete example it would help to mention that and suggest what is needed to get this working. Node version: 6.x
I've also tried passing this as the model: https://github.com/oauthjs/express-oauth-server/tree/master/examples/memory
It was missing a method that needed to be added, by the way. Not sure if this example is being kept up to date?
I have the same issue.
I tried replacing model: memorystore with model: new memorystore(), but then I ran into another problem with `this` inside getClient().
Any ideas?
I have the same problem... any news about this?
Thanks
After I implemented the solution from @stauvel, I got another error:
{"error":"invalid_argument","error_description":"Invalid argument: model does not implement saveAuthorizationCode()"}
@chitkosarvesh I was able to resolve the missing client_id message by appending it to the URL of the POST or by adding it to the body of the request. However, with the example I still get an error.
Unhandled rejection RangeError: Invalid status code: undefined
    at ServerResponse.writeHead (_http_server.js:188:11)
    at ServerResponse._implicitHeader (_http_server.js:179:8)
    at write_ (_http_outgoing.js:645:9)
    at ServerResponse.end (_http_outgoing.js:764:5)
    at ServerResponse.send (C:\development\authserver\node_modules\express\lib\response.js:211:10)
    at ServerResponse.json (C:\development\authserver\node_modules\express\lib\response.js:256:15)
    at ServerResponse.send (C:\development\authserver\node_modules\express\lib\response.js:158:21)
    at ExpressOAuthServer.handleError (C:\development\authserver\node_modules\express-oauth-server\index.js:157:9)
    at ExpressOAuthServer.<anonymous> (C:\development\authserver\node_modules\express-oauth-server\index.js:86:28)
    at ExpressOAuthServer.tryCatcher (C:\development\authserver\node_modules\express-oauth-server\node_modules\bluebird\js\release\util.js:16:23)
    at Promise._settlePromiseFromHandler (C:\development\authserver\node_modules\express-oauth-server\node_modules\bluebird\js\release\promise.js:512:31)
    at Promise._settlePromise (C:\development\authserver\node_modules\express-oauth-server\node_modules\bluebird\js\release\promise.js:569:18)
    at Promise._settlePromise0 (C:\development\authserver\node_modules\express-oauth-server\node_modules\bluebird\js\release\promise.js:614:10)
    at Promise._settlePromises (C:\development\authserver\node_modules\express-oauth-server\node_modules\bluebird\js\release\promise.js:689:18)
    at Async._drainQueue (C:\development\authserver\node_modules\express-oauth-server\node_modules\bluebird\js\release\async.js:133:16)
    at Async._drainQueues (C:\development\authserver\node_modules\express-oauth-server\node_modules\bluebird\js\release\async.js:143:10)
    at Immediate.Async.drainQueues (C:\development\authserver\node_modules\express-oauth-server\node_modules\bluebird\js\release\async.js:17:14)
    at runCallback (timers.js:800:20)
    at tryOnImmediate (timers.js:762:5)
    at processImmediate [as _immediateCallback] (timers.js:733:5)
Anyone working on this?
Can anyone share their implementation of an OAuth2 server with NodeJS?
Thanks.
check this out: https://github.com/waychan23/koa2-oauth-server/tree/master/examples/complete-example
Hitting this issue also
I get
{
error: "invalid_argument",
error_description: "Invalid argument: model does not implement saveAuthorizationCode()"
}
With this, I know it's saying that I should implement (write the "saveAuthorizationCode()" function) inside the model.js file, but even after doing that I get:
{
error: "invalid_request",
error_description: "Missing parameter: client_id"
}
Does anyone know a way to solve this?
I'm kind of stuck
It seems you need to pass a newly instantiated model:
const MemoryStore = require('./model.js')
const memoryStore = new MemoryStore()
app.oauth = new OAuthServer({
  model: memoryStore // See https://github.com/oauthjs/node-oauth2-server for specification
})
Any workaround to get this to work?
The example is outdated. I debugged it for a couple of days, and finally got it to work:
https://gist.github.com/kobi/7c0b8196d8b585dcc62d71947f909342
The main issue was that InMemoryCache.prototype.saveToken has to return the token, the user, and the client.
I only tested it for client_credentials, which is the simplest grant type, and it seems to work nicely.
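To make the above concrete, here is a minimal sketch of a model for the client_credentials grant. All names, stores, and credential values here are illustrative assumptions, not part of the library; the one point taken from this thread is that saveToken must return the token together with the client and the user:

```javascript
// Minimal in-memory model sketch for the client_credentials grant.
// Client IDs, secrets, and store names below are made up for illustration.
const clients = [{ id: 'my-app', secret: 's3cret', grants: ['client_credentials'] }];
const tokens = {};

const model = {
  // Called with the credentials from the token request.
  getClient(clientId, clientSecret) {
    return clients.find(c => c.id === clientId && c.secret === clientSecret) || null;
  },
  // client_credentials has no resource owner, so derive a "user" from the client.
  getUserFromClient(client) {
    return { id: client.id };
  },
  // Key point from the thread: the returned object must carry
  // the token fields plus the client and the user.
  saveToken(token, client, user) {
    const saved = Object.assign({}, token, { client, user });
    tokens[token.accessToken] = saved;
    return saved;
  },
  getAccessToken(accessToken) {
    return tokens[accessToken] || null;
  },
};

module.exports = model;
```

With a model shaped like this, passing it as `model` to the server (instantiated, as noted above) should get the token endpoint past the `invalid_argument` errors for the client_credentials case; the other grant types need more methods (saveAuthorizationCode() among them).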
As @kobi said, the provided sample only implements the authorize() endpoint. There are two other endpoints: token() and authenticate().
Take a look at https://tools.ietf.org/html/rfc6749#section-6 to learn how to use them.
|
GITHUB_ARCHIVE
|
I've been working with LDAP for seven years, but times change and the company using the software now wants LDAPS.
I'm currently running on CentOS 6 (will be 7 this year).
The company provided me with a certificate in PFX format and a password. How do I install a PFX on CentOS 6, and how do I get LDAPS working? Do I need OpenLDAP (and how do I configure it)?
I only need to check if a user + pass is in the AD, I don’t need anything more from the AD.
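For the PFX part of the question, a hedged sketch with openssl (all file names and the password are placeholders; with the real company.pfx you would skip the first two commands, which only create a throwaway PFX so the example is self-contained):

```shell
# Demonstration only: create a throwaway PFX. Skip this with a real company.pfx.
openssl req -x509 -newkey rsa:2048 -keyout demo-key.pem -out demo-cert.pem \
    -days 1 -nodes -subj "/CN=demo" 2>/dev/null
openssl pkcs12 -export -inkey demo-key.pem -in demo-cert.pem \
    -out company.pfx -passout pass:changeit

# Extract the certificate from the PFX.
openssl pkcs12 -in company.pfx -clcerts -nokeys -passin pass:changeit -out ldap-cert.pem
# Extract the private key, unencrypted so the LDAP client/daemon can read it.
openssl pkcs12 -in company.pfx -nocerts -nodes -passin pass:changeit -out ldap-key.pem
```

Since you only need to verify user credentials against AD, you may not need a local OpenLDAP server at all; once the CA certificate is trusted (e.g. referenced by TLS_CACERT in /etc/openldap/ldap.conf), a bind test along the lines of `ldapsearch -H ldaps://your-dc -D 'user@your.domain' -W -b 'dc=your,dc=domain' '(sAMAccountName=user)'` should confirm LDAPS works. Host and base DN here are hypothetical.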
We have the following setup:
- Two OpenLDAP servers – openldap1, openldap2
- They are to be set up as N-Way multi-master
- Certificates are all set up correctly with alternate names etc and trust each other
I want slapd to bind to all interfaces on the server, so was hoping to run the service as
/usr/sbin/slapd -u ldap -h ldaps://
However, this gives
5cabf191 <<< dnNormalize: <cn=subschema>
5cabf191 read_config: no serverID / URL match found. Check slapd -h arguments.
5cabf191 slapd destroy: freeing system resources.
5cabf191 syncinfo_free: rid=002
5cabf191 syncinfo_free: rid=002
5cabf191 slapd stopped.
5cabf191 connections_destroy: nothing to destroy.
I think I understand this to be because of our replication setup, which defines the following server IDs:
dn: cn=config
objectClass: olcGlobal
cn: config
..snipped..
olcTLSCertificateKeyFile: /etc/openldap/certs/keys/ldapskey.pem
olcTLSCertificateFile: /etc/openldap/certs/ldapscert.pem
olcTLSCACertificateFile: /etc/openldap/certs/cacert.pem
olcServerID: 1 ldaps://openldap1
olcServerID: 2 ldaps://openldap2
entryCSN: 20190409004218.061111Z#000000#000#000000
modifiersName: cn=config
modifyTimestamp: 20190409004218Z
contextCSN: 20190409004339.981340Z#000000#000#000000
I think my error occurs because the slapd -h argument cannot be matched to a serverID in the list?
If that is the case, how do I work around it?
If I manually run the following, it works, but it doesn't help me bind to all interfaces:
/usr/sbin/slapd -u ldap -h ldaps://openldap1
I have an IP that floats between both servers to give high availability if one were to go down, so need slapd to listen on all interfaces.
Looking for best secure way to enable LDAPS support in ActiveDirectory / on DMZ servers, I did some leg work but I would like to run this by you guys.
I don’t have CA available, and domain is .local so I can’t purchase signed cert. ( at this point migration to TLD is not an option. )
I found a tutorial showing that I can create a self-signed certificate with makecert; are there any issues doing that?
Run: makecert -a sha1 -eku 1.3.6.1.5.5.7.3.1 -sky exchange -sr localmachine -ss MY -pe -r -n "CN=DCNAME2" -len -m 12 LDAP.cer
From MMC – Certificates, go to the Personal store and export the created certificate with its key.
Then import the PFX file created in the previous step under Local Computer – Trusted Root Certificates.
Does this make sense? What are the security implications, and is there a better way of doing it?
|
OPCFW_CODE
|
Adding date to a vector of data
I'm having a little trouble with R and adding a date to a vector of data. I guess I'm messing around with objects the wrong way?
Data: y (that is numeric[9])
y <-data.frame
y
temp cons wind ror solar nuclear chp net thermal
0.5612 0.5065 0.1609 0.2989 0.7452 0.9621 0.2810 0.6998 0.4519
I want to add a column at the start that contains today's date, so it will look like:
date temp cons wind ror solar nuclear chp net thermal
28-06-2013 0.5612 0.5065 0.1609 0.2989 0.7452 0.9621 0.2810 0.6998 0.4519
I'm using Sys.Date()+1 to get tomorrow's date, but when I cbind it with my data, I get some unwanted results, like:
tomorrow<-Sys.Date()+1
cbind(tomorrow, y)
vector y
temp 15884 0.5612
cons 15884 0.5065
wind 15884 0.1609
ror 15884 0.2989
solar 15884 0.7452
nuclear 15884 0.9621
chp 15884 0.2810
net 15884 0.6998
thermal 15884 0.4519
I don't want the date displayed in this numeric format, and I'm not quite sure why the data suddenly becomes a matrix.
Apparently y is not a data.frame. If you cbind two vectors you get a matrix, and a matrix can only hold one data type.
Instead of using cbind, you can add the date with the $ operator, i.e. y$date <- something.
Please add the output of dput(head(y)) to your question.
You don't have a data.frame, you have a vector. You can append data to a vector like so:
y <- rnorm(10)
names(y) <- letters[1:10]
cbind(Sys.Date(), y) # vector, see?
y
a 15883 -1.21566678
b 15883 0.98836517
c 15883 -1.01564976
d 15883 -0.59483533
e 15883 -0.40890915
f 15883 1.69711341
g 15883 0.05012548
h 15883 0.42253546
i 15883 1.05420278
j 15883 0.15760482
Adding data to a vector is done with c():
c(Sys.Date(), y)
a b c d e f g h i
"2013-06-27" "1969-12-30" "1970-01-01" "1969-12-30" "1969-12-31" "1969-12-31" "1970-01-02" "1970-01-01" "1970-01-01" "1970-01-02"
j
"1970-01-01"
To coerce to a data.frame and cbind the data, do this.
y <- data.frame(matrix(y, nrow = 1, dimnames = list(1, names(y))))
cbind(Sys.Date(), y)
Sys.Date() a b c d e f g h i j
1 2013-06-27 0.3946908 0.09510043 0.9753345 -1.05999 -1.041331 0.5796274 0.125427 1.319828 -1.844391 0.3365856
Although the solution from @Roman Lustrik works, I think this is simpler:
> y$date <- Sys.Date()
> y
a b c d e f g h i j
1 -1.104803 1.184856 0.9791311 1.866442 -0.3385167 0.04975147 -0.1821668 -0.7745292 -0.9261035 1.021533
date
1 2013-06-27
|
STACK_EXCHANGE
|
Software development in Durban
One thing is clear when you look back at the past decades of IT and software engineering: everything changes.
Periods of gradual improvement in hardware, languages, infrastructure, and methodology are punctuated by paradigm-shifting innovation.
This evolution has allowed IT to stay ahead of ever-changing business demands, but it has not been easy or cheap.
Many IT budgets are consumed by maintaining old systems, and staying current with upgrades and migrations can deplete funding and resources before business benefits are realized.
With the right approach, it is possible to modernize a portfolio of applications in a way that yields value quicker and at lower cost — making it easier and less expensive to stay current as products and technologies continue to evolve.
There are three specific software development patterns for modernizing existing applications.
These modernization patterns address transitioning existing applications to more modern architectures and infrastructure and making them accessible to new applications. This paper also examines the conditions that lead to rewriting when that is the only option.
These patterns help enterprises figure out how to get the most out of existing applications and establish a practice for continuous modernization that will serve the business now and in the future.
APPLICATION DEVELOPMENT AND DEPLOYMENT
Not so long ago, applications were coded in programming languages and compiled into a format that was unique to a processor and operating system. The applications were generally self-contained, tended to be large, and ran in private datacenters. Everyone assumed they would have long lifespans. These were built using heavyweight software development life-cycle approaches with formal, upfront requirements and long development timelines. All of that has changed.
Those applications are now called monolithic legacy applications — the dinosaurs of business applications. Although they served the purpose for which they were built, the pace of business and technical innovation accelerated, and these applications became a burden on the enterprise.
That innovation has led to the application development and deployment model commonly used today: DevOps processes that guide the creation of microservices, deployed in containers running in clouds. Just consider the progress in each of these four areas of application development: methodology, architecture, deployment, and infrastructure.
Application infrastructure evolved from large application-specific servers to horizontally scaled commodity servers supporting many applications. It is common now for applications to be deployed on multiple servers across dispersed datacenters, private clouds, and public clouds. This is much faster to deploy, and it improves performance and availability.
The evolution in application development and deployment across these four areas has led to faster initial development, more frequent updates, higher quality, closer alignment to business needs, greater flexibility in operations, and reduced costs.
These days, we refer to the software components supporting an application as a stack. More broadly, a full stack includes all the components needed to develop and deploy an application, the methodology used to develop it, and the hardware it runs on.
Companies have embraced new products and technologies for new projects, but legacy solution stacks are still used at many companies.
For example, many financial services firms developed custom applications decades ago. They deployed Bloomberg terminals and trader workstations, client-server systems, and n-tier web applications. Today, those same firms are creating mobile applications for customers and employees. As a result of growth and acquisition, they are running dozens of modern and legacy stacks throughout their application portfolio.
Modernization is not about adopting new technologies and practices, it is about what happens to the old ones. Imagine an old house heated with coal with newer sections heated by oil and gas. Upgrading the entire house to solar is expensive with little return, but it makes sense to use solar in the latest addition, and it is worthwhile to make it all work together under one roof.
There are two primary goals of application modernization: use existing functionality and data in new applications as much as possible (deriving new value from old applications), and bring the benefits of new processes, products, and technologies to old applications.
|
OPCFW_CODE
|
Hello everyone! Radek from Poland here, author of Discordian - dark theme inspired by Discord which from now on you can find listed under community themes in Obsidian. https://github.com/radekkozak/discordian
First of all, as this is my first post i wanted to say: Happy New Year everyone ! Hope this one will be less pandemic and will find each of you in better place than the previous one.
Anyway, to quickly introduce myself: i am photog, writer and Android dev by profession. I’ve been managing my personal notes for years, historically with pen and notebook, then slowly with NV, NValt, with few hiccups of Bear and the Archive app. Needless to say i am avid Zettelkastener and now - which is kind of obvious - big Obsidian fan. I moved all of my notes to Obsidian somewhere in the midst of 2020 and i’ve been following this amazing forum and Discord channel for some time (mostly reading and sitting quiet as i am rather introvert guy who ditched social media eight years ago;)
Here I want to thank you all for the knowledge you give out on these forums. PKM and Zettelkasten realms are not so well known where I live - or not so developed as an idea and practice, I should say - so I am even more grateful I found this helpful community. Not to make this longer than it has to be: since I've made Obsidian my default go-to app for my personal Zettelkasten and learned a ton from this place, I decided to give something back to the community.
Hope you will enjoy this theme. Please consider it beta, as it is a product of my personal experience that I use on a daily basis. I did my best, though, to polish it enough to be used with the latest version of Obsidian and the listed plugins. I wanted it to be simple, flat and unobtrusive but still with a flair, and of course the idea was to have a Discord look.
If you find a bug or see ways to improve it - please let me know, or better still file an issue or even a PR on GitHub. There is also a Discussions panel on GitHub if you prefer it that way: Discussions · radekkozak/discordian · GitHub
You can also observe status of your issues by checking what i’m currently working on here: https://github.com/radekkozak/discordian/projects/2
Happy to accommodate and receive contributions from all of you !
For best experience possible, please download the fonts and install them in your system before using the theme. Provided fonts closely resemble those in Discord app and are sort of required here.
Install the Discordian Theme Plugin (listed in community plugins) This plugin is not required to use Discordian Theme, but highly recommended. It gives you more fine-grained control over some aspects of the theme along with Writer Mode, Flat Andy Mode, Paragraph Focus with customizable fade out, Readable line length and so on
Many ideas and CSS solutions are inspired by or come from amazing Obsidianites on both Obsidian's forum and the Discord channel: @kepano @death.au @nickmilo @tallguyjenks, to name a few. Thank you once again for the shared knowledge and for being part of the Obsidian community.
|
OPCFW_CODE
|
Patihis, L., Frenda, S. J., LePort, A. K. R., Petersen, N., Nichols, R. M., Stark, C. E. L., McGaugh, J. L., & Loftus, E. F. (2013). False memories in highly superior autobiographical memory individuals. Proceedings of the National Academy of Sciences, 110, 20947–20952. doi:10.1073/pnas.1314373110
Patihis, L., Ho, L. Y., Tingen, I. W., Lilienfeld, S. O., & Loftus, E. F. (2014). Are the “memory wars” over? A scientist-practitioner gap in beliefs about repressed memory. Psychological Science, 25, 519-530. doi:10.1177/0956797613510718 (Supplemental Materials)
Patihis, L. (in press). Let's be skeptical about reconsolidation and emotional arousal in therapy. Behavioral and Brain Sciences.
Wylie*, L. E., Patihis*, L., McCuller, L. L., Davis, D., Brank, E. M., Loftus, E. F., & Bornstein, B. H. (2014). Misinformation effects in older versus younger adults: A meta-analysis and review. In M. P. Toglia, D. F. Ross, J. Pozzulo, & E. Pica (Eds.), The Elderly Eyewitness in Court. UK: Psychology Press. *First two authors contributed equally, with sequence chosen by reverse alphabetical order.
Patihis, L., Oh, J. S., & Mogilner, T. (in press). Phoneme discrimination of an unrelated language: Evidence for a narrow transfer but not a broad-based bilingual advantage. International Journal of Bilingualism.
Interests (cont): Is there a type of person that is particularly vulnerable to false memories? In other words, is there a kind of false-memory-personality trait? The flip side of this is a crucially important question, practically speaking, and that is: Is anyone immune from memory distortions? Should anyone be exempt from the kind of caution and scrutiny that is recommended in regards to memory contamination in the judicial system and psychotherapy? Should people, for example, with very strong memory ability be excluded from the cautionary approach to eyewitness testimony--an approach that emphasizes the fallibility and malleability of memory?
As well as memory distortion research, I have recently also been peripherally involved in one of the most interesting new developments in memory science: highly superior autobiographical memory (HSAM) research.
Another interest is in other practical applications related to what people believe about how memory works, because this, again, affects how memory evidence is recovered and assessed in legal and therapy settings.
In addition to practical applications, I am intrigued by various theory questions. For example, questions about the standing of, including predictive power of, various memory distortion theories (e.g. source monitoring, fuzzy trace, etc.). Another interest is in how affective adaptation and emotion appraisal theory can explain distortions in memory for past felt emotion. Also of interest is the recent spate of reconsolidation articles published in top tier journals. And a final interest in a skeptical capacity is in theories such as motivated forgetting, catharsis theory, and various views on how traumatic memory works.
Also of interest is scientific skepticism and distinguishing science from pseudoscience. I particularly like Karl Popper's suggestion that the ideal approach to science is to actively seek disconfirming evidence for falsifiable hypotheses, and like to apply this idea to both science and to critical thinking outside of the lab--for example to avoid groupthink, antilocution, etc.
University of California, Irvine
Psychology and Social Behavior
4201 Social and Behavioral Sciences Gateway
Irvine, CA 92697-7085
|
OPCFW_CODE
|
Can I use S-Video output converted to VGA for daily work?
I have here a video card with a VGA output. Next to that, it has also an S-Video output.
I am thinking of upgrading my system in a dual-desktop direction, by buying an S-Video -> VGA converter and using it to drive a secondary display.
Unfortunately, I can't find information anywhere on whether this works or not.
I am curious about the usability of the s-video -> vga conversion, so
"you will get at most 576x576 pixel, which is not usable for work", is an acceptable answer.
The trivial "Buy a dual vga card for $5" is not an acceptable answer (this question is only about the feasibility of the s-video -> vga conversion).
"Yes, for me it drives a 1024x768 monitor", is an acceptable answer.
It is a common, non-graphics-intensive workstation for everyday office work.
No, S-video is not suitable for a computer console unless you like to look at a 720x480 screen. Use it only to display standard definition video.
@sawdust Is that standardized in S-Video? To me, it doesn't sound unimaginable to produce video output at any resolution (a resolution that I would set in the display properties). So is it unimaginable to have an S-Video output at a resolution that differs from 720x480?
Sure, there's 640x480 for a 4:3 aspect ratio. S-video is for video, not computer graphics. End of story.
I have here a video card with a VGA output. Next to that, it has also an S-Video output.
The VGA port is for computer text and graphics on a computer monitor.
The S-Video port is for connection to a television to display video (e.g. a movie from the DVD drive) at standard definition (i.e. NTSC or PAL).
Standard definition NTSC video is fixed at 480 (interlaced) horizontal lines. (Actually there's a total of 525 lines.)
The resolution of S-video is not considered suitable for modern computer text and graphics work, where the minimum resolution configurable in Windows 7 is 800x600 progressive.
The degradation of the PC's desktop text from converting to 480i S-video, and then back to some higher-resolution VGA, would leave it unreadable IMO.
You could probably simulate the degradation by grabbing a full-resolution screen capture, downscaling that image to 720x480 (e.g. use Resize by pixels in MS Paint), and then upscaling it (use zoom).
Yes, you can use S-Video as an output, and in theory it can be converted to VGA, but you'll need an active device between them. S-Video uses a 4-pin connector, whereas VGA uses a 15-pin connector. In practice this means that you'll maybe get some picture with the active converter, but there will be no automatic resolution detection, and the image might go far over the edges of the screen, or there might be black borders on all sides. In my opinion S-Video to VGA conversion is unusable for any kind of work (at those resolutions, PowerPoint most probably won't fit on the screen).
Thank you for the answer! But I know the optimal resolution of the possible monitors (1366x768 or 1280x800), so if there is no automatic resolution detection, what if I simply set it in the display properties?
You could do that, but it would most probably lead to overscan. The image would also be ugly, since in S-Video there is only 1 minus-pin, while in VGA there is a + and a - for each of R, G and B. The converter would simply short the minuses, and in theory that's not a good thing. (And off-topic: I once had a TV connected to a laptop with a straight S-Video cable and... no, the image is very likely going to be unreadable.)
"S-Video ...carry analog RGB signals" -- Wrong. S-video is not RGB, but chrominance and luminance signals for standard definition video. "S-Video there is only 1 minus-pin" -- You're not making any sense. It's not a differential signal.
Oh, actually that's true. Sorry, got confused. As sawdust said, VGA and S-Video are carrying different kinds of signals, so you cannot even convert them without an active device. And then there is the problem with the resolution.
Please see updated answer
@sawdust Please consider a vote change on the edited answer. Anyways, if you would convert your comment to an answer, I would be happy to accept it.
|
STACK_EXCHANGE
|
In nearly every project that I have worked on in recent years, the database not only stores the data maintained by the application, but also describes (parts of) the application itself. If you have ever had to implement a permission system which grants users the right to view or edit tables, open forms, or execute functions, you already know that.
The resulting problem is that if the application relies on certain key data to be present and correct, accidental modification or deletion of that data usually causes the application to fail.
In this post, I will show how to use triggers to prevent accidental data modification.
Prevent table data deletion
The simplest way to prevent data deletion is to have an INSTEAD OF DELETE trigger which does nothing, or simply raises an error:
CREATE TRIGGER [dbo].[Prevent_Foo_Delete] ON [dbo].[Foo]
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;
    RAISERROR('You cannot delete Foo', 1, 0);
END
Conditional deletion prevention
In this case, deletion should only be allowed under certain conditions. For example, we could allow deleting only single records:
CREATE TRIGGER [dbo].[Prevent_Foo_Multi_Delete] ON [dbo].[Foo]
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;
    IF (SELECT COUNT(ID) FROM deleted) > 1
        RAISERROR('You can only delete a single Foo', 1, 0);
    ELSE
        DELETE Foo
        FROM Foo
        INNER JOIN deleted ON Foo.ID = deleted.ID
END
Similarly, one could restrict the deletion of detail records to those of a single master record by writing
IF (SELECT COUNT(DISTINCT BAR_ID) FROM deleted) > 1
The same method can be used for UPDATE triggers. It may, however, be easier to define an ON UPDATE trigger to avoid rephrasing the UPDATE statement in an INSTEAD OF trigger. In case of failure, we roll back the current transaction:
CREATE TRIGGER [dbo].[Prevent_Foo_Update] ON [dbo].[Foo]
FOR UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF (SELECT COUNT(ID) FROM inserted) > 1
    BEGIN
        ROLLBACK TRANSACTION
        RAISERROR('You can only modify 1 Foo', 1, 0);
    END
END
These mechanisms protect you from an accidental UPDATE or DELETE of all records (e.g. caused by a missing WHERE clause, or a semicolon in front of the WHERE condition).
However, there is still the TRUNCATE TABLE command which deletes all data in a table and cannot be stopped by a DELETE trigger:
Because TRUNCATE TABLE is not logged, it cannot activate a trigger.
The rescue can be found in the preceding sentence of the documentation:
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint
Simply have a table that references the tables to be protected:
CREATE TABLE Prevent_Truncate(
    Foo_ID INT REFERENCES Foo(ID),
    Bar_ID INT REFERENCES Bar(ID)
)
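A quick sanity check, assuming the Foo table and triggers from the examples above (an untested sketch; the exact error text may vary by SQL Server version):

```sql
-- With Prevent_Truncate in place, this is now rejected by the engine:
TRUNCATE TABLE Foo;
-- Cannot truncate table 'Foo' because it is being referenced
-- by a FOREIGN KEY constraint.

-- Bulk deletes are still blocked by the INSTEAD OF DELETE trigger,
-- while a single-row delete passes Prevent_Foo_Multi_Delete:
DELETE FROM Foo WHERE ID = 42;
```

Note that Prevent_Truncate never needs to contain any rows; the mere existence of the FOREIGN KEY references is enough to make TRUNCATE TABLE fail.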
You only appreciate how valuable your data is once it’s lost 😉
|
OPCFW_CODE
|
INNOCENCE AND GUILT AS CORRELATIVE FACTORS
Innocence and guilt ‘hang together’, like two sides of the same coin, whether that metaphorical coin happens to be metachemical, chemical, physical, or metaphysical, not to mention, in subordinate contexts, pseudo-metaphysical, pseudo-physical, pseudo-chemical, or pseudo-metachemical.
Hence there is, besides the obvious distinction between ‘being innocent’ or ‘being guilty’ of, say, a crime or even a sin, a more complex distinction which amounts, in any element or pseudo-element, to the equivalence of being free or being bound, that is, in effect, equivalent to freedom and binding or, for that matter, virtue and vice, brightness and shadow, positivity and negativity. Therefore one is both innocent and guilty, just as one is free and bound, positive and negative, virtuous and vicious, etc., etc.
But metachemical innocence and guilt is not the same as chemical innocence and guilt, even if, being of a female character, it shares with its lesser sister certain features in common. For the innocence of metachemistry is evil (beautiful and loving) and the guilt of metachemistry criminal (ugly and hateful), whereas the innocence of chemistry is pseudo-evil (strong and proud) and the guilt of chemistry pseudo-criminal (weak and humble), the ratio of innocence to guilt in metachemistry, a noumenal or ethereal element, differing from its chemical, or phenomenal and corporeal, counterpart in the absolute terms of 3:1 as against the relative terms of 2½:1½. Yet innocence still predominates over guilt in either element.
Likewise, physical innocence and guilt is not the same as metaphysical innocence and guilt, even if, being of a male character, it shares with its greater brother certain factors in common. For the innocence of physics is pseudo-graceful (knowledgeable and pleasurable) and the guilt of physics pseudo-wise (ignorant and painful), whereas the innocence of metaphysics is graceful (truthful and joyful) and the guilt of metaphysics wise (illusory and woeful), the ratio of innocence to guilt in physics, a phenomenal or corporeal element, differing from its metaphysical, or noumenal and ethereal, counterpart in the relative terms of 2½:1½ as against the absolute terms of 3:1. Yet innocence still preponderates over guilt in either element.
Therefore the innocence of evil and the guilt of crime not only differs from the innocence of pseudo-evil and the guilt of pseudo-crime, but stands in gender opposition to the innocence of grace and the guilt of wisdom, not to mention the innocence of pseudo-grace and the guilt of pseudo-wisdom, as incompatible gender ideals that can only suffer a catastrophic negation if subjected to the hegemonic sway of the opposite gender, whereof, in the female case, evil will be negated by good and crime by punishment, whilst, in the male case, grace will be negated by sin and wisdom by folly.
For if the innocence of evil and the guilt of crime are negated, by a pseudo-chemical subordination to a physical hegemony, then the damned outcome can only be the pseudo-guilt of goodness (pseudo-weakness and pseudo-humility) and the pseudo-innocence of punishment (pseudo-strength and pseudo-pride), in consequence of a fall from free soma metachemically to bound soma pseudo-chemically (evil to good) and from bound psyche metachemically to free psyche pseudo-chemically (crime to punishment), neither of which could be desirable from a metachemical standpoint, where the female is hegemonic, but rather eventualities to faithlessly fear, just as, from the converse standpoint, the pseudo-chemical damned could be inferred to live in faithless hope of a return to metachemical innocence and guilt.
Similarly, if the innocence of pseudo-evil and the guilt of pseudo-crime are negated, by a pseudo-metachemical subordination to a metaphysical hegemony, then the counter-damned (pseudo-damned) outcome can only be the pseudo-guilt of pseudo-goodness (pseudo-ugliness and pseudo-hatred) and the pseudo-innocence of pseudo-punishment (pseudo-beauty and pseudo-love), in consequence of a counter-fall (pseudo-fall) from free soma chemically to bound soma pseudo-metachemically (pseudo-evil to pseudo-good) and from bound psyche chemically to free psyche pseudo-metachemically (pseudo-crime to pseudo-punishment), neither of which could be desirable from a chemical standpoint, where the female is hegemonic, but rather eventualities to pseudo-faithlessly fear, just as, from the converse standpoint, the pseudo-metachemical counter-damned (pseudo-damned) could be inferred to live in pseudo-faithless hope of a return to chemical innocence and guilt.
For no female, whether pseudo-chemical or pseudo-metachemical, is going to be at ease with a situation which negates her authentic sense of innocence and guilt, freedom and binding, under male hegemonic pressures which, in the metaphysical case, favour genuine grace and wisdom at the expense of pseudo-evil and pseudo-crime coupled, on its own side of the gender fence, to sin and folly, or, in the physical case, favour pseudo-grace and pseudo-wisdom at the expense of evil and crime coupled, on its own side of the gender fence, to pseudo-sin and pseudo-folly, and consequently the pseudo-metachemical will no more be resigned to pseudo-goodness and pseudo-punishment than their pseudo-chemical counterparts to goodness and punishment.
Which is a test for males and the very existence of culture or pseudo-culture, as the class/axial case may be, not merely in polar rejection of philistinism or pseudo-philistinism, but in opposition to pseudo-barbarity and barbarity in the interests, for pseudo-females, of civility and pseudo-civility.
For just as the pseudo-metachemical pseudo-female corollary of culture is pseudo-civility, so the pseudo-chemical pseudo-female corollary of pseudo-culture is civility, and neither can be sustained (by males) where culture or pseudo-culture is not. Only a reversion, pseudo-faithlessly or faithlessly hoped for by the pseudo-civil and civil, to pseudo-barbarity or barbarity, as the axial/class case may be, would then remain, with the restoration, in consequence, of a female hegemonic sway over philistines and pseudo-philistines.
For males, on the other hand, the importance of remaining in control of their hegemonic positions cannot be overestimated, not least in metaphysics, since the negation of grace and wisdom by sin and folly in the church-hegemonic axial case is something to faithfully fear … as the righteous, hegemonic over the pseudo-just, must faithfully fear the meek, subordinate to the pseudo-vain. The negation of pseudo-grace and pseudo-wisdom by pseudo-sin and pseudo-folly in the state-hegemonic axial case is likewise something to pseudo-faithfully fear … as the pseudo-righteous, hegemonic over the just, must pseudo-faithfully fear the pseudo-meek, subordinate to the vain.
Those, on the other hand, who live in sin and folly, pseudo-physical guilt and innocence, could be inferred to live in faithful hope of deliverance through salvation to the greater innocence of grace and the lesser guilt of wisdom in righteousness, while their pseudo-sinful and pseudo-foolish pseudo-metaphysical counterparts could be inferred to live in pseudo-faithful hope of counter-deliverance through counter-salvation (pseudo-salvation) to the greater innocence of pseudo-grace and the lesser guilt of pseudo-wisdom in pseudo-righteousness, neither of which, however, would have anything to do with metaphysics and therefore with God and, more significantly, Heaven, but, equating with man and the earth, leave much to be desired from a truly religious, or church-hegemonic, axial standpoint.
It is for this reason that ‘Kingdom Come’ will not be a physical but a metaphysical destiny primarily intended for the pseudo-physical, whose deliverance from meekness will bring about the counter-damnation (pseudo-damnation) of the pseudo-vain to pseudo-justice, subordinate, for ever more, to the hegemonic triumph of righteousness.
|
OPCFW_CODE
|
OK, with a name like Rocket Science Games you had to expect that things might 'crash and burn' now and then, didn't you? Just before the CGDC, Rocket Science laid off all the development personnel and became something approximating a virtual corporation — it still exists, but it doesn't do much. You might, I suppose, say the Rocket is drifting in space, working to stay alive, hoping for rescue.
And just as I was getting used to having a regular job, too. That part alone took almost a year.
Still, it's not all bad. I've actually had a month to realize that there is a season called Spring (usually spent inside trying to get product ready for Christmas, which comes in September or so). I've had a chance to visit a lot of companies and see what kind of cool stuff they're doing — especially the small ones. And I've come to realize that I've grown up enough to be able to appreciate leisure time. You wouldn't think that would be a problem, but it always has been for me. I have the first inkling that, were I independently wealthy, I might NOT spend my life just the way I have, building cool stuff. Scary.
Everyone should have a break now and then, just to see what it's like. I can't recommend the way I got here — having a company flame out under me was no fun — but I can recommend the results.
NuMega BoundsChecker Version 5
Anybody who does development for Windows needs...well, needs as much help as they can get, frankly. Windows is an immensely complex system for which we build horrifically complex software. And it's easy to get screwed up by relatively simple errors when you code in C or C++ for Windows. BoundsChecker is an automated debugging tool which can help you find significant errors without significant effort. It's not a cure-all, but it's a useful tool that I think every development team programming for Windows should have.
BC 5 comes in several flavors. I have the Visual C++ Edition, designed to integrate specifically with Microsoft Visual C++ and knowledgeable about certain internal formats it uses. Unlike previous versions of BC, which were compatible with multiple compilers and debuggers, the Visual C++ Edition contains features and benefits usable ONLY with Visual C++. There's another version of BC 5 (Standard Version) which is usable with most other compilers, and there is a Delphi edition. In the works is a version compatible with Borland's C++ Builder (basically a C++ version of Delphi).
BC 5 supports two levels of error detection, each level requiring a different amount of interaction from the user (and a different level of intrusion into your normal process). The simplest level is ActiveCheck, which can be used without recompilation or relinking, and which detects Windows function failed, Invalid argument, Stack overrun, Dynamic memory overrun, Memory leak, Resource leak, and Unallocated pointer. The more intrusive (and more comprehensive) level is FinalCheck, which requires that BC "instrument" (insert error detection code into) your code. FinalCheck detects (in addition to the problems found by ActiveCheck) Reading/Writing overflows memory, Reading uninitialized memory, Array index out of range, Assigning pointer out of range, Dangling pointer, non-Function pointers, Memory leak by free, Memory leak by reassignment, Memory leak leaving scope, and Returning pointer to local variable.
In addition to the above error detection, BC also does event logging so that you can see what system events occurred before a problem happened; checks for Win32 compliance for NT, 95, and Win32s; checks for undocumented Windows calls; and highlights library calls not supported by ANSI C.
The major problem with BoundsChecker has always been the slow speed of instrumentation and instrumented code. The Visual C++ edition is intended to improve the performance of instrumentation when used with Visual C++ by instrumenting intermediate code instead of source code. I was unable to run head-to-head comparisons between BC 5 and the previous version I used (BC 3), but my gut feeling is that BC 5 is substantially faster when instrumenting. Running instrumented code (especially with extensive event logging) is still too slow for some real-time applications.
But that's no reason not to use BoundsChecker where you can. Especially with the new pricing, every development team should have at least one copy for testing at least periodic builds. It's not out of line to have the BuildMaster use BC on every daily build or every build posted to source control, but at least every milestone build should be checked.
Introduction to the Personal Software Process
Watts S. Humphrey
Addison-Wesley, PB, 278 pp.
I first used a time log when I was a TD at Electronic Arts and had to account for my time so that it could be billed to various projects. The first surprise was in how easy it was to do. The second surprise was where my time was actually going. Once we had information about where our time was going, the next step was to prioritize projects, and pretty soon we could say "sorry, you're below the line and I do not have time to do that" (a wonderful thing in a company that wants to take every second of your life). If you log your time and find out where it's going, you too may suddenly discover that you have MORE time in your life — or at least that you can make better choices about how you spend the time you do have.
Time logging is probably the central thrust of this book. Watts Humphrey has written a number of significant volumes on software engineering and management, including Managing Technical People, Managing the Software Process, and A Discipline for Software Engineering. Introduction to the Personal Software Process (hereafter, IntroPSP) is his latest, and I find it a worthwhile addition to his section of my bookshelf.
I was first introduced to Humphrey's writing by A Discipline for Software Engineering (hereafter, DSE), which introduced the PSP (Personal Software Process) for engineers to improve their ability to create code and to schedule their creation of code. I didn't agree with everything in DSE, especially the emphasis on Lines of Code (LOC) as a productivity metric. But I did (and do) agree with the fundamental idea that you must measure your productivity and your daily activities in order to improve your efficiency. Lord Kelvin is generally credited with the quote "You can't control what you can't measure", and Humphrey takes the idea to heart both in DSE and in IntroPSP.
The heart of the PSP is: Record what you do and use that information to plan your time. Once you know how long it took you to do a task of a given size, you can use that knowledge to plan how long a task yet-to-be-done will take. Of course, then you have to accurately estimate the size of a task before you've done it. The PSP covers that, too, although I think it's better covered in DSE than in IntroPSP.
Once you have information on your daily activities, you can use it to determine useful tidbits of information like the number of minutes you spend per debugged LOC, the number of LOC per hour, and so forth. These numbers give you a reasonable idea of your productivity (albeit in a given language, even a given development environment) so that you can correctly determine if a change in your techniques or behavior generates a positive or negative effect upon your efficiency. Without good data, you're really just guessing whether that new tool you started using really helps you produce code faster or just moves time from writing code the first time to debugging it later.
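The arithmetic behind those tidbits is simple enough to sketch. The following is an illustrative example, not one of Humphrey's actual PSP forms; the log categories and numbers are invented:

```python
# Hedged sketch of PSP-style productivity numbers derived from a time log.
# The categories and figures below are invented for illustration only.

def productivity(entries, loc_written):
    """entries: list of (category, minutes); loc_written: debugged LOC."""
    total_min = sum(minutes for _, minutes in entries)
    by_cat = {}
    for cat, minutes in entries:
        by_cat[cat] = by_cat.get(cat, 0) + minutes
    return {
        "minutes_per_loc": total_min / loc_written,   # min per debugged LOC
        "loc_per_hour": loc_written / (total_min / 60.0),
        "by_category": by_cat,                        # where the time went
    }

log = [("design", 90), ("code", 240), ("compile", 30), ("test", 120)]
stats = productivity(log, loc_written=160)
```

With data like this accumulated over several projects, a change in tools or technique shows up as a shift in these ratios rather than as a hunch.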
IntroPSP is just what it says — a nice little introduction to a more precise (OK, some might say compulsive) method of time management and what I might call "productivity engineering". I don't really think it takes the place of DSE, but it's a much lighter introduction and contains quite a bit of useful information — the survey or course notes, if you will, to DSE's textbook. It will certainly give you enough information about the PSP to decide if you want to use it or not.
One caveat, however — it appears that Watts Humphrey is now teaching courses in the PSP, which gives him an additional financial incentive in its popularity and success. I would think carefully (and read both volumes very carefully) before committing myself (or my entire department) to training in any technique claimed to be an across-the-board benefit. It smacks of Silver Bullet Syndrome (see Assessment and Control of Software Risks, by Capers Jones). If you're going to try it, try it with just one or two people first, and see what it does for them.
See you next time....
Evan Robinson is the former Director of Games Engineering at Rocket Science Games, Inc. Before becoming a manager he was a consultant, programmer, technical director, and game developer. He is now eking out a tenuous living writing review columns and going on job interviews. You can email him to offer him a consulting opportunity or a job, or just to talk (well, OK, type) about books, software, development, or the meaning of life.
Copyright © Evan Robinson. All Rights Reserved.
|
OPCFW_CODE
|
DOE Data Day (D3)
Globus professional services manager Rick Wagner will be speaking at the 2019 Department of Energy (DOE) Data Day event at Lawrence Livermore National Laboratory (LLNL):
- Title: Globus Research Data Platform
- Date/Time: Wednesday, Sept. 25 @ 1:55 - 2:20 p.m. (part of "Data-Intensive Computing" session)
- Location: Bldg. 170, Room 1091
- Abstract: This presentation will introduce how Globus is used for data management, motivated by examples drawn from major facilities and projects within the DOE, NSF, and NIH funding ecosystems. Globus provides high-performance, secure file access, transfer and synchronization directly between storage systems (i.e., without needing to relay via an intermediary machine). Globus scales to meet the needs of increasingly diverse data by handling all the difficult aspects of data transfer, from authentication at source and destination to performance optimization and automatic fault recovery. It supports both high-performance GridFTP transfers and secure HTTPS access for direct upload/download. Originating from Argonne National Laboratory and now developed and operated by the University of Chicago, Globus has become a preferred service for moving and sharing data between and among a wide variety of storage systems at research labs, campus computing resources, and national facilities like the Argonne and Oak Ridge Leadership Computing Facilities (ALCF and OLCF), DOE’s Joint Genome Institute, the National Energy Research Scientific Computing Center (NERSC), and the Advanced Photon Source (APS) at Argonne.
Globus relies on two core components: the Globus service, which coordinates data transfer; and the Globus Connect software that is deployed on storage systems to enable secure, high performance data access. Globus Connect’s modular backend storage interface enables interoperability across HPC and cloud storage systems such as Amazon S3, Ceph, HPSS, HDFS, and Box. We provide secure data sharing allowing users to make data on Globus endpoints accessible to other individual users and/or groups. The Globus Connect software provides advanced features that ensure that users who access such shared endpoints are restricted to the locations and permissions granted by the owner. Such shared endpoints can be created and managed dynamically by users and programs, providing a convenient mechanism for data sharing. All Globus services expose programmatic APIs that can be used by developers and data providers to offer robust file transfer and sharing, while leveraging advanced identity management, single sign-on, and authorization.
Globus’s data management features build on Globus Auth, a mature, widely used foundational identity and access management platform service designed to address the needs of the science and engineering community for authentication and authorization across platforms, institutions, and services. It serves to broker authentication and authorization interactions between end-users, identity providers, resource servers (services), and clients (including web, mobile, desktop, and command line applications, and other services), permitting unified access to research data across all systems.
Developers can use Globus Auth APIs to integrate these capabilities into services, applications, and tools without needing to develop software to authenticate users, support peripheral workflows (e.g., password reset), or apply security updates. By eliminating friction associated with the frequent need for multiple accounts, identities, credentials, and groups when using distributed resources, Globus Auth streamlines the creation, integration, and use of advanced research applications and services. Globus Auth builds upon the OAuth 2 and OpenID Connect specifications to enable standards-compliant integration using existing client libraries. It supports identity federation models that enable linking of diverse identities (e.g., XSEDE, ORCiD, institutional), a secure scoped access token model for interacting with services, and APIs for resource servers and clients to validate and introspect tokens, with a delegation model by which services can obtain short-term delegated tokens to access other services.
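The token validation and introspection step mentioned above can be illustrated with a generic OAuth 2 token-introspection check. This is a hedged sketch, not Globus Auth's actual API: the fields used are the standard RFC 7662 introspection fields, and the `token_allows` helper and the `transfer:read` / `transfer:write` scope names are invented for the example:

```python
import json
import time

# Hedged sketch: a resource server deciding whether to honor a request,
# given an OAuth 2 token-introspection response (RFC 7662 fields).
# Not Globus-specific; helper and scope names are invented.

def token_allows(introspection_json, required_scope, now=None):
    """Return True if the token is active, unexpired, and carries the scope."""
    info = json.loads(introspection_json)
    if not info.get("active", False):        # revoked or unknown token
        return False
    now = time.time() if now is None else now
    if "exp" in info and info["exp"] <= now:  # expired token
        return False
    scopes = info.get("scope", "").split()    # space-delimited scope list
    return required_scope in scopes

# Example introspection response a resource server might receive:
response = json.dumps({
    "active": True,
    "scope": "transfer:read transfer:write",
    "exp": 2_000_000_000,  # far-future expiry, for the example
})
ok = token_allows(response, "transfer:read", now=1_700_000_000)
```

The appeal of building on the standard, as the text notes, is that existing OAuth 2 / OpenID Connect client libraries already speak this protocol.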
Globus Auth capabilities are all accessible via a REST API and associated SDKs, making it easy to integrate them into applications and services. Its effectiveness is illustrated by its adoption by projects such as: NCAR's Research Data Archive; DOE’s KBase; the NSF’s JetStream cloud and XSEDE network; and NIH’s FaceBase Consortium. Globus Auth supports over 420 identity providers, including most DOE national laboratories. Other measures of maturity, adoption, and impact are that Globus Auth manages over 75,000 unique identities; supports 300 applications and services; and has issued almost 4.4 million access tokens.
Globus services are widely used within and outside the Department of Energy, with tens of thousands of users and more than 14,000 storage systems accessible via Globus, including at most leading US universities and research computing centers. By using Globus, users can leverage other DOE investments, such as ESnet; for example, researchers from Argonne recently completed the largest single transfer managed by Globus, moving 2.9 PB between OLCF and ALCF storage without disruption. DOE researchers use Globus to receive data from core facilities like the APS, move data to compute resources and archival systems, pull data from remote instrument facilities into analysis environments, and automate these tasks.
The D3 workshop is dedicated to data management activities in the Department of Energy (DOE) national laboratories on September 25-26 at Lawrence Livermore National Laboratory (LLNL). The DOE has joined the larger scientific community in the promotion of data management as a means to higher quality and more efficient research and analysis. Data management includes a disciplined approach to metadata, which tracks provenance and provides traceability from raw data products through analysis results and potentially through production. We will be discussing a variety of topic areas including Data Curation and Standards, Data Intensive Computing, Data Management in the Cloud, and Data Access, Sharing, and Sensitivity. For details, visit the event page.
|
OPCFW_CODE
|
We designed the Open Targets Platform following a collaborative and iterative design process (Karamanis et al. 2018). Qualitative feedback from users has been at the heart of this process. In addition to helping us understand what needs to be improved, this feedback helps us assess whether we have met our main objective.
By the time the platform was made available publicly, we had collected a lot of feedback from users testifying that it was comprehensive and intuitive. Users also shared cases in which the platform helped them with their day-to-day activities in drug target identification (see Karamanis et al. 2018 Supplementary Table 1).
We supplemented the qualitative feedback from users with quantitative metrics. Brainstorming a long list of metrics can quickly get unwieldy and difficult to prioritize. In order to avoid this problem, we identified key performance indicators using the HEART methodology. This methodology breaks down the experience of using a product into five aspects: Happiness, Engagement, Adoption, Retention and Task completion (Rodden et al. 2010).
The importance of each aspect of the HEART methodology varies from product to product. For the Open Targets Platform, we decided to focus on Adoption, Engagement and Retention (in that order) for the definition of quantitative metrics because these aspects can be captured regularly and more directly through web analytics. Although we use qualitative feedback (see Karamanis et al. 2018 Supplementary Table 1) as the main indicator of Happiness, we intend to start surveying our users periodically in order to monitor differences in the Net Promoter Score (Reichheld 2003) between major updates of the Platform. Task completion is less relevant to our application since using the platform is much more open-ended than, for instance, making a purchase online (which has a clear completion action).
We defined high level goals and lower level signals for the prioritized aspects as well as actual metrics for each aspect. The members of our multidisciplinary team were invited to contribute to these definitions, similarly to how they participated in our collaborative user research, design and testing activities. This process helped us clarify the purpose of collecting analytics before investing effort in the actual way in which this will be done. We review the statements of goals, signals and metrics periodically, with the most recent version shown in Table 1.
| Aspect | Goal | Signals | Metrics |
| --- | --- | --- | --- |
| Adoption | We want people to be visiting the site and viewing its pages. | Visiting site, Viewing pages | Visits per week, Unique visitors per week, Pageviews per week, Unique pageviews per week |
| Engagement | We want people to be using the site and performing certain actions. | Spending time on site, Searching the site, Downloading information, Clicking on evidence links | Average visit duration, Bounce rate per week, Actions per week, Actions per visit |
| Retention | We want people to come back to the site after their first visit. | Returning to the site | Returning visits, % returning visits / all visits |
Table 1: Following the HEART methodology, we prioritized Adoption, Engagement and Retention as the aspects of user experience to focus on for the definition of quantitative metrics. We collaboratively defined high level goals for each of the prioritized aspects and then identified lower level signals as well as actual metrics for each aspect.
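To make the metric definitions concrete, here is a hedged sketch of how two of the Table 1 metrics (visits per week for Adoption, % returning visits for Retention) could be computed from a raw visit log; the log format and data below are invented for illustration, not our actual analytics pipeline:

```python
from collections import defaultdict
from datetime import date

# Hedged sketch: computing two Table 1 metrics from a raw visit log.
# Each record is (visitor_id, ISO date of visit); the data is invented.

def weekly_metrics(visits):
    per_week = defaultdict(int)   # Adoption signal: visits per ISO week
    seen = set()
    returning = 0
    for visitor, day in sorted(visits, key=lambda v: v[1]):
        year, week, _ = date.fromisoformat(day).isocalendar()
        per_week[(year, week)] += 1
        if visitor in seen:
            returning += 1        # Retention signal: a repeat visit
        seen.add(visitor)
    pct_returning = 100.0 * returning / len(visits)
    return dict(per_week), pct_returning

visits = [
    ("u1", "2019-09-02"), ("u2", "2019-09-03"),
    ("u1", "2019-09-10"), ("u3", "2019-09-11"),
]
per_week, pct_returning = weekly_metrics(visits)
```

In practice a web analytics tool reports these numbers directly; the point of the sketch is only that each metric in Table 1 is a simple aggregation over a clearly defined signal.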
|
OPCFW_CODE
|
Continuous Testing for dummies
Understand how continuous testing plays a role in DevOps
03 December 2018
Continuous Testing (CT) includes implementing end-to-end tests that evaluate the end-user experience across frontend and backend processes. One of the primary goals of CT is to ensure that tests are broad enough to spot whenever an application change adversely impacts the functionality of the software. It is about reducing the number of false positives by favouring flexible, robust test frameworks over brittle scripts, and about reviewing and optimizing the test suite so that there are no redundancies.
The DevOps and agile methodologies require technologies, processes and people to undergo transformation, yet the testing component often remains unchanged. Continuous Testing takes care to transform the testing practice as well.
Drawbacks of the legacy testing process
- Most tests are done at later stages because the user interface and other components are not available earlier
- Most tests are time-consuming, so regression tests cannot be run after each build
- There is no feedback on the impact of changes to the existing user experience
- Considerable rework is needed to stay in sync with accelerated release processes
- Most test environments suffer from instability due to issues with test data, missing dependencies, false positives and more
How does Continuous Testing make work easier?
With the right integration between automation, collaboration and tooling, it is possible to perform end-to-end testing in line with the agile and DevOps methodologies. The process of continuous testing can be divided into modules: development, continuous integration, quality analysis and application performance. These four domains each need to be tested in their own way so that complete end-to-end testing is achieved.
The process of testing starts with the development of the code, which is tested using tools like Appium and Selenium; these are the tools we use at Lean Apps for testing the functionality of the code.
Optimizing the tests includes test data management, test maintenance and test optimization management. Virtualizing the testing process means having access to real-world testing conditions through early, frequent and ubiquitous testing. An effective continuous testing framework ensures that the testing strategy encompasses development, operations and quality analysis for a holistic approach.
What are the advantages of Continuous Testing?
- Aligning testing with business risk to optimize the test execution
- Reducing the amount of manual testing and giving speed to automated testing
- Automating quality check and providing insight for the software release
- Moving the focus of testing to the API layer if possible
- Integrating functional testing into the CI/CD to make it part of the delivery pipeline
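The last point, wiring functional tests into the CI/CD pipeline as a gate, can be sketched as follows. This is a hedged illustration of the idea, not any particular CI system's API; the check names and gate policy are invented:

```python
# Hedged sketch of a CI/CD quality gate: run a suite of automated checks
# and block promotion of the build if any regression is detected.
# Check names and stand-in checks are invented for illustration.

def run_suite(checks):
    """checks: mapping of test name -> zero-argument callable returning bool."""
    results = {name: bool(check()) for name, check in checks.items()}
    failures = [name for name, passed in results.items() if not passed]
    return results, failures

def gate(checks):
    """Return True (promote the build) only if every check passes."""
    _, failures = run_suite(checks)
    return not failures

checks = {
    "api_contract": lambda: 2 + 2 == 4,          # stands in for an API-layer test
    "login_flow": lambda: "ok".upper() == "OK",  # stands in for a functional test
}
build_promoted = gate(checks)
```

In a real pipeline the callables would invoke Selenium or Appium suites and API-contract tests, and a failing gate would stop the deployment stage rather than just return a boolean.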
Testing is one of the most important pieces of the downstream software delivery process and needs to be given the right importance. Continuous testing is ultimately about mitigating the business risk involved in releasing software. If testing cannot ensure that business risk is mitigated, the entire process of continuous integration and continuous delivery becomes difficult to carry to its logical conclusion.
If the automated delivery process cannot identify how changes impact business risk or disrupt the end-user experience, then the increased frequency and speed of Continuous Integration and Continuous Delivery could become more of a liability than an asset. The pace of modern application delivery is very fast, and continuous testing has to keep pace with that, and with the heightened complexity and accelerated rates of change demanded of modern software.
|
OPCFW_CODE
|
Definition of Food Webs
There are many misconceptions about food webs and food chains. The two seem almost the same, and they are indeed interconnected. So what makes food chains and food webs different?
A food chain is the process by which energy is transferred from one organism to another. Each step in a food chain is called a trophic level. A food web, meanwhile, is a combination of several food chains whose cycles or processes are connected to each other.
Therefore, it can be concluded that a food chain is part of a food web, a process of eating and being eaten on a smaller scale, while a food web is a collection of food chains with a wider and larger scope.
Food webs are also called resource systems. Naturally, living things eat more than one variety or type of food.
For example, a squirrel eats seeds, nuts, and fruits. Then the squirrel is eaten by a raccoon or a fox. Foxes can also eat grasshoppers and mice, and so on. Most living things are part of several food chains.
Functions of Food Webs
Based on the above explanation, food webs have several functions. The following are some of them:
- Food webs simplify our understanding of the relationships between species
- Food webs describe direct interactions between species in an ecosystem, so that the relationships between species are easily distinguished into transitional species, basal species, and apex predator species
- Food webs can be used to study top-down and bottom-up controls on the structure of a community
One example of a simple food web, based on the rice field ecosystem, involves grass, water spinach, genjer, ulay, grasshoppers, butterflies, rice, snails, frogs, rats, worms, snakes, sparrows, and so on.
A food web is like several food chain processes put together into one and interrelated. Food webs occur because living things consume more than one type of food, or are consumed by more than one kind of other living thing.
Food webs occur not only in one ecosystem but in many. Here are some food webs from various ecosystems, such as rice fields, seas, forests, gardens, lakes, rivers and land.
1. Food Webs in the Rice Field
Food webs occur in many ecosystems, one of which is the rice field, a habitat inhabited by many living things.
The various living things that live there form food webs in order to maintain their lives. Here is an example of a food web in a rice field:
Rice and trees, as producers, are eaten by rats, grasshoppers and caterpillars. Rats are then eaten by snakes and eagles. Grasshoppers are eaten by frogs and chickens, and caterpillars are eaten by frogs and chickens.
Chickens, in turn, are eaten by snakes and eagles. Frogs are eaten by snakes and eagles, and eagles can also eat snakes. When snakes and eagles die, they are broken down by decomposing bacteria.
2. Food Webs in the Sea
Food webs occur not only in the rice field ecosystem but also in the sea. Marine food webs differ from rice field food webs because the living things in them are different.
Every living thing forms food webs to extend and maintain its survival. Here is an example of a food web at sea:
Explanation: seaweed, an autotrophic organism that produces its own food (a producer), is eaten by crabs, which are consumers. Crabs are eaten by squid, which are also consumers. Squid are then eaten by elephant seals and penguins, likewise consumers.
Elephant seals are eaten by whales. Penguins are eaten by whales and by leopard seals, which are also consumers.
Phytoplankton, organisms that can likewise produce their own food, are eaten by small shrimp and zooplankton. Zooplankton are animals that eat bacteria, waste and carcasses, and are usually found in dark areas.
Zooplankton are eaten by fish. Small shrimp are eaten by blue whales and fish. Fish are eaten by seagulls and leopard seals, and seagulls are eaten by leopard seals.
3. Food Webs in the Forest
In addition to the rice field and marine ecosystems, food webs also occur in forest ecosystems. Just as in other ecosystems, there are food webs in the forest; only the living things that inhabit it are different.
The forest is also home to many predators such as lions, wolves and tigers, so living things like rabbits and goats are easily threatened. The following is an example of a food web in the forest:
Explanation: plants or trees, acting as producers, are eaten by rats, rabbits and goats. First-level consumers are usually herbivores, though they may also be omnivorous species. Rats are then eaten by owls and jungle cats.
Rabbits are eaten by jungle cats and wolves, and goats are eaten by wolves. Owls are eaten by jungle cats and snakes. Jungle cats are eaten by lions and wolves, and wolves are eaten by lions. Snakes are eaten by eagles and jungle cats, and eagles are eaten by lions. When the lion dies, it is broken down by decomposing bacteria.
4. Food Nets in the Garden
In addition to the above ecosystems, food nets also occur in the garden ecosystem. Basically all the concepts of nets – food nets are the same, only living beings distinguish them. Food webs in the garden are also almost the same as food webs in the fields. The following are examples of food webs in the garden:[1945foodnetnets” width=”505″ height=”372″/>
Explanation: Plants, as producers that make their own food, are eaten by the first-level consumers: grasshoppers, rats, rabbits and seed-eating birds. The grasshoppers are eaten by spiders. Seed-eating birds are eaten by snakes, eagles and foxes. Rats and rabbits are eaten by foxes and eagles.
The spiders are eaten by snakes and insect-eating birds. Insect-eating birds are eaten by eagles and foxes. Snakes are eaten by eagles. And when eagles and foxes die, they are consumed by decomposing bacteria.
5. Food Webs in the Lake
Food webs also occur in the lake ecosystem. They are almost the same as marine food webs, but with fewer living things than in the sea. Everything begins with autotrophic plants that are able to produce their own food, which are then eaten by the consumers that live in and around the lake.
6. Food Webs in the River
Food webs are also found in rivers. They work just like the food webs of lakes and seas, but river food webs are smaller, because the ecosystem supports fewer organisms than the sea or a lake.
7. Food Webs on Land
Food webs also occur on land. There, most of the participants are terrestrial living things, such as humans, land plants and land animals.
That concludes our discussion of the meaning of food webs, along with their functions and examples from various ecosystems. Hopefully this article is useful. Thank you 🙂
Datadog is the SaaS monitoring and security platform that unifies metrics, traces, logs, and security signals in a single pane of glass, providing both comprehensive and deep visibility into the performance of modern applications. Datadog reduces the time needed to detect and resolve issues by reducing the number of tools needed to troubleshoot performance problems across teams.
DBeaver Corporation specializes in developing data visualization tools for all of the most popular databases. With DBeaver's advanced features, it is convenient to explore, process, and administer a huge range of SQL, NoSQL, and cloud data sources. Our users can work within various infrastructures, including local storage, distributed servers, and clouds. We provide a high level of security and help users protect their data with complex authorization mechanisms. We supply our product worldwide and receive more than 1 million downloads per month.
The Debian Project is an association of individuals who have made common cause to create a free operating system. This operating system they have created is called Debian GNU/Linux, or simply Debian for short.
Debian systems currently use the Linux kernel. Linux is a piece of software started by Linus Torvalds and supported by thousands of programmers worldwide.
The Eclipse Foundation AISBL is an independent, Europe-based not-for-profit corporation that acts as a steward of the Eclipse open source software development community. It is an organization supported by over 350 members, and represents the world's largest sponsored collection of Open Source projects and developers. The Eclipse Project was originally created by IBM in November 2001 and was supported by a consortium of software vendors. In 2004, the Eclipse Foundation was founded to lead and develop the Eclipse community.
From the Internet to the iPod, technologies are transforming our society and empowering us as speakers, citizens, creators, and consumers. When our freedoms in the networked world come under attack, the Electronic Frontier Foundation (EFF) is the first line of defense.
Elastic is a search company. As the creators of the Elastic Stack (Elasticsearch, Kibana, Beats, and Logstash), Elastic builds self-managed and SaaS offerings that make data usable in real time and at scale for use cases like application search, site search, enterprise search, logging, APM, metrics, security, business analytics, and many more. Thousands of organizations worldwide, including Cisco, eBay, Goldman Sachs, Microsoft, The Mayo Clinic, NASA, The New York Times, Wikipedia, and Verizon, use Elastic to power mission-critical systems.
Fedora is a Linux-based operating system that provides users with access to the latest free and open source software in a stable, secure and easy to manage form. We strongly believe in the bedrock principles that created all the components of our operating system, and because of this we guarantee that Fedora will always be free for anybody anywhere to use, modify and distribute.
package ftdc

import (
	"context"
	"io"
	"math"

	"github.com/evergreen-ci/birch"
	"github.com/pkg/errors"
)

const (
	second_ms   int64 = 1000
	max_samples int   = 300
)

// TranslateGenny exports the contents of a stream of genny ts.ftdc
// chunks into cedar ftdc, which is readable using t2. It translates
// cumulative event-driven metrics into metrics of one-second granularity.
func TranslateGenny(ctx context.Context, iter *ChunkIterator, output io.Writer, actorOpName string) error {
	collector := NewStreamingCollector(max_samples, output)

	var prevSecond int64
	var elems []*birch.Element

	for iter.Next() {
		if err := ctx.Err(); err != nil {
			return err
		}

		currChunk := iter.Chunk()
		var startTime *birch.Element

		// While metrics can be identified using Metrics[i].Key(), each metric
		// has a fixed position in the Metrics slice; position 0 is the timestamp.
		timestamp := currChunk.Metrics[0]
		for i, ts := range timestamp.Values {
			currSecond := int64(math.Ceil(float64(ts) / float64(second_ms)))
			if prevSecond == 0 {
				prevSecond = currSecond
			}

			// If we've iterated into the next second, record the values of this sample.
			if currSecond != prevSecond {
				for _, metric := range currChunk.Metrics {
					switch metric.Key() {
					case "ts":
						startTime = birch.EC.DateTime("start", prevSecond*second_ms)
					case "counters.n":
						elems = append(elems, birch.EC.Int64("n", metric.Values[i]))
					case "counters.ops":
						elems = append(elems, birch.EC.Int64("ops", metric.Values[i]))
					case "counters.size":
						elems = append(elems, birch.EC.Int64("size", metric.Values[i]))
					case "counters.errors":
						elems = append(elems, birch.EC.Int64("errors", metric.Values[i]))
					case "timers.dur":
						elems = append(elems, birch.EC.Int64("dur", metric.Values[i]))
					case "timers.total":
						elems = append(elems, birch.EC.Int64("total", metric.Values[i]))
					case "gauges.workers":
						elems = append(elems, birch.EC.Int64("workers", metric.Values[i]))
					case "gauges.failed":
						elems = append(elems, birch.EC.Int64("failed", metric.Values[i]))
					}
				}
				prevSecond = currSecond

				if len(elems) > 0 {
					actorOpElems := birch.NewDocument(elems...)
					actorOpDoc := birch.EC.SubDocument(actorOpName, actorOpElems)
					cedarElems := birch.NewDocument(startTime, actorOpDoc)
					cedarDoc := birch.EC.SubDocument("cedar", cedarElems)
					if err := collector.Add(birch.NewDocument(cedarDoc)); err != nil {
						return errors.WithStack(err)
					}
					elems = nil
				}
			}
		}
	}

	return errors.Wrap(FlushCollector(collector, output), "flushing collector")
}
<?php

namespace Tests;

error_reporting(E_ALL ^ E_STRICT);

use AlternativePayments\Model\Subscription;
use PHPUnit_Framework_TestCase;
use DateTime;

class SubscriptionTest extends PHPUnit_Framework_TestCase
{
    private $testSubscription;
    private $testId;
    private $planId;
    private $customerId;
    private $paymentId;
    private $testCreated;
    private $testUpdated;

    private function setTestValues()
    {
        $this->testId = "testcodevalue12345";
        $this->planId = "testplanid12345";
        $this->customerId = "testcustomerid12345";
        $this->paymentId = "testpaymentid12345";
        $this->testCreated = new DateTime('2014-01-01 00:00:00');
        $this->testUpdated = new DateTime('2014-01-01 00:00:00');
    }

    private function initTests()
    {
        $this->testSubscription = new Subscription();
        $this->testSubscription->setId($this->testId);
        $this->testSubscription->setCreated($this->testCreated);
        $this->testSubscription->setUpdated($this->testUpdated);
        $this->testSubscription->setPlanId($this->planId);
        $this->testSubscription->setCustomerId($this->customerId);
        $this->testSubscription->setPaymentId($this->paymentId);
    }

    public function setUp()
    {
        $this->setTestValues();
        $this->initTests();
    }

    public function testCode()
    {
        $this->assertEquals($this->testId, $this->testSubscription->getId());
    }

    public function testCreatedDate()
    {
        $this->assertEquals($this->testCreated, $this->testSubscription->getCreated());
    }

    public function testUpdatedDate()
    {
        $this->assertEquals($this->testUpdated, $this->testSubscription->getUpdated());
    }

    public function testPlanId()
    {
        $this->assertEquals($this->planId, $this->testSubscription->getPlanId());
    }

    public function testCustomerId()
    {
        $this->assertEquals($this->customerId, $this->testSubscription->getCustomerId());
    }

    public function testPaymentId()
    {
        $this->assertEquals($this->paymentId, $this->testSubscription->getPaymentId());
    }
}
Python for exploratory data analysis and association rules applied to an e-commerce dataset
2020 was a historic year for e-commerce. With social isolation, online sales broke many records. And despite the start of vaccination and a possible return to mobility, the trend is to keep growing in 2021.
Many professionals who work with e-commerce do not know the potential that data analysis can bring to the business. In this article, I'm going to talk a little bit about how to do some simple analyses with Python. These analytics solutions can open up new opportunities, identify problems and provide useful information for managing e-commerce.
We will conduct an Exploratory Data Analysis (EDA) on an e-commerce data set available on Kaggle. After that we are going to use the Apriori association algorithm, which is nothing more than a method of exploring relationships between items.
The data set is available at: https://www.kaggle.com/roshansharma/online-retail
First we are going to import the libraries and the data:
As usual, I like to check the first few lines of the data.
Let’s check the shape, columns types and null values:
Initially we can see that we have 135080 null values and that the CustomerID variable is not in string format. If we call describe(), we can see negative values in the data. These amounts are due to cancelled orders; I will remove them, as we will not need them.
We will make the necessary transformations:
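As a sketch of those transformations, here is the sort of cleanup the notebook performs, run on a few made-up rows that mimic the Online Retail columns (the column names follow the Kaggle data set; the toy values are invented for illustration):

```python
import pandas as pd

# Toy rows mimicking the Online Retail schema; the real notebook reads the full CSV.
df = pd.DataFrame({
    "InvoiceNo": ["536365", "C536366", "536367", "536368"],
    "Description": ["MUG", "MUG", "LANTERN", "BOX"],
    "Quantity": [6, -6, 8, 2],
    "UnitPrice": [2.55, 2.55, 3.39, 1.25],
    "CustomerID": [17850.0, 17850.0, None, 13047.0],
    "Country": ["United Kingdom", "United Kingdom", "Netherlands", "United Kingdom"],
})

df = df.dropna()                    # drop rows with a missing CustomerID
df = df[df["Quantity"] > 0]         # cancelled orders carry negative quantities
df["CustomerID"] = df["CustomerID"].astype(int).astype(str)  # store the id as a string
df["TotalPrice"] = df["Quantity"] * df["UnitPrice"]          # convenient for sales totals
print(df[["InvoiceNo", "Quantity", "TotalPrice"]])
```

Dropping nulls removes the rows without a CustomerID, and filtering on Quantity > 0 discards the cancelled orders, whose invoice numbers start with "C" and whose quantities are negative.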
Now, let’s take a new look at the headers and answer the first question that came to my mind (Which country has the highest sales value?):
We can see this in a bar chart:
This is a London-based store, so it makes sense that sales in the UK are much higher.
We will use this information to apply Apriori to the two countries with the highest sales: the UK and the Netherlands.
First, let’s understand the measures that we will work on: Support, Confidence, Lift and Conviction.
- Support: The proportion of transactions that contain a given itemset (for a rule X => Y, the proportion containing both X and Y).
- Confidence: Calculated for a rule (X => Y), it answers "if X is bought, what is the chance that Y is also bought?": the support of X and Y together, divided by the support of X.
- Lift: The confidence of X => Y divided by the support of Y. It indicates how much more likely Y is to be bought when X is bought than it is overall; a lift above 1 suggests a positive association.
- Conviction: Measures how often the rule fails, that is, how frequently X occurs without Y, compared with how often that would happen if X and Y were independent.
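To make these definitions concrete, here is a small hand computation on a toy set of transactions (the items and the choice of X and Y are invented for illustration):

```python
# Hand-computed association measures on five toy transactions.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]
n = len(transactions)

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / n

X, Y = {"bread"}, {"milk"}
supp_xy = support(X | Y)            # support of the rule X => Y
conf = supp_xy / support(X)         # P(Y | X)
lift = conf / support(Y)            # >1 means X and Y co-occur more than by chance
conviction = (1 - support(Y)) / (1 - conf) if conf < 1 else float("inf")

print(support(X), support(Y), supp_xy, conf, lift, conviction)
```

With these five transactions the rule bread => milk has confidence 2/3, a lift of about 0.83 (slightly below 1, so a weak negative association), and a conviction of 0.6.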
If you want to know more details on how the whole theory behind it works, you can find it in the references at the end of the article.
Let’s get back to the code:
The first part of the code above generates a database with UK orders only. A pivot table is generated where each column corresponds to a product and each row corresponds to the sum of the quantity purchased for that product in a given order.
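That pivot step can be sketched like this on a few toy order lines (the real code does the same for the UK subset of the retail data; the item names here are invented):

```python
import pandas as pd

# Toy order lines: one row per (order, product) pair.
lines = pd.DataFrame({
    "InvoiceNo": ["1001", "1001", "1002", "1003", "1003"],
    "Description": ["MUG", "LANTERN", "MUG", "LANTERN", "BOX"],
    "Quantity": [2, 1, 3, 1, 4],
})

# One row per order, one column per product, cell = total quantity bought.
basket = (lines.groupby(["InvoiceNo", "Description"])["Quantity"]
               .sum().unstack(fill_value=0))

# Apriori implementations usually expect presence/absence flags, not quantities.
basket_flags = (basket > 0).astype(int)
print(basket_flags)
```

Converting the quantities to 0/1 flags at the end matters because frequent-itemset mining only cares whether a product appears in an order, not how many units were bought.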
Next, we’ll apply the apriori algorithm:
Here we did the generation of frequent itemsets and association rules.
Note that we need to define a minimum support threshold.
For this database, we found only two association rules for 0.03 support.
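To illustrate what the algorithm is doing under the hood, here is a minimal apriori-style pass in plain Python that keeps only the itemsets (up to pairs) whose support clears the chosen threshold. The transactions and the 0.5 threshold are made up, and a real analysis would use a library implementation instead:

```python
from itertools import combinations

# Four toy transactions; a real run would use the basket table built earlier.
transactions = [
    {"mug", "lantern"},
    {"mug", "box"},
    {"mug", "lantern", "box"},
    {"lantern"},
]
n = len(transactions)
min_support = 0.5

items = sorted({item for t in transactions for item in t})
frequent = {}
for size in (1, 2):
    for combo in combinations(items, size):
        # Support = fraction of transactions containing every item in the combo.
        s = sum(set(combo) <= t for t in transactions) / n
        if s >= min_support:
            frequent[combo] = s

print(frequent)
```

Raising min_support shrinks the result quickly, which is why the article finds only a couple of rules at one threshold and more at another.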
We will perform the same procedure for the Netherlands database. This time let’s try a 0.07 support:
This code is similar to the code used to generate the UK rules. The objective is to show how the minimum support and the minimum confidence can vary from one data set to another. One country may have a more homogeneous purchasing profile and generate rules with greater support, while another country may generate rules with less support.
Finally we come to the end of this article, whose main objective was to carry out an exploratory analysis and identify some association rules for a data set extracted from e-commerce.
I hope you guys enjoyed and see you soon!
I also am having the same issue and I'm very curious to fix this. Thanks.
When a new user signs up they will get the "default" theme, /wp-content/themes/default.
Replace that with whatever theme you want them to get, in this case, the BuddyPress theme.
I may have just figured that out… not sure if that's what you're saying…
Here's what I did:
1. Deleted all the themes in my theme folder except the one I wanted to use, then renamed it "default".
2. I then went to the member themes folder, deleted all the themes I wasn't using except the one I wanted to use, and renamed it "default".
Here’s a WPMU plugin that allows admin to set new blog defaults, including the default theme.
Thanks all, I'll try both suggestions (including the plugin) when I get home from work, if moving the BP theme into "default" doesn't work.
The first suggestion worked. Now one other question (I swear, LOL): is there a way to make the sidebar widgets etc. all the same? For instance, if I create another blog, I can then put my own widgets on the blog.
I see I’m not the only one thinking in this direction. Basically a uniform default pre-widgeted blog layout for new users to use, that matches the existing site, creating a “facebook” like experience for lack of a better term, where everything fits nicely in its box.
I think the next step would be once you’ve got that done, to remove the “Appearance” area from the user admin entirely, that way they don’t go breaking it, or seeing a menu that they can’t use.
I’ve thought about making a default theme that wasn’t widgetized, with a hard coded sidebar.php; that would work, but it’s more of a work-around than a resolution.
Does anyone know what would be the best way to get this level of integration of blogs and buddy press?
I found that screenshot here: http://apeatling.wordpress.com/2008/06/12/new-buddypress-theme/
@John James Jacoby: You could use the new blog defaults idea to set the widgets for all new blogs. Disabling the menus is pretty easy via a plugin. If you wanted to be secure you’d probably also need a quick check in the wp-admin interface to ensure that users don’t stumble upon the theme / widgets pages. You could easily redirect them if they did.
That would provide a pretty consistent experience without having to re-code the sidebar.
Ooh, or even simpler, oh yes, this is genius. Hook into the pre get option call and replace the sidebar-widgets option with the sidebar widgets from another blog. So you could either use the sidebar widgets from your primary blog (ie blog id 1) or another blog that you could maintain for just that purpose. Then you have one central place to manage the widgets, and they’re automatically modified across all your blogs. Nice.
Let me know (PM, I probably won’t see replies on this thread) if you need any pointers with the code. I could probably whip something up or at least give you the core function calls to investigate.
Ironically enough, that’s pretty much exactly what I ended up doing. Spent a little time getting close and comfortable with a few hooks and filters and checked out some of the existing WPMU default settings blogs.
WD MyCloud NAS - 1 Failed Drive replaced, found another dead one before array was rebuilt
I have a WD MyCloud 4-bay NAS with 4x 3tb drives in it setup in a RAID10 configuration (with a total RAIDed capacity of 5.5tb). A drive had failed and I got it replaced. When I entered the dashboard to have it added, I noticed one of the 3tb drives was showing as having a capacity of 4.4gb. Now the NAS says that there are no configured volumes. Anyone have any ideas on how to retain the data that's on the drives? I had about 5.4tb of data I would like to not lose.
Another case of incompetence killing the cat, sadly. I got seriously stern warnings about taking backups 30 years ago - how times have NOT changed.
Voting to close: Questions should demonstrate reasonable information technology management practices. Questions that relate to unsupported hardware or software platforms or unmaintained environments may not be suitable for Server Fault. Not taking backups is maybe acceptable when you work at the counter at McDonalds - but NOT when you are responsible for data.
RAID is not a backup. If you have important data, take backups.
There are recovery options though, which have 0% - 100% probability of getting your data back.
The basic process is to take image copies of the hard drives to other hard drives, and then try to reconstruct the arrays from the images. To actually do this, you need to know how drives, RAID systems and filesystems interact with each other.
If you don't have such knowledge, then your only option is a data recovery company.
... and monitor your RAID, scrub periodically, and set up notifications about component status. An unmonitored RAID equals no redundancy, because you only remember the RAID exists once it has already failed past its survival margin.
Anyone have any ideas on how to retain the data that's on the drives?
There is no data on the drives. There are fragments of data, but you are literally asking how to put together a mirror from half the pieces.
There is such a thing as common sense, which has included backups for 30 years or so. Ever since leaving school I have been told time and again how people ruined companies by not making backups.
So, SOMEONE decided not to make backups, or SOMEONE decided not to grant you the funds when you asked. That person is legally responsible for gross neglect and incompetence, should your boss want to pursue that legally.
The only answer we can give is: reinitialize and restore from backup. Not that hard - you DO have tapes or at least disk backups. Now you may realize why there are still tapes around.
Brian Jay Stanley
Brian Jay Stanley
Asheville, North Carolina
www.brianjaystanley.com/LIS/resume/
Software developer and writer
- Director of Development, BuildFax, Asheville, NC, November 2011 - present
- Developed scalable, cloud-based web applications using Python, Django, MySQL, and Memcached.
- Wrote technical specifications.
- Created RESTful web service.
- Wrote clean, commented, object-oriented code following best practices.
- Designed database schemas, object models, and XML schemas.
- Implemented enterprise search using Solr/Lucene.
- Designed efficient algorithms for processing massive data sets.
- Trained and supervised other developers.
- Wrote API documentation.
- Wrote automated test suites.
- Configured Linux and Apache and managed Amazon EC2 servers in the cloud using RightScale.
- Accurately estimated project timelines and met deadlines.
- Senior Developer, BuildFax, Asheville, NC, October 2010 - November 2011
- Developer, BuildFax, Asheville, NC, October 2009 - October 2010
- Developer, dataBridge, Asheville, NC, June 2009 - September 2009
- Developed corporate intranets and extranets.
- Created custom console applications in Visual C#.
- Wrote SQL queries, views, and stored procedures in SQL Server.
- Designed reports with SQL Server Reporting Services.
- Installed and configured virtual development environments running Windows Server.
- Performed server administration tasks.
- Developed databases in Microsoft Access.
- Interfaced with clients to gather requirements.
- Conference Database Developer and Electronic Journal Publisher, ACA-UNCA Partnership for Undergraduate Research, University of North Carolina at Asheville, Asheville, NC, April 2009 - October 2009
- Developed MySQL database for managing conference registration, article submissions, and review process.
- Developed website and web-based forms using Joomla!.
- Wrote PHP scripts for performing administrative tasks.
- Developed electronic journal publishing system using a MySQL → PHP → XML → XSLT pipeline.
- Provided expertise on best practices for information management.
- Wrote technical manual.
- Web and Database Developer, Ramsey Library Special Collections, University of North Carolina at Asheville, Asheville, NC, February 2008 - April 2009
- Developed master collection database in Microsoft Access.
- Wrote validation scripts in Visual Basic to improve integrity and efficiency of data entry.
- Created XSLT stylesheets to generate web pages from XML files and crosswalk data into library catalog.
- Wrote Python programs for processing XML and HTML files.
- Modified metadata schemas, administered controlled vocabularies, and created and managed metadata for digital collections in a variety of formats.
- Designed website for large manuscript collection.
- Developed comprehensive workflows and best practices for creating digital collections.
- Trained staff in data entry, naming conventions, and web content creation.
- Wrote online documentation.
- University of Illinois at Urbana-Champaign, Champaign, IL: M.S. Library and Information Science (digital libraries concentration), 2009, GPA: 4.0
- Duke University, Durham, NC: M.T.S. Master of Theological Studies, 2003, GPA: 3.95, summa cum laude
- University of North Carolina at Chapel Hill, Chapel Hill, NC: B.A. Religious Studies, 2001, GPA: 3.99, highest distinction, highest honors
- Programming languages:
- regular expressions,
- Visual Basic
- Relational databases:
- SQL Server,
- Microsoft Access
- XML standards:
- XML Schema
- Web standards:
- Metadata/cataloging standards:
- Dublin Core,
- Programming languages:
- Visual Studio,
- Oxygen XML Editor,
- Frameworks/Content Management Systems:
- Movable Type
- Server Administration:
- Linux (Ubuntu),
- Apache 2,
- Windows Server/IIS
- Cloud Computing:
- Amazon EC2,
- Amazon S3,
- Graphics editors:
- Adobe Photoshop
- "Odyssey of Desire", Pleiades, 2013, vol. 33.1, p.23-39
- "A Sense of All Sorrows", Pleiades, 2012, vol. 32.2, p.64-70
- "The Communion of Strangers", The Sun, 2012, vol. 436, p.13-15
- "Meditation During a Rainstorm", Connecticut Review, 2010, vol. 32.2, p.129-135
- "A Visit to the City", The Redwood Coast Review, 2010, vol. 12.2, p.10
- "Confessions of a Carnivore", North American Review, 2009, vol. 294.5, p.37-39
- "In Praise of Passion", The Dalhousie Review, 2009, vol. 89.3, p.369-379
- "On Being Nothing", The Antioch Review, 2009, vol. 67.2, p.340-348
- "Night Thoughts", The Tusculum Review, 2008, vol. 4, p.94-98
- "The Electric Present", Rock & Sling, 2007, vol. 4.2, p.14-18
- "The Lonely Race", The Laurel Review, 2007, vol. 41.2, p.91-94
- "The Finite Experience of Infinite Life", The Hudson Review, 2005, vol. 58.1, p.19-28
- "On Being Nothing", The New York Times, 2012, opinionator.blogs.nytimes.com/2012/09/09/on-being-nothing/
- "Confessions of a Carnivore", America Now: Short Readings From Recent Periodicals, edited by Robert Atwan, Ninth Edition, Bedford/St. Martin's, 2011
- The Best American Essays 2011, Notable Essays of 2010, "Meditation During a Rainstorm", 2011
- The Best American Essays 2010, Notable Essays of 2009, "On Being Nothing", 2010
- The Best American Essays 2006, Notable Essays of 2005, "The Finite Experience of Infinite Life", 2006
(Mac OSX 10.10.5)
I can reproduce from the matplotlib website mplot3d the example code for a 3D scatter plot scatter3d_demo.py, however the plot renders as a static image. I can not click on the graph and dynamically rotate to view the 3D plotted data.
I have achieved the static 3D plot using the example code - using (a) ipython from within Terminal, (b) ipython notebook from within terminal, and (c) ipython notebook launched from the Anaconda launcher.
I think I am missing some very basic step as assumed knowledge.
In past learning, plotting has opened a GUI Python App which has a graph viewer. (Solution 2 in code shown below opens this.) Perhaps I need to know the code to export the output graph to that display method? (Yes, use %matplotlib (only) as first line without inline or notebook as shown in comments in code block below.)
As an example in ipython notebook:
# These lines are comments
# Initial setup from an online python notebook tutorial is below.
# Note the first line "%matplotlib inline"; this is how the tutorial has it.
# Two solutions:
#   1. use "%matplotlib notebook": graphs appear dynamic in the notebook.
#   2. use "%matplotlib" (only): graphs appear dynamic in a separate window.
#      (2. is the best solution for detailed graphs/plots.)
%matplotlib inline

import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

pd.set_option('html', False)
pd.set_option('max_columns', 30)
pd.set_option('max_rows', 10)

# What follows is a copy of the 3D plot example code.
# Data is randomly generated so there is no external data import.
def randrange(n, vmin, vmax):
    return (vmax - vmin) * np.random.rand(n) + vmin

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
n = 100
for c, m, zl, zh in [('r', 'o', -60, -25), ('b', '^', -30, -5)]:
    xs = randrange(n, 23, 50)
    ys = randrange(n, 0, 100)
    zs = randrange(n, zl, zh)
    ax.scatter(xs, ys, zs, c=c, marker=m)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
Can someone identify what I am missing?
Looking at the Python 3.3.6 documentation, section 25.1, perhaps the tkinter package ...
The tkinter package (“Tk interface”) is the standard Python interface to the Tk GUI toolkit. Both Tk and tkinter are available on most Unix platforms, as well as on Windows systems.
I think though, this relates to development of GUI programs so I am not sure this is relevant. (Correct, this was not needed for the solution.)
CS 290G: Secure Computation (Spring 2014)
Instructor: Huijia (Rachel) Lin, rachel.lin(at)cs(dot)ucsb(dot)edu
Class time and location: TR 3-4:50pm, Phelps 2510
Office hours: Wed 1:30-2:30pm or by appointment, HFH 1153
Class webpage: http://www.cs.ucsb.edu/~rachel.lin/courses/14s290G/
With the growing demand for security and privacy, the field of cryptography has expanded rapidly in the past three decades. Beyond the original goal of ensuring secure communication, innovative and powerful concepts and primitives have emerged that enable new secure paradigm of computing. In this course, we will survey some of the exciting new developments in private database, computation over encrypted data, secure computation without trusted third party, and verifiable outsourcing of computation.
The basic nature of cryptography is all-or-nothing, protecting the privacy of honest individuals against the evil. Core cryptographic primitives, such as, encryption, hash functions, signatures etc., are developed and continuously improved to ensure data confidentiality, integrity and authenticity. A fundamental question that follows is how to extract utility from the heavily securely guarded data? Can we still compute over them? Can we collaborate across boundaries of trust? Can we support dynamic data, maintain efficiency and flexibility? We will see examples of new cryptographic primitives---Oblivious RAM, Searchable Encryption, Fully Homomorphic Encryption, Secure Multi-Party Computation, and Universal Arguments---that achieve both security and utility in some scenarios, and brainstorm about other scenarios where security and utility remain in conflict.
Course Objectives: For students who are interested in cryptography research, the objective is preparing the background for delving into a topic. For students who are interested in security issues in other areas, the objective is familiarizing them with cryptographic tools and thinking, enabling application of cryptographic tools in other areas, and spurring interdisciplinary research.
Course Set-ups and Requirements: The course will be a combination of lectures and paper presentations by the students. At the beginning of the course, I will give lectures on bare basics of cryptography. Then the course will move on to more concrete topics. For each topic, I will again give lectures to introduce the primitives we study and some background. After that, students will present papers on this topic.
Students will also pursue a course research project. Some examples of the flavors of projects are: 1. Improve known construction of a primitive. 2 Propose a new primitive, formalize it and explore its construction. 3. Apply a primitive to solve a security problem in another area. Any combination of the flavors are also encouraged. The final outputs of the project include a presentation and a short report.
Final assessment will depend on a combination of presentation, in-class participation, and final project. The tentative weights of different components are 35%, 15%, 50%
The following is a WORKING DRAFT of the schedule of the class. There will be changes depending on the pace of the class.
Welcome and Basics I
Private Databases I
Week 4, 2014-04-22: Private Databases II (Student Presentation by Victor)
Week 4, 2014-04-24: Private Databases III (Student Presentation by Divya)
Week 5, 2014-04-29: Computing Over Encrypted Data I
Week 5, 2014-05-01: Computing Over Encrypted Data II
Week 6, 2014-05-06: Computing Over Encrypted Data III (Student Presentation by Jason)
Week 6, 2014-05-08: Computing Over Encrypted Data IV (Student Presentation by Akhila)
Week 7: Secure Multi-Party Computation I
Week 7, 2014-05-15: Secure Multi-Party Computation II
Week 8, 2014-05-20: Secure Multi-Party Computation III (Student Presentation by Chris and Omer)
Week 8, 2014-05-22: Secure Multi-Party Computation IV (Student Presentation by Morgan and Asad)
Week 9, 2014-05-27: Verifiable Outsourcing of Computation I (Lecture)
Week 9, 2014-05-29: Verifiable Outsourcing of Computation II (Student Presentation by Fish)
Week 10, 2014-06-03: Verifiable Outsourcing of Computation III
Week 10, 2014-06-05: Student Projects Presentation
Resources and References:
O. Goldreich, The Foundations of Cryptography
J. Katz and Y. Lindell, Introduction to Modern Cryptography
R. Pass and a. shelat, A Course in Cryptography (lecture notes)
M. Bellare and P. Rogaway, Introduction to Modern Cryptography (lecture notes)
Although quant in nature, online surveys also enable researchers to add a little qualitative flair to the overall findings by using unstructured, open-ended questions. The collected feedback can provide enriching, supportive insight akin to a focus group sound-bite. For surveys of smaller sample sizes, the verbatims may even help “break a tie” on content-related analytical conundrums, thus aiding business decision making. However, since there is no interviewer present, open-ended questions must be written thoughtfully and used selectively. On the analysis end, researchers must be assiduous and unbiased in their coding and interpretation.
In online surveys, verbatims are collected by programming in open-ended or free-response question types. Verbatims are sometimes gathered from an "Other" answer option; upon selecting it, respondents are asked to specify an unlisted answer in their own words. Open-ended questions can provide a lot of insight if used properly. Since they solicit unbiased information compared to structured questions, they're excellent to use when first introducing a topic to learn respondents' initial thoughts.

To use in a concept test, for example, you would present the product or service to the respondent (usually text plus image/video) and first ask a purchase interest question using a Likert scale, a structured question. But before presenting any specific attributes (biasing material) to the respondent, you ask for their unbiased feedback via unstructured questions, usually in the form of likes and dislikes. One approach in analysis could then be to determine how often the key attributes according to you, the researcher, and according to the respondents' actual attribute ratings matched with what they typed earlier in the survey (e.g., if "This product looks safe for my child" is one of your key attributes, how many respondents mentioned something relating to safety in their open-ended responses before seeing the list of attributes? Was it mentioned positively or negatively? Does this align with their ratings?).

Open-ended questions are also very useful in exploratory research when you are trying to learn more about a topic generally and need specific feedback from your sample population. It is important to recognize a few disadvantages of unstructured questions, so you can use them sparingly and collect richer insights in the process. The principal disadvantage is that they are costly and time-consuming to code. Coding verbatims requires a researcher to summarize responses in a format that is useful for data analysis.
So, while it may seem like asking 10 open-ended questions in your survey is critical for your business decision making, having to sift through and organize each respondent’s thoughts may not yield the enlightenment you expected. Additionally, you must consider the respondent’s burden in replying to 10 unstructured questions. It is much easier and faster to simply select from a list of provided answer choices, whereas open-ended questions require the respondent to compose and type out a response that makes sense. Implicitly, more weight or value is given to respondents who type out more thoughtful and lengthy responses. Furthermore, you’ll notice fewer articulate responses in online surveys, since it takes more effort to type a response on a keyboard than to speak to an interviewer if one were present. There may also be more spelling errors and misinterpretation resulting from respondents’ use of autocorrect, voice-to-text, etc.
All survey questions and answers need to be assigned a code, usually a number. In the example below, the top row consists of a category for each demographic-trait question and survey question, plus the respondent ID and the date/time the survey was taken. In the gender question (column C), two codes were used, ‘1’ and ‘2’, where ‘1’ represents ‘Female’ and ‘2’ represents ‘Male’. In the screener question PQ1 (column K), there were 7 different answer choices, and each number code represents which answer choice the respondent in the corresponding row selected. In a yes/no question, like Q1 (column L), the codes ‘1’ and ‘0’ represent ‘yes’ and ‘no’. These codes were automatically created and applied through the survey platform. The coding of unstructured questions, on the other hand, is much more complicated: codes must be manually developed for every answer provided. Sometimes, based on past projects or other experience, researchers can precode; this is one method of overcoming the key disadvantage of verbatim analysis. In precoding, anticipated responses are recorded in a multiple-choice format, and responses that match an answer category are grouped there and coded accordingly. This is useful when there is a limited number of possible responses, so they are more easily predicted. Typically, though, coding must wait until after fieldwork has been completed, so be sure to keep that in mind when considering project timing. During coding, researchers will group a number of similar responses and assign them a category code. Then, all other similar responses will be applied to that category code and grouped together. Consider these helpful tips when developing category codes for open-ended responses:
- Category codes should be mutually exclusive – i.e., each response can fit into only one category with no overlap.
- Category codes should be collectively exhaustive – i.e., every response can fit into one of the categories and isn’t left out.
- You may need to include an “Other” or “NA” category to ensure you’re using collectively exhaustive categories, but keep in mind only 10% or less of the total sample should fit here.
- Category codes should be assigned to critical issues, even if no respondent mentions them, which can be telling on its own.
- Use codes that retain as much detail as possible (be as specific as you can).
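As a toy sketch of what precoded category coding might look like in a simple script, consider the following. The categories, keyword rules, and responses here are all invented for illustration; real verbatim coding is a manual, judgment-driven process, not a keyword match.

```python
# Hypothetical category codes: mutually exclusive, collectively exhaustive,
# with an "Other" catch-all (which should capture no more than ~10% of responses).
CATEGORY_CODES = {
    1: ("Safety", ["safe", "hazard", "choke"]),
    2: ("Price", ["price", "expensive", "cheap", "cost"]),
    99: ("Other", []),  # catch-all keeps the scheme collectively exhaustive
}

def code_response(verbatim):
    """Assign the first matching category code to an open-ended response."""
    text = verbatim.lower()
    for code, (label, keywords) in CATEGORY_CODES.items():
        if any(kw in text for kw in keywords):
            return code  # first match only, so codes stay mutually exclusive
    return 99

responses = [
    "Looks safe for my child",
    "Way too expensive for what it is",
    "The packaging is nice",
]
codes = [code_response(r) for r in responses]  # → [1, 2, 99]
```

The point of the sketch is the constraints, not the keyword matching: each response lands in exactly one category, and every response lands somewhere.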
The data collected from open-ended questions can provide enriching insights that add personality to your reporting. Similar to qualitative insights gathered in focus groups, verbatims can support key findings and make the data feel more human and real. Proceed with caution, however, as coding and analysis are time-consuming and costly. Use free-response questions only when the data will help support business decision making – it should be “need to have”, not “nice to have”, information. Keep in mind that online surveys often result in a higher rate of shorter and less detailed responses due to various factors. Post-fielding, researchers must code the verbatims into distinct and appropriate categories, grouping similar responses accordingly. Category codes should be mutually exclusive, collectively exhaustive, assigned to critical issues regardless of mention, and as specific as possible.
|
OPCFW_CODE
|
Site builder has Flash support, audio and video players, RSS. Excellent traffic reporting. Good shopping-cart tax and shipping options.
Confusing sign-up has too many options, come-ons. Components poorly integrated. E-mail account provision is stingy. Product logs you out too often.
GoDaddy has a capable online site builder and excellent site stat reporting, but the overall site-creation and management experience is far less intuitive and streamlined than what you get with Microsoft Office Live Small Business and Yahoo! Small Business.
Although it's probably best known as a huge domain-name registrar, GoDaddy can also help small businesses move online with tools and services like its WebSite Tonight site builder, e-mail account provisioning, search optimization, e-mail marketing, and site-statistic reporting. Patient tinkerers who want to track their Web traffic and fine-tune their sites to improve search-engine rankings will be happy with GoDaddy. The firm certainly offers comprehensive support, but if you want to get your business online as fast and painlessly as possible there are better options.
GoDaddy's site gets high marks for security (Firefox's location bar indicates the highest level of encryption and verification), but not for layout. The home page has to be one of the busiest, most confusing on the Web, overloaded with offers, prices, menu choices, and text and image links. Although it's possible to find what you need, the plethora of choices is a good indication of what to expect if you sign up for GoDaddy's services.
I looked at GoDaddy's New Business solution, the closest competitor to the base business-site plans of Yahoo! Small Business (YSB; $11.95 monthly or $134.43 yearly) and Microsoft Office Live Small Business (OLSB; free the first year with 500MB storage and $14.95 per year after that). At $159.93 per year, GoDaddy's entry-level offering isn't actually a bargain compared with the competition, despite all the splashy special-offer promotions on its site.
For your fee, you get a domain name with private registration, WebSite Tonight (WST) page-building and hosting, Traffic Facts reporting, and Traffic Blazer Deluxe site-promotion tools. The Business Starter plan provides 2GB of storage, lets you transfer 100GB of data, and lets you build a maximum of ten pages. The last seems restrictive, given that services like OLSB and YSB let you add all the pages that fit in your allotted storage. GoDaddy also sells other business-related packages. E-mail Users gives company-branded e-mail addresses, group calendars, and folders. The Domain Buyers service lets individuals park domains to save them for later use, or, more likely, to gouge those who need them for real business. eCommerce Sites is for selling online with shopping carts.
After you sign up, you have to come up with a domain name that isn't taken. The search tool that helps with this is limited compared with those of Register.com and even the Microsoft and Yahoo! challengers. To close the deal on your new domain, you'll have to wade through a blizzard of other offers. When you finally get to your shopping cart, the annoyance level drops a bit, but not completely. GoDaddy provides one nice touch, though: it uses Domains By Proxy to register your domain; your true registration info remains private, helping to prevent spam from flooding the address you use to register.
When you log in to your account, you get a page that's nearly identical to the busy home page nonaccount holders see, except that the My Account choice has working links. The My Account page itself is as dizzying as the rest of the site. Choose the WebSite Tonight link to visit the online Web-page builder. If you don't want to finish building your whole site in the same session, you can get back to it later via the WebSite Tonight home page, a portal that shows pages you're working on, top help articles, and thermometer-style controls that display your storage and data transfer usage. Next to these are strategically placed upgrade buttons in case you're reaching account limits.
You have to click on Use Credit to take advantage of WebSite Tonight or any other features included in your subscription, an annoying extra step the better-integrated OLSB and YSB don't require. Using a wizard, you enter site settings, then choose and customize a design. After selecting one of your domains to use, you must create another username/password combination for that domain, another sign of the offerings' lack of integration.
|
OPCFW_CODE
|
An Earley-Algorithm Context-free grammar Parser Toolkit
SPARK - An Earley Algorithm Parser toolkit.
SPARK stands for Scanning, Parsing, and Rewriting Kit. It uses Jay Earley’s algorithm for parsing context free grammars, and comes with some generic Abstract Syntax Tree routines. There is also a prototype scanner which does its job by combining Python regular expressions.
The original version of this was written by John Aycock for his Ph.D. thesis and was described in his 1998 paper “Compiling Little Languages in Python” at the 7th International Python Conference. The current incarnation of this code is maintained (or not) by Rocky Bernstein.
Note: Earley-algorithm parsers are almost linear when given an LR grammar; such grammars can be left-recursive, which recursive-descent parsers cannot handle directly.
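To make the algorithm concrete, here is a minimal Earley recognizer sketch in Python. This illustrates the chart-based algorithm only and is not SPARK's implementation; the grammar format (a dict mapping nonterminals to tuples of symbols) is invented for this example, and empty productions are not handled.

```python
# Minimal Earley recognizer (illustrative sketch; not SPARK's code).
# A chart state is (head, body, dot, origin): a dotted production plus
# the input position where it started.
def earley_recognize(grammar, start, tokens):
    chart = [set() for _ in range(len(tokens) + 1)]
    for body in grammar[start]:
        chart[0].add((start, body, 0, 0))
    for i in range(len(tokens) + 1):
        added = True
        while added:  # run predictor/completer to a fixed point
            added = False
            for head, body, dot, origin in list(chart[i]):
                if dot < len(body):
                    sym = body[dot]
                    if sym in grammar:  # predictor: expand nonterminal
                        for prod in grammar[sym]:
                            if (sym, prod, 0, i) not in chart[i]:
                                chart[i].add((sym, prod, 0, i))
                                added = True
                    elif i < len(tokens) and tokens[i] == sym:
                        # scanner: terminal matches the next token
                        chart[i + 1].add((head, body, dot + 1, origin))
                else:  # completer: advance states waiting on this head
                    for h2, b2, d2, o2 in list(chart[origin]):
                        if d2 < len(b2) and b2[d2] == head:
                            if (h2, b2, d2 + 1, o2) not in chart[i]:
                                chart[i].add((h2, b2, d2 + 1, o2))
                                added = True
    return any(h == start and d == len(b) and o == 0
               for h, b, d, o in chart[len(tokens)])

# A left-recursive grammar, which Earley handles without trouble:
GRAMMAR = {"E": [("E", "+", "T"), ("T",)], "T": [("n",)]}
```

`earley_recognize(GRAMMAR, "E", ["n", "+", "n"])` accepts the input, while a naive recursive-descent parser would loop forever on the left-recursive `E` rule.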
This uses setup.py, so it follows the standard Python routine:
python setup.py install  # may need sudo
# or if you have pyenv:
python setup.py develop
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|File Name||Version||File Type||Upload Date|
|spark_parser-1.7.0-py2.4.egg (34.4 kB)||2.4||Egg||Oct 10, 2017|
|spark_parser-1.7.0-py2.5.egg (33.8 kB)||2.5||Egg||Oct 10, 2017|
|spark_parser-1.7.0-py2.6.egg (33.7 kB)||2.6||Egg||Oct 10, 2017|
|spark_parser-1.7.0-py2.7.egg (33.5 kB)||2.7||Egg||Oct 10, 2017|
|spark_parser-1.7.0-py3.3.egg (34.8 kB)||3.3||Egg||Oct 10, 2017|
|spark_parser-1.7.0-py3.4.egg (34.4 kB)||3.4||Egg||Oct 10, 2017|
|spark_parser-1.7.0-py3.5.egg (34.2 kB)||3.5||Egg||Oct 10, 2017|
|spark_parser-1.7.0-py3.6.egg (33.6 kB)||3.6||Egg||Oct 10, 2017|
|spark_parser-1.7.0-py3-none-any.whl (17.6 kB)||py3||Wheel||Oct 10, 2017|
|spark_parser-1.7.0.tar.gz (187.8 kB)||–||Source||Oct 10, 2017|
|
OPCFW_CODE
|
<?php
namespace FasterPay;
use FasterPay\Services\GenericApiService;
use FasterPay\Services\HttpService;
use FasterPay\Services\PaymentForm;
use FasterPay\Services\Payment;
use FasterPay\Services\Subscription;
use FasterPay\Services\Signature;
use FasterPay\Services\Pingback;
class Gateway
{
protected $config;
protected $http;
protected $baseUrl = '';
protected $project;
protected $extraParams = [];
public function __construct($config = [])
{
if (is_array($config)) {
$config = new Config($config);
}
$this->config = $config;
$header = [
'X-ApiKey: ' . $this->config->getPrivateKey(),
'Content-Type: application/json'
];
$this->http = new HttpClient($header);
}
protected function getBaseUrl()
{
if (!$url = $this->config->getApiBaseUrl()) {
$url = $this->baseUrl;
}
return $url . '/';
}
public function getEndPoint($endpoint)
{
return $this->getBaseUrl() . $endpoint;
}
public function getHttpClient()
{
return $this->http;
}
public function paymentForm()
{
return new PaymentForm($this);
}
public function signature()
{
return new Signature($this);
}
public function pingback()
{
return new Pingback($this);
}
public function subscriptionService()
{
return new Subscription(new HttpService($this));
}
public function paymentService()
{
return new Payment(new HttpService($this));
}
public function getConfig()
{
return $this->config;
}
public function callApi($endpoint, array $payload, $method = GenericApiService::HTTP_METHOD_POST, $header = [])
{
$endpoint = $this->getEndPoint($endpoint);
$service = new GenericApiService(new HttpClient);
return $service->call($endpoint, $payload, $method, $header);
}
}
|
STACK_EDU
|
package insidescraper
// ResolvedSection stores optimized data.
type ResolvedSection struct {
*SiteData
ID string
Content []ContentReference
Audio map[string]Media
// AudioCount contains the total number of audio classes contained in this section,
// including all descendant sections.
AudioCount int
}
// ContentReference can refer to any of section, lesson, or media
type ContentReference struct {
Type DataType
// Reference can either be the ID of a section or lesson, or a media source URL.
Reference string
}
// ResolvingItem is an interim object during resolution.
type ResolvingItem struct {
Type DataType
SectionID string
// Audio is for when a section consists only of a single audio.
Audio *Media
// Lesson is for when a section is really just a lesson.
Lesson *Lesson
}
// SectionResolver optimizes data structure.
type SectionResolver struct {
// Site is the original Site
Site Site
// ResolvedSite is the resolved Site
ResolvedSite ResolvedSite
}
// ResolvedSite contains all site data.
type ResolvedSite struct {
Sections map[string]ResolvedSection
Lessons map[string]Lesson
// IDs of all top level sections.
TopLevel []TopItem
}
// ResolveSite resolves the Site into an optimized site.
func (resolver *SectionResolver) ResolveSite() {
resolver.ResolvedSite = ResolvedSite{
TopLevel: resolver.Site.TopLevel,
Sections: make(map[string]ResolvedSection),
Lessons: make(map[string]Lesson),
}
for _, topSection := range resolver.Site.TopLevel {
resolver.ResolveSection(topSection.ID)
}
}
// ResolveSection converts the given section into its most efficient
// representation.
func (resolver *SectionResolver) ResolveSection(sectionID string) *ResolvingItem {
// If this item was already resolved, don't do it again.
if _, exists := resolver.ResolvedSite.Sections[sectionID]; exists {
return &ResolvingItem{
Type: SectionType,
SectionID: sectionID,
}
}
section := resolver.Site.Sections[sectionID]
if section.AudioCount == 1 {
if len(section.Lessons) > 0 {
lesson := resolver.Site.Lessons[section.Lessons[0]]
media := resolver.ResolveMedia(lesson.Audio[0], lesson.SiteData)
return &ResolvingItem{
Type: MediaType,
Audio: &media,
}
}
}
if !(len(section.Sections) > 0 && len(section.Lessons) > 0) {
if len(section.Sections) == 1 {
return resolver.ResolveSection(section.Sections[0])
}
if len(section.Sections) > 0 {
if resolver.isEverySectionMedia(sectionID) {
return &ResolvingItem{
Type: LessonType,
Lesson: resolver.simpleSectionsToLesson(sectionID),
}
}
} else if resolver.isEveryLessonMedia(sectionID) {
return &ResolvingItem{
Type: LessonType,
Lesson: resolver.simpleLessonsToLesson(sectionID),
}
}
}
// TODO: Better resolve lessons. Resolve lessons to parent (current section).
// If a lesson just has one media, add it to parent.
// Otherwise, reference the lesson in parent, and add lesson to resolved output.
resolver.ResolvedSite.Sections[sectionID] = ResolvedSection{
SiteData: section.SiteData,
ID: sectionID,
AudioCount: section.AudioCount,
Content: make([]ContentReference, 0),
Audio: make(map[string]Media),
}
// Incorporate all the lessons. If a lesson is a single audio, it is absorbed into the parent section.
for _, lessonID := range section.Lessons {
resolver.useResolvedToParent(resolver.resolveLesson(lessonID), sectionID)
}
// Finally, if this is a real, complicated section, resolve all of its sub sections.
for _, subsectionID := range section.Sections {
resolver.useResolvedToParent(resolver.ResolveSection(subsectionID), sectionID)
}
return &ResolvingItem{
Type: SectionType,
SectionID: sectionID,
}
}
// resolveLesson resolves a lesson into a reference item. If a lesson is just a single media item, it is turned into media.
func (resolver *SectionResolver) resolveLesson(lessonID string) *ResolvingItem {
lesson := resolver.Site.Lessons[lessonID]
if len(lesson.Audio) == 1 {
audio := resolver.ResolveMedia(lesson.Audio[0], lesson.SiteData)
return &ResolvingItem{
Type: MediaType,
Audio: &audio,
}
}
if len(lesson.Audio) == 0 {
return &ResolvingItem{
Type: MediaType,
Audio: nil,
}
}
return &ResolvingItem{
Type: LessonType,
Lesson: &lesson,
}
}
// useResolvedToParent integrates the resolved section into the parent.
func (resolver *SectionResolver) useResolvedToParent(resolved *ResolvingItem, parentID string) {
parent := resolver.ResolvedSite.Sections[parentID]
if resolved.Type == LessonType || resolved.Type == SectionType {
reference := resolved.SectionID
if resolved.Type == LessonType {
reference = resolved.Lesson.ID
}
parent.Content = append(parent.Content, ContentReference{
Type: resolved.Type,
Reference: reference,
})
}
// For a lesson, also add it to the lesson map.
if resolved.Type == LessonType {
resolver.ResolvedSite.Lessons[resolved.Lesson.ID] = *resolved.Lesson
} else if resolved.Type == MediaType && resolved.Audio != nil {
parent.Audio[resolved.Audio.Source] = *resolved.Audio
parent.Content = append(parent.Content, ContentReference{
Type: MediaType,
Reference: resolved.Audio.Source,
})
}
resolver.ResolvedSite.Sections[parentID] = parent
}
// simpleContentToLesson converts the given section to a lesson.
func (resolver *SectionResolver) simpleContentToLesson(sectionID string, sourceIDs []string, toAudio func(ID string) Media) *Lesson {
section := resolver.Site.Sections[sectionID]
audio := make([]Media, 0)
for _, ID := range sourceIDs {
audio = append(audio, toAudio(ID))
}
return &Lesson{
ID: section.ID,
SiteData: section.SiteData,
Audio: audio,
}
}
// simpleLessonsToLesson converts from section which has all lessons with one
// class to just one lesson.
func (resolver *SectionResolver) simpleLessonsToLesson(sectionID string) *Lesson {
section := resolver.Site.Sections[sectionID]
return resolver.simpleContentToLesson(sectionID, section.Lessons, func(id string) Media {
lesson := resolver.Site.Lessons[id]
if len(lesson.Audio) == 1 {
return resolver.ResolveMedia(lesson.Audio[0], lesson.SiteData)
}
return Media{}
})
}
// simpleSectionsToLesson converts from all child sections having just one
// lesson to one lesson with all that content.
func (resolver *SectionResolver) simpleSectionsToLesson(sectionID string) *Lesson {
return resolver.simpleContentToLesson(sectionID, resolver.Site.Sections[sectionID].Lessons, func(id string) Media {
return *resolver.ResolveSection(id).Audio
})
}
// isEverySectionMedia checks whether the given section is really
// just a lesson.
func (resolver *SectionResolver) isEverySectionMedia(sectionID string) bool {
section := resolver.Site.Sections[sectionID]
for _, subSection := range section.Sections {
if resolver.Site.Sections[subSection].AudioCount > 1 {
return false
}
}
return true
}
// isEveryLessonMedia checks if all lessons in section have just one media.
func (resolver *SectionResolver) isEveryLessonMedia(sectionID string) bool {
section := resolver.Site.Sections[sectionID]
for _, lessonID := range section.Lessons {
if len(resolver.Site.Lessons[lessonID].Audio) > 1 {
return false
}
}
return true
}
// ResolveMedia gives the given media all of its data.
func (resolver *SectionResolver) ResolveMedia(audio Media, lesson *SiteData) Media {
title := audio.Title
if len(title) == 0 {
title = lesson.Title
}
description := audio.Description
if len(description) == 0 {
description = lesson.Description
}
audio.Title = title
audio.Description = description
return audio
}
|
STACK_EDU
|
- Is this a monad?
- does this demonstrate a reasonable understanding of error monad?
- what am I missing?
- what else can I do with this code to flex monads more?
- i'm confused as to the relation of success/fail to "return"/"result"/"lift" (i think these are all the same concepts).
- how can we make the problem more complex, such that monads help us solve our pain points? Monads help here because we abstracted away the `if result != None` plumbing; what other types of plumbing might I want to abstract, and how do monads (or 'monad combinators') help with this pain?
I'm a bit underwhelmed.
# helpers for returning error codes
def success(x):
    return (True, x)

def fail(x):
    return (False, x)

# bind knows how to unwrap the return value and pass it to
# the next function
def bind(mv, mf):
    succeeded = mv[0]
    value = mv[1]
    if (succeeded):
        return mf(value)
    else:
        return mv

def lift(val):
    return success(val)

def userid_from_name(person_name):
    if person_name == "Irek":
        return success(1)
    elif person_name == "John":
        return success(2)
    elif person_name == "Alex":
        return success(3)
    elif person_name == "Nick":
        return success(1)
    else:
        return fail("No account associated with name '%s'" % person_name)

def balance_from_userid(userid):
    if userid == 1:
        return success(1000000)
    elif userid == 2:
        return success(75000)
    else:
        return fail("No balance associated with account #%s" % userid)

def balance_qualifies_for_loan(balance):
    if balance > 200000:
        return success(balance)
    else:
        return fail("Insufficient funds for loan, current balance is %s" % balance)

def name_qualifies_for_loan(person_name):
    "note pattern of lift-bind-bind-bind, we can abstract further with macros"
    mName = lift(person_name)
    mUserid = bind(mName, userid_from_name)
    mBalance = bind(mUserid, balance_from_userid)
    mLoan = bind(mBalance, balance_qualifies_for_loan)
    return mLoan

for person_name in ["Irek", "John", "Alex", "Nick", "Fake"]:
    qualified = name_qualifies_for_loan(person_name)
    print "%s: %s" % (person_name, qualified)
Irek: (True, 1000000) John: (False, 'Insufficient funds for loan, current balance is 75000') Alex: (False, 'No balance associated with account #3') Nick: (True, 1000000) Fake: (False, "No account associated with name 'Fake'")
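One direction for the "abstract further" note in name_qualifies_for_loan: fold the lift-bind-bind-bind pattern into a single combinator. This is an illustrative sketch (the name `pipeline` is my own, not standard monad vocabulary), reusing the post's (ok, value) tuple convention:

```python
def success(x):
    return (True, x)

def fail(x):
    return (False, x)

def pipeline(value, *steps):
    """Lift `value`, then bind each step in turn, short-circuiting on failure."""
    mv = success(value)          # lift
    for step in steps:           # repeated bind
        ok, val = mv
        if not ok:
            return mv            # first failure wins; later steps are skipped
        mv = step(val)
    return mv
```

With this helper, name_qualifies_for_loan collapses to `pipeline(person_name, userid_from_name, balance_from_userid, balance_qualifies_for_loan)`.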
|
OPCFW_CODE
|
Deployments to environments are behaving strange recently
Hey @freddydk,
We are seeing some strange issues with deployments to our PROD environments.
Firstly, it takes a really long time now, 1h+, for minor changes. Normally it runs between 15 and 30 minutes.
Eventually it errors out.
After restarting the job, it says that my first extension is already installed, and quickly deploys the second one.
If I check the installation status, it shows Failed with a timeout error.
But it is actually installed.
This behavior started this week.
Best Regards,
Gintautas
Is this on V24?
I think there has been problems on 24 like this - but they should be fixed yesterday or today
No, it's v23
Again, issues with deployments. Deployments stopped working to our QA environment since yesterday.
Pipeline fails with useless error:
##[error]Deploying to QA failed. Unable to publish app. Please open the Extension Deployment Status Details page in Business Central to see the detailed error message.
Installation status shows another useless error
App insights shows another useless error
Just tried to deploy it manually, without AL-Go, and it succeeded. Previously AL-Go failed multiple times.
Is there something you can see on your side, @freddydk? There must be something wrong with the platform or AL-Go, since we keep running into these deployment issues from time to time.
What does the log in AL-Go show for the publishing of this app?
Nothing special:
2024-05-07T06:57:54.3821107Z Downloading BcContainerHelper latest version from Blob Storage
2024-05-07T06:57:56.2054798Z Import from C:\ProgramData\BcContainerHelper\6.0.17\BcContainerHelper\BcContainerHelper.ps1
2024-05-07T06:57:56.4988390Z BcContainerHelper version 6.0.17
2024-05-07T06:57:56.6867654Z Setting useCompilerFolder = True
2024-05-07T06:57:56.8921477Z BC.HelperFunctions emits usage statistics telemetry to Microsoft
2024-05-07T06:57:56.9789664Z Running on Windows, PowerShell 5.1.20348.2400
2024-05-07T06:57:58.7788206Z project 'SRS.nl'
2024-05-07T06:57:58.7827819Z Apps to deploy
2024-05-07T06:57:58.7841440Z C:\a\srs-bc\srs-bc\.artifacts\SRS.nl-main-Apps-1.0.1565.0
2024-05-07T06:57:58.8667865Z Attempting authentication to https://api.businesscentral.dynamics.com/.default using clientCredentials...
2024-05-07T06:57:59.4831614Z Authenticated as app ***
2024-05-07T06:57:59.4971695Z EnvironmentUrl: https://businesscentral.dynamics.com/db1e96a8-a3da-442a-930b-235cac24cd5c/QA
2024-05-07T06:57:59.9489260Z Publishing apps using automation API
2024-05-07T06:57:59.9880900Z https://api.businesscentral.dynamics.com/v2.0/QA/api/microsoft/automation/v2.0/companies
2024-05-07T06:58:00.7722785Z Company 'CRONUS NL' has id 3b1e9576-41bc-ee11-907d-6045bde9cc0e
2024-05-07T06:58:00.7730130Z https://api.businesscentral.dynamics.com/v2.0/QA/api/microsoft/automation/v2.0/companies(3b1e9576-41bc-ee11-907d-6045bde9cc0e)/extensions
2024-05-07T06:58:01.9755659Z Extensions before:
....
....
2024-05-07T06:58:04.1986229Z WARNING: Dependency 6da8dd2f-e698-461f-9147-8e404244dd85:Continia Software_Continia Document Capture_<IP_ADDRESS>).app not
2024-05-07T06:58:04.1987557Z found
2024-05-07T06:58:04.5059508Z SRS_SRS.base_1.0.1565.0.app - Downloading AL Language Extension to C:\ProgramData\BcContainerHelper\alLanguageExtension\13.0.1007491.zip
2024-05-07T06:58:09.8638045Z using 7zip
2024-05-07T07:08:12.8888397Z upgrading..............................................................Failed
2024-05-07T07:08:12.9107797Z ERROR: Unable to publish app. Please open the Extension Deployment Status Details page in Business Central to see the detailed error message. [System.Management.Automation.RuntimeException]
2024-05-07T07:08:13.5983525Z
2024-05-07T07:08:13.5992372Z Extensions after:
....
....
The latest succeeding publish - how much time did that take?
This one fails after 10 minutes 6:58:09 -> 7:08:12
here are screenshot for the last two succesful ones:
Base app takes 6, & 8.5 minutes to deploy
Will investigate whether there are any timeouts on 10 minutes
Just happened again
This time it was running <10min
I recently was able to get a repro of this, including 11 AppSource apps and 16 PTEs, and found a place where I could add some resilience to the publishing logic. I will close this with the assumption that it is fixed now.
Let me know if that is not the case.
|
GITHUB_ARCHIVE
|
Enhancing Cybersecurity with AI: Simulation and Vulnerability Analysis
Explore AI's role in revolutionizing cyber security with simulation, code generation, and vulnerability analysis. Discover powerful tools defending our digital world.
Cybersecurity is a constant battleground where attackers continually develop new techniques and tools to breach systems, steal data, and disrupt operations. In response, organizations are turning to Artificial Intelligence (AI) as a powerful ally in the fight against cyber threats.
AI can simulate potential cyber attack scenarios, generate secure code examples, and assist in analyzing large datasets to identify security vulnerabilities, making it a crucial asset for modern cybersecurity professionals.
Simulating Potential Cyber Attack Scenarios
AI-driven simulation tools are pivotal for cybersecurity professionals in understanding and mitigating potential threats. These simulations enable organizations to create controlled environments where they can test their defenses against a wide range of attack scenarios. By leveraging AI, these simulations can become increasingly sophisticated and realistic.
Behavioral Analysis: AI can analyze historical attack data to identify patterns and behaviors associated with different types of cyberattacks. It can then simulate these attack patterns to assess an organization's vulnerability.
Red Team Operations: AI-driven red teaming allows organizations to simulate real-world attacks by mimicking the tactics, techniques, and procedures (TTPs) of actual threat actors. This helps in identifying weaknesses in the security posture.
Predictive Analytics: AI can predict potential attack vectors and vulnerabilities by analyzing current security configurations and historical data, helping organizations proactively bolster their defenses.
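As a toy illustration of the simulation idea, here is a sketch: a Monte Carlo estimate of the chance that a simulated phishing campaign breaches at least one account. The scenario, function name, and probabilities are all invented for the example; real AI-driven simulators model attacker behavior in far more detail.

```python
import random

def simulate_campaigns(n_emails, p_success, trials=10_000, seed=42):
    """Estimate P(at least one phishing email succeeds) by simulation."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    breaches = 0
    for _ in range(trials):
        # One trial = one simulated campaign of n_emails independent attempts.
        if any(rng.random() < p_success for _ in range(n_emails)):
            breaches += 1
    return breaches / trials
```

Even with a modest 1% per-email success rate, the simulated breach probability climbs steeply with campaign size, which is the kind of quantitative input defenders can act on.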
Generating Secure Code Examples
Writing secure code is a fundamental aspect of cybersecurity. AI can assist developers by generating secure code examples and identifying vulnerabilities in existing codebases.
Code Review and Vulnerability Detection:
- Static Analysis with AI: AI-driven static code analysis tools scan source code files to identify security vulnerabilities. These tools use machine learning algorithms to recognize patterns and anomalies indicative of common coding mistakes, such as SQL injection, cross-site scripting (XSS), or buffer overflows.
- Pattern Recognition: AI can identify patterns in code that resemble known vulnerabilities, helping developers spot potential weaknesses before deployment.
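To make the idea concrete, here is a deliberately tiny Python sketch of regex-based pattern matching. The rules and function names are invented for illustration; production static analyzers rely on parsers and data-flow analysis rather than bare regular expressions.

```python
import re

# Toy rule set (illustrative only): maps a vulnerability class to a regex
# that flags a suspicious source pattern.
RULES = {
    "sql-injection": re.compile(r"execute\(\s*[\"'].*%s.*[\"']\s*%"),  # string-formatted SQL
    "xss": re.compile(r"innerHTML\s*=\s*.*\+"),                        # concatenated markup
}

def scan(source: str):
    """Return (line_number, rule_name) pairs for lines matching any rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

code = "cursor.execute(\"SELECT * FROM users WHERE name = '%s'\" % name)"
print(scan(code))  # → [(1, 'sql-injection')]
```

Even this crude sketch shows why pattern recognition scales: once a vulnerable idiom is encoded as a rule, every file in a codebase can be checked for it automatically.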
Secure Code Templates:
- Code Snippet Generation: AI-powered tools can generate secure code snippets for common programming tasks. For instance, they can provide developers with secure code to handle user authentication, data validation, and encryption.
- Language Support: AI can generate code in multiple programming languages, ensuring that developers adhere to best security practices regardless of their coding preferences.
Contextual Code Generation:
- Natural Language Processing (NLP): AI models equipped with NLP capabilities can understand the developer's natural language queries and generate secure code based on the context provided.
- Dependency Management: AI can analyze third-party library dependencies and suggest secure alternatives or configurations to minimize security risks.
Analyzing Large Datasets to Identify Security Vulnerabilities
The amount of data generated in today's digital landscape is immense, making it challenging to identify security vulnerabilities manually. AI-powered solutions excel in processing and analyzing large datasets to uncover hidden risks.
Data Aggregation and Integration:
- Data Sources: AI can aggregate data from various sources, including network logs, system logs, user activity logs, and external threat intelligence feeds. This holistic approach provides a comprehensive view of an organization's security landscape.
- Normalization and Integration: AI-driven solutions normalize and integrate disparate data sources, ensuring that data from different platforms and systems can be analyzed together effectively.
Threat Detection and Pattern Recognition:
- Behavioral Analytics: AI models can learn from historical data to establish normal behavior patterns for users and systems. They can then identify deviations that may indicate security threats or vulnerabilities.
- Signature-Based Detection: AI algorithms can recognize known attack signatures and patterns within the data, allowing organizations to detect common threats quickly.
- Anomaly Detection: AI can identify unusual patterns in data, such as network traffic spikes, unexpected file access, or unusual login attempts, signaling potential vulnerabilities or intrusions.
- Machine Learning Models: AI employs machine learning models that can predict potential vulnerabilities based on historical data and emerging trends. These predictions help organizations prioritize their security efforts.
- Vulnerability Forecasting: AI can forecast vulnerabilities by analyzing factors such as software versions, patch management, and threat intelligence, helping organizations preemptively address issues.
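As a minimal sketch of the anomaly-detection idea, the following Python snippet flags values that deviate sharply from the mean. The login counts are hypothetical, and real behavioral analytics use far richer models than a simple z-score.

```python
from statistics import mean, stdev

def anomalies(counts, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(counts), stdev(counts)
    return [c for c in counts if abs(c - mu) > threshold * sigma]

# Hypothetical hourly login counts; the spike at 940 is the kind of
# unusual pattern an AI-driven monitor would surface for review.
logins = [100, 95, 92, 110, 98, 96, 102, 89, 94, 97, 91, 93, 105, 99, 96, 940]
print(anomalies(logins))  # → [940]
```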
Other Applications of AI in Cybersecurity
Beyond simulation, code generation, and data analysis, AI is instrumental in several other cybersecurity domains:
- User Behavior Analysis: AI can identify deviations from normal user behavior to detect insider threats and compromised accounts.
- Automated Incident Response: AI-driven incident response systems can rapidly detect and contain threats, minimizing damage and downtime.
- Phishing Detection: AI can analyze emails and websites to detect phishing attempts and malicious URLs.
- Zero-Day Vulnerability Detection: AI can proactively search for vulnerabilities in software, even before they are publicly disclosed.
Artificial Intelligence is revolutionizing the field of cybersecurity by enabling organizations to simulate potential cyber-attacks, generate secure code examples, and analyze vast datasets for vulnerabilities.
As cyber threats continue to evolve, leveraging AI in these ways is not just an advantage but a necessity to stay one step ahead of adversaries. Incorporating AI into cybersecurity strategies can enhance protection, reduce risks, and safeguard valuable digital assets.
|
OPCFW_CODE
|
Using the following technique will allow you to use all 12 presets on the MC6 when toggling pages, i.e. you don’t have to allocate a dedicated footswitch to the task and lose 2 presets.
The core of this technique is a Release action (A=R) combined with a Long Press Release action (A=LPR). A=R sends its message when the switch is released. When combined with Long Press Release, the A=LPR acts as a fence: on a long press, only its message is sent when the switch is released. In other words, you can have two independent actions associated with one switch. (Note that a Press action always sends its message, which may or may not be desirable.)
Here’s an example of scrolling up through HX Stomp presets (I have included a photo from the manual showing what the different CCs and values do), but the technique is a feature of the MC6 and isn’t restricted to the device being controlled.
The effect of the following command set on a short switch press is to put the HXS in Scroll mode, to simulate pressing Footswitch 2 on the HXS (which advances the preset), and then to put the device in Stomp mode, where the 3 footswitches are available to perform whatever tasks are assigned to them. The effect of the command set on a long switch press is to toggle the MC6 page.
MSG1: ACTION=Release; TYPE=CC, controller=71, value=1, channel=1
MSG2: ACTION=Release; TYPE=CC, controller=50, value=127, channel=1
MSG3: ACTION=Release; TYPE=CC, controller=71, value=0, channel=1
MSG4: ACTION=Long Press Release; TYPE=Toggle Page
A short press on the switch will send messages 1-3 and increment the preset on the HXS. A long press will toggle the page on the MC6. A preset on the new page will have to follow the same technique to toggle back. The simplest scenario is where a single message is used to do something like activating the 1 Switch Looper Play Once for the HXS.
MSG1: ACTION=Release; TYPE=CC, controller=62, value=127, channel=1
MSG2: ACTION=Long Press Release; TYPE=Toggle Page
The easiest way to see the effect of these commands is to connect the MC6 to a MIDI monitor. The MC6 sends the commands on the USB interface, which can then be monitored on a computer, tablet or phone with the requisite application. There is no need to have the device you want to control connected to see the sequence of messages. I found that useful as there were fewer wires to contend with.
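The message list above can also be sanity-checked without any hardware by constructing the raw MIDI bytes. This is just a Python sketch of mine (the helper name is invented); note that MIDI status bytes encode the channel zero-based, so channel 1 maps to status 0xB0.

```python
def cc(channel, controller, value):
    """Raw MIDI Control Change bytes; `channel` is 1-based, as on the MC6."""
    return bytes([0xB0 | (channel - 1), controller, value])

# The three Release messages from the HX Stomp scroll-up example.
release_msgs = [
    cc(1, 71, 1),    # CC#71 value 1: put the HXS in Scroll mode
    cc(1, 50, 127),  # CC#50 value 127: emulate FS2, advancing the preset
    cc(1, 71, 0),    # CC#71 value 0: return the HXS to Stomp mode
]
for msg in release_msgs:
    print(msg.hex(" "))  # b0 47 01 / b0 32 7f / b0 47 00
```

These bytes should match what a MIDI monitor attached to the MC6's USB interface displays on a short press.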
|
OPCFW_CODE
|
Preparing your Geeklog Plugin Distribution
The plugin tarfile
All Geeklog plugin tarfiles should use the following naming convention:
<plugin name>_<plugin version>_<geeklog version>.tar.gz
<plugin name>: this is one of the single most important values you will choose for your plugin as it dictates the following:
- The exact API function names that the Geeklog code will try to call for your plugin
- The exact directory within the webtree to put all your plugin code
- The exact directory within the admin directory to put your admin code
- If using moderation, the exact name of the main table being moderated
- If using moderation, the submission table will be <plugin name>submission
<plugin version>: used during the installation process to determine if you are attempting to upgrade a plugin or do a new installation. It is also checked to verify that you aren't trying to install an old version of the plugin when a newer installation already exists.
<geeklog version>: this is the Geeklog version the plugin works under.
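A hypothetical one-line helper makes the convention explicit (the 'photos' plugin values are reused from the example later in this document):

```python
def tarfile_name(plugin_name, plugin_version, geeklog_version):
    """Build the standard Geeklog plugin tarfile name:
    <plugin name>_<plugin version>_<geeklog version>.tar.gz"""
    return f"{plugin_name}_{plugin_version}_{geeklog_version}.tar.gz"

print(tarfile_name("photos", "0.1", "1.2.2"))  # → photos_0.1_1.2.2.tar.gz
```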
The organization of your tarfile is standardized as well. For each directory and file a description is given. Your base plugin directory when you create the tarfile should be <plugin name>. Under there you will have the following:
config.php: the configuration page for your plugin. We'd prefer you to data-drive most of the values if possible, but using config.php is fine. This file can be called whatever you want; you are not restricted.
functions.inc: this is the file where you implement the Geeklog API and where your plugin code should reside. It MUST be named this because we automatically include all enabled plugins' functions.inc files at the bottom of lib-common.php. Note that this means you have access to all the functions in lib-common.php in your plugin code.
lang.php or language/ directory: the language file(s) for your plugin. You should include this file in your functions.inc. We recommend that you use a language directory and have a separate language file for each supported language (english.php, etc.), mirroring Geeklog's behaviour and selecting the language file based on the user's preferred language (falling back to english.php if you can't find a language file for the selected language).
table.sql: the DDL needed to modify the Geeklog database so that your plugin will work. Note: you must provide an entry in the plugins table in your database. Without it, Geeklog will not know your plugin exists. Example: REPLACE INTO plugins (pi_name, pi_version, pi_gl_version, pi_homepage, pi_enabled) VALUES ('photos', '0.1', '1.2.2', 'http://www.tonybibbs.com', 1);
data.sql: sample data for your plugin
README: a standard readme for the software
/docs: any documentation you may want to provide for your plugin, such as history, to-do, etc.
/admin: only your administration pages
/public_html: your regular web pages
/updates: all update SQL and scripts. If you are writing an update SQL script, be sure to name it update_<previous version>.sql. The way this works: if version 0.1 of a plugin is installed and you are installing version 0.2, the code will look for the update script for the currently installed version (0.1) and, if it finds it (in this case update_0.1.sql), execute it automatically.
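The update-script lookup described above can be sketched as follows. Python is used purely for illustration (Geeklog itself is PHP, and the helper name is invented):

```python
def update_script(installed_version):
    """Name of the SQL script the installer looks for when upgrading
    from `installed_version` (e.g. installing 0.2 over an installed 0.1)."""
    return f"update_{installed_version}.sql"

print(update_script("0.1"))  # → update_0.1.sql
```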
|
OPCFW_CODE
|
const { assert } = require('chai');
const request = require('./request');
const { dropCollection } = require('./db');
const { checkOk } = request;
function save(actor) {
return request
.post('/api/actors')
.send(actor)
.then(checkOk)
.then(({ body }) => body);
}
const makeSimple = (actor, films) => {
const simple = {
_id: actor._id,
name: actor.name,
dob: actor.dob,
pob: actor.pob
};
if (films) {
simple.films = [];
simple.films[0] = {
_id: films._id,
title: films.title,
released: films.released
};
}
return simple;
};
let chuckNorris;
let billMurray;
let WalkerTexasRanger;
let KickPunchStudio;
const KickPunch = {
name: 'Kick Punch Studio',
address: {
city: 'LA',
state: 'CA',
country: 'USA'
}
};
const chuck = {
name: 'Chuck Norris',
dob: new Date('1953-07-04'),
pob: 'Kyle, OK'
};
const bill = {
name: 'Bill Murray',
dob: new Date('1963-09-20'),
pob: 'Boston, MA'
};
describe('Actors API', () => {
beforeEach(() => dropCollection('actors'));
beforeEach(() => {
return request
.post('/api/studios')
.send(KickPunch)
.then(checkOk)
.then(({ body }) => KickPunchStudio = body);
});
beforeEach(() => {
return save(bill)
.then(data => billMurray = data);
});
beforeEach(() => {
return save(chuck)
.then(data => chuckNorris = data);
});
beforeEach(() => {
return request
.post('/api/films')
.send({
title: 'Walker Texas Ranger',
studio: KickPunchStudio._id,
released: 1999,
cast: [{
role: 'Badass',
actor: chuckNorris._id
}]
})
.then(checkOk)
.then(({ body }) => WalkerTexasRanger = body);
});
it('saves an Actor', () => {
assert.isOk(chuckNorris._id);
});
it('gets an actor by id', () => {
return request
.get(`/api/actors/${chuckNorris._id}`)
.then(checkOk)
.then(({ body }) => {
assert.deepEqual(body, makeSimple(chuckNorris, WalkerTexasRanger));
});
});
it('gets a list of actors', () => {
return request
.get('/api/actors')
.then(checkOk)
.then(({ body }) => {
assert.deepEqual(body, [billMurray, chuckNorris]);
});
});
it('deletes an actor', () => {
return request
.delete(`/api/actors/${chuckNorris._id}`)
.then(checkOk)
.then(res => {
assert.deepEqual(res.body, { removed: true });
return request.get('/api/actors');
})
.then(checkOk)
.then(({ body }) => {
assert.deepEqual(body, [billMurray]);
});
});
});
|
STACK_EDU
|
How to install the Ringdesk 3CX connector on the 3CX server
All Ringdesk CRM integration solutions (Zendesk, ODOO, ZoHo, ….etc ) need a special Plug-in to be installed on the 3CX server. In this manual, you will read how to do this.
Why do you need this Plug-in and how does it work?
In order to receive 3CX server-side events and control one's extension on the 3CX server, 3CX offers a server-side API. Ringdesk uses its own small application to connect to this API. The Plug-in needs to be installed on the 3CX Linux server. The Ringdesk Plug-in first creates an encrypted, secure outbound connection to api.ringdesk.com and then connects to the configured Ringdesk proxy server (callcontrolxx.ringdesk.com) to route all events and commands between your 3CX server and your CRM client. The Plug-in identifies itself with a unique username and password, which we will call GUID and Secret in this manual. Keep your GUID and Secret safe and never share them unless necessary.
Ringdesk communication & components diagram ( Blue = Ringdesk components )
In the manual below, the server is named “UNILINXX – 3CX” and the administrator (logged in with “root”) is connected with the Linux server using “OpenSSH Client” (you can use any other preferred command-line tool).
<< If you have a hosting partner and you do not have Linux access rights, please ask your hosting provider to do the installation for you >>
Please go through the following steps
- Connect to the 3CX Linux server with Administrator rights using OpenSSH Client (or another command-line tool).
Copy the Ringdesk Connector for 3CX (the downloaded zip file) to the Linux server (in ./var) and navigate to the folder location.
- Navigate to the ./var folder. Command: cd var
- Unzip the file using the command: unzip Ringdesk3CX.zip
The content will be unzipped in the folder “./var/Ringdesk3CX”
- Navigate to the unzipped folder. Command: cd Ringdesk3CX
- Make the script executable. Command: chmod +x Ringdesk.sh
- Execute the script. The installation will start. Command: ./Ringdesk.sh
- Enter GUID and Secret if asked for. If you do not have a GUID or Secret, follow the following steps:
- Login on https://portal.ringdesk.com
- Add “3CX Cloud” at myPBX;
- Select tab-page OAuth 2.0
- Create a Secret (GUID will be provided);
- The connection will be tested after entering. Example:
- The installer uses the system's process manager to run its commands. It tries SystemCTL first; if that produces an error, it falls back to PM2. The Ringdesk Connector for 3CX runs under either SystemCTL or PM2, showing the following result (example on a Dutch Linux server):
- Check the connection in the Ringdesk portal. Go to “myPBX” and select the 3CX connector. The Connected-Button must show “Connected” like the image below:
- Test from https://sdk.ringdesk.com. Before you start, make sure your user profile contains the correct internal phone number. This phone number is the only mapping between the 3CX server API and your user account!
In the Ringdesk portal, menu “users” go to your user profile and add the internal 3CX number you want to map:
Log in with your username (email ) and password on https://sdk.ringdesk.com and you should be able to control your 3CX extension from here:
- At Ringdesk we can apply some additional settings, such as number formatting. If you have any issues with number formatting, please send an email to email@example.com
- If you have the 3CX browser extension installed, you might trigger double dials when clicking phone numbers. Please exclude URLs that are used with Ringdesk to prevent this from happening.
|
OPCFW_CODE
|
PyLucene org.apache.lucene.benchmark is missing
I just installed PyLucene 4.9 (and also tried 4.8) via the Makefile on Ubuntu 14.04. Everything is running fine except that I am missing the modules in org.apache.lucene.benchmark.
The PyLucene documentation says it's there: PyLucene Documentation
But when I open up ipython and tab through "from org.apache.lucene." I only get these results from autocomplete:
In [3]: from org.apache.lucene.
org.apache.lucene.analysis org.apache.lucene.queries
org.apache.lucene.codecs org.apache.lucene.queryparser
org.apache.lucene.collation org.apache.lucene.sandbox
org.apache.lucene.document org.apache.lucene.search
org.apache.lucene.expressions org.apache.lucene.store
org.apache.lucene.facet org.apache.lucene.util
org.apache.lucene.index
So I am assuming something went wrong with my installation, but I cannot figure it out. Has anyone experienced this kind of problem and been able to solve it?
Okay, I was able to figure it out myself.
If you want to use the benchmark module, you have to edit the Makefile in the following ways:
1. Find the JARS section; the items look like this:
JARS+=$(ANALYZERS_JAR) # many language analyzers
Remove the comment before JARS+=$(SPATIAL), then add the line:
JARS+=$(BENCHMARK_JAR) # benchmark module
2. Find the JAR-path section, where the items look like:
LUCENE_JAR=$(LUCENE)/build/core/lucene-core-$(LUCENE_VER).jar
add the following line to this section:
BENCHMARK_JAR=$(LUCENE)/build/benchmark/lucene-benchmark-$(LUCENE_VER).jar
3. Find the ANT section, where the text looks like:
$(LUCENE_JAR): $(LUCENE)
cd $(LUCENE); $(ANT) -Dversion=$(LUCENE_VER)
append the following text at the end of the section:
$(BENCHMARK_JAR): $(LUCENE_JAR)
cd $(LUCENE)/benchmark; $(ANT) -Dversion=$(LUCENE_VER)
4. Right below, to the JCCFLAGS?= line, add --classpath "./lucene-java-4.9.0/lucene/spatial/lib/spatial4j-0.4.$
5. At the GENERATE section, add the following excludes (if you need these modules to work within Python you might have to download extra .jar files and add them to the JCC classpath; I didn't need them for my task):
--exclude org.apache.lucene.benchmark.byTask.utils.StreamUtils \
--exclude org.apache.lucene.benchmark.byTask.utils.LineDocSourceTest \
--exclude org.apache.lucene.benchmark.byTask.utils.WriteLineDocTaskTest \
--exclude org.apache.lucene.benchmark.byTask.feeds.LongToEnglishQueryMaker \
--exclude org.apache.lucene.benchmark.byTask.feeds.LongToEnglishContentSource \
Everything should be working now.
I am facing a similar problem with missing org.apache.lucene.analysis.opennlp from PyLucene 8.6.1 and I'm using Ubuntu 18.04. Here is the question that I asked. Can you give me any directions about what I need to do?
|
STACK_EXCHANGE
|
Did you ever feel that if you could just get organized your whole life would suddenly make sense, that you would become rich, famous, and proficient in the art of love making? While the number of notebook/outliner/organizer applications available for OS X may not be indicative of the validity of that belief, certainly an argument can be made that people are willing to spend money for such software. Enter Hog Bay Software and the release of Mori 1.2.
Mori uses the ubiquitous three-pane layout and the word "view" many times in its user interface, the viewer window composed of three—surprise—views: Sources View, Entries View, and Note Text View. In overly-simplified terms, folders have entries which have notes, but oversimplification fails to capture the open-ended, free-spirited, almost hippiesque (but without the ugly clothes and STDs) quality of Mori.
Sources View is the apex of Mori's hierarchy, displaying folders and files in the familiar tree structure, but with Mori you are not limited to that structure. First, there are Smart Folders, saved searches with live updating. Second, there are aliases, which allow for multiple views of data within the hierarchy. Taking an example from the User Guide, a book might be divided into chapters and subchapters in the Sources View. A new folder might then be created in which aliases to all subchapters related to a particular character can be listed. Changes made to an aliased entry are reflected in all other aliased entries. For me, Sources View is the table of contents for any book, review, or other writing project.
Entries View alternately displays the contents of a selected folder from the Sources View, or the results of a search. Entries are whatever you want them to be, an outline element, a diary entry title, a to-do item, a recipe ingredient, whatever. The characteristics of an entry are defined using columns, including user defined columns that have all sorts of Big Brain fields associated with CoreData. As for searching, results are ranked and it's fast, as in seemingly instantaneous when searching through a 70,000 word novel. For me, entries are the perfect outline, located next to both the table of contents and the relevant chapter text.
Note Text View displays the text and other media associated with a selected item. The word processor is TextEdit with extras, the basics being rulers, simple styles and lists. The extras are basic tables, highlighting, and linking, both within a notebook and outside it. The linking is pretty cool; a contextual menu with cascading folders allowing quick connection to other parts of a project. As a word processor, Mori is lean, focused on writing over styles and formatting. For me, the text editor is perfectly suited to writing and not made by Microsoft (that would that be a tautology, I think).
Mori's innards are built on CoreData technology, whatever that means besides better searching and the fact each digital notebook is an OS X package containing two files, a search index and an SQLite database. This is probably good for technical reasons that are beyond my comprehension. What's not so good about Mori is exporting, which creates individual files for individual entries in several formats, including TXT, RTF, DOC, OPML, XML, and HTML, but exporting could do better at assembling a document. There are AppleScripts for related functionality, but appending entries into a clipping is a pretty lame form of output. The good news here is that Mori is pretty extensible. Besides AppleScripts, there are plug-ins, as well as a plug-in architecture for developers. Hopefully, someone will create a plug-in that can create a single document more easily, or such a feature will be implemented due to demand under Mori's user-driven development model.
So, who is Mori for? Well, it depends on your needs. For me, the goal is creative writing but the requirements (word processor, outliner, simple database) are likely familiar to any number of tasks. Mori is not so great as a general-purpose database of all media. However, while feature importance is subjective, what can be objectively compared is price:
- Circus Ponies Notebook $49.95
- Omni Outliner $39.95
- DEVONthink Personal $39.95
- Yojimbo $39.95
- Mori $27.95
- Outliner4X Pro $24.95
- Chalk and Sidewalk $0.99 (free if you steal from children)
Mori not only falls toward the low end of pricing for digital notebooks, the developer offers Mori under a pricing scheme that is one step away from an unreconstructed communist utopia. Upgrades are free; the developer asking for donations when users find features useful. You can even get the source code if you are a registered user, so if price is a feature, then Mori compares favorably to the competition. Comparing Mori beyond price is difficult, as choosing a digital notebook is an intensely personal choice, like a favorite soda or condom. I chose Mori because its width and girth are a good fit for me, but were I to try to describe it objectively I would say Mori helps get you organized without getting in the way.
|
OPCFW_CODE
|
compatibility with webpack.DefinePlugin
Library Affected:
workbox-webpack-plugin
Browser & Platform:
node (webpack build)
Issue or Feature Request Description:
Webpack provides https://webpack.js.org/plugins/define-plugin/ to create global constants which can be configured at compile time. It seems that workbox-webpack-plugin doesn't use it when processing service worker source file swSrc and generating swDest.
I used workbox-webpack-plugin 2.0.1
@elf-pavlik What would you expect it to do?
Given in webpack.config.js
const path = require('path')
const webpack = require('webpack')
const WorkboxPlugin = require('workbox-webpack-plugin')
const escapeStringRegExp = require('escape-string-regexp')
const config = require('./config.json')
const SRC_DIR = 'src'
const DIST_DIR = 'dist'
module.exports = {
// ...
plugins: [
new webpack.DefinePlugin({
ESCAPED_SERVICE_URL: JSON.stringify(escapeStringRegExp(config.service))
}),
new WorkboxPlugin({
globDirectory: DIST_DIR,
globPatterns: ['**/*.{html,js,css}'],
swSrc: path.join(SRC_DIR, 'service-worker.js'),
swDest: path.join(DIST_DIR, 'service-worker.js')
})
]
}
I thought ESCAPED_SERVICE_URL in the swSrc file would get substituted with the value provided to DefinePlugin in the generated swDest file. On second thought, I very likely made a wrong assumption; should I create a question on StackOverflow instead?
@elf-pavlik Workbox isn't using webpack to compile the service worker so any webpack related plugins / config will likely miss the service worker (unless it's explicitly passed to workbox build via the WorkboxWebpackPlugin).
@goldhand would your current set of suggestions work here?
Workbox isn't using webpack to compile the service worker so any webpack related plugins / config will likely miss the service worker (unless it's explicitly passed to workbox build via the WorkboxWebpackPlugin).
I didn't understand the unless it's explicitly passed... part. Can I define a chunk in webpack which has the source service worker file (which I currently pass to WorkboxBuildWebpackPlugin via swSrc) as its entry point, so that webpack.DefinePlugin will apply to that source file?
I noticed in sw-precache-webpack-plugin config importScripts option which supports chunkName (and also works when using [chunkhash]).
For my particular use case (including a URL from an app config file in the definition of a runtime caching route), having a similar importScripts option in workbox-build might allow me to define that route in a dedicated chunk and just register it in the source service worker passed to workbox-build in the webpack plugin as swSrc. Something along the lines of:
new WorkboxPlugin({
globDirectory: DIST_DIR,
globPatterns: ['**/*.{html,js,css}'],
swSrc: path.join(SRC_DIR, 'service-worker.js'),
swDest: path.join(DIST_DIR, 'service-worker.js'),
importScripts: [
{ chunkname: 'workbox' },
{ chunkname: 'dataServiceRoute' }
]
})
Actually having importScripts available in workbox-build config, I would not need the swSrc any more which would allow me to use runtimeCaching config option directly in webpack.config.js (having now generateSW() instead of injectManifest())
new WorkboxPlugin({
globDirectory: DIST_DIR,
globPatterns: ['**/*.{html,js,css}'],
runtimeCaching: [ /* ... */ ],
swDest: path.join(DIST_DIR, 'service-worker.js'),
importScripts: [
{ chunkname: 'push-notifications' }
]
})
@elf-pavlik I just meant config that was passed inside the plugin configuration. Other plugins won't affect the WorkboxPlugin because it is not in webpack.
That's a good point with the importScripts. I can see a use for that, especially for assets that use chunkHash
Unlikely to make it for v3 but would be good to address after.
@jeffposnick @goldhand is this still an issue with V3?
That's a good point with the importScripts. I can see a use for that, especially for assets that use chunkHash
If you decide to add importScripts support for webpack chunks, it may make sense to open new issue or re-open #584
Soon I may need to share some chunks between the service worker and the main app, so that may provide another use case for such a feature.
This issue here probably should stay closed as it started with my misunderstanding of how WorkboxPlugin plays together with the rest of webpack build.
Let's use https://github.com/GoogleChrome/workbox/issues/933 as a general issue for tracking opting-out of using the CDN copy of Workbox in favor of using custom Workbox bundled code.
I'm using workbox-webpack-plugin 3.6.3 and having the same issue. Variable from webpack define plugin is not replaced as expected, although I found recommendation to use it together here: https://stackoverflow.com/questions/48398658/read-env-variable-from-service-worker-file-using-workbox
|
GITHUB_ARCHIVE
|
Life Of Philip Melanchthon. -- By: B. B. Edwards
BSac 3:10 (May 1846) p. 301
Life Of Philip Melanchthon.
Professor at Andover.
It was the remark of a zealous adherent of Luther, Professor Mayer of Greifswalde, that for the Reformation of the Church, three Luthers would be worth more than three hundred Melanchthons. This observation of the eager partizan contains some truth and some error. That Luther merits the first place as a reformer, there can be no doubt. That he could perform the work assigned him far better without Melanchthon, than Melanchthon could undertake it without Luther, is alike unquestionable. To expect to demolish the errors and abuses of the Romish hierarchy with a cautious and lenient hand, would be a mere delusion. An earlier period had shown, that even men of an intrepid character, with their writings filled with admonitory voices, could pass by and leave few traces behind. A man of dauntless courage, who could wield the club of Hercules, was needed,—one who would stand firmer and more erect, in proportion to the number and fierceness of the assaults which should be made upon him. Such an heroic spirit was Luther, and distant ages will not forget that it was he who broke the fetters of superstition, and led Christendom once more into the light of civil and religious freedom.
But it must not be forgotten, that Luther was one of those excitable spirits, who are inclined, in the violence of passion, to break over all restraint. It was a wise arrangement of Divine Providence that Melanchthon should appear, a spirit of gentler mould, who could, with a wise hand and at the right moment, calm and direct the vehement feelings of his great leader. Luther’s excessive zeal was tempered by Melanchthon’s mildness, while Melanchthon’s yielding nature was quickened and invigorated by the courageous bearing of his friend. Luther alone, or two leaders like Luther, might have rushed into perilous extremes, and occasioned the ruin of the edifice which they were at so much pains to erect. A striking example of Melanchthon’s
happy influence over Luther is mentioned by the former. “Luther, on one occasion, seemed to be angry beyond measure. A deep silence reigned around among all. At length I addressed him with the line,
‘Vince animos iramque tuam, qui caetera vincis.’
Luther, laughing, replied: ‘We will dispute no further about it.’”
Another ground of the necessity of Melanchthon’s influence in the Reformation, consists in his extraordinary ability to present related truths in their due order and logical method. Luther, in his unceasing contests, had little leisure to investigate fundamentally and develop fully the truths w...
|
OPCFW_CODE
|
Firefox 3.0 Bugs
Firefox 3.0 has many bugs, slows the users computer, doesn't load some pages, error messages, add ons problems, and the list goes on...
Reason for uninstalling
Some features didn't work
Hard to use/confusing (menus, display, etc.)
Performance (load delays, memory usage)
Plugin compatibility (Flash, Adobe Acrobat, Windows Media Player, etc.)
Some web pages wouldn't work
Just temporary, I'm planning to install Firefox again soon
Fix the bugs
Make it start up.
Finish beta testing.
it sucks the old version is good
I don't know the reason but it certainly works slowly that my google toolbar or Yahoo and it mess up some of my programs. I have to uninstall it !
Better test on oldest OS, firefow was more light and quick of other browsers and it could be the best choice for olds pc, but only if new version has full supports. I' Tried 3.0 on a IBM PC model Netvista 6794-72g (P4 1.5Ghz with nvidia vanta l6mb agp graphic cart, graphic drivers nvidia from windows update)
Opera is better
After installing firefox, I tried to browse a few web sites, but failed.
I really like the browser but uploading new pages takes much too long time to be acceptable in my work. Regardless closing down AV and Anti-Spyware,
Firefox 3.0 takes considerable time to upload a new page. It seems Firefox is requesting one information string at a time as the network connection is occupied during the page uploading process which
takes 5 - 10 seconds at a time.
I have to return to IE7 (Sigh!)...
fix slowdown on FF3. It is unusable at this stage. Am switching back to FF2
Be more honest in what your product does when installed, what it overwrites, what it does and does not do.
Honesty and straightforwardness, forthrightness, honor would go a long way.
where are your designers ?
Put them all out !
Or sell mozilla to windows (perhaps, it is already made)
Because the new display of firefox 3 is now the worst display in the world of web browsers:
the zooming is impossible
the icones are too big and horrible
and the worst of the worse: when zooming - to see (try seeing) all the page, the display has a bug and we can't return to the original page !!!
Make very fast a NEW 3 verison or pass to version 4 immediately
I hope that you arrive to save the famous mozilla of this dramatic damage
Eric DELORME architect france
There are a lot of usefull extensions, but it happens again and again, that extensions does not work any more with a newer version of firefox. So it is better to use no extensions and then firefox is a little poor.
I like use Firefox, but i have problem with import of certificate. I had for longer time normal used the certificate, then i have changed my proceeding
and the certificate i have after use exported on the montable medium and deleted from file <certmgr.msc> (for security reason). This i have do several times and at once it does not go install
on Firefox. Sure i have do a little error because i have after export deleted this certificate from file <certmgr.msc> but not from Firefox. But fact is, that in other applications is the
certifikate OK. Even so i consider the FF the best.
I have tried uninstall and again install the Firefox without result. Sure if i would delete all setup and Firefox folder and install the Firefox clean, it would be OK. I dont want this.
This bug apply on FF3 and also on FF 126.96.36.199 and 188.8.131.52
If you know better solution without all uninstall, please write me this on email <verygreenmartian@****.com>. Thank you.
it seems you are sponsorised by windows IE now and made firefox 3 to be less comfortable, less beautiful, lesser than less. Whaou !!!
make sure i dont get booted out of pogo games all the time as it kept conflicting with java, my email is bobbench052@****.com , if u can let me know when the bugs have been corrected or what i can do to make it work
I enjoyed using Firefox. I did not have any problem using Firefox until I installed Firefox 3. After that I could not get it to open. When I tried to open it all I could get to open was the crash Box. So I checked the two boxs and submitted the problem to Firefox. I have Uninstalled Firefox 3 and will try to reinstall Firefox 2. My problem now is getting back my favorites.
I dont need google search field.
I need my computer's memory for other purposes.
Please speed up version 3. It looks great, but is nothing compared to version 2 in speed.
I like mozilla firefox. But 3.0 is having lot of issues. Its crashed/hanged sometimes automatically which irritate me a lot. Its not like the previous version. Please fix these issues at the earliest in order to satisfy firefox customer.
Sell it to Microsoft
Sometimes i can not download the TORRENT from the web while IE make it.
I tried repeatedly to connect to the internet after updating from ver2 to ver3. My ver 184.108.40.206 was working fine until the update to ver 220.127.116.11. then
it all started. It could not connect to the internet again. I do not ve anti virus running and the windows firewall is off. I am using windows xp service pack 2 and am on verizon FIOS fibreoptic
network. I searched hours of the mozilla, google and yahoo forums, it seems many users are having the same problem, no fix yet. If you have a fix for this problem I would greatly appreciate it. I
have even completely removed it and re-installed ver. 3, still same problem
Sorry to say, I really loved using firefox, now i have to go back to using IE6.
My E-Mail address is ajpuni@****.com If you have a suggestion or a fix for this problem.
Have the option of restoring the V2 theme(ugly green back button), get rid of the "awesome bar" and replace it with the v2 equivalent, truly clear out all private data("awesome bar" dug up a website I visited ONCE and had cleaned out my private data 6 times beforehand), missing Go button, website blocking due to a "bad" certificate(fucking blocked me out of my bank's website, addons.mozilla.org, and MY OWN FUCKING ROUTER!). Going back to V2 when it was perfect and had no issues.
Firefox simply was not ready for release - it's a pity that the old version still isnt available. Now I go back to IE!
I think that Firefox is perfect as is with possible customization. Sometimes I wish it was a little faster, though.
The browser drains a lot of memory causing everything in my computer slow down. It's never happened before in previous versions.
it less heavy mozilla firefox and that the pages load faster Internet
Lots of bugs in Mozilla Firefox and my computer get in a crash very often.God bless!!!
Make it leaner and more stable. I used to love previous versions of mozilla but now it is bulking up and starting to resemble its competitors.
Resolve the bugs ;-)
I've had pages that worked perfectly under FF2 but under FF3 they kept reloading and the links on the page didn't work. Sites Such as:
metal-archives.com (I think that's the addrrss)
These are some basic sites that I use and any link that I clicked on the front page reloaded with nothing new.
I'm going to wait for FF3.1 to be released before I go back to it again. Until that time i'm going to use FF2
Never faced aby issues with firefox 2.0 or earlier versions
had problems with 3.0 when I set a default dir for the downloads (instead of the desktop). It locked up the computer when I chose an existing dir on my d drive and started a download.
Be 110% like IE but make it faster. :-)
|
OPCFW_CODE
|
Exclude an RGB color from a set
I'm currently implementing an algorithm to split an image into smaller chunks, based on straight line separators.
Here's the image I'm processing. It's very small, so you may want to save it and enlarge.
If you look at this image, you can clearly see 3 different zones. But because I used JPG compression, the separator lines have variable pixel colours.
With the adequate colour difference threshold, it's easy to find these separators. Here below are 10 identified separators, or more specifically the average RGB colour of the pixels they contain.
# the three big horizontal separators
row 1 : (64, 43, 32)
row 5 : (60, 46, 29)
row 10: (53, 46, 46)
# now if you use these horizontal lines to split the image,
# you still have two images (do not include the horiz. separators)
# the top one contains three vertical separators
col 1 : (54, 45, 42)
col 8 : (152, 124, 81)
col 10: (43, 49, 43)
# and the bottom contains four
col 1 : (53, 48, 36)
col 5 : (53, 50, 30)
col 6 : (52, 46, 32)
col 10: (43, 52, 45)
As we can see, one of these color average stands out. And as you may see on the picture, it's not really a separator, but only a column of three pixels alike.
I need to discard this fake separator from the set, but I'm having a rough time figuring out an algorithm for this. Initially I'd base this on a standard deviation, removing the ones that are too far away. I don't want to do an average here, as there could be few separators in some cases, and it may lead to false positives.
But I'm lacking mathematical education here. How can I do that on vectors, can anyone provide a working (or better) solution?
I suggest you ask this question on: http://dsp.stackexchange.com/ with an image processing tag.
This is probably best asked in a Machine Learning group, I would suggest trying the stats stack exchange site. What you're looking for is a clustering algorithm, to accept the data points that are close together in your space, and reject outliers. This can be a tricky problem, and the stats people will have a much better background for answering your question than just pure mathematics.
Thanks! FYI - http://stats.stackexchange.com/questions/76552/exclude-an-rgb-color-from-a-set
|
STACK_EXCHANGE
|
OUGPY - Oracle Groundbreakers - Continuous Load Testing for Containerized Java Eclipse MicroProfile Applications
Conference Name: Oracle GroundBreakers
Conference website and social media links:
https://twitter.com/ougpy/status/1150861210490851328
About the session Title: Continuous Load Testing for Containerized Java Eclipse MicroProfile Applications @cbustamantem
Date and hour: 8 Ago Asunción - Paraguay
Twiiter : https://twitter.com/cbustamantem/status/1150870593635532800
Tweet message: Cordialmente invitados a la charla
Continuous Load Testing for Containerized Java Eclipse MicroProfile Applications
@MicroProfileIO
#Java #Docker
@ougpy
@cbustamantem do you have a date for this season? Also what is your twitter?
Hola Carlos @cbustamantem,
It seems that there is no set time for your MP session. That is ok.
Please confirm below, i found data so that @rstjames gets it out via media. :) Gracias for el MP ticket!
Name of speaker: Carlos Bustamante
Twitter handle: https://twitter.com/cbustamantem
day & time (all day not set times): Aug 8th, 2019 -- 9am to 6pm Paraguay time
Push via twitter with all speakers: https://twitter.com/ougpy/status/1151228405129646081
Greetings Ryan
My twitter is @cbustamantem https://twitter.com/cbustamantem, the date
of the event will be 08-Ago-19, and will take all the day from 09.00 am.
Thanks for your support
Gracias Amelia por las instrucciones y coordinación.
Saludos cordiales
Carlos Bustamante
El jue., 18 jul. 2019 a las 11:50, Ryan St. James<EMAIL_ADDRESS>escribió:
@cbustamantem https://github.com/cbustamantem do you have a date for
this season? Also what is your twitter?
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
https://github.com/eclipse/microprofile-marketing/issues/141?email_source=notifications&email_token=ABBF3UUCOZLJV5DE3MW7343QACGNNA5CNFSM4ID26QO2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD2I5UXQ#issuecomment-512875102,
or mute the thread
https://github.com/notifications/unsubscribe-auth/ABBF3URLTYVBEN7UXG5XGRLQACGNNANCNFSM4ID26QOQ
.
@cbustamantem @aeiras
Ok with this image?
Tweet:
Tweet message: Cordialmente invitados a la charla "Continuous Load Testing for Containerized Java Eclipse MicroProfile Applications" @cbustamantem @ougpy
@MicroProfileIO #Java #Docker #opensource #oss @java
Greetings Ryan
Is a nice surprice to see this level of support, thank you in advance,
but a i have a request, let me explain in this event, i will talk about
Eclipse MicroProfile with my partner (Arnaldo Ayala @aayalapy
https://twitter.com/aayalapy) for that reason, if is not a big trouble,
if you can add the name of my partner and his picture (the same of the
twitter account).
Thanks for your support
Carlos B.
El jue., 25 jul. 2019 a las 5:49, Ryan St. James<EMAIL_ADDRESS>escribió:
@cbustamantem https://github.com/cbustamantem @aeiras
https://github.com/aeiras
Ok with this image?
[image: MicroProfile-session-CarlosBustamante]
https://user-images.githubusercontent.com/12132146/61864516-de26e680-aed1-11e9-9ddf-a9c4cf19d0c4.png
Tweet:
Tweet message: Cordialmente invitados a la charla "Continuous Load Testing
for Containerized Java Eclipse MicroProfile Applications" @cbustamantem
https://github.com/cbustamantem @ougpy
@MicroProfileIO #Java #Docker #opensource #oss @java
https://github.com/java
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
https://github.com/eclipse/microprofile-marketing/issues/141?email_source=notifications&email_token=ABBF3UUYHJOW7ONQWK2QS63QBFZLHA5CNFSM4ID26QO2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD2Y7AVQ#issuecomment-514977878,
or mute the thread
https://github.com/notifications/unsubscribe-auth/ABBF3UTLQBNUGRMU3OAH3QLQBFZLHANCNFSM4ID26QOQ
.
updated image:
@cbustamantem
Hi Ryan
Excelent, that is awesome!!, again, thanks in advance for your support
and for your patience in this request.
Best regards,
Carlos Bustamante
El vie., 26 jul. 2019 a las 3:30, Ryan St. James<EMAIL_ADDRESS>escribió:
updated image:
[image: MicroProfile-session-CarlosBustamante]
https://user-images.githubusercontent.com/12132146/61934182-01ac6880-af88-11e9-9cd3-c8a3b137a3f6.png
@cbustamantem https://github.com/cbustamantem
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
https://github.com/eclipse/microprofile-marketing/issues/141?email_source=notifications&email_token=ABBF3USQLB2T3PKN3XOKUTDQBKR2VA5CNFSM4ID26QO2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD23YC4A#issuecomment-515342704,
or mute the thread
https://github.com/notifications/unsubscribe-auth/ABBF3UR7SNLOTE4Y7EGZVV3QBKR2VANCNFSM4ID26QOQ
.
all scheduled to go out 1 day before and 4 days before. @cbustamantem
|
GITHUB_ARCHIVE
|
Personalized Community is here!
Quickly customize your community to find the content you seek.
Have questions on moving to the cloud? Visit the Dynamics 365 Migration Community today! Microsoft’s extensive network of Dynamics AX and Dynamics CRM experts can help.
2022 Release Wave 2Check out the latest updates and new features of Dynamics 365 released from October 2022 through March 2023
The FastTrack program is designed to help you accelerate your Dynamics 365 deployment with confidence.
FastTrack Community | FastTrack Program | Finance and Operations TechTalks | Customer Engagement TechTalks | Upcoming TechTalks | All TechTalks
Is there anyway autocomplete functionality can be implemented from an external API call in dynamic, Like when typing, a prediction of result from the remote server showed drop down in dynamic and then the user can pick one ?. Someone with better knowledge showed please help.
What version you are on.
If you're on version 9 you can try creating a custom control using PCF framwwork.
If you are not on latest version then try following sample.
However with this approach you can fetch all data from external api using a custom action driven by a workflow.
And then filter based on entered text.
@ketan I am using V9, the first link in your reply is empty. With the gallery I think I should be able to use that. Where can I find the latest doc on pcf?
You can find official documentation on below link.
@ketan, i saw this in one of the countrols Gallery - "A control to select a value from a predefined list of values and automatically filters the values based on what the user types.". Does it mean that external calls using an API cant be made ? I am completely new to PCF, never written a code to develop PCF before.
It is not the case, you can have dynamic values for your custom control.
You can refer same pcf control from pcf galary and retrieve data dynamically instead of pre defined.
Refer code :github.com/.../Autocomplete
In that cods in file index.ts
Replace following line.
var optionsPropValue = context.parameters.options.raw;
Here you can call your api and get data and set response Jason as optionsPropValue.
And in menifest.xml you can comment out following line.
<property name="options" display-name-key="optionsValue" description-key="List of options separated by comma" of-type-group="optiondatatypes" usage="input" required="true" />
If this helps, please mark verified.
Ketan A quick question, how do i reference a field that i need to pass in the API inside the index typescript file ?. And the index file isnt in the AutocompleteControl solution file that willl be inported into dynamics, would a ny change in the index file affect the solution, since the index file is outside the solution.
Whichever fields you want to refer for a control you can declare that in a Menifest.xml as bound parameter.
You could declare unbound parameters as well like flags, api url etc as input parameter.
Changes in index.ts won't affect packaged solution which is ready for import.
@ketan thanks for responding swiftly. since the index.ts is outside the solution file, it means any of my modifications wouldnt change anything in the solution right ?. sorry about all these questions, this is a diff level of coding for me.
Yes, to add new changes to solution you would need to build and package solution again.
You can test your custom control before packing using following command.
1) npm run build
2) npm start
Thank @ketan. At this point, i will go with your second option - "However with this approach you can fetch all data from external api using a custom action driven by a workflow.", , as i have no exeperience in creating my own managed solution. Would be in my to-do list though, to create my own solution. I can fetch all the data from the external api and then save them in an entity, use OOB autocomplete to lookup the entity. Any example (link) of how/where this action is executed ?
Following link will help you get data from external api
@ketan thanks buddy!. which i could implement the PCF solution, but time constraint, i need to deliver this for a client in a day, and the custom approach above wouldnt work n my case, as it doesnt download all the data as expected. I ahve marked the your approach as answer.
Business Applications communities
|
OPCFW_CODE
|
Ender 3 - Installing a bootloader with a Raspberry Pi
If you have a Raspberry Pi, you're probably already using it for OctoPrint and likely using
the OctoPi SD card image. This is good - because it already has most of what we need.
- Log into your OctoPi running on your Pi, then become the root user:
sudo su -
- Install avrdude if it is not already installed:
apt-get install -y avrdude
- Copy the avrdude config file to the root home directory:
cp /etc/avrdude.conf ~/avrdude.conf
~/avrdude.conf and find the 'linuxspi' programmer. Change the baud rate in the block to 115200. The block should look like the following:
id = "linuxspi";
desc = "Use Linux SPI device in /dev/spidev*";
type = "linuxspi";
reset = 25;
- At the bottom of the same file, add the following:
id = "pi_1";
desc = "Use the Linux sysfs interface to bitbang GPIO lines";
type = "linuxgpio";
reset = 17;
sck = 24;
mosi = 23;
miso = 18;
- Disconnect the Ender 3 from the power supply. This is important! It is sufficient to unplug the main XT60 connector.
- The pins on the Ender 3 board are laid out as:
RST | SCK | MISO
GND | MOSI | VCC
Ground and VCC are closest to the edge of the board, GND at the display connector side, VCC at the USB connector side.
- Connect the dupont cables to the following pins. You can use this guide to locate the correct pins on the Raspberry Pi
Raspi => Ender 3
3v3 => VCC
GND => GND
GPIO 17 => RST
GPIO 18 => MISO
GPIO 23 => MOSI
GPIO 24 => SCK
- Attempt to talk to the Ender 3 chip over this connection:
avrdude -p atmega1284p -C ~/avrdude.conf -c pi_1 -v
If this fails, check the connection of your dupont cables and try again.
- Download the bootloader onto the Pi:
- Flash the bootloader:
avrdude -p atmega1284p -C avrdude.conf -c pi_1 -U flash:w:optiboot_atmega1284p.hex:i
- You can now connect your Pi via USB to the Ender 3, download a firmware file from this site and flash via:
avrdude -p atmega1284p -c arduino -P /dev/ttyUSB0 -b 115200 -U flash:w:firmware.hex:i
Just remember to change
firmware.hex to match the file you have downloaded.
- I would suggest installing the Firmware Updater plugin for OctoPrint to update your firmware easily in the future.
|
OPCFW_CODE
|
View Full Version : Problems sending to some recipients using CDO?
05-26-2007, 04:08 AM
I am using XP's SMTP service to send emails via ASP and CDO. I am able to send to one of my email accounts, but cannot send to another. The email server I cannot send to is a public server, at a well-known ISP. Memory is foggy, but I think I remember something about that server not allowing relayed messages? I'm not sure.
Anyway, I was hoping others who have ad this type of trouble could shed some light? Thanks.
05-29-2007, 05:58 PM
I think i remember reading something about that, but i can't find where anymore. It said something about configuring the smtp server differently to allow relaying to that server (yeah okay so im repeating your thing). Or it could just be that your "from" is not a known address (something to do with the server DNS) to that well known isp and it just rejects it totally. If I can refind that page where i read about it i'll link you to it.
05-30-2007, 09:48 AM
Could the ISP be doing a reverse DNS lookup on the email? Is this page any help?
AOL, Hotmail, Yahoo, and some other ISPs perform a HELO lookup when receiving messages. If the lookup is not successful, they simply reject to deliver the message to the recipient without sending any error message. There are three possible ways to solve this problem.
1. You can select the "Resolved Internet IP" option in the HELO handshaking settings in the Settings/Advanced screen. The program will perform a DNS query to find out which address points to your IP. This option sometimes does not return the correct values if you are behind a router. If that is the case, you can use the http://network-tools.com/ service to check your IP address and look for "Host name" which should then be copied into the "Use this Identification" box in HELO handshaking settings.
2. Try to change the server identity in the HELO handshaking settings in the Settings/Advanced screen to the "mail.domain.com" format. For example, if your ISP provides e-mail address such as firstname.lastname@example.org, set the HELO handshaking identification to mail.domain.com. Try also with only 'domain.com' format.
3. If you have a domain name that points to your computer's IP address, then enter that domain name in the HELO handshaking settings in PostCast Server. You can use the no-ip.com service to host a domain name on your computer.
05-30-2007, 02:11 PM
If you are sending this from a standard XP box, then it's probably much more sinister than just RevDNS and ISP blocking.
Depending on your ISP, they may be blocking outbound port 25. This is the standard SMTP port (you can change this in IIS). Most companies nowadays use 587 for outbound client mail to their mail servers, and even then they don't always relay.
Not all mail services use RevDNS, as it is an expensive operation. If they can't check your RevDNS, it usually just gets sent to your Junk Mail. Each ISP is a little different, some delete, some bounce, some Junk Mail, etc.
Even more sinister, it is a sure bet that companies are using blacklists, especially if you are on an ISP's DCHP. Comcast, especially. Comcast is known for its lack of enforcing SMTP rules and spam filtering, so almost all of its DHCP address blocks are on blacklists. You can use Spudhead's examples above, but it won't help any if your IP is on a blacklist, whether you put it there or not. Trying to remove a DHCP address from a blacklist is like pulling teeth with a mousefart and a fishing line -- it's a long, painful, and unproductive process. Besides, IF you get the IP removed and someone else is spamming in the same address space -- guess what?? You are back on the list.
Ways to fix this --
1) Request a static IP from your ISP. These typically are not on blacklists and you can easily get them removed if they are.
2) Set up a true mail server, not just your own workstation. If you set up a mail server, then all of the configuration options above with HELO and EHLO will be set for you.
3) Don't use CDO. There are a lot of 3rd party FREE mail systems out there. They will allow you to setup and configure their options much better than XP's IIS SMTP service. Try Win2k3 has IIS 6, in which the SMTP service has more configurable options than XP's IIS 5.1, and it's a lot more secure.
05-30-2007, 07:25 PM
More likely the target SMTP server does not allow emails sent from ANY dynamic IP (rather than just those known to spam in the past). For example AOL, walla.co.il and bellsouth all block emails sent from a dynamic IP address in an attempt to reduce spam.
This only applies if your CDO is sending from your SMTP server, if you log in and send from your ISP's SMTP server (SMTP is pretty much based around the idea that everyone sends from their ISP's SMTP server, which is just rubbish) or a public SMTP server to which you have access, this wouldn't be the problem (but you said you were sending from your box, so it probably is).
Your ISP is not blocking port 25, or you wouldnt be able to send to anyone.
Using an alternative to CDO would make no difference, it is still originating from an SMTP agent in the dynamic IP range. Changing your SMTP settings would also not make a difference for the same reason.
Your best bet is to use your ISP's SMTP server (if you still have your account details) or a mail server on an alternate address (for example gmail if you have a gmail account, though i think gmail now filters messages with non gmail from addresses going through their servers, there are a lot of other ones that don't)
05-31-2007, 01:16 AM
I appreciate all of your responses. They all give me insight.
Ghell is correct that it is obvious that port 25 is not blocked, as I am receiving mail on one remote network account.
The machine I have been referring to is my home box, which I am using to build a web application. Once finished, I will be moving the app to my production web server at the office, which is static. Hopefully these problems will no exist there.
05-31-2007, 06:06 PM
It would be best to test using an existing server (if your production server already has SMTP set up, it may be possible to get it to use it now) before you deploy the application.
Powered by vBulletin® Version 4.2.2 Copyright © 2017 vBulletin Solutions, Inc. All rights reserved.
|
OPCFW_CODE
|
describe("Javascript Data Structure Suite", function() {
describe("Queue", function() {
beforeEach(function() {
queue = new window.DS.Queue();
});
it("should add 5 to the Queue with enqueue method", function() {
expect(queue.enqueue(5)).toBe(1);
});
it("calling clear or empty method should clear the Queue", function() {
queue.enqueue(4);
queue.enqueue(5);
expect(queue.clear()).toBe(null);
queue.enqueue(4);
queue.enqueue(5);
expect(queue.empty()).toBe(null);
});
it("should remove an item by using dequeue from a queue that has 2 elements and the queue should have one item left", function() {
queue.enqueue(4);
queue.enqueue(5);
queue.dequeue();
expect(queue.length()).toBe(1);
});
it("a queue that has 2 elements (4 and 5) should give 5 as back and 4 as front item", function() {
queue.enqueue(4);
queue.enqueue(5);
expect(queue.back()).toBe(5);
expect(queue.front()).toBe(4);
});
it("a queue that has 2 elements (4 and 5) should keep 5 after dequeue method is used", function() {
queue.enqueue(4);
queue.enqueue(5);
queue.dequeue();
expect(queue.front()).toBe(5);
});
it("should add an item to an empty queue and the queue's length or size should be 1", function() {
queue.enqueue(4);
expect(queue.length()).toBe(1);
queue.empty();
queue.enqueue(4);
expect(queue.size()).toBe(1);
});
it("should return true for an empty list by using isEmpty method", function() {
expect(queue.isEmpty()).toBe(true);
});
});
});
|
STACK_EDU
|
Data Augmentation for NLP with CamemBERT
Updated: Dec 11, 2022
In the previous article we've seen how to implement the data augmentation for Computer Vision. In this article we are going to see how to augment textual data. Up we go!
There's an excellent reference explaining how data augmentation for NLP works. However, instead of using a ready-to-use library, it is more interesting to develop everything from scratch. Moreover, the fact that you've implemented the algorithm yourself allows you to adjust the algorithm according to your specific context.
In short the data augmentation techniques include:
Shuffle Sentences Transform etc.
There's also a python package that allows you to do some basic and advanced augmentation, called NLPAug. NLPAug offers three types of augmentation:
Character level augmentation
Word level augmentation
Sentence level augmentation
According to Jakub Czakon, the author of the discussed reference:
From my experience, the most commonly used and effective technique is synonym replacement via word embeddings.
Sounds reasonable. But as mentioned in the introduction, we will try implementing all these steps ourselves.
Step 0: Imports
import torch import random import re import pandas as pd from collections import Counter from math import floor, ceil
Step 1: Contextual Word Embeddings:
For this purpose we are going to use the CamemBERT, a french extension of BERT. A short quote from the official website:
CamemBERT is a state-of-the-art language model for French based on the RoBERTa architecture pretrained on the French subcorpus of the newly available multilingual corpus OSCAR. We evaluate CamemBERT in four different downstream tasks for French: part-of-speech (POS) tagging, dependency parsing, named entity recognition (NER) and natural language inference (NLI); improving the state of the art for most tasks over previous monolingual and multilingual approaches, which confirms the effectiveness of large pretrained language models for French. CamemBERT was trained and evaluated by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
So, the first thing is to download the model and intialize the camembert object.
camembert = torch.hub.load('pytorch/fairseq', 'camembert') camembert.eval()
N.B. there're some additional dependencies like hydra-core or omegaconf to install. Important: do not confuse with the hydra package, otherwise, you will have to uninstall everything.
Step 2: Mask Random Word and Generate Synthetic Texts
This step is quite straightforward, simply mask a random word and send this masked line to camembert to let the language model guess the masked word. These guessed words are ordered by their probabilities in the descending order.
Mask random word in a line
def mask_random_word(text_line): text_array = text_line.split() masked_token_index = random.randint(0, len(text_array)-1) text_array[masked_token_index] = "<mask>" output_text = " ".join(text_array) return output_text
Generate synthetic text for a text line
def generate_synthetic_texts(text, number_of_examples): output = for i in range(number_of_examples): masked_line = mask_random_word(text) top_unmasked = camembert.fill_mask(masked_line, topk=1) output.append(top_unmasked) return output
Step 3: Get Majority Class and Execute
We are almost done. The only thing left is to detect the majority class and create N items per minority class according to the size of the majority class.
Read the text dataset
myModel = pd.read_csv('model.csv', sep=';')
Get the majority class
counter = Counter(myModel['Result']) max_intent = '' max_intent_count = 0 for intent in set(myModel['Result']): if max_intent_count < counter[intent]: max_intent = intent max_intent_count = counter[intent]
Calculate the number of items to generate per minority class
threshold = floor(max_intent_count/2) minority_intents = set(myModel['Result'])-set([max_intent]) for intent in minority_intents: print (intent, "started") intent_examples = myModel[myModel['Result']==intent] intent_count = intent_examples.shape examples_to_generate = max_intent_count - intent_count if examples_to_generate > threshold: body_examples = intent_examples['Body'] number_of_synthetic_per_example = ceil(examples_to_generate/intent_count) else: body_examples = intent_examples['Body'][:examples_to_generate] number_of_synthetic_per_example = 1 print (intent_count,examples_to_generate, number_of_synthetic_per_example) syntetic_bodies = for body in body_examples: syntetic_bodies.append(generate_synthetic_texts(body, number_of_synthetic_per_example)) # flatten arrays syntetic_bodies = [item for sublist in syntetic_bodies for item in sublist] augmented_df = pd.DataFrame() augmented_df['Body']=syntetic_bodies augmented_df['Result']=intent myModel = alexModel.append(augmented_df) print (intent, "processed")
N.B. the threshold is needed to decide whether we generate synthetic items for every item in the minority class or only for a selected sub-array. For instance, if our majority class is composed of 500 items and our minority class has 350, the most logical approach is to select a sub-array of 150 items and generate 1 synthetic example for each.
The execution takes some time (40 minutes for a dataset of a couple of thousand lines), so it may be interesting to try parallelizing the execution (maybe using Spark on Azure Databricks). I will try to publish an article on this in one of the next tutorials.
Hope this was useful
|
OPCFW_CODE
|
"getline" prompt gets skipped, not working as intended
Here are my codes:
#include <iostream>
#include <string>
using namespace std;
int main()
{
int age1;
int age2;
string name1;
string name2;
cout << "Please enter the name for one people: " << "\n";
getline (cin, name1);
cout << "Please enter the age for this people: " << "\n";
cin >> age1;
cout << "Please enter the name for another people: " << "\n";
getline (cin, name2);
cout << "Please enter the age for this people too: " << "\n";
cin >> age2;
if ( (age1 <= 100 || age2 <= 100) && (age1 < age2) )
{
cout << name1 << " is younger!" << "\n";
}
else if ( (age1 <= 100 || age2 <= 100) && (age1 > age2) )
{
cout << name2 << " is younder!" << "\n";
}
else if ( (age1 <= 100 || age2 <= 100) && (age1 == age2) )
{
cout << name1 << " and " << name2 << " are of the same age!" << "\n";
}
else
{
cout << "You've got some really old people that are well older than 100!";
}
}
The first getline and cin work fine. I am able to be prompted for input.
However, the second getline and cin prompts appear at once, so I can only input for cin. (The second getline is skipped!)
If I use four cins, the program will work properly.
Reminder: the streams controlled by cin and cout aren't actually related at all; the appearance of such is an artifact of how your console works and typical I/O patterns. It is very easy to confuse yourself by assuming that the individual uses of cin and cout bear a relationship they don't have. (the objects themselves do have a relationship in being tied, although that isn't really relevant here. It means that if cin runs out of data, it will flush cout before it requests more data from the operating system)
cin >> age1; does not read the newline character following the number. The newline remains in the input buffer, then prematurely stops the second getline.
So, your program already works as long as you enter the first age and the second name on the same line.
One solution would be to skip whitespace after the numbers:
cin >> age1 >> ws;
Live demo.
First: cin >> age1; takes the number and stores it into age1, but at the same
time it leaves the newline character in the buffer. So when the prompt for the next name appears, getline finds that leftover newline character in the buffer and takes it as the input. That is why it skips the name2 prompt.
cout << "Please enter the name for one people: " << "\n";
cin>>name1;
cout << "Please enter the age for this people: " << "\n";
cin >> age1;<<--**this left the new line character in input buffer**
cin.get();<<-- **get that newline charachter out of there first**
cout << "Please enter the name for another people: " << "\n";
getline (cin, name2);
cout << "Please enter the age for this people too: " << "\n";
cin >> age2;
now i give name1-> shishir age1->28
name2->ccr age-> 22 it prints ccr is younder!<-- the spelling is wrong too :D
for more info on getline and get(), read C++ Primer Plus listings 4.3, 4.4, 4.5
Happy coding
You need a ; after getline (cin, name);
hope this helps
I forgot the ";" when I rewrote that two lines. In the original program there are ";"s. And I just tried again, even with ";", it still skips the second getline. I edited the code to add ";".
I would suggest using cin.ignore(100, '\n'). It ignores up to the number of characters you specify when you call it (100 in this case), or until it reaches the character you specify as a delimiter. For example:
cout << "Please enter the name for one people: " << "\n";
getline (cin, name1);
cout << "Please enter the age for this people: " << "\n";
cin >> age1;
cin.ignore(100, '\n');
cout << "Please enter the name for another people: " << "\n";
getline (cin, name2);
cout << "Please enter the age for this people too: " << "\n";
cin >> age2;
cin.ignore(100, '\n');
|
STACK_EXCHANGE
|
How to handle multiple widgets with same "implicit" ID
This isn't actually a bug but it may look like one.
Some widgets get the same "implicit" ID, which is usually the value of the widget's label parameter. For instance:
if imgui.button("OK"):
print("OK1")
if imgui.button("OK"):
print("OK2")
Both buttons above have the same label "OK". Both look clickable, but only the first one triggers on click. This would be a problem for common labels like yes/no/OK/cancel etc., but these are usually on different windows, and windows have separate ID scopes. E.g. if we use the following code:
imgui.begin("Win1")
if imgui.button("OK"):
print("W1-OK1")
if imgui.button("OK"):
print("W1-OK2")
imgui.end()
imgui.begin("Win2")
if imgui.button("OK"):
print("W2-OK1")
if imgui.button("OK"):
print("W2-OK2")
imgui.end()
We will see W1-OK1 and W2-OK1 in the output, but no W1-OK2 nor W2-OK2. ImGui exposes the following functions to maintain such ID scopes:
void PushID(const char* str_id)
void PushID(const char* str_id_begin, const char* str_id_end)
void PushID(const void* ptr_id)
void PushID(int int_id)
void PopID()
In order to solve such problems these functions need to be ported. We already have additional context managers for managing push/pop actions on fonts and styles, so we could also create a custom scope (or scoped) context manager that would make code more readable. We would then have two ways to interact with scopes:
if imgui.button("OK"):
print("OK outside scope")
imgui.push_id("my scope")
if imgui.button("OK"):
print("OK in scope")
imgui.pop_id()
# or ...
if imgui.button("OK"):
print("OK outside scope")
with imgui.scope("my scope"):
if imgui.button("OK"):
print("OK in scope")
Off-topic: Good to have you back. I was beginning to think you gave up on pyimgui.
This happens only in pyimgui? I mean, it does not occur in C++'s imgui?
Oh, I didn't give up on this project :). I simply had a very intense period at work so had to put a few open source projects on hiatus. But I'm trying to get back to them.
Regarding the issue. I'm pretty sure this happens in bare C++ ImGui too, because this is the way most immediate-mode GUIs are implemented, i.e. every widget gets its unique and (usually) explicit identifier. In this case the labels are used as identifiers. Usually this is an acceptable simplification because it isn't common to have multiple same-labelled widgets in the same context (window, modal etc.)
I like the idea of imgui.scope, but I wonder if we should put it into a different namespace? Having the python imgui module be a combination of imgui-cpp and pyimgui helpers might be confusing for coders?
Maybe imgui.helpers.scope ? or imgui.py_scope? I don't know; naming is hard.
It is definitely a good idea to have the extra functions and utilities that do not exist in C++ API in separate module. It will improve code organization and help in documenting it properly. We already have few of such functions e.g. font, styled, istyled, vertex_buffer_*.
Still, I would not care much about developer confusion at this moment. I think that the benefits of having them importable from one place instead of multiple submodules outweigh the cons. This is also in the spirit of one-header libraries like ImGui itself.
My idea is to have at least two Cython modules:
core.pyx - for everything that exists directly in core C++ API
extras.pyx - for anything extra (python-specific helpers, context managers) that is not part of C++ API
But combine them in __init__.py in following way:
from imgui.core import * # noqa
from imgui import core
from imgui.extras import * # noqa
from imgui import extras
That way we will help maintain backwards compatibility, make it possible for some developers to be very specific about core/extras usage, and also make it possible for other developers to have very simple one-import code.
I've submitted a PR for a patch to expose PushID and PopID and a context wrapper as discussed.
It's also possible to append some unique text to a name after ## to make ids not collide. E.g. imgui.button('foo##1') and imgui.button('foo##2') don't collide, but are both displayed as foo.
Closing this as we already have imgui.extra submodule and I have added new scoped context manager.
Feature available in 1.2.0.
|
GITHUB_ARCHIVE
|
Senator Josh Hawley chastised Google’s senior policy counsel Tuesday over the company’s reliance on what they perceive to be overly
The group that trained Juan Guaidó and his allies laid out plans for galvanizing public unrest in a 2010 memo,
Washington doesn’t like its Muslim Arabs to take pride in their heritage or oppose the Israeli occupation, writes As’ad AbuKhalil.
An Ohio Court of Appeals on Tuesday ruled that the state can cut off funding to Planned Parenthood because it
Chelsea Manning has done a great service in finally stripping away the last vestige of excuse from the figures who
Facebook on Monday removed, and subsequently reinstated, three ads purchased by Senator Elizabeth Warren’s campaign that called for the breakup
Whether Biden can win the 2020 Democratic presidential nomination will largely depend on how little voters know about his actual
House Speaker Nancy Pelosi told the Washington Post Magazine that the national division wrought by impeachment would likely outweigh any benefits.
One half of voters are independents. Good bills only have a 30% chance of congressional approval? 80% of Americans want to end corruption.
Opposition leader Juan Guaidó said that the 17 people who reportedly died as a result of the country’s electricity blackout were
The military community relies on it, our history reminds us of the need for it, and basic logic demands it.
http://www.youtube.com/watch?v=lr-F-DbRdVs The Syrian president says his country now faces an economic war as it is emerging victorious from the years-long
Despite growing Trump administration tensions with Venezuela and even with North Korea, Iran is the likeliest spot for Washington’s next
|
OPCFW_CODE
|
Challenges of Public Cloud
We need to recall that a benefit of cloud is utilizing cloud resources efficiently. We can achieve high utilization of the cloud's physical servers by aggregating demand from as many users as possible. From the tenant's point of view, it is reasonable that they want to know the capacity of the cloud's physical servers, to get a view of the free CPU capacity when scaling their instances out or up. This information is definitely necessary for the cloud operator as well, but of course, in the role of cloud operator, they will not disclose the CPU utilization of a physical server.
One of the issues in public cloud is over-capacity. When a customer purchases an instance with "unlimited capacity" from any cloud provider, there is no information about latency, throughput, etc. The term "unlimited" in relation to elasticity does not seem reasonable, because a limit always exists, even for massive cloud providers. For example, Amazon EC2 measures CPU utilization in its public cloud. The idea is to measure the temperature of the CPU to infer its utilization: it is obvious that if a CPU operates at high capacity it gets hot over time, whereas if it is idle it cools off. With the help of the thermometers in modern AMD or Intel chips, we can easily estimate CPU utilization. Back to the issue of limitation: EC2 provisions CPU allocation to instances over time. Even if the underlying physical server has spare CPU capacity, it will not freely allocate new CPU cycles to your instance, since in public cloud isolation is really important to avoid noise between instances (stability is another issue of public cloud). If this isolation is not enforced properly, a greedy user could take your instance's CPU capacity. That means there must be a limit on CPU capacity per instance, which makes it very difficult to fully utilize the CPU of the physical servers. A theoretical solution would be for all instances to utilize the CPU at the same time, but this seems impossible because the usage of applications running on instances fluctuates unpredictably.
The concept of capacity is not about latency or throughput, but about whether the needs of the customer stay within the operator's threshold. Over-capacity means that the "unlimited" model contracted between cloud provider and customer overwhelms the capacity threshold, and the operator has no way to add enough capacity to satisfy the customer's needs. Because of the issues above, over-capacity is not handled properly.
In summary, in order to build an IaaS, performance must be considered carefully. Capacity planning (type and amount of resources) is required, QoS must be calculated, etc., to make deployment decisions. Benchmarking is a solution for that, even if it is sometimes flawed.
Benchmark as a Service – Openstack Rally
For the general concepts, features, etc. of Rally, you can easily refer to the OpenStack wiki. I will only point out the main features which I think are important here:
+The workflow of Rally:
Need a cloud / Deploy an OpenStack cloud ==> Check that the OpenStack deployment is working ==> Execute the scenarios ==> Generate reports
+Rally uses JSON configuration files, which means it is easy to modify an existing conf file or create your own.
+It is written in Python and can be extensively extended with your own classes.
+The “benchmark” part of Rally is Benchmark engine:
– Includes the scenarios
– Basic IaaS operations are covered.
– The context which runs with scenarios can be programmed with Python and easily extendable.
+ The workflow of Benchmark engine:
Context ==> Runner ==> Scenario ==> Context …
Context: Gives the context to run with scenario
Runner: Tells Rally how to launch scenario instances.
Scenario: The actual scenario instance that the cloud will execute
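As an illustration of the JSON configuration files mentioned above, a Rally task file might look roughly like the following (the scenario name, flavor, and image here are assumptions for the sake of the sketch, not taken from the text):

```json
{
  "NovaServers.boot_and_delete_server": [
    {
      "args": {
        "flavor": {"name": "m1.tiny"},
        "image": {"name": "cirros"}
      },
      "runner": {"type": "constant", "times": 10, "concurrency": 2},
      "context": {"users": {"tenants": 2, "users_per_tenant": 1}}
    }
  ]
}
```

The runner section tells Rally how to launch the scenario instances, and the context section describes the tenants/users to run it under, matching the Context ==> Runner ==> Scenario workflow above.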
|
OPCFW_CODE
|
How To Become A Big Data Engineer In India?: Engineers specialize in assembling and maintaining data pipelines for storing big data, making it easily accessible in the future. Data science relies on this infrastructure on a daily basis. Engineers develop, build, maintain, and test the architecture of large-scale systems such as databases. Modeling, mining, acquisition, and verification are all based on the data set processes that data engineers develop.
A data engineer collaborates with a data architect, a data analyst, and a data scientist. In terms of data architecture, data architects oversee a company’s data management systems, whereas data analysts analyze data and develop actionable insights. A data scientist is responsible for advanced statistical analysis and machine learning. In fact, more and more companies are consuming data visualization and storytelling at a rapid pace in an attempt to gain insights.
Refer to Course Details to know more about related courses and find details like Admission Process, Eligibility Criteria, etc.
When not collected and analyzed properly, generated data is useless. This arduous task is assigned to Big Data professionals. Developing, testing, and evaluating Big Data infrastructures for a company ensures the data is fit for analysis, which helps the company grow.
- Roles and Responsibilities of Big Data Engineer
- Skills for Big Data Engineers
- Eligibility Criteria for Big Data Engineer in India?
- FAQ’s on How to Become a Big Data Engineer in India?
A Big Data Engineer’s responsibilities are as follows:
- Implementing and maintaining software systems, including designing and implementing them
- Ingestion, storage, and processing of data require robust systems
- Processing and operations of ETL (Extract, Transform, and Load)
- Research on new methods to improve data quality
- The data architecture can support business requirements
- Integration of multiple programming languages and tools to create structured solutions
- Developing efficient business models through the analysis of disparate data sources
- Collaboration with Data Scientists, Analysts, and various teams.
Those working as Big Data Engineers sometimes require expertise across a wide range of areas. The following are 7 skills every Big Data Engineer should possess:
- The most important skill for Big Data Engineers is programming. Generally, Big Data engineers need practical experience in a popular programming language such as Java, C++, or Python.
- Database and SQL knowledge comes after programming expertise. Understanding the workings of the database will help you better grasp the process. A Relational Database Management system requires the ability to write SQL queries. MySQL, Oracle Database, and Microsoft SQL Server are frequently used database management systems for Big Data engineering.
- A major responsibility of a Big Data Engineer is to be responsible for data warehousing and ETL operations. A data warehouse must be constructed and used for this.
- Knowledge of operating systems is your fourth skill. Big Data tools require operating systems. Thus, you must be familiar with Unix, Linux, Windows, and Solaris.
- Working knowledge of Hadoop tools and frameworks is required. It is common for Big Data engineering practitioners to use Apache Hadoop, which means you must possess knowledge of HDFS, MapReduce, Apache Pig, Hive, & Apache HBase.
- You must have experience with a real-time processing framework such as Apache Spark. When you work as a Big Data Engineer, you will need an analytics engine that works efficiently with batch and real-time data, like Spark. Several live streaming sources such as Twitter, Instagram, Facebook, and so on can be processed by Spark.
- Lastly, you will need to have experience with data mining, data wrangling, and data modelling techniques. Data mining and wrangling entail steps for preprocessing and cleaning the data, finding trends in the data, and preparing the data for analysis.
The majority of data engineers hold an undergraduate degree in math, science, or a business-related field. This kind of degree enables graduates to mine and query data using programming languages, and in some cases to use big data SQL engines. After completing their bachelor’s degrees, most data engineers enter a career as entry-level employees. The following five steps can help you become a data engineer:
- Work on projects after completing your bachelor’s degree.
- Become an expert in computing, data analysis, and big data.
- Take a job at an entry-level position.
- Obtain additional certifications related to big data or professional engineering.
- Obtain a higher education degree in computer science, engineering, applied mathematics, or physics.
- Colleges and universities generally require GRE and GMAT scores for entrance. To apply for the degree program, it is better to have one of these scores.
- English proficiency is required for students studying abroad.
What is the salary of a big data engineer in India?
Big Data Engineers in India are paid an average salary of Rs. 7,52,972 per year.
What is the job description for Big Data Engineers?
Big Data Engineers are responsible for the following duties and responsibilities according to the job profile:
- A methodology to select and integrate Big Data platforms and tools to provide desired services
- Identifying various methods of retaining data
- ETL process implementation
- Analyzing performances and recommending important changes to infrastructure
What is the scope of big data engineers in the industry?
After completing the Big Data engineering course, a Big Data engineer will have tremendous opportunities waiting for him/her. With almost all companies adopting modern technology, managing vast amounts of data has become an essential task. Thus, a Big Data engineer has a lot of scope in the current market.
Around the world, students are becoming increasingly interested in data science. Globally, all industries are lacking skilled Big Data engineers. As a result, the Big Data job description also includes various aspects, and the salary structure for Big Data engineers is highly variable. Students today can secure a prosperous future by studying Big Data engineering.
|
OPCFW_CODE
|
Changing the degree of parallelism at run time
You can change the width of a parallel region while the job is running.
Increasing the parallel region width can improve application throughput, but can use more resources. You can make more resources available to the parallel region when you change its width. Resources that are not needed are released automatically. Alternatively, you can decrease the parallel region width to reduce the required resources.
- You cannot modify the parallel region width of a running job when the original job was submitted with the parallel region fusion (fusionType) parameter set to channelExlocation (prevent fusion across channels). You must resubmit the job with the fusionType parameter set to noChannelInfluence (default).
- You must have add and delete authority for the appropriate jobgroup_name instance object. By default, the InstanceAdministrator role has this authority. The user who submits the job also has this authority.
- You can add resources only if the instance's resource allocation mode is scoped to the job (instance.applicationResourceAllocationMode=job).
- Checkpointing is not supported in the parallel region, including collateral operators.
- Consistent regions are not supported in the parallel region, including collateral operators.
- When you change the width of a parallel region, the operators within the region are stopped and restarted, as well as the PEs that contain them. If those operators are in PEs that also contain operators from outside of the parallel region, those operators from outside of the parallel region are also stopped and restarted. This operation can result in the loss of tuples and the loss of operator state.
- Which operators are in each PE can change. The PE ID might not be consistent for the lifetime of an operator. You can use the capturestate command to determine the PE metrics or the streamtool lspes --long command to see the PE IDs.
- The operation might take some time, depending on the number of operators that are affected.
- Confirm with the application developer that changing the parallel region width will not interfere with any settings in the running application, for example, if the region includes sink operators that are matched with an externally partitioned system.
- You can change the parallel width at any level of a nested parallel region, but the change will apply to all parallel replicas at that level. For information, see Nested parallel regions.
You can change the parallel region width of a running job in the following ways:
- Run the streamtool updateoperators command. See information about the --parallelRegionWidth and the --addJobResources parameters: streamtool updateoperators.
- Create a job configuration overlay file that includes the targetParallelRegion and the numberOfResourcesToBeAdded parameters. Then you can run the streamtool updateoperators command with those parameters. For information about job configuration overlay, see Job configuration overlays and the Job configuration overlays reference.
- Streams Console: The action can be launched from multiple locations. One way is from the Application Dashboard. Open the Streams Graph and select the job. Open the job properties by hovering your mouse over the information (i) icon, and then click Modify Parallel Region.
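As a hedged sketch of the job configuration overlay route: the targetParallelRegion and numberOfResourcesToBeAdded parameter names come from the documentation above, but the surrounding JSON structure, the region name sample::Comp.myRegion, and the values shown here are assumptions; consult the Job configuration overlays reference for the exact schema.

```json
{
  "jobConfigOverlays": [
    {
      "configInstructions": {
        "adjustmentSection": [
          {
            "targetParallelRegion": "sample::Comp.myRegion",
            "numberOfResourcesToBeAdded": 1
          }
        ]
      }
    }
  ]
}
```

You would then pass a file like this to the streamtool updateoperators command described above.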
|
OPCFW_CODE
|
Version 11.10.0 (June 10, 2020)
- Added console 'Search with Google' shortcut
- Added console clear shortcut
- Improved saving scheduler changes
- Improved update notifications
- The 'ConsoleReading' method is now the default ServerOnlineDetectionMethod (on new mcss instances).
- Reduced initial update checking delay by 20 seconds
- Auto update check every 3 days
- Fixed UTF-8 link name in program settings
- Fixed UTF-8 support for backup file...
MC Server Soft // Server Wrapper for Windows 11.10.0
The oldest maintained Minecraft Server Wrapper for Windows.
Version 11.9.1 (May 24, 2020)
- Main window UI tweaks
- Improved DPI scaling
- Fixed cut-off player graph
- Fixed escape characters being shown in chat
- Fixed navigation hover ghosting
Version 11.9.0 (May 16, 2020)
- Added optional server description
- Reworked the Keep Online setting (None, Elevated, Aggressive)
- Made Kill option available to starting servers (top menu)
- Mcss will warn if it's running from the temp folder
- Main window can now be resized from all sides
- Mcss logs are now instantly written to disk
- Fixed migration status not updating when finished
- Fixed rogue PowerShell scripts not getting killed
Version 11.8.3 (May 1, 2020)
- Fixed crash when viewing backups on the first of each month
- Fixed mcss options window size issue
- Fixed EULA value saved with a capital letter
Version 11.8.2 (April 30, 2020)
- Added option to change close preference
- Increased visibility of selected server tab
- Updated patreon list, special thanks to HawkSlayer
- Removed the standalone MinimizeToSystemTray option (in favor of the close preference option)
- Fixed Discord link
- Fixed "No process is associated with this object." issues
- Fixed main window resize artifacting when maximized
- Fixed console not scrolling to...
Version 11.8.1 (April 25, 2020)
- Added link to FAQ in the top Help menu
- Added UTF-8 help details to the Console Text Encoding setting
- Fixed crash when the clipboard is used by another program
- Fixed broken 'Always skip this step' option in the update server window
- Fixed Process.get_ProcessName() issue
Version 11.8.0 (April 23, 2020)
- Added an option to change the way online servers are detected, because older operating systems like Windows 7 and Windows Server 2012 don't support the new method (introduced in 11.7.0)
- Added dialog at mcss start to determine if you are affected by the above issue
- Reduced overall memory usage
- Mcss no longer needs a restart to apply some advanced/delicate settings
- Updated toggle button UI
- Updated patreon list
Version 11.7.1 (April 17, 2020)
- Fixed backup suspend when a backup is manually triggered
- Fixed online status not always getting triggered
Version 11.7.0 (April 15, 2020)
- Changed the way servers detect their online status.
- Mcss no longer reads the console for server statuses
- Added 'update restart reminder' to the main window when updated
- Removed obsolete --nojline startup property (new servers)
- Removed stopping status detection from console-based stops
- Fixed backup suspend option for Proxy and Vanilla servers
- Fixed missing proxy server port in the UI
- Fixed console...
Version 11.6.0 (April 11, 2020)
- Added backups
- Added option to schedule backups
- Reverted default console encoding to 'System Default'
- You are actually reading this? Awesome, have a great day!
- Config Editor now also displays txt files
- Dropped support for legacy tasks (pre-11.1.0, if you used mcss after this, tasks will already have been converted)
- Fixed incorrect starting delay for Timed Tasks
- Fixed crash on server start/restart...
|
OPCFW_CODE
|
P. 4685. The object of mass \(\displaystyle M=4m\), shown in the figure, can slide freely along the horizontal tabletop. The curved part of its surface is horizontal at its right side. A small body of mass \(\displaystyle m\) is placed onto the object of mass \(\displaystyle M\) at a height of \(\displaystyle H\), measured from the horizontal tabletop, and then the system is released. When the small object of mass \(\displaystyle m\) hits the table, the two objects are at a distance of \(\displaystyle d=0.6\) m. Friction is negligible everywhere and \(\displaystyle h=0.2\) m.
\(\displaystyle a)\) What are the speeds of the two objects at the moment when the small one of mass \(\displaystyle m\) leaves the big one of mass \(\displaystyle M\)?
\(\displaystyle b)\) At what speed does the small object hit the table?
\(\displaystyle c)\) Calculate the height \(\displaystyle H\).
P. 4686. Two identical discs of mass \(\displaystyle m\) and of radius \(\displaystyle R\) touch each other and move with the same velocity, perpendicularly to the line segment which joins their centres of mass, along the surface of a horizontal air-cushioned tabletop. A third disc of mass \(\displaystyle M\) and of radius \(\displaystyle R\) is at rest at a point on the perpendicular bisector of the line segment joining the centres of mass of the two moving discs. The two moving discs collide totally elastically with the third one, which is at rest. There is no friction between the rims of the discs.
\(\displaystyle a)\) If \(\displaystyle M=m\), what will the speed of the discs be after the collision and what is the direction of their motion?
\(\displaystyle b)\) What should the ratio of \(\displaystyle M/m\) be in order that after the collision the two discs of mass \(\displaystyle m\) move perpendicularly to their initial velocity?
P. 4689. The working substance in a thermodynamic heat engine is a sample of diatomic ideal gas, which is taken through a cyclic process which consists of two isobaric and two isochoric processes. The highest temperature of the gas in this cyclic process is 500 K. At those two states when the gas changes from the isochoric process to the isobaric process the temperatures are equal.
What is the lowest temperature of the gas during the cyclic process if the greatest possible efficiency of the heat engine working between the two extreme temperature values could be 9.9-times as much as the thermodynamic efficiency of the engine in the above described case?
P. 4690. Two well-insulated condensers are charged in order to experiment with them. If they are connected in series, such that the oppositely charged terminals are connected, then altogether 166 V is measured across them. If they are connected in parallel, such that like terminals are connected, then the voltage across them is 74.4 V. In this latter case the energy loss is equal to the original energy of the condenser of capacitance \(\displaystyle 10~\mu \rm F\), which had the smaller voltage across its plates.
\(\displaystyle a)\) What was the original voltage of the two condensers?
\(\displaystyle b)\) What is the capacitance of the other condenser?
P. 4691. The total cost of building the Solar Park at Újszilvás was 618 million Forints. The peak power output of the park is 400 kW, and it generated 685 MWh electrical energy in the first complete year, after establishing it. The peak power output of the nuclear power station at Paks is 2000 MW and in 2013 it generated 15.37 TWh electrical energy.
\(\displaystyle a)\) How can it be reasoned that the ratio of the peak powers is not the same as the ratio of the annual energy generation?
\(\displaystyle b)\) What would the total cost of replacing the nuclear power station at Paks with solar parks be?
P. 4692. A straight coil of length \(\displaystyle L=20\) cm, of cross-section of \(\displaystyle A=12~\rm cm^2\) and of number of turns \(\displaystyle N=400\) can rotate in the horizontal plane along a vertical axle through its centre. The coil is in uniform horizontal magnetic field of induction \(\displaystyle B=0.05\) T. Initially the coil is perpendicular to the magnetic induction, and there are two unstretched springs attached to both of its ends, as shown in the figure. The other ends of the springs are also fixed and initially the springs are perpendicular to the coil. The spring constant of the springs is \(\displaystyle D=24\) N/m, and their length is \(\displaystyle \ell_0=20\) cm.
After the switch is turned on, there will be current in the coil. What is the current in the coil if the coil will be in equilibrium after turning an angle of \(\displaystyle \alpha=60^\circ\)?
Send your solutions to the following postal address: KöMaL Szerkesztőség (KöMaL feladatok), Pf. 32. 1518, Hungary, or by e-mail.
|
OPCFW_CODE
|
pdf.js screws up openload
Attach (recommended) or Link to PDF file here:
Configuration:
Web browser and its version: Chrome Version 66.0.3348.0 (Official Build) canary (64-bit), also in Chrome Version 66.0.3343.3 (Official Build) dev (64-bit)
Operating system and its version: Win 10 x64
PDF.js version: 2.0.301
Is a browser extension: Yes
Steps to reproduce the problem:
Go to https://openload.co/f/eWCiP4gSM80/NBA-2018.02.14_MIA-PHI-2.torrent
What is the expected behavior? (add screenshot)
A screen to download a file should open
What went wrong? (add screenshot)
openload thinks there's an embedding violation
Link to a viewer (if hosted on a site other than mozilla.github.io/pdf.js or as Firefox/Chrome extension):
I can't reproduce this in Chromium 64.0.3282.167 on Linux. I installed the PDF Viewer and uBlock Origin (as seen in your screenshot), visited the page and clicked on the Free Download button (I also tried waiting before clicking).
What makes you believe that this issue is caused by PDF.js?
I'm on Chrome Version 66.0.3346.8 (Official Build) dev (64-bit), Win 10 x64.
PDF Viewer is the only extension.
Without it there's no problem with openload.
With it, the problem occurs.
PS
1. No need for uBlock Origin
2. It's Win 10, not Unix
3. It's Chrome, not Chromium
@qwer1304 If I look at your screenshot, I clearly see that uBlock Origin is installed (and at least one other extension). Did you try to reproduce this bug without those other extensions enabled? Or similarly, can you reproduce the problem if you temporarily disable PDF.js's PDF Viewer?
Before testing, could you visit chrome://net-export to capture the network requests, and upon completion of the test stop the capture and mail it to me? Then I can check whether there is anything special going on.
I repeated the experiments with ONLY PDF Viewer installed and the results are as reported above.
Sent you two logs w/ and w/o
The above log entry does not indicate that PDF.js itself is blocking anything. It merely shows that PDF.js was at some point notified of a request, and that the request handling was blocked until the PDF viewer extension returned.
I've managed to reproduce the issue in Chrome on Windows 10, and found a specific snippet in the page that is responsible for the behavior you're observing. Deobfuscated, the snippet is:
function isSandboxed(_0x7089x2) {
try {
if (window.frameElement.hasAttribute("sandbox")) {
_0x7089x2();
return
}
} catch (err) {};
if (location.href.indexOf("data") != 0 && document.domain == "") {
_0x7089x2();
return
};
if (typeof navigator.plugins != "undefined" && typeof navigator.plugins.namedItem != "undefined" && navigator.plugins.namedItem("Chrome PDF Viewer") != null) {
var _0x7089x3 = document.createElement("object");
_0x7089x3.onerror = function() {
_0x7089x2()
};
_0x7089x3.setAttribute("type", "application/pdf");
_0x7089x3.setAttribute("style", "visibility:hidden;width:0;height:0;position:absolute;top:-99px;");
_0x7089x3.setAttribute("data", "data:application/pdf;base64,JVBERi0xLg0KdHJhaWxlcjw8L1Jvb3Q8PC9QYWdlczw8L0tpZHNbPDwvTWVkaWFCb3hbMCAwIDMgM10+Pl0+Pj4+Pj4=");
document.body.appendChild(_0x7089x3);
setTimeout(function() {
_0x7089x3.parentElement.removeChild(_0x7089x3)
}, 150)
}
}
isSandboxed(function() {
location.href = "/embedblocked?referer=" + encodeURIComponent(document.referrer.substring(0, 150))
})
They are trying to detect whether the Chrome PDF Viewer plugin exists in the browser, and if so, they try to load the plugin and see if an error event is dispatched (and if so, assume that the embed is blocked).
As a side effect of how embedded PDFs are detected in PDF.js, the "error" event is triggered when PDF.js renders the viewer in <object> elements, and there is no good way to prevent the "error" event from triggering.
I'm closing this bug since I currently don't plan to add a work-around to support this website, but I will follow up by mail so you can use the site again.
Could you add a workaround for this? Or at least disable PDF.js from loading on openload.co?
|
GITHUB_ARCHIVE
|
l10n + search by dialCode + custom countries list + improvements
This contribution is a fork of #227 with some improvements. The initial PR message can be found at the end.
Modification
add autofillHint with the possibility of disabling it
a search starting with '+' is treated as a search by dialCode (prevents returning an empty list of countries when the user types only "+")
fix sorting of country names after a search
the default validator returns an error if the value is not numeric
add a SizedBox before the dropdown, for better visibility and to prevent the flag from touching the border when there is no dropdown icon
update the countries argument:
After seeing a lot of issues requesting the change of min & max length (#225, #238, ...), some PRs (#218, #212, ...), and also personally wanting to change the file, I found it would be better to change the format of the countries argument to accept a list of Country instead of a list of ISO codes. This way you can still provide a limited list of countries, but you can also change parameters of the initial country list. In addition, after adding l10n, this change is perfect for adding new locales to the file. I think it will reduce the time spent maintaining the package, and if the package is no longer maintained, people can still use it without forking it.
I hope this PR will help and feel free to ask me modifications or question !
Initial message
This contribution contains a set of improvements to the package,
It contains:
Added a new property to the Country model called nameTranslations; it is a map that has a language code as a key and the localized name corresponding to that language as a value.
Search results now match when typing the name in any of the provided translations
Fix searching for country code does not work when there's a "+".
Use flag emojis instead of png to reduce the package size (Personal opinion).
Sort Countries ascending by name ... for all languages
@marcaureln @vanshg395 Any chance you can take a look at this and hopefully merge?
@jimmyff for sure. I took a look, everything looks great. @launay12u thanks for your work. Can you allow edits from maintainers? I have a few commits to push before merging. The change is about the flag PNGs: the emojis are great on mobile, but they aren't rendering in the browser. We get black-and-white outline drawings instead. So we'll keep the images for now. But if you have an idea on how to fix it, that would be great!
@all-contributors please add @launay12u for code
Hey !
Thanks for looking to merge my PR 😄
I think edits are already allowed:
Do I need to do something else?
As I said, it's a fork of another fork; is that a problem?
I'm still not able to push my changes. Can you do it for me, and I'll merge the PR directly.
This is what I did:
I first restored the assets folder and uncommented the pubspec.yaml
Updated lib/country_picker_dialog.dart and lib/intl_phone_field.dart to use the flag images when running the app in the browser; I did something like this:
import 'package:flutter/foundation.dart' show kIsWeb;
....
kIsWeb
? Image.asset(
'assets/flags/${_filteredCountries[index].code.toLowerCase()}.png',
package: 'intl_phone_field',
width: 32,
)
: Text(
_filteredCountries[index].flag,
style: TextStyle(fontSize: 18),
),
...
@marcaureln It's done. I also resolved the conflict on the lock file (taking the current lockfile instead of mine).
Feel free to ask me other change.
Note that I now run Flutter 3.10.0, so I can't run flutter pub get anymore (but I can downgrade if needed).
Maybe it would be good to accept newer versions of Flutter 😄
PR merged, huge thanks to @launay12u and @jimmyff!
@launay12u I would be glad to merge another PR for Flutter 3.x :wink:
I've included the fix for Italy (along with Malaysia and Switzerland too) on my branch. Although my PR also includes a breaking API change, it now includes a validates bool in the onChanged callback:
https://github.com/vanshg395/intl_phone_field/pull/286
|
GITHUB_ARCHIVE
|
|PRICE||R 18,000.00 Incl. VAT|
|MODEL||Single Action Army|
|CALIBER||.38 Special / .357 Magnum|
|CONDITION||Fair to good used condition|
FIREARM HISTORY AND FEATURES:
The Colt Single Action Army, also known as the SAA, Model P, Peacemaker and Model 1873, is a 6-shot, single-action revolver designed in 1872 by Colt’s Patent Firearms Manufacturing Company—today’s Colt’s Manufacturing Company—and is still in production to this day.
The Colt SAA “Peacemaker” revolver is a famous piece of Americana.
The revolver was popular with ranchers, lawmen, and outlaws alike, but as of the early 21st century, models are mostly bought by collectors and re-enactors.
Its design has influenced the production of numerous other models from other companies.
The first Colt Model 1873 was designed for the U.S. government service revolver trials of 1872 and was adopted as the standard military service revolver until 1892.
These models issued to the U.S. Cavalry were chambered in .45 Colt and had 7.5″ barrels.
Since its original introduction the Colt SAA has been offered in over 30 different calibers and various barrel lengths, but its overall appearance has remained consistent since 1873. Colt has cancelled its production twice, but brought it back due to popular demand.
The production of Colt SAAs can be differentiated into three generations, with various small changes to their manufacturing and range:
First Generation (1873 – 1941)
Second Generation (1957 – 1974)
Third Generation (1976 – present)
This particular revolver is a Second Generation SAA manufactured in 1970, with a 4.75″ length barrel.
This gun is in fair to good, used condition. It is mechanically sound and functional.
There is a fair amount of light wear along the left side of the barrel, particularly around the muzzle, as well as around the front of the ejector sleeve.
There is also a fair amount of uniform wear as well as wear marks to the cylinder, trigger guard and grip straps, however all this wear is just holster wear, with no deep scratches or damage.
The colour case hardened frame is still in good condition with no visible scratches or marks, and just some very light wear to the top strap.
The original Colt plastic grips are still in very good condition, with just the left panel being a bit dirty (easily cleaned).
The action still feels solid.
Includes a leather holster.
The firearm is suitable to be used for:
- Cowboy Action Shooting
- Target shooting
The revolver may also have some collectors value.
|
OPCFW_CODE
|
Stranded Deep is a survival game and the first release from Beam Team Games. It entered early access on desktop in 2015 and reached full release in 2022, and Beam Team released versions for PS4, Nintendo Switch, and Xbox One in 2020. In the game, you’re stranded on a tropical island full of natural dangers, and your goal is to survive and ultimately escape. There are a lot of Stranded Deep cheat codes that can make this task easier, or just let you explore the island setting at your leisure.
Stranded Deep Premise
The premise of Stranded Deep is simple, and it’s as old as Robinson Crusoe. You are the sole survivor of a plane crash, and you need to keep yourself alive long enough to find a way off of the Pacific island you’ve washed up on. Keep your hydration, hunger, sleep, and health bars as high as you can in the process. You can even explore the ocean, foraging for supplies in shipwrecks and plane wreckage. Because the world of Stranded Deep is procedurally generated, no two playthroughs are the same.
Stranded Deep Cheat Codes for Desktop
To enable Stranded Deep cheats on desktop, press the backslash button (\) to enable the admin console. This opens up a menu where you can enter text. When you’re done, type another backslash to set your changes. Note that you need to be in single-player mode for these to work.
The most useful codes to enter in Stranded Deep are:
- dev.god: Gives you the ability to fly around the map (make sure to be on the ground when you turn it off, though!)
- dev.console True: Displays the full developer console
- dev.console False: Hides the full developer console
- fps True: Displays frames per second
- fps False: Hides frames per second
- dev.log.dump: Spawns a log on the desktop
- devtools.components.camera.colorgrading True: Turns the fog on
- devtools.components.camera.colorgrading False: Turns the fog off
- dev.components.camera.reflections True: Turns on reflection effects in fog
- dev.components.camera.reflections False: Turns off reflection effects in fog
- dev.options list: Shows the current output for the game
- clear: Erases the history of your command entries
In addition, the command “help list” will display the entire command list for the game. Finally, you can spawn any item you want using the “dev.console True” command and then pressing the forward slash (/) key to open the item spawn menu.
Stranded Deep Custom Seed Codes for Xbox One and PS4
Because consoles don’t have a keyboard, you can’t access the admin console like you can for the desktop version. However, you still have access to Cartographer Mode, which allows you to create and customize islands. And through this mode, you can enter a custom world seed. Simply select “Random World” and follow the prompt to enter a new seed. Then, when you start your new game, make sure to check “Use Existing World.”
The best way to find the perfect seed code in Stranded Deep is on the game’s official forum. There, other players share codes that have made their game easier or harder. Some of these seeds generate useful loot near your spawn point, while others just create really gorgeous islands.
Stranded Deep Cheat Code FAQs
How do you activate cheats in Stranded Deep on desktop?
To enter cheat codes in the desktop version of Stranded Deep, press the backslash key (\) to enable the developer console. Then type in your cheat code. Type another backslash to hide the console.
The image featured at the top of this post is ©Stranded Deep press kit.
|
OPCFW_CODE
|
Is this project dead?
is this project dead?
why ?
i would like to see a coffeekup package for meteorjs at https://atmospherejs.com/
+1 Any word on how to use coffee(k|c)up with meteor?
why not jade?
If I can do everything using CoffeeScript, why should I learn another thing only for a CSS replacement?
It doesn't make sense unless we want to go with mainstream bullshit.
On 11/11/2015 13:06, "Asif Saifuddin Auvi"<EMAIL_ADDRESS>wrote:
why not jade?
—
Reply to this email directly or view it on GitHub
https://github.com/mauricemach/coffeekup/issues/120#issuecomment-155781786
.
I agree with the sentiment above although not the hostility.
Hostility removed. Just a joke ;)
I liked the philosophy, but using unmaintained software at the enterprise level is a bit risky.
This time I have to agree with you.
Mainstream to the rescue
But I like CoffeeScript-ish stuff a lot. I use Python heavily for the backend, and Vue.js + CoffeeScript for the client side.
you are a good guy :P
I'm doing some Vue.js stuff as we speak.
check
https://github.com/zappajs/zappajs
Another great thing that was lost in time; it still supports Express 4.
That's awesome!! It would help me a lot to use Vue with Coffee :+1:
I have a proposal: let's create an organization of CoffeeKup users and volunteers to officially maintain this good project long-term, with a team :)
I follow you :) how do you plan to do it ?
I have started the process; please wait a few moments to get notified, and let me know what you guys think about the direction.
Great Asif!
New organisation created, named coffeekup, as cup is not available. This will also let future users of the library commemorate the initial author and his naming convention.
Second point: what should the library name be, coffeekup or coffeecup? Which will be better for npm install? Thoughts, guys? Who of you is willing to participate in the organization as dev members?
https://github.com/coffeekup
I want to invite some of you who worked on this before and are willing to participate :)
#121
no one responded till now
Don't understand why.
I bet if Facebook looked at it only once, there would be thousands of talented people here...
see this https://github.com/goodeggs/teacup
@auvipy how do we join? I want to port/use coffeekup to/with meteor!
You are welcome on the team! I will send you an invite.
|
GITHUB_ARCHIVE
|
One-way random effects ANOVA (Model II)
In one-way ANOVA we have a single 'treatment' factor with several levels (= groups), and replicated observations at each level. In random effects one-way ANOVA, the levels or groups being compared are chosen at random. This is in contrast to fixed effects ANOVA, where the treatment levels are fixed by the researcher. Random effects ANOVA is appropriate in three situations:
The mathematical model for one-way random effects ANOVA is similar to (but not identical to) the model for one-way fixed effects ANOVA. It again describes the effects that determine the value of any given observation, but this time the 'treatment' factor is random rather than fixed: yij = μ + Ai + εij, where the group effects Ai are random variables with mean zero and variance sA2, and the errors εij have variance s2.
Expected mean squares
The methodology for working out sums of squares is identical to that used for fixed-effects ANOVA. Again we are not assuming equal sample sizes in each group.
These values are then inserted into the ANOVA table (see below), along with the degrees of freedom, and mean squares obtained by dividing the sums of squares by their respective degrees of freedom.
The F-ratio for the 'groups effect' is obtained by dividing MSBetween by MSWithin. The P-value of this F-ratio is then obtained for k − 1 and N − k degrees of freedom.
Estimating variance components
Since we are now assuming random 'treatment' effects, there is no point estimating the magnitude of those effects (that is the means), nor the differences between means. For example, if we are making (n =) 2 measurements of weight on each of (k =) 20 subjects, we are not interested in which subject happens to be the heaviest. What is of interest is the amount of variability between subjects compared to the variability between the paired measurements on each subject. In other words, we need to estimate the variance components.
The variance within groups is estimated by MSW. The variance between groups is known as the added variance component and is estimated as sA2 = (MSB - MSW)/n0, where n0 is the (weighted) average group size, equal to n when all groups are the same size.
The added variance component (sA2) can be quoted as an absolute measure of the variability between groups, or it can be quoted relative to the total variability (s2 + sA2). When it is quoted as a proportion of the total variability, it is known as the intraclass correlation coefficient.
The intraclass correlation coefficient
The intraclass correlation coefficient is the proportion that the between-groups variance comprises of the (between-groups + residual) variance. When the coefficient is high, it means that most of the variation is between groups. Hence it is a measure of similarity among replicates within a group relative to the difference between groups. When subjects are the 'groups', and the replicates are repeated observations made on each subject, the intraclass correlation coefficient provides another measure of repeatability.
The intraclass correlation coefficient is calculated from the variance components derived from a random effects analysis of variance. For now we will only consider its estimation when we are doing a one-way analysis of variance.
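The variance-component arithmetic is easy to carry out directly. Here is a minimal sketch in plain Python (the one_way_icc helper name and the example weight data are illustrative assumptions, not from the text), allowing unequal group sizes:

```python
def one_way_icc(groups):
    """One-way random effects ICC from a list of groups (unequal sizes allowed)."""
    k = len(groups)
    sizes = [len(g) for g in groups]
    N = sum(sizes)
    grand_mean = sum(x for g in groups for x in g) / N

    # Between- and within-group sums of squares
    ss_between = sum(n * (sum(g) / n - grand_mean) ** 2
                     for g, n in zip(groups, sizes))
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)

    ms_between = ss_between / (k - 1)  # MSB
    ms_within = ss_within / (N - k)    # MSW, estimates s^2

    # Adjusted mean group size n0 (reduces to n for equal-sized groups)
    n0 = (N - sum(n * n for n in sizes) / N) / (k - 1)

    # Added variance component and intraclass correlation coefficient
    s2_added = (ms_between - ms_within) / n0
    return s2_added / (ms_within + s2_added)

# Two repeated weighings of each of four subjects
print(round(one_way_icc([[10.0, 10.2], [12.1, 11.9], [9.5, 9.7], [13.0, 12.8]]), 3))  # prints 0.992
```

A high value here says the repeated measurements on each subject agree closely relative to the differences between subjects, which is exactly the repeatability interpretation discussed above.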
Note that the intraclass correlation coefficient is sensitive to the nature of the sample used to estimate it. For example, if the sample is homogeneous (that is the between subject variance is very small), then the within subject variance will be proportionally larger and the ICC will be low. In other words it's all relative. So whenever you interpret a correlation, remember to take into consideration the sample that was used to calculate it. The often-reproduced table which shows ranges of acceptable and unacceptable ICC values should not be used as it is meaningless.
One might think the Pearson correlation coefficient could be used to provide a measure of repeatability, at least when group size (n) = 2. Unfortunately that coefficient overestimates the true correlation for small sample sizes (less than ~15). In fact, the intraclass correlation is equivalent to the appropriate average of the Pearson correlations between all pairs of tests.
There are other intraclass correlation coefficients that can be used in special situations. Unfortunately these have resulted in a certain amount of confusion over the correct formulation for the most frequently used version of the ICC given above. For example, there is an average measure intraclass correlation coefficient. This is appropriate if one wishes to assess the reliability of a mean measure based on multiple measurements on each subject. Some sources give this as [MSB-MSW]/MSW, or use what is known as the Spearman-Brown prophecy formula, (2*ICC)/(1+ICC). One can also use different ANOVA models, for example a two-way analysis of variance. Details are given in the references on the ICC given below.
In random effects ANOVA the groups (usually subjects) should be a random sample from a larger population. Otherwise, the same assumptions must hold as for a fixed effects ANOVA if one is to make valid statistical tests such as the F-ratio test, namely normality and homogeneity of variances.
Note, however, that estimates of the ICC for descriptive purposes only do not depend on either normality or homogeneity of variances. For example, they can be calculated on dichotomous data coded as 0s and 1s before performing the ANOVA. In this case, of course, the normal-approximation confidence interval for the ICC (given by some statistical packages) would not be valid.
|
OPCFW_CODE
|
As I continue my enthralled read through “The Wisdom of Teams: Creating the High-Performance Organization” I am moved to share another core concept that deserves to be considered essential for Agile Work:
The Performance Goal
This concept and practice is an essential condition for a team to become a high performance team. The Performance Goal is a specific, measurable, challenging goal that is given to and/or adopted by the team. It is a statement or description of a goal that answers “why?” and “what?” questions, but specifically avoids answering “how?”. It is not a description of activities, it is a statement of desired results. The team is left with the full authority to answer “how?” and implement it.
This concept is essential for setting the initial boundaries of self-organization. By defining “what” and “why”, the team is left free to be creative about the solution. The Performance Goal is also essential to building team accountability (as opposed to individual or externalized accountability). Every action, plan, mistake and success are oriented around the Performance Goal.
From the book:
The hunger for performance is far more important to team success than team-building exercises, special incentives, or team leaders with ideal profiles. In fact, teams often form around such challenges without any help or support from management. Conversely, potential teams without such challenges usually fail to become teams.
I would also like to point out a great blog entry I found that shows some of the other side of dealing with teams and presents some cautionary words about the potential pitfalls of working in teams.
In an Agile Work environment, the starting point for a performance goal is simply the delivery of valuable work at the end of their very first iteration. This is often a substantial challenge to a team and an organization. For some teams that have worked for a long time in a “waterfall” or phase-based project environment, it can be almost unthinkable that valuable results could be delivered in one tenth or one twentieth of the “normal” amount of time.
However, simply delivering value at the end of each iteration is probably not going to sustain the development of a high performance team for very long. Rather, the overall objective or goal of the project has to be important and compelling. Much work these days is _not_ important and compelling. In fact, many people become cynical about work because they are stuck doing a high proportion of work that is bureaucratic or due to chaotic circumstances.
As a reminder, the books “Good to Great” and “Built to Last” both discuss the importance of challenging, important goals. The wording is different, but the concepts all map to the idea of a Performance Goal. In “Good to Great” it is the “Hedgehog Concept”. In “Built to Last” it is the “Big Hairy Audacious Goals” (no kidding!). I imagine this concept comes up in many other good books about team and organizational effectiveness. I would love suggestions on other good books to read about this! Please write them in the comments.
I frequently work with organizations where a team has been formed, told to use agile methods, and then also told how to do their work. Really great examples of this are things like: “we want you to self-organize, but you have to build this huge system using J2EE.” The problem with this is simply that it may in fact be ten times less expensive to build the system with Ruby. However, someone has decided (possibly for defensible reasons) that J2EE is the technology platform that must be used. In this circumstance, someone external to the team has stepped over the boundary of “why” and “what” and also included some “how” in the team’s goals. The team is not even allowed to consider the possibility that something else might work just as well and be much less expensive. Not only that, but the stakeholders haven’t even really stated “why” the system is being built, and so the team can’t evaluate technology choices. There is no standard around which to self-organize. I admit that I am using a simplistic example here, but the pattern is something that I have seen over and over again.
|
OPCFW_CODE
|
A Unity ID allows you to buy and/or subscribe to Unity products and services, shop in the Asset Store and participate
in the Unity community.
Discussion in 'Assets and Asset Store' started by NemoKrad, Feb 1, 2016.
Please make it fast enough to be usable in real world scenarios or I'll cry
That is stunning
Just another shot. Props (mesh) painted with VTP. Terrain Shader is Distingo+VTP.
I have observed that the shader behaves very differently depending on the near camera plane settings. Is this normal? Is there a way to control it? By the way, I have seen that this parameter also affects antialiasing filters a lot.
Yes it is altering the depth perception of the shader I think. Not sure how I can guard against that, but I will look into it.
Actually I would find it useful as a parameter if it were independent of the camera near plane.
Thanks that's sorted it, no more errors and sorry about that. I started a new project and forgot, did it first time around!
will later versions of Distingo support multi-terrain scenarios?
Does it not now? Sure I ran it on more than one terrain
After stitching my terrains together using https://www.assetstore.unity3d.com/en/#!/content/42671 it seems to not allow for each terrain to change the settings for Distingo
I had to use the earlier posted version that someone else added and they worked.
OK, I have never used that tool. So does it just alter the two terrains so that when placed next to each other their vertex heights match, effectively still leaving you with two terrains?
If that is the case, then all you have to do is add a Distingo script to the other terrain object, or are you asking that a single Distingo script controls ALL the terrain objects in a single game object?
Again some progress on the VTP Integration.
After using Distingo on any computer / project (I checked on 2 computers and around 5 projects) I always get the same errors.
1. Shader error in 'Nature/Terrain/Distingo Standard': maximum ps_4_0 sampler register index (16) exceeded at Assets/Distingo/Shaders/Occlusion/DistingoTerrainSplatmapCommon.cginc(41) (on d3d11)
Compiling Fragment program with DIRECTIONAL SHADOWS_SCREEN LIGHTMAP_OFF DIRLIGHTMAP_COMBINED DYNAMICLIGHTMAP_ON _TERRAIN_NORMAL_MAP
Platform defines: UNITY_ENABLE_REFLECTION_BUFFERS UNITY_PBS_USE_BRDF1 UNITY_SPECCUBE_BOX_PROJECTION UNITY_SPECCUBE_BLENDING UNITY_TEXTURE_ALPHASPLIT_ALLOWED
2. after changing the terrain options (cog) my inspector is stretched; see the screenshot.
Any ideas ?
1. Is the Lightmap static off as described in the docs and the UI tool tips?
2. There is a fix for this in post no. 500, it will also be in the next update.
a. 1. YES
b. 2. Ok
Are you using the Occlusion channels per texture channel? If not switch to the Global Blend option. If you are then something in your scene must be making Unity take up another texture register.
I am starting to think that I will take the occlusion shader out, as it seems to be causing more issues than it is solving.
Today I did an overview vid of Distingo, it will be going up on the asset store with the next update.
One problem I have is the content in the right rail... the component explodes width wise... so I need to scroll way over to see things in the inspector.
look up to post #564, it will direct you to post #500 where you can find the fix for that
It will be in the next update too
@NemoKrad, shameless self-promotion... you should be proud of yourself. The choice of the English accent was also a good touch.
I have spent the week in the field, so this was a pleasant surprise, and a good refresher moving forward with the new features.
Any chance to increase the Splatting Distance to around 8000? I find that value is the minimum required when doing a flight sim, so the terrain doesn't switch to the standard shader too soon.
I'll do that now, I'll set it to 10K just for good measure
I have just updated my documentation ready for the 1.2 Update, if you want a sneak peek, then you can grab a copy either off the front page, or from here.
Can't wait for the VTP section in 1.3.
It will be in the 1.2 update
Uhhh, have to hurry then
VTP UI almost there...
Far UV Multiplier does not work in scene view but it does work in game view. I checked this in two projects, in Unity 5.3.3p1 and 5.3.3p3. I remember that it worked in scene view the first time I tried it. Any ideas?
No, nothing has changed regarding that mechanic since first drop....
I have a question about the use of the blend-map / color-map:
my understanding is that it is supposed to just blend another map on top of the whole terrain, and the slider determines how much it influences / blends the colors. If I turn the slider down with a blend-map applied, the terrain turns black, where I was expecting the blend to have no influence instead. If I turn the slider up, after a certain point it appears as if the brightness just goes up, where I am expecting the blend-map to just take over completely instead.
Am I understanding this mechanic wrong? How exactly does the blend work?
In the docs it states:
Color = Color + (BlendColor * BlendColor.Alpha * BlendPower)
which in my understanding should work exactly like I was thinking: if I set the BlendPower to zero it should not alter the color at all. Not sure why it's turning my terrain black instead...
0 x XYZ = 0
Might be why?
Might be, but if I understand the formula "Color = Color + (BlendColor * BlendColor.Alpha * BlendPower)" correctly, then if the BlendPower is 0, it should be Color = Color + (0), therefore not influencing the color at all. That would be the way I think the feature should work. After all, it should be another way to give more variation to the colors, and be able to control the amount of influence.
I agree with the logic, just saying, sometimes with shaders when you multiply by 0 the result is just black. Depends on what's going on in the shader. Might be an oversight on the developer's part. I don't know. I haven't tried that part of this asset yet hehe.
Yes, I think I have misrepresented that in the doc, the actual calc is this:-
Color = lerp(Color, Color * BlendColor * BlendPower, BlendColor.Alpha);
I will update the document accordingly.
Would you prefer the blend power to control the lerp?
That would actually make more sense, wouldn't it? The lerp alpha would control the visibility of B over A. So if you drop the alpha of the lerp to 0 then it will in fact still render A but not B at all... assuming I am remembering lerp properly.
yea, I think I did it that way as in meta it was suggested that the alpha be used to affect the blend, or at least that is how I interpreted the request
I'll move the lerp control to the slider, so it will look like this:
Color = lerp(Color, Color * BlendColor * BlendColor.Alpha, BlendPower);
Or would this be better
Color = lerp(Color, BlendColor * BlendColor.Alpha, BlendPower);
Or just drop the alpha as a blend key altogether, that may be more sensible...
I guess the latter would be more natural, I'll have a play and see.
Yes, so the formula would then be Color = lerp(Color, Color * BlendColor * BlendColor.Alpha, BlendPower);
which would make more sense to me, and allow better control (I can control the blend texture on a per-texture level with the alpha, and on an overall level with the BlendPower)
edit: not sure about the Color * BlendColor * Alpha part...
I wouldn't drop the alpha part, as this allows you to mask out areas in the blend texture, which might be handy
slightly leaning towards the lerp(Color, BlendColor * Alpha, BlendPower) solution. But without testing which works better it's hard to say for me =)
I really must go to bed, I have been heading to bed for the past hour at least now... lol
Has anyone tested this with mobile?
Someone here said they ran it on Android, but I have never ran it on mobile.
Have you edited the editor file as I described in #500? Make sure you have not removed the set dirty method as that is what tells the editor to update the Scene view.
I'm waiting on this as well, before I buy it. Sadly, I haven't been able to confirm whether it works or not. I would love to know as well. Thanks!
I need to know if its going to work before I buy it.
If you have time, can you please do an android demo with this plugin in use. So we can test it and see its performance.
I don't have android, or ios devices, just a Win10 laptop I am afraid.
new blend calc looks like this:
Color = lerp(Color, Color * BlendColor, BlendColor.Alpha * BlendPower);
Online documentation updated.
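For anyone following along, the behaviour of the new calc can be sanity-checked outside the shader. Here is a quick sketch of the same math in plain Python (not the actual shader code, just the scalar arithmetic):

```python
def lerp(a, b, t):
    """Linear interpolation, matching Cg/HLSL lerp()."""
    return a + (b - a) * t

def blend(color, blend_color, blend_alpha, blend_power):
    # New calc: Color = lerp(Color, Color * BlendColor, BlendColor.Alpha * BlendPower)
    return lerp(color, color * blend_color, blend_alpha * blend_power)

# BlendPower = 0 leaves the base color untouched (no more black terrain)
print(blend(0.8, 0.2, 1.0, 0.0))            # → 0.8
# BlendPower = 1 with full alpha multiplies the blend color fully in
print(round(blend(0.8, 0.2, 1.0, 1.0), 4))  # → 0.16
```

It also shows why the alpha still works as a mask: an alpha of 0 drops the lerp weight to 0, so the base color renders unchanged.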
That's the same calc I ended up with while messing with your shaders =)
The whole section should look like this now, right?:
float4 c = tex2D(GlobalBlend,IN.texcoord);
sc = lerp(sc, sc*c, c.a*BlendPower);
One suggestion still: as the lerp only uses a float between 0 and 1 for the amount of blend, I ended up changing the editor scripts too, so the slider value for the BlendPower is between 0 and 1.
This way, the blend works perfect.
Looking forward to the next update, so I don't have to use my own version. Thanks!
Yep, exactly that, and I capped the editor the same too
again some update about the Distingo and VTP collaboration. We are quite happy about the progress so far; the UI is integrated in Distingo and the height-based blending and PBS are fully working. Just some more work left to do on the parallax. Sometimes blending is a bit off at the moment, but we are confident we will have that fixed by the release. Thanks of course to Adam for his genius Gaia asset. Support for Gaia will of course also be integrated
Lastly, two screenshots for comparison Feedback is - as always - highly appreciated.
Yeah it's okay I guess. It depends if you like that sort of realism in a 3D environment. What I really like is that I now am going to have to shell out more money, this is fantastic!
|
OPCFW_CODE
|
In the past, Microsoft encompassed a huge variety of different software platforms each targeting a different set of potential customers, partners, sales channels, industries and market segments. This natural segmentation was further exacerbated by acquisitions they’d made over the years, creating autonomous business units within the same company, much like many large enterprises.
Several years ago, a group of Great Plains ERP customers and partners formed a meeting (500 of them) they called Stampede. After Great Plains purchased Solomon, and Microsoft purchased Great Plains (as well as Navision and Axapta), the event blossomed into a much larger gathering aptly named Convergence. Unlike its other events, Convergence was content to simply target ERP and CRM partners, who mostly resisted all efforts to consolidate their products into a single package. GP shops became Dynamics GP, Solomon partners sold Dynamics SL, and so on. For a company with an existing issue with silos, this meant that the partner community became increasingly walled off. Microsoft Business Systems, where future CEO Satya Nadella worked, was an ambitious group but one that remained distinct from partner to partner, customer to customer.
Each year Convergence was held, the individual parts of the Microsoft partner community would descend to watch a series of keynotes, eat lunch and then depart into rooms next to one another. The conference was useful (especially as multiple partners supported the single CRM product, Dynamics CRM) but did little to connect the larger Microsoft strategy to the business leaders at organizations across the globe.
This year was different.
Almost from the opening keynote (Satya Nadella returned after an 8 year hiatus, this time as CEO) the audience could tell this conference was much better than in previous years. Instead of showcasing products with a Dynamics logo emblazoned on them, each speaker got up to show how businesses could use the entire Microsoft set of solutions to be more productive and drive home actionable intelligence to compete more effectively. Product names were largely behind the scenes, leading analysts and partners not in the know to speculate wildly about the functionality of particular items.
After the first session, the logic behind the past year of Microsoft's conference strategy had become apparent: it dealt with the ecosystem fragmentation problem. Several months ago, when Microsoft consolidated its disparate technical conferences into a single conference (Ignite, in May), the thought was that they were somehow "cutting back" or "trimming corners". But at Convergence, the strategy became quite simple: each conference Microsoft throws targets a different type of individual. Technical decision makers are the ones wined, dined and educated at Ignite. Developers are steered towards the Build conference to learn and network. The partner channel, as usual, will continue to attend the Worldwide Partner Conference in the summer, to help celebrate a year's worth of work alongside their Microsoft colleagues. Business decision makers, the key to the future of Microsoft, will now be the target at Convergence.
The switch was swift and effective: almost every customer I saw at Convergence spoke positively about the solutions (and, yes, products) at the conference, despite many of the featured technologies having been built months earlier and debuted at WPC or TechEd (the precursor to Ignite). These were not CIOs or technical folk, however: they were vice presidents of sales, heads of marketing, CFOs and controllers. What were they excited about? Common business problems being solved: collaboration with colleagues, personal empowerment on the road and on mobile devices, through to deep analytic and predictive capabilities. The specific product names were unimportant, but the effect was easy to see: Microsoft had been building solutions for these individuals and organizations all along, but just hadn't shown them the specifics.
The partner channel will need to be just as swift in ensuring customers' pain points are addressed, because the vision is now a holistic one rather than a low-value addition. Those partners that are "all-up" Microsoft partners will have an advantage, in that they can rapidly pivot to any number of solutions without feeling constrained to a particular "flavor" of solution. And that can only benefit customers further, meaning that the Microsoft ecosystem has just been given a huge shot in the arm. Best of all, the technology flows both ways: partners should expect many more opportunities at Build and Ignite from the enhanced collaboration and networking effects. In the future, it won't be difficult to envision a "One Microsoft" approach to every single product in their portfolio. The partners who succeed will have to adopt a similar stance.
|
OPCFW_CODE
|
Some advice and an exercise from Paul for 15-01-2021
- Dialogue is good for expanding both character and plot
- Publishers often say novels need more of it!
- It draws the reader in and can move the action of the story along quickly
- Good dialogue is real… but not too real
- Listen to real conversations. They are full of pauses and unfinished sentences, and grammar is often less than rigorous
- Slang and dialect are useful tools, but don’t overdo those, or filler words like um and err.
- You can also leave out introductions and the like.
- Let each line make its point and then move on
- It must be said for a reason, so keep the words clear and concise.
- Most people say 50 words at most in any one speech
- But often much shorter, perhaps one word, or nothing at all, so use an action or reaction to balance the speech
- The dialogue tag “said” is the most unobtrusive, especially when following the actual speech.
- But once characters and the scene are established you can often do without them entirely
- Sometimes you may want to put it before the speech, to indicate more quickly who is speaking
- Pauses or non-verbal communication can add depth and realism
- Characters may glance away, shift their weight, or make other small actions during a conversation
- Make your characters talk to each other, not to the reader
- Even if you are introducing some exposition or subtext, make sure the premise and content of the conversation is believable
- Read it back aloud and leave out the boring bits
- Some authors suggest that you say your dialogue out loud as you write it
- When you edit you can remove or tidy up dialogue tags and any superfluous description
- You can intersperse talk with action or narration, or just let it flow
- If there’s a secret to effective dialogue it’s in it moving so naturally that it draws the reader in while the story moves on quickly
Exercise Example – Micro Scenes
- Write some short snippets using dialogue only, pure dialogue (no tags) if you can
- They need be no more than 2 or 3 speeches long
- Here is an example, without dialogue tags or superfluous description
“Simon, do you know what today is?”
“Should I, Amy?”
“It’s May 25th… Our anniversary.”
“But… That’s in July”
“No. The anniversary of when we first met.”
- Here is the same example, but with some extra text that you might have written in at first, but could leave out
- As you can see, most adds very little to the basic dialogue
“Simon, do you know what today is?” asked Amy.
“Should I?” he replied, looking up.
“It’s May 25th… Our anniversary.” she grumbled.
“But… That’s in July” he stammered, mystified.
“No. The anniversary of when we first met.” she explained.
Pick a line of dialogue and develop a short conversation by adding responses
- “What did she want?”
- “After you,”
- “I’ve never been so embarrassed in my life!”
- “Are we nearly there yet?”
- “It’s the biggest one I’ve ever seen.”
- “I’m sorry…”
- “He’s looking at you now you know.”
If you get stuck (or bored) use another conversation starter!
- Dialogue prompts like these are a great “get started” exercise
- You can use them just to warm up
- Maybe they will suggest a whole short story
- Or take a flight of fancy to see where characters go when unrestrained
- You can find some more at these links
- And the internet is full of posts and advice on writing generally as well as dialogue in particular!
|
OPCFW_CODE
|
It also looks better than the classic screenshot tools. Shutter is a quite good alternative to Flameshot, to be honest. If you can manage to reproduce the issue and explain how to reproduce it and in which configuration, it would maybe be possible to improve it. Firstly, don't remove the conflicting files! If you want to run bleeding-edge software, then you need a distro that allows you to do so with minimal fuss, given that technically you may have more frequent issues than with a stable distro.
GIMP is a powerful raster image editing program, and commonly used for photo retouching, image composition, and general graphic design.
Upstream URL: License(s): GPL, LGPL. Replaces: gimp-plugin-wavelet-decompose.
Arch Linux Download
Conflicts: gimp-plugin-wavelet-decompose. Description: GNU Image Manipulation Program (non-conflicting git version).
Upstream URL: Licenses: GPL, LGPL. Conflicts: gimp.
No need for any additional 3rd-party software. Once you take the screenshot, the tool will prompt where you want to save it. I love Linux and playing with tech and gadgets.
Arch Linux alert ASA (gimp)
I am using manjaro deepin, now on testing.
Note that edent's issue is separate to the one reported by gardotd. For example, Arch and Gentoo are both rolling-release, but the difference is Gentoo has two different branches: stable and unstable.
I recommend using Yay. New replies are no longer allowed. I am using manjaro deepin, now on testing. Once you take the screenshot, the tool will prompt where you want to save it.
Arch Linux gimp-plugin-gmic (x86_64)
Upstream URL: License(s): custom:CeCILL. Maintainers.
Installing GIMP on Arch Linux. Hi, I can't install GIMP from pacman. When I do it, I get this error.
Arch Linux gimp (x86_64)
error: failed retrieving file '' from.
To verify updates, Pamac does something similar to the checkupdate command.
Firstly, don't remove the conflicting files! Probably the biggest part of this tool is, you can start recording your screen right away!
GIMP – AppImageHub
If I uninstall Pamac, and update from now on from the terminal with pacman, would I have any system problems? It also looks better than the classic screenshot tools. So, you need a suitable AUR helper to do the job. There are a number of cases where you might need to take a screenshot, especially on Linux.
Yes, you can do cherry-picking with package masking, but I'm generally speaking here.
|
OPCFW_CODE
|
Using px proxy Windows 10, powershell, cmd, kubectl
BACKGROUND: I'm inside a corporate network using win10, I've been instructed to use px proxy and I have to use kubectl to connect to a cluster running in a cloud provider. I have the kubeconfig, in the .kube dir. When I use
kubectl get no
I get an error.
If, on the other hand, I use Lens and add the proxy localhost:port in the settings, it and its terminal work just fine and I can use kubectl as expected. I can describe, apply, top and any other command I'm used to.
If I use PowerShell, cmd, or Git Bash, kubectl fails with an error saying it can't reach the cluster; the same applies when using curl to the API server.
kubectl is in my path
I don't have admin rights to windows settings. My boss's response is "it works on my machine" and "every new joiner has problems" which is even odder as we're a 2 person team and I joined 4 weeks ago and he's been on holiday for two of them and in a different time zone.
Question: Would setting the http_proxy environment variable to the px proxy address enable kubectl, curl and other CLI tools to use px proxy?
As for this question, "Would setting the http_proxy environment variable to the px proxy address enable kubectl, curl and other CLI tools to use px proxy?": have you tried setting it and making an attempt?
I'm seeking info prior to trying it, as I asked my boss and he said he's never had to do it. If I make an environment change on my desktop without his explicit go-ahead he'll report me to legal and HR.
Your boss's environment is obviously different from yours. "Works on my machine" is not a meaningful statement, nor useful when it is said to anyone. So, your boss does not use a kubeconfig file either? The PS environment variables are only active when you are in a PS session unless you write them to your System Variable config, and that is not what I was suggesting. Are you saying you cannot go to an isolated environment and use a test system to try this? That's why a test environment exists. I get the HR thing if corp policies/risk postures are restrictive.
No test environment, he claims to have a kubeconfig but wouldn't send it to me, so accessed our cloud provider through its cli and got the config.
Note:
I do not have K8s or even Docker. The info I am providing is something that was brought up in a recent meeting I was in.
As per my comment.
Are you saying, you've set your environment variables, meaning, setting your kubeconfig, file, or something like this...
apiVersion: v1
clusters:
- cluster:
    proxy-url: http://proxy-address:port
...as well as:
$env:http_proxy = "http://px-proxy-address:port"
... at the CLI, or your code, or just add it to your PS profile.
Set-Item Env:http_proxy "http://px-proxy-address:port"
Set-Item Env:https_proxy "http://px-proxy-address:port"
Set-Item Env:ftp_proxy "http://px-proxy-address:port"
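If you cannot touch machine-wide or profile settings at all, the same variables can also be scoped to a single child process. A hedged sketch in Python (the proxy address and port below are assumptions; substitute whatever your px instance actually listens on):

```python
# Sketch: set http(s)_proxy only for a child process, leaving the
# desktop environment untouched. Tools such as kubectl and curl honor
# these variables. The proxy address below is a placeholder assumption.
import os
import subprocess

PROXY = "http://127.0.0.1:3128"  # px commonly listens on 3128; adjust as needed

env = os.environ.copy()
for name in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    env[name] = PROXY

# Example invocation (commented out so the sketch runs anywhere):
# subprocess.run(["kubectl", "get", "no"], env=env, check=True)
print(env["https_proxy"])
```

This avoids making any persistent environment change on the machine, which may matter given the HR concern above.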
Also, going to virtually any https site requires you to set the proper security settings in your code.
Meaning this...
# Required for use with web SSL sites
[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12
... or this:
$AllProtocols = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'
[System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
Thanks, it's the Set-Item in my PS profile that's really useful.
No worries, and glad it helped.
|
STACK_EXCHANGE
|
Clustering fails to establish trust domain trust on 12.1
When running the multi_device(cluster) tests against a 12.1 BIG-IP device, we get an exception in the validation of the trust domain:
> raise DeviceNotTrusted(msg)
E DeviceNotTrusted:
E u'bigip1' is not trusted by u'bigip2', which trusts: [u'bigip2']
E u'bigip2' is not trusted by u'bigip1', which trusts: [u'bigip1']
This is likely due to changes introduced in 12.1. This bug will track the work to fix that.
Here's the heat output from an attempt to perform clustering on 12.1.1:
2017-08-07 13:05:19.878 22194 DEBUG root [-] post WITH uri: https://<IP_ADDRESS>:443/mgmt/tm/sys/application/template/ AND suffix: AND kwargs: {'json': {'partition': u'Common', 'name': 'trusted_device', 'actions': {'definition': {'implementation': u'tmsh::modify cm trust-domain Root ca-devices add \\{ <IP_ADDRESS> \\} name bigip1 username admin password admin', 'presentation': ''}}}} wrapper /usr/lib/python2.7/site-packages/icontrol/session.py:257
2017-08-07 13:05:19.917 22194 DEBUG root [-] RESPONSE::STATUS: 200 Content-Type: application/json Content-Encoding: None
Text: u'{"kind":"tm:sys:application:template:templatestate","name":"trusted_device","partition":"Common","fullPath":"/Common/trusted_device","generation":36,"selfLink":"https://localhost/mgmt/tm/sys/application/template/~Common~trusted_device?ver=12.1.1","ignoreVerification":"false","totalSigningStatus":"not-all-signed","verificationStatus":"none","actionsReference":{"link":"https://localhost/mgmt/tm/sys/application/template/~Common~trusted_device/actions?ver=12.1.1","isSubcollection":true}}' wrapper /usr/lib/python2.7/site-packages/icontrol/session.py:265
2017-08-07 13:05:19.918 22194 DEBUG root [-] get WITH uri: https://<IP_ADDRESS>:443/mgmt/tm/sys/application/template/~Common~trusted_device AND suffix: AND kwargs: {} wrapper /usr/lib/python2.7/site-packages/icontrol/session.py:257
2017-08-07 13:05:19.927 22194 DEBUG root [-] RESPONSE::STATUS: 200 Content-Type: application/json Content-Encoding: None
Text: u'{"kind":"tm:sys:application:template:templatestate","name":"trusted_device","partition":"Common","fullPath":"/Common/trusted_device","generation":36,"selfLink":"https://localhost/mgmt/tm/sys/application/template/~Common~trusted_device?ver=12.1.1","ignoreVerification":"false","totalSigningStatus":"not-all-signed","verificationStatus":"none","actionsReference":{"link":"https://localhost/mgmt/tm/sys/application/template/~Common~trusted_device/actions?ver=12.1.1","isSubcollection":true}}' wrapper /usr/lib/python2.7/site-packages/icontrol/session.py:265
2017-08-07 13:05:19.928 22194 DEBUG root [-] post WITH uri: https://<IP_ADDRESS>:443/mgmt/tm/sys/application/service/ AND suffix: AND kwargs: {'json': {'partition': u'Common', 'name': 'trusted_device', 'template': u'/Common/trusted_device'}} wrapper /usr/lib/python2.7/site-packages/icontrol/session.py:257
2017-08-07 13:05:20.350 22194 DEBUG root [-] RESPONSE::STATUS: 400 Content-Type: application/json Content-Encoding: None
Text: u'{"code":400,"message":"script did not successfully complete: (Could not add ca-device (error from devmgmtd): Cannot add a device with the same name as the self device.\\n while executing\\n\\"tmsh::modify cm trust-domain Root ca-devices add \\\\{ <IP_ADDRESS> \\\\} name bigip1 username admin password admin\\" line:1)","errorStack":[],"apiError":3}' wrapper /usr/lib/python2.7/site-packages/icontrol/session.py:265
2017-08-07 13:05:20.350 22194 INFO heat.engine.resource [-] CREATE: F5CmCluster "cluster" Stack "cluster" [91a9c905-54e1-4d76-8fa9-5d8327fe61b3]
@pjbreaux what is the status of this?
@wojtek0806 I'm looking into this now.
@pjbreaux can you point me at the specific test that fails?
Stand up two 12.1 BIG-IPs in the overcloud and run the tests here: https://github.com/F5Networks/f5-common-python/blob/development/f5/multi_device/cluster/test/functional/test_cluster.py
Jeff has some Heat templates to give you two overcloud BIG-IPs. Like I said, I think this is an onboarding issue, as the 12.1 BIG-IPs don't have the same self IPs as two 11.6.1 devices. Also, the device names and hostnames are not configured properly in 12.1, and the config sync addresses are not set on either 12.1 device. For information on that see: https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/tmos-implementations-11-2-0/5.html
In the section titled 'Specifying an IP address for config sync'. I have had only a measure of success by changing the device name and hostname on each device, adding the config sync address on each, then trying to cluster. Even then, the devices show up as active/active when they're supposed to be active/standby.
@zancas ---^
I notice the debug trace above specifically complains:
Cannot add a device with the same name as the self device.
Use the 12.1 docs; I can see you are looking at the 11.2 version.
Thanks @wojtek0806
|
GITHUB_ARCHIVE
|
What exactly do I have to add here?
Given a game for 2-6 players with 2-6 different factions/colors, do I have to add the 6 colors or the possible setups?
For example, the Pax Romana module uses more player sides than there are factions in play. They have some for two-player games and some for four-player games. I don’t know exactly for what purpose.
In case this is important for the answer, the game has the possibility of player elimination, and the player who eliminates another player gets ownership of his counters.
Wow, I didn’t think this would be such a hard question to answer.
The lack of replies so far might be because your question is actually not so easy to reply to. Whereas other aspects of Vassal are very advanced and sophisticated, the all sides implementation is probably just a bit spartan and uncustomizable.
In my understanding you cannot really have pieces - with a restricted access trait - switch sides at all. Those sides are fixed and not dynamically changeable (would be a nice feature in the future).
If I understand your problem correctly you will probably have to define the 6 sides for your 6 potential players. Then you will have to forget using restricted access traits to enforce ownership of pieces and, instead, use restrict commands on the piece prototypes.
The restrict commands should check global properties you will have to define dynamically along the way (at game start and/or whenever some piece swaps ownership, due to elimination), like who is supposed to be in control of this piece/card and if he/she is the current player in the given game turn and can thus manipulate this piece or not etc.
In other words: your friend is “restrict command”, not “restricted access”, I think.
It’s not the best possible friend, though… Not only because it forces you to code lots of global properties, checks, etc., but also because this way you can only block commands, not accidental/malicious movement of pieces by unauthorized players. Either you trust them not to move others’ pieces or you set piece movement to “never”. Then you have to add potentially lots of send-to-location traits (for which restrict command works) to handle movement via right-click commands, triggers, and so on. Unrealistic if your game is the typical miniature wargame with hexes and such; more realistic if it’s card based with few units around, maybe.
I feared it would be that complicated. Thanks for your answer anyway; I think it will help, at least in deciding whether making a module for this game is worthwhile.
It’s an area-based game where you move stacks of counters. It is crucial that other players don’t know the content of stacks, but often players will have to reveal parts of the stacks.
|
OPCFW_CODE
|
Assurance, Telemetry, and Orchestration for Multi-vendor Networks
Anuta ATOM is the only Network Orchestration and Assurance platform that offers:
Yes, ATOM is delivered as Virtual Appliance (OVA or ISO format). It can be installed on your favorite hypervisor or on a stand-alone server. ATOM can also be installed in Docker containers with Kubernetes orchestration.
Yes, see ATOM SaaS.
Yes, please see here.
Anuta ATOM orchestrates and integrates third-party VNFs so that Service Providers can package a vCPE solution and offer to their customers. See Virtual CPE for more information.
As of now, there is no option to download online. Please Contact Us with your specific details and we will follow-up quickly.
Please check out Supported Devices.
Anuta ATOM ships with YANG models for 100+ platforms from 45+ vendors. For minor revisions (say going from ASA 9.0 to ASA 9.1), customers and partners can update the YANG models themselves within hours. If there is a major change (from ASA 8.2 to ASA 8.3), Anuta team can add the YANG models within a couple of days. For a completely new type of device (e.g. Packet Shaper), it may take two weeks to add the YANG models. See YANG Models page.
Yes, Anuta ATOM has built-in device models for many traditional/legacy platforms that support CLI and SNMP. Unlike other orchestrators, Anuta ATOM doesn’t require customers to upgrade to a specific software or hardware version. Anuta ATOM works with your current infrastructure.
No, Anuta ATOM doesn’t require an agent on each network device. Anuta ATOM communicates with the device over the management plane using CLI, SNMP, XML, NETCONF, REST API, etc.
For larger deployments (> 1000 devices), ATOM supports a distributed server-agent architecture. Each Agent can handle 1000 devices and hence the solution can scale horizontally with your infrastructure. For a small deployment or during the POC, the same ATOM virtual appliance can act as a server and an agent.
Yes, ATOM supports brownfield configuration and service discovery.
Yes, ATOM supports IETF YANG, IETF NACM, BPMN and will continue to support Open Standards where applicable.
ATOM has a micro services architecture and all nodes run in cluster mode. This helps in resiliency and horizontal scale.
Yes, ATOM workflow supports pre-checks on devices such as route check, ping check.
Yes, ATOM supports 2-Phase commit protocol for transactions.
Yes, ATOM transactions support auto rollback. Retry is available for operations that are past the reservation phase and encountered transient operation errors such as device connectivity issues.
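The two answers above describe the classic 2-phase commit shape: reserve the change on every device first, and apply it only if all reservations succeed. The toy sketch below illustrates that shape only; it is not ATOM's actual implementation, and all class and function names are made up:

```python
class Device:
    """Stand-in for a managed network device (illustrative only)."""

    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.committed = False

    def prepare(self):
        # Phase 1: reserve the change; fails if the device is unreachable.
        return self.healthy

    def commit(self):
        # Phase 2: apply the reserved change.
        self.committed = True

    def rollback(self):
        self.committed = False


def two_phase_commit(devices):
    # Commit on every device only if every device accepted the reservation;
    # otherwise roll back the whole transaction.
    if all(d.prepare() for d in devices):
        for d in devices:
            d.commit()
        return True
    for d in devices:
        d.rollback()
    return False


fleet = [Device("r1"), Device("r2")]
ok = two_phase_commit(fleet)  # both devices prepare, so both commit
```

A real orchestrator additionally has to persist the reservation state so it can retry or roll back after a transient failure, which is what the retry behavior in the answer above refers to.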
Yes, by default ATOM pulls config after each service provisioning step. The config management capabilities include review of config-diff across different versions of the config.
Yes, ATOM allows baseline to be defined using performance data collected through SNMP, SNMP Trap, Syslog or Telemetry, and appropriate actions can be associated – Email, Slack, Workflow Actions for remediation or end-user intervention through ITSM tools such as Jira, ServiceNow etc.
Yes, ATOM discovers network topology using CDP/LLDP. Alarms and performance data are overlaid on the topology.
ATOM provides a GUI to monitor active workflow instances, debug the various stages in a workflow, diff pre-check and post-check variables, cancel and clean up instances, etc.
Yes. The ATOM platform provides complete integration with AWS. Using ATOM administrators can create and manage AWS services. Integration with GCP and Azure is in progress and will be available soon.
Anuta Networks NCX is an industry-leading software solution designed to deliver complete network services.
|
OPCFW_CODE
|
|Zoom events are not just raw data?||
Organize information from which makes the csv will be times so this is required field data set linked record actions you! How do i was pretty straightforward but only certain formulas and i could add a systemd service does not be used offline or you can specify cells. Import to reformat everything in lucidchart to import spreadsheet is searching for a single location. How many ways of a new csv will inevitably have experience with importing or a drawing from a google sheets than retrieving data? Extra context of the iframe to google sheets including some decent automatic conversion for all cells that as csv to import spreadsheet google sheets when the software, click this post.
|This extension on your data and import.||
Like upwork or add data into excel file will now set based on kumu will show all column to import spreadsheet in the site? Who can cause slow to sheets import spreadsheet to google spreadsheet to read in google sheets back over the appropriate text file as easy access to analyze website. My organization to a file, it will show exactly what is just imported from a lot longer highlighted.
|The spreadsheet in a js object should have a polyfill.||
The file will not waste of a manual import a dataframe, and organize information into google sheet, a little of a csv. Google doc or formulas, including student work with a conversation again! Whether your destination file names are no way there any values because teams with one spreadsheet! The reference to return both file is rarely edited, thank you should go to excel into web interface and connect to pick what to. Click on your life easier than you would be a single cell identifiers the menu click here show you drop open.
|Start from import process your form retrieve fresh for?||
Is creating an action of trace precedents to a new form does not know whether or other spreadsheet to split your google spreadsheet to sheets import? You can set of getting a bunch of!
|You do i was dropbox account details about?||
How do it possible to google drive is installed project from google sheets from google forms basically allows you filter by clicking ok and connect with. This is a semicolon instead, while keeping as an important, then look in a page.
|Second table lists from google sheets query function that has two columns.||
|If not include iframe gadget settings tab?||
How do it so, each address map but it for stock price, you to forms into a number format is. Move and post with you add arrows with heat affect our posts by. Microsoft territory copying and to import spreadsheet google sheets spreadsheet google sheet, you all these sheets is copied into!
|In excel spreadsheet applications, any one last cell contents into one!||
Android made so that this excel or a string and populated in each of these files with importing purely in sheets or online collaboration happens that? How do many of it that on?
|The page shows basic usage of widget will need.||
Ben je jouw beoordeling in that many functions for this, i was this import csv file from another web pages viisted in! As a list somewhere under another sheet, or formulas in drive folder your! Use alternative to be used to consider to create a spreadsheet google sheets will become helpful. Then import successful capture incoming information like email address we hate spam as well, there a single menu. How do you will appear here to the excel and stored in my document, after exporting data, they are unlimited data to sheets has two shortcuts!
If any existing status and works fine going on that millions of your feedback, click on your! Google sheets spreadsheet apps script file to see a script for! Confirm by going to google drive, we can see how can copy it very much does not show relevant solution.
|The cell in this case and it possible?||
It with some functionality to sheets import spreadsheet to google sheets, a public shared google sheets can examine this you can benefit of selected sheet table, which you can be updated as desired.
|They can import and tutorials you very simple.||
Choose to double quotes to import from existing spreadsheet google docs offers a comma. Randomly select an active google sheets to do things that ends with references or.
|We want to have been created.||
Love with importing data and will be reflected in entrepreneur, you drop it with errors during import to automate workflows? For business python application or sheets to the file into a name and. For changes regularly, no column heading resize your email, feeds into google forms response id when. Did not as it, you would do things with importing a cell specifying this integration, text file in column headers are trying to. Steak made her love with vlookups and paste all that is it again with csv file, we use google forms are automatically updates direct way you!
|This tutorial will be created and so thought it is.||
Any changes to an existing spreadsheet into an email there file in quotation marks or. Internet connection with data source data from google forms into your data?
|Nadya heads marketing strategy guide for it will do it is.||
Excel sheet to import txt file or i get almost anyone on email, click on your data as editable through my problem of editing.
|The same cell references.|
As with your google sheets are online solutions for example above formula has become too much data into a lot longer update. This drove me crazy until your knowledge of writing this same spreadsheet? But is a spreadsheet with many a try adding new rows, it requires a csv file into a randoly generated. Google sheets slicers, with a single files, display an effect on typing the sheets import rss and green check back after an update. This page returns a daily basis can use these references when importing errors in any good thought it worked with. The spreadsheet from other documents can use query to copy important ones hosted on the function, the comments and the menu by its a spreadsheet to have to.
|In the concerning name on google sheets.|
Once for the question with expertise comes a spreadsheet to find a lot more sheets when the raw data in the data and. For a couple of cleaning and there are constantly get google to your! We deem it will show you want to access csv file temp folder your google to google password walls or.
|In the import spreadsheet to google sheets file again!|
What can build custom properties panel, rather than a parameter of spreadsheet program does this formula you are so. Responsive look settings regarding mixed together, inventory or docs in. Equity for this will each dataset must have a csv importer and making it is publically accessible and. Can now is made safe from your search, trust us toggle between any kind of a new columns and additional contact your document into! Google sheets file a prospective employer to pull from a google drive a lot when possible the cookies store user has excel import google?
|We will also work a while since elements of their profiles?|
Instances like microsoft office not displayed in new window where my imported as a different situations where first argument that prospect info about importing csv imports excel.
|Sheetgo does not exactly what is a way the.|
Create formulas cheat sheet can add support insertion of their name, in quotes or embedded external sources to use here to open your workbook after. Now import shortly after you import to google sheets source application that is.
|Folders with create a free account details and export from google?|
Save yourself some fixed date, you to google sheets there are using query, you time so we recommend using webservices and. Right corner of copy formats as anything either of file contains at one? Us at any code using google doc or password incorrect email of your answer site for birthdays of! Specify which column as csv file with other than a title, you can be in different combination of questions! Close button to go to google sheets formula, project into more time it will download your data file types include data generated into your!
|Especially the sheets spreadsheet uploads.|
Use my account details, google earth users who have imported data added as they really want. Kumu project behind a csv file with many different document. Automatic options under another source file id of another with google sheets should repair most.
|If two files is for these sheets, or in one formula that enables you.|
What works fine going to update automatically connect to text formats and how to spend time and store multiple forms data in tabular form submission. So you see all records on?
|This site web store and. Make a regular expression is that!|
Read on a starting point in a script to convert xml into a google sheets, a google sheets and styles and text that? Sparklines in instances of data source file extension so helpful. Do this link as a value that will use the google spreadsheet to import data from, the url for a map? Data will import spreadsheet to google sheets is processed with python application of your google sheets format. How do this content imported google spreadsheet to import sheets integration that data over come this script editor like, you enable or.
You know so, i confirm your data from separate completely from the post with the root directory of the chart in google. The terms of the url that staff only does my spreadsheet to import google sheets using query to google sheets to get special characters in google sheets is native functions.
|Boost sales data source changes will need.|
Under this works for excel spreadsheet and excel with all your plan but rows and want from google sheets for macros written in that google spreadsheet? Email address label spread sheet.
|To a single spreadsheet google sheets that i filter.|
Zapier from our importer is the place in spreadsheet to google sheets import utility to. Also be some information to spreadsheet google sheets, then start your account?
|Tag where your google drive.|
How do i remove a custom solution short answer as you do not find it does not automatically add a group list of it comes on. How is this step is why do i import your csv file into a wealth of! Content from excel files are connected sheetgo user to create compelling, which will also historians of! In your client read on my data and last known row is important ones hosted on their forms are severely limited to customize your! The function in the imported into your next link url, spreadsheet google sheets or per month so helpful was sent to create to web clipboard and.
|This browser support mailparser you will cut off.|
Drive automatically published spreadsheet google spreadsheet and easily transported from the. Move to move custom css here is a csv file from a new spreadsheet might be used to.
|If you can sync automatically be converted google sheets import.|
If they can connect multiple excel macros are still lacking needed here, or cell or last cell, csv format used to make separate tags.
|
OPCFW_CODE
|
I get many questions about the usage, pervasiveness, and adaption of mobile BI applications. What's a mobile BI application? Beyond a simple delivery of alerts, URLs, or actual reports via email - functionality that has existed for years - here are a few newer approaches to deliver BI on a mobile device:
The no-brainer. In theory any mobile device equipped with a browser can access web-based, thin-client, HTML-only BI applications. Yes, these BI apps will be mostly static, not interactive reports and dashboards. Navigation (scrolling, zooming, etc.) will be quite awkward. But this approach indeed requires no additional effort to deploy.
Customization. The next step up is to render each (or all) reports and dashboards to a format suitable to any mobile device in terms of screen size, usage of screen real estate, and mobile device specific navigation instrumentation. A variation of this approach is to create device specific navigation controls (thumb wheel or thumb button for Blackberries, up/down/left/right arrows for Palms, gestural manipulation for iPhone, etc). This obviously requires more development effort, but still no additional software.
Business is all about placing bets and knowing if the odds are in your favor.
As I noted in my most recent Forrester report, business success depends on your company being able to visualize likely futures and take appropriate actions as soon as possible. You must be able to predict future scenarios well enough to prepare plans and deploy resources so that you can seize opportunities, neutralize threats, and mitigate risks.
As people spend more time consuming information digitally at home and at work, reliance on paper continues to decrease. But how far are we across the Digital Divide? In 1975, George E. Pake, then head of Xerox Corp.’s Palo Alto Research Center, predicted that in 1995 his office would be completely different: “There will be a TV-display terminal with keyboard sitting on his desk. I’ll be able to call up documents from my files on the screen, or by pressing a button. I can get my mail or any messages. I don’t know how much hard copy I’ll want in this world.”
Last week Informatica announced the release of Informatica 9, its data integration/data management platform that continues to evolve its flagship PowerCenter and PowerExchange data integration and access technologies into a much more comprehensive data management platform going well beyond the scope of traditional, batch-oriented ETL that remains Informatica’s bread and butter.
The three main themes Informatica has pitched for this release include:
- Pervasive Data Quality
- Business-IT Collaboration
- SOA-based Data Services
While these themes and capabilities - reusability, SOA-compatibility, real-time, business engagement - are not necessarily new to the broader data integration or data quality software markets, few organizations have been effectively able to execute on them. For the purposes of this blog post, I’d like to focus a bit more on the DQ and business/IT collaboration parts of the announcement.
I'd like to drill into some more details on my BI SaaS blog post from September 2009. A key point in the "what differentiates one BI SaaS vendor from another" discussion is what really constitutes a multi-tenant architecture. Here are some initial thoughts to stimulate the discussion:
DBMS. There has to be a back-end DBMS architecture that allows for one of the following:
Automatically generate a separate DBMS instance for each client
Use same DBMS instance for multiple clients, but automatically generate a set of unique tables for each client
Use same DBMS instance and tables for multiple clients, but automatically assign unique keys to each client so that they can only update and retrieve their own rows
Application. Similar functionality has to exist in the application tier:
Automatically connect to the appropriate, client specific DBMS instance, or
Automatically use views that only point to client specific tables, or
Append "where" clause to each SQL statement to only retrieve client specific rows
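The third approach in each tier — shared tables plus a tenant key appended to every query — can be sketched roughly as below, using SQLite and a hypothetical `tenant_id` column (names and schema are illustrative, not any vendor's actual design):

```python
import sqlite3


def fetch_for_tenant(conn, tenant_id, base_sql, params=()):
    """Append the tenant predicate so a client sees only its own rows."""
    sql = base_sql + " WHERE tenant_id = ?"
    return conn.execute(sql, params + (tenant_id,)).fetchall()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (tenant_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [(1, 10.0), (1, 20.0), (2, 99.0)],  # rows for two different clients
)

rows = fetch_for_tenant(conn, 1, "SELECT amount FROM sales")
# tenant 1 sees only its own two rows; tenant 2's row is filtered out
```

In a production multi-tenant system the predicate would be enforced centrally (views, row-level security, or an ORM filter) rather than pasted onto each query string, so that no code path can forget it.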
|
OPCFW_CODE
|
Falling (feat. Alex G) - Tyler Ward (Lyrics on Screen) MP3
Copyright Not Intended All credit goes to Tyler Ward Video to this song: http://youtu.be/NSYNk0WKcsk Artist: Tyler Ward feat. Alex G Buy this off of iTunes!
I'm Falling For You - Original version MP3
I wrote this love song because I started falling for a friend of mine and wasn't sure if it was a good time to bring it up. I showed her the song and yup, friend-zoned ...
Jamie and Dakota - I'm Falling (Fifty Shades of Grey) MP3
Jamie Dornan and Dakota Johnson on set/behind the scenes/premiere nights/interview clips. Credits to the owner. Click and check here for more ...
Comsat Angels - I'm Falling MP3
If you haven't heard this song, you haven't watched the movie "Real Genius". As an actual scientist (although I'm no genius), I urge you to watch it at your earliest ...
DITL 11.18.15 | I'm falling apart... MP3
Thank you for watching, be sure to subscribe! My ideal way of contact, outside of commenting, is e-mail! You can e-mail me here: ...
Mindless Behavior - I'm Falling (Lyrics) MP3
Mindless Behavior - I'm Falling (Lyrics) Video created by : Renee Robinson (http://www.youtube.com/user/reneevuitton) Mindless Behavior - I'm Falling (Lyrics) ...
I'M "FLEE" FALLING (i'm not sorry...) | Fleeing the Complex MP3
Fleeing the Complex is one of the FUNNIEST games I've played in a long time! Also I got ALL medals, ALL secrets, and ALL endings! Infiltrating the Airship ...
twenty one pilots: Ride (Video) MP3
twenty one pilots' video for 'Ride' from the album Blurryface - available now on Fueled By Ramen. Get it on… iTunes: http://www.smarturl.it/blurryface Google ...
The Babys - I'm Falling - Mike Corby MP3
The Babys were a popular British rock group of the late 1970s. In May, 1978, The Babys visited Japan for promotion. When this beautiful man visited to Japan ...
Inkarv - I'm Falling MP3
I'm really falling, while listening to this track In a world of relaxation and harmony. Inkarv Facebook: [ https://www.facebook.com/inkarvuk ] Inkarv Youtube: ...
Janice! I'm falling! This hurts! I'm stuck! MP3
Chester See - I'm falling for you (lyrics) MP3
I do not own this song. All rights belong to Chester See.
Catch me, I'm Falling - Toni Gonzaga [Lyrics] MP3
This song is so beautifully sung and composed. There needs to be more songs like this nowadays -_- *** EDIT: ALMOST TO 4000 VIEWS!!! THANKS SO MUCH ...
Real Genius - I'm falling MP3
Song from the awesome 80's movie!
Catch Me (I'm Falling) [HD] - Pretty Poison MP3
Groovy! "Catch Me (I'm Falling)" is a dance-pop song released by the American group Pretty Poison in 1987. It was included on the soundtrack to the film Hiding ...
Mindless Behavior - I'm Falling Lyrics MP3
The Spinners - Could It Be I'm Falling In Love - Live 1973 MP3
Better than the O'Jays? Better than the Four Tops? Better than the Temptations? IMO, I think so and here is the reason why!
The Bluebells - I'm Falling (1984) (Audio) MP3
Lifehouse - I'm Falling Even More In Love With You MP3
just the song.
I'm Falling Even More In Love With You MP3
A little video I did that took 4 hours to make when my net was off. Edit: Yes, I know Lifehouse did it originally. I just had the one done by Creed on hand.
Real Life - Catch Me I'm Falling MP3
Real Life are a Melbourne-based Australian New Wave/synthpop band that had hits with their debut single, "Send Me an Angel" (1983) and with "Catch Me ...
Ficci - I'm Falling MP3
I've fallen in love with the piano melodies in this track. Ficci https://soundcloud.com/ficci https://www.facebook.com/FicciOfficial https://www.ficci.co.uk ...
TNT "tonight I´m falling" MP3
Intuition (1989), one of the best albums to come out of Scandinavia, and this great song is proof of it.
The Spinners - Could It Be I'm Falling In Love (1973) (HDTV) MP3
Presenting: "The Spinners", From Their 1973 Self Entitled Album, Here Is, "Could It Be I'm Falling In Love", On The Soul Lounge. Thank You For Watching.
Real Life - Catch Me I'm Falling (1983) MP3
Official video for Real Life's 3rd single Catch Me I'm Falling. Released in December 1983 it reached No.8 on the Australian charts, No.1 in Melbourne, No.9 in ...
Patty Loveless TIMBER,I'M FALLING IN LOVE MP3
Patty Loveless TIMBER,I'M FALLING IN LOVE.
SPINNERS - Could It Be I'm Falling In Love ( VERY BEST OF THE SPINNERS) MP3
The Kinks - Catch Me Now I'm Falling MP3
I remember, when you were down And you needed a helping hand I came to feed you But now that I need you You won't give me a second glance Now I'm ...
Trio Lamtama | I'm Falling In Love - Lagu Batak MP3
Trio Lamtama | I'm Falling In Love - Lagu Batak Terbaru 2015.
Could It Be I'm Falling In Love | The Spinners | Lyrics ☾☀ MP3
Released | April 1973 Disclaimer | This video is for entertainment purposes only and no copyright infringement is intended.
|
OPCFW_CODE
|
import typing

import attr


@attr.s(frozen=True, auto_attribs=True, kw_only=True)
class Circle:
    index: int
    contains: typing.FrozenSet[int] = attr.ib()
    intersects: typing.Tuple[int, ...] = attr.ib()

    @contains.validator
    def _check_contains(
        self, attribute: attr.Attribute, value: typing.Set[int]
    ) -> None:
        # cannot contain itself
        assert self.index not in value

    @intersects.validator
    def _check_intersects(
        self, attribute: attr.Attribute, value: typing.Iterable[int]
    ) -> None:
        # cannot intersect itself
        assert self.index not in value
        # is a list of paired unique indexes (each index appears exactly twice)
        assert sorted(list(set(value)) * 2) == sorted(value)


solution_T = typing.List[Circle]
result_T = typing.Set[typing.Tuple[Circle, ...]]
|
STACK_EDU
|
Wysiwyg with image copy/paste
First, I understand that an image cannot be "copied" from a local machine into a website; I understand that it must be uploaded. I am a web programmer and am familiar with common web WYSIWYG tools such as TinyMCE and FCKeditor. My question is whether there exists a program or web module of some sort that will perform an automatic upload of images for a WYSIWYG. I have a client that is constantly complaining about not being able to copy/paste documents with images from MS Word into a WYSIWYG to create content on their website.
I have looked into TX Text Control (http://labs.textcontrol.com/) and was also considering a Flash-based WYSIWYG that could upload the file automatically behind the scenes. I don't know if this exists, and Google did not much help me in my search, so I thought I would ask other coders.
I am open to any sort of server technology, or browser requirements. I am looking for some browser based tool instead of an application tool such as Dreamweaver or otherwise.
If no good solution to the problem exists, I am willing to accept that at this point.
Note: This was a request from a client, and to me it seemed rather unreasonable. I decided to gather community advice instead of just tell the client 'No' and the options here have been extremely helpful and informative in presenting possible solutions.
I've implemented drag-and-drop files using Silverlight in this project http://azureslfileuploader.codeplex.com/
There is an interesting related post here:
http://stackoverflow.com/questions/6333814/how-does-the-paste-image-from-clipboard-functionality-work-in-gmail-and-google-c
It is about how gmail is doing this exact feature and contains a link to a jquery wrapper.
You might find inspiration from ScreenshotMe.
Basically you need different parts:
something that takes the image out of the clipboard and uploads it to the web: this could be a Java applet, Flash, or a Firefox extension. Flash or Java would have the advantage of being cross-browser
then you use the <canvas> tag to display the image once it has been uploaded (use explorercanvas to bring canvas to Internet Explorer)
As I pointed out in my comment, Google is discontinuing gears in favor of HTML5, have a look at 7 User Interaction - HTML 5.
EDIT:
HTML5 when implemented is supposed to interact with the system's clipboard. I imagine the following scenario would work:
paste the image data from clipboard to canvas
get the canvas pixel data back as an image using toDataURL(): see Canvas2Image
upload the image to server when submitting: see Saving canvas image with PHP
Until HTML5 copy/paste drag&drop is implemented, you'll have to rely on Flash or a (signed) Java applet to interact with the clipboard.
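The server-side half of that scenario is mostly just decoding the `toDataURL()` string the browser posts. A minimal, framework-agnostic sketch in Python (the payload below is a tiny stand-in value, not a real PNG):

```python
import base64


def decode_data_url(data_url):
    """Split a data URL ("data:image/png;base64,....") into (mime, bytes)."""
    header, _, b64data = data_url.partition(",")
    mime = header[len("data:"):].split(";")[0]
    return mime, base64.b64decode(b64data)


# Simulate what a canvas toDataURL() upload would look like on the wire.
fake_upload = "data:image/png;base64," + base64.b64encode(b"fakepng").decode()
mime, payload = decode_data_url(fake_upload)
# `payload` is now raw image bytes, ready to be written to disk or storage
```

In a real handler you would validate the MIME type and size before saving, since the data URL comes straight from the client.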
I like this option, and it seems the tools might all be there are readily available with HTML 5 and the canvas element
This seems like a particularly elegant solution for the future, and does include the mention that this currently requires Flash or Java.
Does anyone know when clipboard integration is going to be implemented in the mainline builds of browsers?
You could look into drag & drop upload with Google Gears.
google discontinued gears
They're looking into HTML 5... a replacement for Gears.
I see this is an old thread, but in case anyone is still looking for something like this (as was I), I came across a product called textbox.io tonight from a company called Ephox (looks like they bought out TinyMCE as well).
Anyway, this is the first, if not only, javascript/HTML5 editor I've found that successfully pastes images from word using a proprietary plugin they call PowerPaste. Upon the initial paste, it prompts to hit paste again in order to import the images. Worked like a charm - only problem I had was that it's hellishly expensive for a startup like the one I'm involved in at $500+ per month (±R6,650+ per month in ZAR), which prices it out of our options unfortunately :(
I have a client that is constantly complaining about not being able to copy/paste documents with images from MS Word into a wysiwyg to create content on their website.
And this will fail. MS Word does not create valid HTML, the pages will appear broken to users of conformant browsers. Word also has some odd methods of anchoring images and flowing text that will not translate. In short, Word is a poor environment for authoring HTML.
Of course your clients probably won't accept that which brings us to option 2:
Since your client has opted for Word as their WYSIWYG editor there's very little point pasting that content into another WYSIWYG editor. Your optimal solution is to look into ways of automating HTML export from Word or OpenOffice. This could be done using a combination of VBA and a server-side script to first convert the document to HTML (this will also write the images to disk) and then upload the combined content to the server.
HTMLTidy can do a wonderful job of fixing MS Word's worst excesses, and would let you enable copy paste of nasty MS Word HTML without too much difficulty. I agree that the client is clearly difficult, but your solution may help.
There is no direct option available in asp.net, but you can do this
http://www.codeproject.com/KB/graphics/ClipboardActiveX.aspx
Just a link is not an acceptable answer. You should add some information about the article.
HTML only
You could use something that (on drag and drop) automatically creates an invisible HTML form, a file input, copy the path of the filename into the fileinput and submit the form.
You can create the form inside an invisible iframe to send it in the background without changing the current page. You know, standard Ajax procedure.
A little help for dropping/pasting
I don't know if HTML allows dropping file items. If it doesn't you can look at the HTML 5 specification that Google is trying to push forward.
Another option is using some kind of rich client component (Java Applet with Swing or Flash, or Silverlight, or whatever) at least to handle the dropping of the file (or the pasting) and creating the HTML form.
Why I prefer sending a form
I prefer the creating of the form over the applet sending the file because it doesn't require another special port at the server or something like that.
Oh, and see the long Google Wave Video Demo. It has automatically uploaded images using Gears but their plan is to use HTML 5. So... if Google is looking that way...
I understand your client's predicament. I am working on the same thing, but with little priority at the moment so I can't present any solutions, just a few notes.
When I copy + paste an image from a saved OpenOffice document (doesn't work with an unsaved one) into a CKEDitor instance - I don't have MS Word here to test but I assume it works similarly - I get the following HTML inserted into the editor:
<img src="file:///C:/Users/PEKKAG%7E1/AppData/Local/Temp/moz-screenshot-4.png">
It might be possible to tweak a Flash or Java uploader in a way that this file can be fetched with very little interaction from the user. Being able to fetch files from the user's computer is a horrible security hole, but it might be possible to at least pre-set an uploader to the temp directory.
However, the Canvas method that Gregory Pakosz mentions I find the most interesting, because this way it could be possible to store the image on server side silently, without any upload. The same security restrictions as in the above example still apply, though: The image is on a different domain, and thus cannot be read by a script on the page. One would have to find a way around that using browser settings or writing a custom extension.
If I understand your question correctly, your client could have any random Word document, and some of these documents might contain images.
What you appear to be describing is akin to content management in some respects and to creating static web pages in others.
I'll assume that your client wants visitors to their website to view such documents as HTML pages and not as Word MIME types.
Some options:
use Word to save as HTML. Not the cleanest HTML, but likely the cleanest solution.
have your client purchase a product like Dreamweaver, which will both import their Word document and clean up Word's generated HTML.
if your client has lots of money, develop a custom solution using VSTO
Thanks Gerry. Your first options still doesn't solve the upload problem for the images in the Word document.
@ jW You're welcome.
I must have missed something or have made some false assumptions.
Generally, I've taught my clients to use Windows ftp.exe to upload from the command line.
Products like Dreamweaver automate the upload process.
My impression was that your client was attempting to maintain the integrity of the Word document, including any graphics within that document.
If it's only the graphics, your client could save the image to a file and upload that file.
Sorry, I do not have a clear picture of what your client is trying to achieve.
TIMTOWTDI = there is more than one way to do it
Unfortunately, the client is particularly difficult. Their desire is to not use ftp, but purely a web interface, and to be able to copy/paste a document from work and (like you said) preserve the integrity of the document, which would include the images. So an upload of the image files needs to occur to get the data there, and then the formatting from Word needs to be converted into HTML. It is sort of an unreasonable expectation from a client, but I was looking for any possible solution.
My question is whether there exists a program or web module or something of the sort that will perform an automatic upload of images for a wysiwyg
XStandard Pro will upload images to the server pasted from Word or other applications/file system.
The WYSIWYG editor called Redactor allows for copy-pasting images directly into the editor rather than clicking an upload image button.
Here's a link to their copy-paste example.
@brasofilo: Are you serious? You marked me down because (in your words) "It doesn't matter that the question is old"?
|
STACK_EXCHANGE
|
When connecting, it takes forever. Naturally, most non-retards will go do something else while waiting, especially since most users will have more than one task cooking. But when the "black window" opens, it steals focus and pops to the foreground. And again, for the password prompt. Since using pcAnywhere exposes you to a mess of typing in passwords all over, naturally it always manages to interrupt you in the middle of another one. With no way to see exactly where in the middle. And if that's a password on the remote end of another connection, backspacing over it and starting over may take another 10 minutes. Applications shouldn't ever steal focus; if it has to do anything, throw a goddamned beep to inform me, but I'm doing something more important you worthless piece of shit.
And the circumstances under which it can disconnect are nothing short of baffling. The TCP protocol itself allows for retransmission, and yet a single mouseclick on a buggy connection can kill it dead in a split second. Not long enough to time out, it's like it's choking on the flood of bandwidth that is a single XY:34,175-left-mouse-button signal or something. What is that, an entire staggering 3 bytes plus minor overhead?
Oh, and worst of all. The goddamn capslock key is ALWAYS toggled on at the other end. And it respects this. The only workaround is to toggle yours on, which causes problems as soon as you switch to another session. If it just absolutely has to honor this, or use raw keyboard scancodes (WTF?), why can they not have an easily accessible option to remotely toggle the other end off? Have the usability experts at Symantec never -once- stumbled upon this annoyance?
The absolute lack of image/background caching is so obvious that it's beyond pathetic. Why bother sending me the same window I had uncovered 10 seconds ago, when simple logic could detect that it hadn't changed, and not force you to waste the bandwidth that it plainly can't manage? Why, when it disconnects suddenly, does it assume that I'm finished with the site, and close the window (and clearing what part of the desktop image I do have)? Keep that window open, give me an msgbox() or whatever it is in win32, and allow me to reconnect! And on reconnect, have the remote end check to see if I need to repaint the entire screen! Christ... nothing is worse than getting only the top inch of the screen, losing it, and starting over.
I'll not even bother with whining about image compression. Maybe the 1 cent per unit royalties on all the common algorithms would bankrupt them. Or maybe they've never even heard of RLE.
But most of all, if any Symantec people somehow discover this and read it: If you're too fucking stupid to write code that can check if the screen needs repainted, put a goddamn button on it, so I can force it to do so. Between 15 minutes to reconnect and force it that way, and blindly clicking hoping I can guess the location of an "ok" button that I cannot see, I am so sorely tempted to choose a third... committing suicide.
I could go on, but somewhere around 3 paragraphs it started sounding pathologically slanted. Those of you that have used this piece of garbage (used? suffered?) know I'm not lying.
|
OPCFW_CODE
|
Let's say I have a data entry from a pool of employees:
table is as follow:
EmpNo  Branch  Date        Amount
1      A101    11/30/2007  $0.90
1      A101    11/30/2007  $1.20
2      A101    11/30/2007  $0.90
3      A101    11/30/2007  $0.80
How can I select the whole table and only take in 1 unique latest entry if there are multiple entries for the same day, same branch under same employee number?
What do you mean by "select the whole table"? If you are hoping to get the "1 unique latest entry", the two statements contradict each other in my mind!
For example in your quoted sample data, what would be the result you would expect to get back? Just one row containing the $0.90 entry?
Edited to say "what consitutes 'latest' (and 'unique', come to think of it) as far as you are concerned?" If there are 14 entries for employee ID 1 on 11/30/2007 and some of them are for the same amount... what are you hoping to select? If you aren't using any kind of identity field then you must have some other way to make a record unique, right? If this is date & time, then you just select "top 1" or "max" based on the date/time field...? I must be missing something!
the first item shall not be selected as there is a "latest" entry with the same employee number, same branch and same date. I only want one unique and latest entry from each employee in a given day. Why would that happen? Somebody got itchy hands and keyed it in twice, or he/she keyed in the first entry wrongly and keyed it in again to replace the first one.
I have no control of the data given.....this is a chunk of "dirty data" given to me. And yes, there is no time factor in this case.....
I think it's possible...was trying to look for a combination of the "TOP" function and others. It's not going to be straightforward.....
Er... but if you have no time component and no unique ID...
Hang on... where has this "dirty data" come from and what format is it in? You seem to be treating the order of the data as significant... is it some kind of flat file in which case the later rows are more recent? If so, there's something to work on, if not, there isn't! Sorry, but if you look at a given employee and there is more than one transaction on a day then what on earth separates the values? Surely you can't hope to do it on value alone?
oh yah, the "dirty data" are flat files and generated from a DOS based programme. The sequence of the flat file is in proper order. Yes, for sure we could introduce some index to it while doing bulk loading to the database, but what shall I use? even time index alone could be tough to achieve what I wanted.....
Well, unless I'm missing something here, if you simply whack it into a table with an identity field in it, that will automatically give you a unique index, then you just need to do something like this:
Assuming fields are something like ID (new identity field), EmployeeID, Branch, TDate, Amount
select distinct t1.EmployeeID, t1.BRANCH, t1.TDate, (select top 1 amount from flattable t2 where t2.EmployeeID = t1.EmployeeID and t2.branch = t1.branch and t2.TDate = t1.TDate order by [id] desc) as amount
FROM flattable t1
ORDER BY t1.EmployeeID ASC
This will pick out the distinct employee, branch and date records and then let you pick off the latest transaction (it'll have a higher generated ID) if there is more than one transaction for that employee, branch and date...
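The identity-field approach above can be sketched end-to-end in SQLite. This is a minimal illustration, not the original T-SQL: table and column names follow the thread, the rows are the sample data from the question, and SQLite's `MAX(id)` correlated subquery stands in for `TOP 1 ... ORDER BY id DESC`.

```python
import sqlite3

# Load the flat rows in file order into a table with an autoincrement id,
# so "later in the file" becomes "higher id".
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE flattable (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    EmployeeID INTEGER, Branch TEXT, TDate TEXT, Amount REAL)""")
rows = [  # sample data from the thread; row order = file order
    (1, "A101", "11/30/2007", 0.90),
    (1, "A101", "11/30/2007", 1.20),
    (2, "A101", "11/30/2007", 0.90),
    (3, "A101", "11/30/2007", 0.80),
]
conn.executemany(
    "INSERT INTO flattable (EmployeeID, Branch, TDate, Amount) VALUES (?,?,?,?)",
    rows)

# Latest row per (EmployeeID, Branch, TDate): the one with the highest id.
latest = conn.execute("""
    SELECT EmployeeID, Branch, TDate, Amount FROM flattable t1
    WHERE id = (SELECT MAX(id) FROM flattable t2
                WHERE t2.EmployeeID = t1.EmployeeID
                  AND t2.Branch = t1.Branch AND t2.TDate = t1.TDate)
    ORDER BY EmployeeID""").fetchall()
for row in latest:
    print(row)  # employee 1 keeps the $1.20 entry, the later of its two rows
```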
|
OPCFW_CODE
|
German counterpart of Reddit
I'm about to brush up on my German, which I learned for eight years in my childhood and teenage years, and most of which I have happily managed to forget.
All the standard German language courses I have checked are extremely boring, so the plan is that I will find some social networking website in German, first to read, then to participate in discussions that I have a genuine interest in, that is, on subjects other than the language itself :) — check the German subreddit to see what I mean.
Ideally that would be a site like reddit.com (Wikipedia article) but entirely in German. Are there such sites?
Social news site might be the best label for what I am looking for.
Similar sites in other countries:
Spain: meneame.net (Wikipedia article)
Poland: wykop.pl (Wikipedia article)
Hello and welcome to [german.se]! I have taken the liberty to reformat your question a little. If you have objections to my edits, you can roll back in revision history
What's wrong with http://www.reddit.com/r/de/?
Sooo. To clarify, what you are looking for is not actually reddit-like, but more facebook/twitter-like with a bit of reddit??
@karoshi and Vogel612: I am looking for a reddit-like clone that 1) is not about the German language or Germany (in general) itself, as these, while being very interesting topics, were studied thoroughly by me in the past, 2) is written, ideally entirely, in German, 3) offers a variety of topics, and 4) will allow me to participate in discussions when I reach that level
@matcheek so http://www.reddit.com/r/de/ seems exactly what you're looking for.
@karoshi: yes, you're right. I was referring to reddit.com/r/german in my previous comment, not to reddit.com/r/de, so that is the closest match to what I requested. Thanks.
Combine that with something that is relevant to your other interests:
Interested in IT => Golem.de
Golem.de + wanting to participate in discussions in the German language => http://forum.golem.de/
I'd suggest you find a site that is relevant to your hobbies, favorite sports, etc. and join its forum.
I am not really sure if there exists a better site for your needs than the German subreddit.
Perhaps you can just like German news pages on Facebook that are related to your interests (sports, politics, whatever). As you will know, there are plenty of discussions below Facebook posts, although neither the discussions nor the language are of high quality.
If you need help finding a German Facebook site matching your interests let me know.
|
STACK_EXCHANGE
|
Recently I helped out a friend on Twitter with an Azure PowerShell issue they were having logging in to their subscriptions with the ‘Az’ PowerShell module.
It should also be noted that you can easily use other tools like Windows Terminal to access CloudShell or access it directly from https://shell.azure.com
However, this scenario is for where PowerShell is required locally and, more importantly, where you need to stop SSO (Single Sign-On) on the device from simply bypassing the Azure AD login pages, because you need to log in as a different user or to a different tenant etc…
Once they had confirmed my proposed solution/idea fixed the issue with not being able to log in to Azure via PowerShell and being able to access what they wanted, I thought it would be rude not to share my useful tip with the rest of you; so here we go.
How I Always Login To Azure PowerShell
Below I will share with you how I login to Azure via the ‘Az’ PowerShell module every time without fail to avoid access issues.
1. Open An In-Private Browser Window
N.B. make sure you don’t have any other In-Private tabs logged into Azure or Office 365 open at the same time in the same browser client when doing this.
2. Browse To: https://portal.azure.com
3. Login To The Azure Account You Want To Access Via PowerShell
4. Confirm You Are Logged In To The Correct Azure Account
5. Open PowerShell
6. Run the command: Login-AzAccount -UseDeviceAuthentication
7. You Will Be Given Instructions To Follow To Complete The Login
8. In The In-Private Browser Tab You Have Open Browse To: https://microsoft.com/devicelogin OR https://aka.ms/devicelogin
N.B. It’s covered in red on my screenshot (for security reasons)
9. Enter The Code Provided By PowerShell
10. Confirm The Account To Login To PowerShell With
N.B. This is the account that is already signed in from earlier
11. You Should Then See A Confirmation Page Stating You Are Logged Into Azure PowerShell
12. You Can Now Run Az PowerShell Commands
Other Useful Az PowerShell Commands
- Select-AzSubscription -SubscriptionId ‘SUBSCRIPTION-ID-GUID’
N.B. Use Get-AzSubscription to get a list of all subscriptions and their IDs in the AAD Tenant you have logged into and have access to (as shown in step 12)
About the Author:
Hi, I’m Jack Tracey, I am 26 and live in West Sussex, England, United Kingdom.
I am currently a Solutions Architect for an MSP based in the UK (headquartered in London). I am responsible for assisting customers migrate, build & expand to mainly Azure IaaS & PaaS services as well as assisting with the various networking elements required when moving to the cloud.
I am also a co-founder of the Sussex Azure User Group where we host monthly meetups to discuss all things Azure in a safe, friendly and chilled environment. Check out our website at https://sussexazure.uk or follow us on Twitter at @SussexAzure
Tracey, J. (2019). Logging In To Azure PowerShell. Available at: https://jacktracey.co.uk/how-tos/logging-in-to-azure-powershell/ [Accessed: 6th January 2020].
|
OPCFW_CODE
|
The CDC reports that over 4 million vaccine doses have been administered in the U.S., far short of the government’s goal of 20 million by the end of 2020. WATCH FULL EPISODES: http://abc.go.com/shows/good-morning-america Visit Good Morning America’s Homepage: https://www.goodmorningamerica.com/ #GMA #COVID19 #Coronavirus #Pandemic
The Centers for Disease Control and Prevention issued a new warning after daily virus cases nearly doubled in just over a month from 100,000 on Oct. 30 to 196,000 on Dec. 2. WATCH THE FULL EPISODE OF ‘WORLD NEWS TONIGHT’: https://bit.ly/3gaSdMB WATCH OTHER FULL EPISODES OF WORLD NEWS TONIGHT: http://abc.go.com/shows/world-news-tonight WATCH WORLD NEWS TONIGHT ON […]
U.S. Surgeon General Dr. Jerome Adams and the vice president’s wife also received the vaccination to encourage Americans to get vaccinated. #ABCNews #BreakingNews #Pence #COVID19Vaccine #COVID19 #OperationWarpSpeed #Pfizer
Dr. Anthony Fauci shares the latest facts on the vaccine, including when we will know how long immunity lasts. LEARN MORE: Coronavirus live updates: Fauci calls vaccine rollout ‘bittersweet’: https://abcn.ws/3gORBfS
California, which has set a new record for coronavirus infections, is preparing for new strict stay-at-home orders.
Plus, a new report reveals President Trump considered a military strike against Iran days after losing the election and Hurricane Iota batters Central America.
The Centers for Disease Control and Prevention stopped releasing new health guidance on COVID-19 after a change from the Trump administration as cases are increasing. READ MORE: https://abcn.ws/33lMkHI #ABCNews #COVID19 #CDC
Dr. Anthony Fauci told The Washington Post that the U.S. is “in for a whole lot of hurt” and “could not be positioned more poorly” as families head inside and prepare to celebrate the holidays. WATCH THE FULL EPISODE OF ‘WORLD NEWS TONIGHT’: https://bit.ly/385uGL5
|
OPCFW_CODE
|
My hobby project is to create a website for my girlfriend's business. I don't want to reveal too much but the website will need to display the usual corporate information like services available, contact details, previous client testimonials etc. There will also need to be a section where potential clients can view available timeslots and apply for a booking with a consultant at any of those timeslots. This will be the hard part.
I don't want a username login sorta thing as that's too complicated. I think it will be enough to show an Outlook-style date/time calendar showing what times are available and allowing the user to select from the remaining timeslots, providing their name and contact details so a consultant can call them back and arrange the booking. Once the booking is arranged, the consultant will have to update the system so that the timeslot is shown as unavailable.
As there is a certain amount of dynamic data involved in the site, I'll use Python and a web framework to make serialisation to a database easier. After shopping around I have found many people happy with Django. The needs of this website are very modest, so when people complain about Django not holding up under heavy load those arguments don't really apply here. I'm all about building this site rapidly and robustly.
I have decided that Django is the tool for the job as I have no need for AJAX functionality, just basic client/server communication with a little pizazz on the client end to make it look nice.
(edit) In further reading it appears these are pretty different tools: Pyjamas is a front-end development tool and Django is a total solution. All the better for me.
First step: Django Tutorial
I installed Django 1.0.2 from the Ubuntu repo and some of the other Django related bits and pieces that sounded interesting. I'll stick with SQLite for now until I need a proper RDBMS. I see that Django is dependent on JQuery which is good- I was intending to use that to flash up the site.
I have set up a Pydev project and tried to run the django-admin.py command to create a Django site in that location but it can't find the file django-admin.py. First I had to find out how to search the filesystem for a file (find -iname "filename") then I had to find out how to add that location to the system path.
Now the django-admin.py file is on the system path but when I cd to the project folder and run the command I get access denied. Next I sudo the command and it says it can't find the command. WTF?
So I find the file and edit it and the first line (which I believe indicates which binary should execute it) says "#!/usr/bin/env python". I had a look and /usr/bin/env doesn't exist! No idea why the default installation of Django would reference Python from a location that doesn't exist. Maybe I'm supposed to make a symlink there that points to my Python interpreter of choice? I'll learn how to create symlinks then try it out.
I don't have to create a symlink, it turns out that /usr/bin/env is some sort of environment variable list allowing me to change the location of the Python executable easily. It's a level of indirection so that the location of Python doesn't have to be hardcoded into .py files.
After much toiling around I have found that I can get it to work by running "/usr/bin/python django-admin.py createsite xxx". Not sure why I have to manually invoke Python, I would have thought it would automatically be associated with .py files? Anyway, that was needlessly difficult but I finally have a site up. I'll continue the Django tutorial and get started on the real work.
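The permission-denied and "can't find the command" struggle above comes down to two things: the execute bit and the shebang line. A minimal sketch of the mechanism (the `demo.py` file name and `python3` interpreter are illustrative, not the actual django-admin.py setup):

```shell
# Create a script with an env-style shebang, as django-admin.py has.
cat > demo.py <<'EOF'
#!/usr/bin/env python3
print("hello from env")
EOF

# Invoking the interpreter explicitly always works, exec bit or not:
python3 demo.py

# Running it directly fails until the execute bit is set:
sh -c './demo.py' 2>/dev/null || echo "permission denied without +x"

chmod +x demo.py   # grant the execute bit
./demo.py          # now /usr/bin/env locates python3 on PATH and runs it
```

This is why `/usr/bin/python django-admin.py ...` worked while `django-admin.py` alone did not: `/usr/bin/env` is a program that looks up the named interpreter on your PATH, but the kernel only consults the shebang when the file itself is executable.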
|
OPCFW_CODE
|
PicoScope 7 Software
Available on Windows, Mac and Linux
I2C (Inter Integrated Circuit) is a low-speed serial data protocol, commonly used to transfer data between multiple components and modules within a single device.
Developed in the early 1980s by Philips Semiconductors (now NXP), I2C employs 2 signal wires to transfer “packets” of information between one or more “master” devices such as microcontrollers, and multiple “slave” devices such as sensors, memory chips, ADCs and DACs.
Multiple “master” and “slave” I2C devices are connected to the bus using two lines: SDA (serial data) and SCL (serial clock).
Signaling voltages are typically 0 V for logic low and +3.3 V or +5 V for logic high.
Pull-up resistors keep both lines at logic high level when the bus is idle.
I2C bus speeds range from 100 kbit/s in Standard mode, through 400 kbit/s in Fast mode and 1 Mbit/s in Fast mode Plus, to 3.4 Mbit/s in High Speed mode.
Each device on the bus is recognised by a unique 7-bit or 10-bit address.
Data is transferred in “packets”, which include the address of the device, a read/write command, acknowledgements and the data being transferred.
The diagram shows the structure of a single packet of I2C data.
At the start of a packet, a master device takes control of the bus by driving SDA low while SCL remains high. This indicates that a message will follow.
Next a 7 (or 10) bit address is transmitted followed by a R/W bit to indicate whether it is a read (1) or write (0) instruction.
The addressed slave device then transmits an acknowledge (ACK) bit by pulling the SDA line low. If the line remains high, the master can infer that the slave did not recognise the address and corrective action needs to be taken.
After the address is acknowledged by the slave, the master continues to generate the clock and depending on the R/W bit either the master or the slave will send data over the bus. After each byte of data sent, an ACK is generated by the receiving device.
The end of packet is recognised by the SDA line going from low to high when the SCL is already high.
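The address phase described above can be sketched numerically: on the wire, the 7-bit address occupies bits 7 to 1 of the first byte and the R/W flag occupies bit 0. A small illustration (the 0x48 address is an arbitrary example, such as a typical temperature sensor):

```python
READ, WRITE = 1, 0

def address_byte(addr7, rw):
    """Pack a 7-bit I2C address and R/W flag into the first byte of a frame."""
    assert 0 <= addr7 <= 0x7F and rw in (READ, WRITE)
    return (addr7 << 1) | rw

def split_address_byte(byte):
    """Recover (address, R/W flag) from the first byte of a frame."""
    return byte >> 1, byte & 1

first = address_byte(0x48, READ)  # read from an example device at 0x48
print(hex(first))                 # 0x91: 0x48 shifted left one place, R/W = 1
print(split_address_byte(first))  # (72, 1): address 0x48, read instruction
```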
I2C serial decoding is included in PicoScope as standard.
To get started, open Serial decoding from the tools menu.
Select I2C from the selection of serial decoders (left).
In the Configuration page (middle), select the data and clock channels.
The Threshold and Hysteresis will be detected automatically, but can be adjusted, along with the Invert toggle and the Bus Speed if it is not standard mode.
On the Display page (right), give your decoder a name, and set the formats for both Graph and Table displays - Hex/Binary/Decimal/ASCII or Off. Your decoding selection can also be adjusted to a single or all buffers, and/or between rulers if setup.
The table shows a list of the decoded frames, including the data and all flags and identifiers. You can set up filtering conditions to display only the frames or data you are interested in, search for frames with specified properties, or use a Link File to translate frame ID and hexadecimal data into human-readable form.
For more information on PicoScope's serial decoding capabilities, see Serial bus decoding and protocol analysis - overview.
|
OPCFW_CODE
|
Better harmony in version two of collaboration software
Monday, May 6, 2002
NEW YORK -- Imagine working on a business proposal on your computer. While you're typing, somebody else's text also appears, paragraphs away from your cursor.
It's not a ghost in the machine, just your colleague on the other side of the country, pitching in with her part. You're creating and editing the document together, using Groove, a sophisticated but very accessible piece of software.
When it was first released a year ago, Groove was billed as the program that would change the way we work.
But it didn't quite live up to its promises. It was slow and the cursor sometimes jumped around unpredictably when several users were editing a document together, making the whole thing seem more like a poorly refereed shouting match than orderly collaboration.
Groove 2.0, released a few weeks ago, addresses those problems and adds some attractive features.
The brainchild of Lotus Notes creator Ray Ozzie, Groove is different from most other collaboration programs, such as those from IManage Inc. and Intraspect Software Inc., in that there is no need for a central computer or server to run things.
Users simply install the software on their workstations. The program shuttles data between PCs over the Internet or an office network, keeping the document updated so participants see each other's changes as they happen.
Not just for businesses
While Groove is aimed at the corporate world, just like its rivals, it can easily be adopted by students and home users who want to get away from passing e-mails back and forth when they want to get a document right.
The first version was unwieldy because every time a user made a change to, for example, a text document, Groove would send the whole updated document to all the other participants. This quickly ate into computer performance and burdened networks.
Groove 2.0 doesn't send the entire document every time, just the changes. This means that the program is usable even over a dial-up modem. Faster connections, such as cable modems or office networks, are still preferable of course.
In the Business News department of The Associated Press, we're now using the software to track stories in a small workgroup. After just a few weeks of "grooving," it's hard to imagine going back to using ledgers to do the same thing.
Groove has proven useful, even when we only use a few of its many tools.
There's a chat function that works much like AOL Instant Messenger -- though without an audible alert. There's a sketchpad that works like a virtual whiteboard. You can even host a formal "meeting," with an agenda, a list of action points and minutes.
The new Groove also works with Microsoft Word and PowerPoint, allowing users to collaborate on documents and presentations in the de facto standard office applications.
Only one at a time
Editing in Word isn't quite as elegant as Groove's built-in writing pad because only one user can write at a time. That may be a plus for large groups, however, where users might otherwise get in each other's way.
Microsoft is an investor in Beverly, Mass.-based Groove Networks Inc., so it's quite likely we will see further integration with Microsoft products in future versions of Groove.
Other changes in the new version are intended to satisfy the demands of corporate network administrators. For instance, a company computer can now maintain a directory of in-house Groove users, making it easier to connect with colleagues.
We had no problems connecting through the company firewall, but editors at another company had trouble getting out through theirs. A Groove Networks spokesman said the software is designed to work with all firewalls, but can have problems in rare cases.
Speaking of security, all the data Groove sends between computers is encrypted to prevent eavesdropping.
Groove can be downloaded free for personal use from Groove.net.
The free edition has some limitations (you can only have five meetings running at the same time, for instance) but is adequate for home users. However, it's about 30 megabytes in size, so don't try to download it over a dial-up modem.
The standard version costs $49, and a beefed-up professional edition $99. Groove only works with Windows PCs.
On the Net
Copying VHDs locally to machines in Azure
This was from when RemoteApp didn’t support creating an image directly from VM.
- A1 Std machine, copying a 127GB VHD to a local drive (not temp D:\) via azcopy took 6.5 hours
- A4 Std machine, copying a 127GB VHD to D:\ via azcopy took 5 mins 20 secs
- A4 Std machine, copying a 127GB VHD to D:\ via save-azurevhd took 10 mins 39 secs
- A4 Std machine, copying a 127GB VHD to a local drive (not Temp) via azcopy took 25 mins 21 seconds
- A4 Std machine, copying a 127GB VHD to a local drive (not Temp) via save-azurevhd took 52 mins 11 seconds
Copying files into a VM via either command is very CPU intensive due to the threading it uses, so utilize a larger box no matter your method. The hands-down winner is azcopy into the local temp D:\ (which avoids an extra storage account hop). However, if you want a status bar, utilize save-azurevhd.
Copying VHDs between Storage Accounts
Due to a storage cluster issue in AU East, it has been advised to create new storage accounts and migrate VHDs to the new storage accounts. MSFT had provided us with a script, but it was taking hours/days to copy (and kept timing out).
Instead, we spun up a D4v2 machine in the AU East region, and I was able to have 6 azcopy sessions running all at once with the /SyncCopy command. Each was running >100MB/sec whereas other async methods were running at <5MB/sec. You will see a ton of CPU utilization during this, but the faster the machine, the better. Additionally, azcopy supports resume. To allow multiple instances of azcopy to run on a machine, utilize the /Z:<folderpath> switch for the journal file.
Stop Azure Blob with Copy Pending
Prior to getting all our copies going with the /SyncCopy, we had a few that were running async. Unfortunately, after stopping that with a CTRL-C and having azcopy stop, the blobs still had a copy pending action on them. This resulted in errors when attempting to re-run the copy with /SyncCopy on a separate machine: HTTP error 409, copy pending.
To fix this, you can force stop the copy. As these were new storage accounts with only these VHDs, we were able to run it against the full container. However, MSFT has an article on how you can do it against individual blobs.
Set-AzureSubscription -SubscriptionName <name> -CurrentStorageAccount <affectedStorageAccount>
Get-AzureStorageBlob -Container <containerName> | Stop-AzureStorageBlobCopy -Force
As I was migrating my websites to a new host (I may blog about that later as it’s been an interesting ride), I had this lovely issue where one of my websites would go into an infinite redirect loop when sitting behind the Azure CDN (custom origin).
Of course, it worked fine for all pages except the root, and it also worked fine when it wasn't behind the Azure CDN. For whatever reason, adding a bit of code to the theme's functions.php seemed to work.
I then had to add in a manual redirect in nginx via the below. Still no idea why it doesn’t just “work” as it has before, but whatever. Now that it’s working, I should go back and figure out why it wasn’t with redirect_canonical…
rewrite ^ $scheme://www.test.com$request_uri? permanent;
Last night my SP3 decided to stop working with any Touch covers. They would work on other SP3s, just not this one. It definitely made work a lot of fun today.
Anyways, there is a “button reset” procedure that has worked in the past when it wouldn’t start. Turns out, it solved this problem too.
Solution 3 is the answer.
Ugh, this has taken me way too long to finally figure out/fix. I’ve been trying to wipe my Surface Pro 3 with TH2 – as I upgraded to RTM. However, I’ve had a bear of a time getting my USB key bootable.
Now, I’ve done it before, but for whatever reason previous ways haven’t been working. Turns out, there are 2 key things (one of which I was missing):
- GPT partitioning
- FAT32 formatting
To make it easier, you can use Rufus. Just make sure after you select the ISO, you reselect GPT and FAT32.
Once the key is formatted, you can boot from it either by restarting via advanced mode from within Windows or by holding the volume-down button when you turn it on.
*sigh* So much time wasted on this one.
Microsoft has finally enabled the ability to associate a reserved IP to an already created cloud service (VMs). This is great news as we have a few VMs that are externally accessible that were either built prior to this functionality or we just plumb forgot during build.
While logical, Microsoft doesn't mention that this will cause an outage, so it should be done during a normal change window. Sadly, while the IP change takes very little time, DNS updates typically have a 20-minute TTL.
Other items that cause small network blips that may require a downtime window (all V1):
- Adding new endpoints to a VM
- Adding subnets to an already created virtual network
We are utilizing SQL Backup to Azure blob and had a meltdown today where the log backups were erroring out, leaving us with 1TB files up in Azure that were locked. Needless to say it happened late last night, and so there were multiple hourly files in multiple folder structures all over our storage accounts. It took a bit, but the following script clears out all the locks on blobs within a container in all directories. Please use carefully and don't run it against your "vhds" container!
Also, it requires the Microsoft.WindowsAzure.Storage.dll assembly from the Windows Azure Storage NuGet package. You can grab this by downloading the command-line nuget file and running the below. Note, it will dump the file you need into .\WindowsAzure.Storage.<ver>\lib\net40
nuget.exe install WindowsAzure.Storage
Break lease Script Below – one line modification from https://msdn.microsoft.com/en-us/library/jj919145.aspx
$storageAssemblyPath = $pwd.Path + "\Microsoft.WindowsAzure.Storage.dll"
# Well known Restore Lease ID
$restoreLeaseId = "BAC2BAC2BAC2BAC2BAC2BAC2BAC2BAC2"
# Load the storage assembly without locking the file for the duration of the PowerShell session
$bytes = [System.IO.File]::ReadAllBytes($storageAssemblyPath)
[System.Reflection.Assembly]::Load($bytes) | Out-Null
$cred = New-Object 'Microsoft.WindowsAzure.Storage.Auth.StorageCredentials' $storageAccount, $storageKey
$client = New-Object 'Microsoft.WindowsAzure.Storage.Blob.CloudBlobClient' "https://$storageAccount.blob.core.windows.net", $cred
$container = $client.GetContainerReference($blobContainer)
#list all the blobs in the container including subdirectories
$allBlobs = $container.ListBlobs($null,1)
$lockedBlobs = @()
# Filter blobs that have lease status "Locked"
foreach($blob in $allBlobs)
{
    $blobProperties = $blob.Properties
    if($blobProperties.LeaseStatus -eq "Locked")
    {
        $lockedBlobs += $blob
    }
}
if ($lockedBlobs.Count -eq 0)
{
    Write-Host "There are no blobs with locked lease status"
}
if($lockedBlobs.Count -gt 0)
{
    Write-Host "Breaking leases"
    foreach($blob in $lockedBlobs)
    {
        try
        {
            $blob.AcquireLease($null, $restoreLeaseId, $null, $null, $null)
            Write-Host "The lease on $($blob.Uri) is a restore lease"
        }
        catch
        {
            if($_.Exception.RequestInformation.HttpStatusCode -eq 409)
            {
                Write-Host "The lease on $($blob.Uri) is not a restore lease"
            }
        }
        Write-Host "Breaking lease on $($blob.Uri)"
        $blob.BreakLease($(New-TimeSpan), $null, $null, $null) | Out-Null
    }
}
To get the optimal performance out of your Azure VMs running SQL servers, MS recommends using Storage Spaces and striping multiple Azure disks. The nice thing about storage pools in Storage Spaces is that they allow you to add disks behind the scenes without impacting the actual volume.
Now let's say you have a SQL AlwaysOn cluster (2+ nodes), and for performance reasons (IOPS) you realize that you need to add more disks. As Storage Spaces shows all disks (physical, virtual, and storage pools) across the whole cluster, it is possible you won't be able to simply add them due to a naming mismatch. Fear not though, it is still possible if you follow the steps below:
- Add the new disks to the VM
- Log into the VM
- Failover SQL to a secondary if the current VM is the primary
- Stop clustering service on the VM
- Run Get-PhysicalDisk to get the disk names
- Run Add-PhysicalDisk -StoragePoolFriendlyName <storagepool> -PhysicalDisks (Get-PhysicalDisk -FriendlyName <disks>)
- Run Update-HostStorageCache (if we don’t do this sometimes the volume resize doesn’t work)
- Run Resize-VirtualDisk -FriendlyName <diskName> -Size <size>
- Run Update-HostStorageCache (if we don’t do this sometimes the disk resize doesn’t work)
- Run Resize-Partition -Size <size> -DriveLetter <letter>
- Start the clustering service on the machine
- Failback SQL to the VM if required
Hopefully this helps someone as we were beating our heads in for quite a few days (along with MS).
We had an issue recently where an application was not properly getting disconnected from SQL during a failover of an AlwaysOn Availability Group (AOAG). Some background: The application was accessing the primary node, and after the failover the application continued to access the same node. Unfortunately, as it was now read-only, the app was not very happy.
Turns out it was due to the Read-Only configuration of the secondary. We had it set to "Yes", which allows any connections to continue to access the secondary with the assumption the application is smart enough to know it can only read. It appears that while using this setting, connections aren't forcefully closed, causing all sorts of issues.
Setting it to either “No” or “Read-Intent Only” properly severed the connections for us. Yay!
For more info.
Looking to update your Azure ILB endpoints, but are struggling with the Set-AzureEndpoint cmdlet? You should be using the Set-AzureLoadBalancedEndpoint cmdlet instead!
OpenVG - The Standard for Vector Graphics Acceleration
OpenVG™ is a royalty-free, cross-platform API that provides a low-level hardware acceleration interface for vector graphics libraries such as Flash and SVG. OpenVG is targeted primarily at handheld devices that require portable acceleration of high-quality vector graphics for compelling user interfaces and text on small screen devices - while enabling hardware acceleration to provide fluidly interactive performance at very low power levels.
OpenVG at a glance
OpenVG 1.0 is an application programming interface (API) for hardware accelerated two-dimensional vector and raster graphics. It provides a device independent and vendor-neutral interface for sophisticated 2D graphical applications, while allowing device manufacturers to provide hardware acceleration on devices ranging from wrist watches to full microprocessor-based desktop and server machines.
OpenVG 1.1, released on December 8th, 2008, adds a Glyph API for hardware accelerated text rendering, full acceleration support for Adobe® Flash® and Flash Lite 3 technologies, and multi-sampled anti-aliasing to the original OpenVG 1.0 specification. The new OpenVG specification is accompanied by an open source reference implementation and a full suite of conformance tests implemented by the Khronos Group.
The Benefits of an Accelerated Vector Graphics API for Small Screen Devices
Vector graphics are widely used on today's desktop through packages such as Flash and SVG. Handheld devices have an urgent need for the smooth and fluidly scalable 2D that high-quality vector graphics provide to create high-quality user interfaces and ultra-readable text on small display devices. Existing solutions have significant limitations. OpenVG addresses these limitations and provides additional tangible benefits:
- Low Power Consumption - An efficient 3D hardware accelerator reduces power consumption by up to 90% compared to a software engine
- Seamless Transition from Software to Hardware - Enables a seamless transition from efficient software rendering to hardware-accelerated high-quality 2D
- Scalability - Vector graphics provides easy scalability with high-quality rendering, including anti-aliasing, to different screen sizes without multiple bitmaps
- Accelerates Existing Formats - Designed to accelerate existing formats (e.g. Flash, SVG, PDF, PostScript, vector fonts, etc.)
- Games, Screensavers, Mapping, User Interfaces - Fast scalable anti-aliased vector graphics enables advanced user interfaces, mapping applications, games and screensavers
- Portable Content - Scalable vector graphics makes it easier to port content across devices and platforms
- Royalty Free - A royalty-free, cross-platform API facilitates rapid developer adoption and content creation
- SVG Viewers
OpenVG must provide the drawing functionality required for a high performance SVG document viewer that is conformant with version 1.2 of the SVG Tiny profile. It does not need to provide a one-to-one mapping between SVG syntactic features and API calls, but it must provide efficient ways of implementing all SVG Tiny features.
- Portable Mapping Applications
OpenVG can provide dynamic features for map display that would be difficult or impossible to do with an SVG viewer alone, such as dynamic placement and sizing of street names and markers, and efficient viewport culling.
- E-book Readers
The OpenVG API must provide fast rendering of readable text in Western, Asian, and other scripts. It does not need to provide advanced text layout features.
- Games
The OpenVG API must be useful for defining sprites, backgrounds, and textures for use in both 2D and 3D games. It must be able to provide two-dimensional overlays (e.g., for maps or scores) on top of 3D content.
- Scalable User Interfaces
OpenVG may be used to render scalable user interfaces, particularly for applications that wish to present users with a unique look and feel that is consistent across different screen resolutions.
- Low-Level Graphics Device Interface
OpenVG may be used as a low-level graphics device interface. Other graphical toolkits, such as windowing systems, may be implemented above OpenVG.
OpenVG API Design Philosophy
- Hardware Acceleration Abstraction Layer that accelerates Bézier curves and texturing can be flexibly implemented. This will allow accelerated performance on a variety of application platforms.
- Simplicity means that functions that are not expected to be accelerated in hardware in the near future were either not included, or included as part of the optional VGU utility library.
- OpenGL-style syntax is used where possible, in order to make learning OpenVG as easy as possible for OpenGL developers.
- Extensibility makes it possible to add new state variables in order to add new features to the pipeline without needing to add new functions.
- Focus on Embedded Devices like mobile phone, PDA, game console, DVR, DVD, car navigation, etc.
- Conformance Tests are expected to be available Q4 2005.
- Coordinate Systems and Transformations (Image drawing uses a 3x3 perspective transformation matrix)
- Viewport Clipping, Scissoring and Alpha Masking
- Image Filters
- Paint (gradient and pattern)
The VGU Utility Library
- Higher-level Geometric Primitives
- Image Warping
OpenVG Rendering Pipeline
The OpenVG pipeline is the mechanism by which primitives are rendered. Implementations are not required to match the ideal pipeline stage-for-stage; they may take any approach to rendering so long as the final results match the results of the ideal pipeline within the tolerances defined by the conformance testing process.
- Stage 1: Path, Transformation, Stroke, and Paint
- Stage 2: Stroked Path Generation
- Stage 3: Transformation
- Stage 4: Rasterization
- Stage 5: Clipping and Masking
- Stage 6: Paint Generation
- Stage 7: Image Interpolation
- Stage 8: Blending and Antialiasing
A request for a nonexistent author now returns a 404 error.
Hey everyone!
I did this fix to return 404 when an author didn't exist.
Hope to be helpful.
Thanks,
Coverage increased (+0.03%) to 96.107% when pulling a0d95d7d67ad173016c714cfae970bcf3fac064c on felipefarias:author_not_found into baa3eccab999d437a5549cf3f6181f5c616a3700 on nephila:develop.
Coverage increased (+0.003%) to 96.084% when pulling 0cd2335e101a58fac9e0dbb941b5dfe87d9d0616 on felipefarias:author_not_found into baa3eccab999d437a5549cf3f6181f5c616a3700 on nephila:develop.
@felipefarias thanks for contributing this! Would you add a test for this? Just hitting the view and checking the status code is fine
And you may examine system/lexer/pre-load; the default value is none, but if you put a function there it will be called during the scanning (on the REPL only, no effect on a compiled exe)
>> system/lexer/pre-load: func [s] [replace s "*" "+"]
== func [ ]
>> 3 * 5
== 8
It can be used to emulate macro-style replacement on the REPL. There are better examples of pre-load use from @toomasv and others.
@endo64 I would argue against suggesting
pre-load to newcomers (and pretty much everyone else), not until we get a decent reader macros support.
And you don't need compilation; expand from the console is enough for quick experiments.
One can make action and native values. This is basically because the toolchain itself uses make to pre-define all actions and natives, and it was not deemed necessary or useful to forbid this feature after the initialization phase. However, (1) only existing actions and natives can be (re)made, and (2) the function spec has to be supplied, and if you get that wrong, a crash may occur. In "my" spec document, I have added wording to the effect that this is not recommended. Apparently, R3 allowed it, but R2 did not.
Something to note somewhere: a comma , is allowed as a replacement for the decimal point. Now, because floating-point numbers starting with a decimal point are allowed, and on the other hand a decimal point is also allowed in a word! value, one has the following possibilities:
>> b: load "a,1" print [mold b type? b]
[a 0.1] block
>> b: load "a.1" print [mold b type? b]
a.1 word
It appears that a number starting with a comma does not need to be separated from a preceding word by whitespace.
The usual check is if <refinement>, but when the refinement carries further arguments, one sometimes sees a check on those directly: if <optional-argument>. I could cite various lines in the Red toolchain code. This works most of the time, since optional arguments which are not present in the call are initialized to none. However, it breaks when the type of the optional argument is logic! and it IS present in the call as false.
@meijeru my point was - why separation between refinements and their arguments exist in the first place, if, as far as I know, most refinements use only one argument? You can, in theory, just use refinement value itself as an argument.
func [/refinement argument][if refinement [use argument]] ; refinement is just a flag, argument bears the actual value
func [/refinement][if refinement [use refinement]] ; refinement carries the value AND serves as a flag of its presence
Your example is a "desync" of refinement and its argument. In what I described above there's nothing to synchronize in the first place, but cases involving none! are still tricky: one checks the refinement as logic!, then checks and uses its argument(s), which, by design, may or may not be an actual value.
See VID.red. It is the only example in the whole codebase with a refinement (/parent) that has 2 arguments... How do I know? I did an exhaustive search by program, which is just a variation on programs, like the concordance, that I have previously written.
Should line width here be hyphenated instead?
Is it worth linking to the url wiki definition (or its translations) in the url documentations?
I mean this line:
A url! value represents a reference to a network resource and allows for directly expressing Uniform Resource Locators.
deep-reactor! tracks changes not only in series but also in composite scalars (pairs, tuples, date, time), in contrast to reactor!, which does not.
>> a: object [b: "test"]
>> find a 'b
*** Script Error: find does not allow object! for its series argument
*** Where: find
*** Stack:
>> ? find
USAGE:
    FIND series value
DESCRIPTION:
    Returns the series where a value is found, or NONE.
    FIND is an action! value.
ARGUMENTS:
    series [series! bitset! typeset! any-object! map! none!]
[...]
Should find accept objects or not?
#indirectives in this document?: http://red.qyz.cz/red-system-from-red.html
package realtime
import (
"crypto/sha256"
"encoding/hex"
"strconv"
"github.com/Jeffail/gabs"
"github.com/RocketChat/Rocket.Chat.Go.SDK/models"
)
type ddpLoginRequest struct {
User ddpUser `json:"user"`
Password ddpPassword `json:"password"`
}
type ddpTokenLoginRequest struct {
Token string `json:"resume"`
}
type ddpUser struct {
Email string `json:"email"`
}
type ddpPassword struct {
Digest string `json:"digest"`
Algorithm string `json:"algorithm"`
}
// RegisterUser registers a new user on the server. This function does not need a logged-in user.
// The registered user gets logged in to set its username.
func (c *Client) RegisterUser(credentials *models.UserCredentials) (*models.User, error) {
if _, err := c.ddp.Call("registerUser", credentials); err != nil {
return nil, err
}
user, err := c.Login(credentials)
if err != nil {
return nil, err
}
if _, err := c.ddp.Call("setUsername", credentials.Name); err != nil {
return nil, err
}
return user, nil
}
// Login a user.
// If a token is provided it is used to resume the session; otherwise the email and the password must not be empty.
//
// https://rocket.chat/docs/developer-guides/realtime-api/method-calls/login/
func (c *Client) Login(credentials *models.UserCredentials) (*models.User, error) {
var request interface{}
if credentials.Token != "" {
request = ddpTokenLoginRequest{
Token: credentials.Token,
}
} else {
digest := sha256.Sum256([]byte(credentials.Password))
request = ddpLoginRequest{
User: ddpUser{Email: credentials.Email},
Password: ddpPassword{
Digest: hex.EncodeToString(digest[:]),
Algorithm: "sha-256",
},
}
}
rawResponse, err := c.ddp.Call("login", request)
if err != nil {
return nil, err
}
user := getUserFromData(rawResponse.(map[string]interface{}))
if credentials.Token == "" {
credentials.ID, credentials.Token = user.ID, user.Token
}
return user, nil
}
func getUserFromData(data interface{}) *models.User {
document, _ := gabs.Consume(data)
expires, _ := strconv.ParseFloat(stringOrZero(document.Path("tokenExpires.$date").Data()), 64)
return &models.User{
ID: stringOrZero(document.Path("id").Data()),
Token: stringOrZero(document.Path("token").Data()),
TokenExpires: int64(expires),
}
}
// SetPresence sets the default presence status of the logged-in user.
func (c *Client) SetPresence(status string) error {
	_, err := c.ddp.Call("UserPresence:setDefaultStatus", status)
	return err
}
Our Dynamixel system offers the best and most optimized environment for building a robot system. Robot developers use Dynamixel actuators to build robots in various forms, such as manipulators, robot hands, mobile robots, and articulated robots, including humanoid robots.
Many robot engineers long to build humanoid robots that resemble the form of humans, and it is also the most built robot that many researchers and developers using Dynamixel actuators make.
Our commercial humanoid platforms using Dynamixels, which first started in 2003 with the Cycloid 2, have established a full-scale and firm robot system for research with the ROBOTIS OP.
The open-source humanoid platform ROBOTIS OP, which opens all its CAD drawings, circuit diagrams, and source code, is not only used for research in kinematics, dynamics, gait, image recognition, and HRI, but also used as a platform in RoboCup, FIRA, and other robot competitions requiring artificial intelligence. The ROS system is applied to the latest ROBOTIS OP3 version, released in 2017, to offer users a varied and easy development environment.
THORMANG was first presented at the 2015 DARPA Robotics Challenge, a disaster relief robot competition. It was developed based on our high performance actuator, the Dynamixel Pro, and is the world’s first open source full sized humanoid platform that can be put into an actual field. It is highly recognized as a learning and research platform robot thanks to its open design and sources.
Our open-source based humanoid platforms have become a foundation stone in building up the Dynamixel ecosystem and have become the benchmark for many robotics researchers.
ROBOTIS has been advancing its humanoid robots for a long time, allowing us to take a step closer to the future we had only dreamed of.
TurtleBot3, which was developed in collaboration with Open Robotics and released in 2017, is an official ROS education platform. It is a mobile robot with a modular structure developed for research based on ROS and our Dynamixel X series. The advantage of the product is that it is low cost and expandable, and its modularity allows easy maintenance. The software is also open source, allowing users to freely share and use its source. The TurtleBot3 can be used in any field for study and research that applies SLAM, navigation, autonomous driving, and mobile systems.
ROBOTIS Manipulator-H, a robotic arm system based on the Dynamixel Pro, is a universal system that can be operated in various environments with its lightweight, modular structure and allows easy maintenance and transformation. It is based on open source and can be used for learning and research in kinematics and manipulation. Thanks to its human-arm-sized hardware configuration, it can be used as a robot system that drives aircraft and automobiles, and can be used as a remotely operated robot at a disaster scene. Its powerful output (compared to its weight) and portability make it easy to utilize when working in a narrow space or to construct a system for performing at exhibition halls.
Based on the compact Dynamixel MX series, the ROBOTIS Hand is easy to use and mount on humanoids or mobile robots. When attached to Dynamixel based robotic arm, it can be connected in a daisy chain topology for immediate operation without additional work.
Open Source Community and Dynamixel Ecosystem
ROBOTIS values being able to provide robot researchers and developers a system to create and build robots and aims to be a platform provider that works closely with various open platform sponsors. To successfully do so, we commercialize easy-to-use, high-performance Dynamixel actuators and platforms and continue to expand our product line-ups through ongoing research and development. In addition to this, by opening our programming software to the public, we provide an opportunity for beginners to easily implement and operate robots of their dreams.
Through regular collaborations with the open source community, ROBOTIS strives to build a Dynamixel ecosystem and provide a robotics contents and solution most essential to many robot researchers and developers.
Date of Degree
Communication Technology and New Media | Cultural Resource Management and Policy Analysis | Digital Humanities | Interactive Arts | International and Intercultural Communication | Other Arts and Humanities
Data Analysis and Visualization, Interactive Visuals, Tableau, Carto
I have always been fascinated by how ideas are spread. Often, ideas are chosen to serve an immediate purpose, and there is an expectation that the choice will matter only insofar as it serves to achieve the desired goal. However, once an idea takes off, it becomes sufficient in itself to disseminate its message. When I first heard about the rubric “Ideas Worth Spreading” in connection with Technology, Entertainment, and Design (TED) conferences, I had an emotional response because I was always trying to get involved in those three categories. My fascination with the question of what makes some ideas attractive to a wide range of people motivated me to investigate the world of TED and particularly local TEDx events.
The goal of this project is to visualize, measure, and analyze the growth and dispersal of selected aspects of TEDx events between 2008 and 2019. A TEDx event is a local gathering where live TED-style talks and performances are shared with the community. TEDx events are fully planned, unique, and independently coordinated; but all of them have features in common. Their diverse topics reference multiple issues from a variety of disciplines. Just like TED events, TEDx events lack any religious, commercial, or political content and do not focus solely on entrepreneurship, technology, or business, although their main topics come from those categories.
The project was inspired by a huge data-analysis initiative pioneered by Cultural Analytics Lab called Elsewhere, which maintains numerous datasets of contemporary global cultural activities that were collected and measured by various practitioners (Manovich, 2018).
TED Conferences LLC, a nonprofit company dedicated to hosting short (around eighteen-minute) talks, began by organizing conferences that focused mostly on technology, entertainment, and design. As TED grew, its range of topics expanded to encompass innovation, science, business, global issues, the arts, and more, bringing together audiences and speakers from every walk of life and cultural origin who seek a deeper understanding of contemporary culture.
Between 2009 (the year the TEDx program was launched) and 2018, TEDx events were as diverse as the cities that hosted them. They came to form a rich catalog of contemporary cultural trends whose analysis can inform a wide variety of queries about the world we live in. Drawn to the wealth of data that surfaces in an examination of archived and live TEDx events, I began wading through their affordances, looking for patterns and visualizing similarities and differences in TEDx variables specific to different cities from 2009 to 2018. Approaching these variables through data visualization allowed me not only to trace relationships between articulation of ideas and their reception but also to represent the interconnections of these relationships in a graphical interface.
The visualizations I created thus explore how the phenomenon of TEDx events brings together thousands of thinkers to share their experiences and opinions about the themes and factors that shape our world today, sometimes pointing to the world of tomorrow.
Liamis, Antonios, "Visualizing TEDx Events: Ten Years of “Ideas Worth Spreading”" (2020). CUNY Academic Works.
Communication Technology and New Media Commons, Cultural Resource Management and Policy Analysis Commons, Digital Humanities Commons, Interactive Arts Commons, International and Intercultural Communication Commons, Other Arts and Humanities Commons
App-V 5: On Streaming
Now that Hotfix 4 for App-V 5.0 SP2 has been out for several weeks, many of you have likely already seen our Updated Guidance for Performance Recommendations now available on TechNet (http://technet.microsoft.com/en-us/library/dn659478.aspx). It almost goes without saying that the new stream-to-disk model of populating individual state-separated sparse files at the native NTFS level yielded an approach to streaming that came with some caveats and, albeit at first, a few glitches.
It was a big change. The move from an isolated virtual drive to a state-separated, immutable package cache directory meant that there would have to be some re-engineering, and with that came changes, including changes at the philosophical level.
The basic concepts remain the same. They are just implemented differently.
In addition, a switch to the use of standard protocols already deeply rooted in Windows occurred, completing the change of the streaming landscape. Many customers have asked me to clarify some of the new performance recommendations and the rationale behind some of the choices.
SMB or HTTP?
Some concepts require us to rethink how we implement our packages. For example, intra-datacenter streaming, especially for Shared Content Store scenarios, will yield much better results with SMB, or file-based, streaming. HTTP, or web streaming, will be better suited to standard streaming, especially to devices external to the data center. In the case of Internet-facing scenarios, especially where data needs to be encrypted, HTTPS would be the way to go.
Feature Block 1
When the package needs to stream content for a first launch, the StreamMap.xml file (which is already cached upon publishing) will be parsed.
Once all of the files listed in FB1 (the actual element is PrimaryFeatureBlock) are downloaded, the application can proceed to launch. Our updated guidance mentions the concept of automatic stream faulting upon first launch (which is what always happens in SCS) where, if there is no FB1 (as in the example below), each file will be pulled down on demand to populate the sparse file and loaded into memory. This performance hit is noticeable especially in a scenario where HTTP streaming is being used; it does not rear its ugly head as drastically with SMB streaming, which is often going to be faster, especially over higher-speed LAN links.
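As a rough illustration of that parsing step, here is a minimal Python sketch that extracts the FB1 file list from a StreamMap-like document. Note that the XML layout and file names below are simplified stand-ins for illustration, not the exact App-V StreamMap.xml schema:

```python
# Sketch: list the files an App-V client would prioritize for first launch
# by reading the PrimaryFeatureBlock (FB1) entries from a StreamMap-style
# XML document. The schema here is illustrative only.
import xml.etree.ElementTree as ET

SAMPLE_STREAM_MAP = """\
<StreamMap>
  <FeatureBlock Id="PrimaryFeatureBlock">
    <File Name="Root/app.exe"/>
    <File Name="Root/core.dll"/>
  </FeatureBlock>
  <FeatureBlock Id="PublishingFeatureBlock">
    <File Name="Root/icon.ico"/>
  </FeatureBlock>
</StreamMap>
"""

def fb1_files(stream_map_xml):
    """Return the file names listed under the PrimaryFeatureBlock."""
    root = ET.fromstring(stream_map_xml)
    return [
        f.attrib["Name"]
        for block in root.findall("FeatureBlock")
        if block.attrib.get("Id") == "PrimaryFeatureBlock"
        for f in block.findall("File")
    ]

print(fb1_files(SAMPLE_STREAM_MAP))
```

With no PrimaryFeatureBlock present, the list comes back empty, which mirrors the "no FB1, stream-fault everything on demand" behavior described above.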
Remember: this is also not a simple file retrieval process. The files are inside a compressed package, so extraction also has to occur, and that is going to be somewhat less costly with SMB than with HTTP.
When the application launch succeeds, the PreviouslyUsed value in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AppV\Client\Streaming\Packages\<GUID>\<GUID> will be marked as 1. This also means that if you are using the default configuration, the Autoload settings mean that a background load task will be queued to begin. This will not happen in SCS mode: no Autoload, no FB1.
Removing/Updating Package Content with Opened Packages
We use metadata to govern version and package lineage from a streaming perspective, and as a result many administrators are choosing to forgo appending a version stamp to package file names. That is certainly an option, but overwriting content on package stores may cause problems, usually related to packages being in use. There are several tools at the ready to verify which files on your content servers may still be open for streaming by clients. You can use the Handle.exe tool from Sysinternals (http://technet.microsoft.com/en-us/sysinternals/bb896655) to view and close open packages, although I would not advise closing packages, especially in Shared Content Store scenarios, as that could cause application crashes and lead to data loss.
Since Anonymous authentication is not supported and access is authenticated, you can use the handle command with the –u switch to get user information. You could also revert to the old-fashioned NET FILE command if you are using SMB for streaming.
NET FILE will also reveal the user.
I’ve noticed, and you likely have as well, that when working in stream-to-disk scenarios the handles to the packages can remain open for quite a while, sometimes even after the file has been fully cached. That is actually by design. Each App-V Client will maintain a connection to the package file for each user on a per-package basis. As with previous versions of App-V, this means that there will be a lot of sessions coming from RDS clients. BUT, unlike previous versions of App-V, we do NOT have to deal with limitations caused by ephemeral ports, individual connections on a per-application basis, or constant re-authentication on each application launch. There will be one session per package/user combination for stream-to-disk scenarios. NOTE: in Shared Content Store mode you may see multiple sessions depending on stream faults.
With these improvements come some caveats. The client will keep an open handle to the file when using SMB streaming. This is because these sessions involve connecting to compressed packages, and that can be an expensive process if you need to constantly reconnect. As a result, the App-V Client will cache open sessions for up to 30 minutes past use. In my opinion, this is a small price to pay for the added benefits and scalability that come with the changes.
|
OPCFW_CODE
|
Harvard paleontologist Stephanie Pierce's lab reveals that the morphological evolution and diversification seen in early reptiles began long before these mass extinction events. Moreover, they were directly driven by the same force that caused the mass extinctions in the first place: rising global temperatures due to climate change.
"We are suggesting that we have two major factors at play: not just this open ecological opportunity that has always been thought by several researchers, but also something that nobody had previously proposed, which is that climate change actually directly triggered the adaptive response of reptiles to help build this vast array of new body plans and the explosion of groups that we see in the Triassic," said Tiago R. Simões, lead author on the study and a postdoctoral fellow in the Pierce lab.
"Basically, [rising global temperatures] triggered all these different morphological experiments: some that worked quite well and survived for millions of years up to this day, and some others that basically vanished a few million years later," Simões added.
In the paper, published recently in the journal Science Advances, the researchers lay out the vast anatomical changes that took place in many reptile groups, including the forerunners of crocodiles and dinosaurs, in direct response to major climate shifts concentrated between 260 and 230 million years ago.
The study provides a close look at how a large group of organisms evolves because of climate change, which is especially relevant today as global temperatures continually rise. In fact, the rate at which carbon dioxide is released into the atmosphere today is about nine times what it was during the period that culminated in the biggest climate-change-driven mass extinction of all time, 252 million years ago: the Permian-Triassic mass extinction.
"Major shifts in global temperature can have dramatic and varying effects on biodiversity," said Stephanie E. Pierce, Thomas D. Cabot Associate Professor of Organismic and Evolutionary Biology and curator of vertebrate paleontology in the Museum of Comparative Zoology. "Here we show that rising temperatures during the Permian-Triassic led to the extinction of many animals, including many of the ancestors of mammals, but also sparked the explosive evolution of others, especially the reptiles that went on to dominate the Triassic period."
The study involved nearly eight years of data collection, a heavy dose of camerawork and CT scanning, and plenty of passport stamps, as Simões traveled to more than 20 countries and more than 50 different museums to scan and photograph more than 1,000 reptile fossils.
With all the data, the researchers produced an expansive dataset that was analyzed with advanced statistical techniques to construct a diagram called an evolutionary time tree. Time trees reveal how early reptiles were related to each other, when their lineages first originated, and how fast they were evolving. The researchers then combined the tree with global temperature data spanning millions of years.
Diversification of reptile body plans began about 30 million years before the Permian-Triassic extinction, making it clear that these changes were not set off by the event as previously believed, although the extinction events did help put them into high gear.
The dataset also revealed that rises in global temperatures, which started about 270 million years ago and lasted until at least 240 million years ago, were followed by rapid body changes in most reptile lineages. For instance, some of the larger cold-blooded animals evolved smaller bodies so they could cool off more easily; others evolved to live in water for the same effect. The latter group included some of the most bizarre forms of reptiles that would later go extinct, including a tiny chameleon-like animal with a bird-like skull and beak, a giant long-necked marine reptile once imagined to be the Loch Ness monster, and a gliding reptile resembling a gecko with wings. It also includes the ancestors of reptiles that still exist today, such as crocodiles and turtles.
Smaller reptiles, which gave rise to the first lizards and tuataras, took a different path than their larger reptile brethren. Their evolutionary rates slowed and stabilized in response to the rising temperatures. The researchers think this is because small-bodied reptiles were already better adapted to the rising heat: they can shed heat from their bodies more easily than larger reptiles could when temperatures rose rapidly all around the Earth.
The researchers say they plan to expand on this work by examining the impact of environmental catastrophes on the evolution of organisms with abundant modern diversity, such as the major groups of snakes and lizards.
For more on this research, see Researchers Discover That Global Warming Spawned the Age of Reptiles.
Reference: "Successive climate crises in the deep past drove the early evolution and radiation of reptiles" by Tiago R. Simões, Christian F. Kammerer, Michael W. Caldwell and Stephanie E. Pierce, 19 August 2022, Science Advances. DOI: 10.1126/sciadv.abq1898
The image depicts a massive, big-headed, carnivorous erythrosuchid (a close relative of dinosaurs and crocodiles) chasing a tiny gliding reptile about 240 million years ago; the gliding reptile launches itself off a fossilized skull of the extinct Dimetrodon (an early mammal relative) in a hot, dry river valley.
Reptiles had one heck of a coming-out party just over 250 million years ago, during the end of the Permian period and the start of the Triassic.
Their rates of evolution and diversification started exploding, resulting in a dizzying range of abilities, body plans, and traits. This helped firmly establish both their extinct lineages and those that still exist today as one of the most diverse and successful animal groups the world has ever seen. For the longest time, researchers explained this flourishing by reptiles' competitors being wiped out by two of the biggest mass extinction events in the history of the world, which took place around 261 and 252 million years ago.
That explanation has now been rewritten by a new Harvard-led study that reconstructed how the bodies of ancient reptiles changed and compared those changes against millions of years of climate change.
|
OPCFW_CODE
|
As a centerpiece of the Fourth Industrial Revolution, which is unfolding before our eyes, the Internet of Things is becoming ever more essential in a growing number of sectors. There are already billions of devices in offices, factories, and production plants worldwide, with the ability to connect to the Internet, communicate and self-coordinate; while their functionalities are expanding by the day. Therein lies a potential for automation and increased productivity, the realization of which will soon enough become mandatory for those who seek to keep up with the competition.
However, organizations making their first steps in tapping into these opportunities are often held back by the infrastructure commitments entailed by the automated orchestration of thousands of devices. This is especially true when such equipment is dispersed globally and/or performs mission-critical tasks.
Since IoT applications, like most network technologies, depend on size and throughput to deliver their goods, organizations may find it challenging to “get their feet wet” before committing to capital-intensive decisions that may have long-lasting consequences. Luckily, the times in which such paradigm-shifting maneuvers had to be bootstrapped from scratch to have significant impact are over. Below we demonstrate how leveraging Cloud platforms could allow anyone to tap into the IoT, quickly and with no strings attached.
Cloud-based “IoT as a Service”
As it is the case with computation, storage, and other traditional cloud-based services, launching an IoT network in the Cloud allows organizations a degree of elasticity that removes the most painful barriers to enter. The most obvious of which are of course Cost, Scalability, and Flexibility. However, while often overlooked, Security and Data-Access concerns are also major reasons to consider launching a cloud-based IoT operation.
Cost – Building an on-premises IoT infrastructure from scratch involves considerable up-front investments which initially may be hard to justify from a business perspective. As with anything network-related, IoT projects tend to start small and grow over time. Cloud-platforms’ pay-as-you-go models are tailor-made for such cases: As long as workloads are small, so are payments, which grow proportionally with the magnitude of the operation.
Scalability – Scaling complex IoT solutions up or out may be a difficult task, involving purchasing new hardware, then onboarding and configuring it. This can take days to weeks, while with the elasticity of the cloud a new resource can be provisioned in minutes or even seconds, sometimes by virtue of a simple API call. Conversely, cloud platforms also enable you to quickly scale down and in when the resources are no longer required.
Security – While at first glance it may appear that keeping everything on-premises with in-house guarded SCADA centers is the safest way to go, it should be kept in mind that cloud service providers are among the most highly secured entities on the planet. Cloud providers don't only invest massive resources in securing their platforms; they also provide tools for monitoring, logging security events, and performing over-the-air security updates to remote devices. Very large organizations may have their reasons to go solo for security reasons, but it should be remembered that out-competing the security provided by the Cloud is not an easy task.
Data Access – One of the main reasons to process your IoT workload in the cloud is the ability to access data in real time. Data that is stored on-premises may limit the number of consumers that can access it in parallel, while certain consumers may be excluded from access altogether. On cloud IoT platforms, on the other hand, data can be made available to a huge number of consumers in real time from almost anywhere. Additionally, data can be pre-processed and pre-filtered, so that only relevant data is revealed to whom it may concern.
In short, IoT in the Cloud shifts most of the heavy lifting to the well-established shoulders of the Cloud provider, allowing the client to take care of their actual business, while shortening their often crucial Go-to-Market time.
Building your IoT solution on AWS
When it comes to fully integrated IoT solutions that cover customers from the edge to the Cloud and that are proven at scale, AWS is probably the most encompassing solution. AWS IoT integrates seamlessly with other AWS services, offers the fastest AI modeling on the market, and brings together data management and rich analytics.
The centerpiece of the AWS IoT offering is called AWS IoT Core and provides all the components needed to roll out and connect IoT infrastructure as fully managed services. AWS IoT Core is an extensive toolbox; below are its main features:
- Message Broker: The Message Broker is the heart of any IoT workload, enabling both devices and applications to exchange messages. It provides a fully managed pub/sub message broker that can automatically scale with message volume, and can thus securely support a huge number of communicating devices.
- Device Gateway: The Device Gateway functions as the entry point for IoT devices communicating over MQTT, WebSockets, and HTTP, and connects them to the AWS cloud.
- Device Management: Device Management allows users to register and manage devices to centrally administer them on AWS. Users can generate a device certificate, associate a policy to it, and add device attributes such as their locations and tags using Device registry. Additionally, devices can be consolidated into groups for easier, and more logical management.
- Rules Engine: To help parse IoT-derived data before it reaches centralized applications, AWS’ IoT Rules Engine supports an SQL-like query syntax to parse and filter received data, and execute actions accordingly. Rules Engine can be easily integrated with other AWS services such as Lambda, Elastic Search, Kinesis, SNS and more.
- Device Shadow: The Device Shadow functionality of the IoT Core maintains persistent information about a device state which can be accessed by other applications even if the IoT device providing the data is offline. This provides asynchronous communication with the device. For instance, if the device should display a green light, using the Device Shadow, an application can get the device’s current state, and change it, in this example to a red light, without being directly connected to the device. Changes are reflected in the AWS cloud instantaneously.
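To make the Device Shadow idea concrete, here is a small local sketch of how a delta between the desired and reported state documents can be derived. This is plain Python with no AWS SDK involved, and the "light" field is invented for illustration:

```python
# Sketch of the Device Shadow concept: the cloud keeps a "desired" and a
# "reported" state document for each device, and the delta is everything
# in desired that reported does not yet match. A disconnected device can
# fetch the delta on reconnect and apply the pending changes.

def shadow_delta(desired, reported):
    """Return the keys the device must still change to match desired state."""
    return {k: v for k, v in desired.items() if reported.get(k) != v}

# An application asks for a red light while the device still shows green:
desired = {"light": "red"}
reported = {"light": "green"}
print(shadow_delta(desired, reported))   # the device still owes one change

# Once the device reconnects and applies the change, the delta empties:
reported["light"] = "red"
print(shadow_delta(desired, reported))
```

The real service also versions the shadow document and publishes delta updates over MQTT topics, but the desired/reported/delta split above is the core of the asynchronous-communication pattern described in the bullet.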
AWS IoT is an ever-evolving service. New additions to AWS IoT have been announced recently at re:Invent 2020. You can read about the most interesting ones here.
Relying on these tools, and many more, AWS supports IoT use cases by providing infrastructural elements that cater for every aspect of IoT-based data collection, analysis, and presentation. AllCloud has assisted many customers, including Netafim, a global leader in precision irrigation systems, to deploy and run their infrastructure on AWS. If you want to discover what AWS IoT can do for your organization, contact us.
|
OPCFW_CODE
|
M: VLC media player 3.0.0 'WeatherWax' Release Candidate 3 - doener
https://git.videolan.org/?p=vlc/vlc-3.0.git;a=tag;h=refs/tags/3.0.0-rc3
R: CommieBobDole
I may try this out - I had to move away from VLC because getting it to
properly do hardware decoding/deinterlacing on Linux has been like pulling
teeth.
If you've got a modern video card, it's hard to go back to software
deinterlacing after seeing how good it looks when done by the GPU, and I could
never reliably get it to work. Eventually moved to SMPLayer which works great,
but it would be nice to see it working right in VLC.
R: modzu
+1 for smplayer
R: johnhattan
I just tried today's daily Windows x64 installer, and Windows Defender is
blocking it, reporting that it's infected with "Trojan:Win32/Fuerboos.D!cl"
Might be a false positive, but I'm not moving forward until I know.
R: eitland
Tried submitting it to virustotal?
Edit: I might be close to paranoid, but I've seen too many cases of nice
freeware or open source software getting compromised or even selling out to
adware.
R: buovjaga
Relevant:
[https://www.reddit.com/r/bestof/comments/73dafr/vlc_creator_...](https://www.reddit.com/r/bestof/comments/73dafr/vlc_creator_refused_several_tens_of_millions_of/)
s/creator/maintainer/
R: nouveaux
I couldn't find an official rc3 download link but here's the link to the
nightlies:
[https://nightlies.videolan.org/build/](https://nightlies.videolan.org/build/)
R: snvzz
I've always found VLC's UX very awkward. Very happy with mpv.io. On Windows,
MPC-HC until it was abandoned recently; replaced it with MPC-QT, which has an
mpc-hc-like UX but is implemented using libmpv.
R: solarkraft
I agree about the UX, but it's just so damn reliable.
R: visarga
Yeah, but it can't skip 5 seconds ahead. I mean, ostensibly, it can, but
with horrible lags and jerking. I can't watch a video where I don't have fast
skip. MPlayer, a much worse player than VLC, had instant skip for all file
formats. YouTube has instant skip. But VLC doesn't even use the
forward/backward keys for skipping; it uses Alt-Cmd-Arrow or some arcane
combo. Clearly they dropped the ball on skipping.
R: boomboomsubban
Pretty sure there's an option to change the skip method, and shortcut keys are
definitely configurable.
R: 0x0
Did they re-use the codename for VLC 2.2.8? The about screen for 2.2.8 shows
"Version 2.2.8 Weatherwax (Intel 64bit)"
R: favorited
Can't wait for this. I've been using nightlies to get the new subtitle
renderer and have been very happy (aside from unrelated minor playback
regressions).
R: scarfacedeb
If you're on MacOS, try IINA instead.
It has much nicer interface, thumbnail previews on hover, multiple subtitles
support.
Moreover it solved all of the issues that I've had with VLC.
R: alphabettsy
+1
R: sunstone
I'll wait for the .1 release, but the current versions of VLC on Android and
Ubuntu have been pretty flaky. On Android I have lost all my playlists at
least twice, and on Ubuntu playing full-screen video causes the computer to
lock up until a hard reset.
I hope this is just a rough patch for VLC which has been a great application
for a long time.
R: Roberto_ua
Still can't play 8K videos on my MacBook Pro. I downloaded and tested this
one.
[https://www.youtube.com/watch?v=1La4QzGeaaQ](https://www.youtube.com/watch?v=1La4QzGeaaQ)
R: petrikapu
I'd like to have a Mac version of it. Eager to test if 4K videos play smoothly.
R: floatingatoll
There's a build from 12/25 that might be sufficient for that:
[https://nightlies.videolan.org/build/macosx-intel/vlc-3.0.0-...](https://nightlies.videolan.org/build/macosx-intel/vlc-3.0.0-20171225-0507-rc2.dmg)
R: petrikapu
Works much better with 4k videos than previous stable release!
R: cgb223
Anyone know if this finally integrates Chromecast support?
R: jokoon
I haven't tried it, but is there some thumbnail preview just like most web
video players have?
I find it weird that most desktop players don't have this feature.
R: j_s
I have a tough time playing network streams in VLC, if it cuts out there's no
easy way to restart where it left off.
R: rllin
chromecast!
R: johnhattan
Yeah, I'd been using the Chromecast support in the earlier 3.0 betas. Seemed
to be working just fine, although it wasn't getting the "friendly" name that
I'd set for the device. So instead of "Living Room TV", I'd see a big
hash code.
It can also play audio over Google Home speakers.
R: sand500
Haven't gotten a chance to try this yet, but is this the same functionality
as Videostream, except free?
R: fatwah
If you are on Windows use MPC-HC. Else use mpv.
R: have_faith
Why?
R: gsich
Much better subtitle rendering for starters. Another reason is madvr.
|
HACKER_NEWS
|
My Time Organizer is a Google Chrome extension that lets you organize your day with events, tasks, and notes. It is a simple application that helps you organize your schedule by listing important events, tasks, and notes. It lets you add information regarding important events and notifies you of upcoming events with alerts. You can also manage a to-do list for the different tasks you have to do, and make notes for important things.
My Time Organizer has a beautiful, intuitive interface, which seems more or less like a desk calendar. You can switch between the month and week-view, to easily manage your daily activities. Moreover, it has been built with easy drag-n-drop features to make it even more user-friendly.
My Time Organizer offers three basic items which help you organize your day: Events, Tasks, and Notes. You can add any of them to a specific day in the calendar just by dragging the icon with the item name from the top panel and dropping it onto the desired column (day).
Add Events to My Time Organizer:
To add information regarding important events like meetings, appointments, birthdays, etc., just drag the “Event” icon from the top panel and drop it onto the desired day. Then, you can click on the “Pencil” icon that appears on mouse hovering, enter information regarding the event, and set a reminder. My Time Organizer will notify you about the upcoming event with a reminder alert.
Add a Task to My Time Organizer:
To add a task, simply drag the “Task” icon from the top panel and drop it onto the desired day in the calendar. Then click on the pencil icon and enter the task you have to do. That’s it!
You can see a check box below the task, which you can click to mark it as done.
Also check out TaskLogger, to track time spent on an activity.
Add Notes to My Time Organizer:
Adding notes is quite similar to adding tasks in the calendar. Just drag the “Note” icon from the top panel and drop it onto the desired day. Then click on the pencil icon and write whatever you want to add as a note.
You can add as many events, tasks, and notes as you want in the same way. To view the whole list, you can switch to the Month view, where you can see the total number of added events, tasks, and notes on each day of the month. To add a new item to the list, you can switch back to the Week view.
Key Features of My Time Organizer:
- Intuitive User Interface: My time organizer has a very intuitive interface with amazing animation effects.
- Easy Drag and Drop Features: It provides easy drag and drop features to quickly add events, tasks, and notes.
- Notification Alerts for Upcoming events: It gives timely notifications for all the upcoming events that are listed.
- Different Color Themes: It provides beautiful color themes to change the calendar background. You can click on the “Settings” option from the top panel to change the color theme of the calendar.
- Voice Input support: It also supports voice input. Though I haven’t tried it, you may try it and see how it works.
- Beautiful Animation effects: It shows eye-catching animation effects while adding or deleting items, which looks absolutely amazing.
The Final Verdict:
My Time Organizer is a nice app to organize your daily activities in the best possible way. It not only lets you add important information regarding various events and tasks, but also gives you timely alerts for upcoming events. Moreover, it has a beautiful interface with intuitive buttons and exciting animations, which makes it even more enjoyable to use as a daily time organizer.
|
OPCFW_CODE
|
Strings in Switch Statements: 'String' does not conform to protocol 'IntervalType'
I am having problems using strings in switch statements in Swift.
I have a dictionary called opts which is declared as <String, AnyObject>
I have this code:
switch opts["type"] {
case "abc":
println("Type is abc")
case "def":
println("Type is def")
default:
println("Type is something else")
}
and on the lines case "abc" and case "def" I get the following error:
Type 'String' does not conform to protocol 'IntervalType'
Can someone explain to me what I am doing wrong?
This error is shown when an optional is used in a switch statement.
Simply unwrap the variable and everything should work.
switch opts["type"]! {
case "abc":
println("Type is abc")
case "def":
println("Type is def")
default:
println("Type is something else")
}
Edit:
In case you don't want to do a forced unwrapping of the optional, you can use guard. Reference: Control Flow: Early Exit
Added the answer with alternative and safer solution below.
don't use this! The value is marked optional for its own reasons; a nil value will crash your app. @mikejd offered a much better answer
@Lcsky I have updated the answer to use guard if you are not sure of the value in opts.
This is dangerous because if the optional is nil then the program will crash!
You should never use ! directly like this, as it's a potential crash. It is forbidden if you use SwiftLint, for example. I prefer mikejd's solution.
Honestly, it is so lame that Swift cannot handle optional values in switch statements. As others say, this is actually quite dangerous and requires extra boilerplate code to make it safe, when the language could be a little smarter and make this a lot easier.
According to Swift Language Reference:
The type Optional is an enumeration with two cases, None and Some(T), which are used to represent values that may or may not be present.
So under the hood an optional type looks like this:
enum Optional<T> {
case None
case Some(T)
}
This means that you can go without forced unwrapping:
switch opts["type"] {
case .Some("A"):
println("Type is A")
case .Some("B"):
println("Type is B")
case .None:
println("Type not found")
default:
println("Type is something else")
}
This may be safer, because the app won't crash if type were not found in opts dictionary.
this is probably the best answer, but I don't do app development anymore, so I have no way of testing this.
the best answer indeed!
You have been hit by the friendly Swift 3 renaming goblin: .Some has been renamed to .some (which luckily the Swift 3 compiler has been happy to tell you :-). Still the best answer, thanks (even more if you update it for a Swift 3 version).
Try using:
let str:String = opts["type"] as String
switch str {
case "abc":
println("Type is abc")
case "def":
println("Type is def")
default:
println("Type is something else")
}
just tried both of these, and both still give me the error Type 'String' does not conform to protocol 'IntervalType'
What kind of dictionary is opts?
I tried both versions and with this var opts = ["type": "Type is abc"] it works in a playground
opts is <String, AnyObject>
case let string as String where string == "abc": gives the error Type 'String' does not conform to protocol 'AnyObject'
OK, I've figured it out: before the switch I do let str:String = opts["type"] as String, and then I perform the switch (pretty much as I had it in my original question) on str instead. @Kirsteins you helped me get to where I needed, so if you update your answer with a solution like this I will mark you as correct.
I had this same error message inside prepareForSegue(), which I imagine is fairly common. The error message is somewhat opaque but the answers here got me on the right track. If anyone encounters this, you don't need any typecasting, just a conditional unwrap around the switch statement:-
if let segueID = segue.identifier {
switch segueID {
case "MySegueIdentifier":
// prepare for this segue
default:
break
}
}
A guard statement also works well inside prepareForSegue(): guard let segueID = segue.identifier else { return }
Instead of the unsafe force unwrap, I find it easier to match the optional case directly:
switch opts["type"] {
case "abc"?:
println("Type is abc")
case "def"?:
println("Type is def")
default:
println("Type is something else")
}
(Note the ? added to each case.)
Validate that 'name' attribute is set only if hashable
Addresses part of issue #8263.
a few points:
this needs a test to verify that it works.
the cleaner way to do this, rather than checking this for all attribute setting on every data structure, would be to make name a property on Series objects (the only type that currently has names in pandas). Something like:
@property
def name(self):
return self._name
@name.setter
def name(self, value):
try:
hash(value)
except TypeError:
raise TypeError('name must be hashable')
self._name = value
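Fleshed out into a runnable sketch (the class below is a throwaway stand-in, not pandas.Series, and the setter assigns `value`, fixing the `name`/`value` slip in the snippet above):

```python
class Series:
    """Toy stand-in for pandas.Series, just to exercise the property."""

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        # Reject anything that cannot be hashed (e.g. lists, dicts).
        try:
            hash(value)
        except TypeError:
            raise TypeError('name must be hashable')
        self._name = value

s = Series()
s.name = "prices"
print(s.name)  # prices

try:
    s.name = ["not", "hashable"]
except TypeError as e:
    print(e)  # name must be hashable
```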
Thanks. Makes sense to me.
What next? Shall I try and make another commit on this branch while the
PR remains open, or do you want to reject it in which case (i) I could
try on a new branch and a new PR, or (ii) you could do it yourself?
@dr-leo just make a new commit on this PR
A pd.core.common.is_hashable() helper was recently introduced in PR #8929. Maybe that can be used here to do the hash check.
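The contract such a helper promises can be sketched in a few lines (the actual pandas implementation at the time differed; as the thread goes on to show, it misbehaved for np.float64 on Python 3):

```python
def is_hashable(obj):
    """Return True if obj can be hashed, False otherwise.

    Sketch of the helper's contract only; the real pandas
    implementation may differ.
    """
    try:
        hash(obj)
    except TypeError:
        return False
    return True

print(is_hashable("name"))      # True
print(is_hashable(["a", "b"]))  # False
```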
@Joris: sorry, don't know how to use code from another PR in my own.
@all: I've just committed what will hopefully do part of the trick. I've
added a name property in core.series.py as Stephan suggested. I've also
added a test for this in test_series.py. I couldn't find a good place
for it so I put it behind the test_constructor_map case.
The test suite produces some failures: in test_series, line 1999, e.g.,
Series.name is set to a list type. That's mean, so I've made it a
string. This does not break that test, it is about repr.
Still there are 13 failures and one error out of 8027 tests. At least
one failure relates to Series.name. Unfortunately I am fairly unfamiliar
with nose and unittest. Before attempting to fix these failures I
thought I'd show you what I've done so far.
Please let me know your views on the test results and what it would
take to accept the PR.
As this is my very first PR for a serious project, I am somewhat baffled
about how much work it takes to put together good software :-O.
Anyway, so far it has been fun to work on this and I've learnt quite a bit.
Thanks.
Leo
@dr-leo The other PR is already merged, so you can just use the function it introduced directly.
Trust me, this gets easier with practice :)
I'm pretty sure the test failures you're getting are because None (the default value for name) is not hashable. We definitely want None to remain a valid name, so you'll need to add a special case for that.
Actually, that theory is wrong. None actually is hashable.
OK, it looks like the trouble here is related to some strange business pandas does with overwriting __setattr__. It looks like replacing self._name = value with object.__setattr__(self, '_name', value) should do the trick.
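Why object.__setattr__ is needed can be shown with a toy class that intercepts attribute assignment (heavily simplified, not the actual pandas logic):

```python
class Frame:
    """Toy class that intercepts attribute assignment, loosely in the
    spirit of pandas' __setattr__ override (heavily simplified)."""

    def __setattr__(self, key, value):
        # Delegate to the default machinery when `key` is a property on
        # the class, so the `name` setter below can run at all; intercept
        # everything else.
        if isinstance(getattr(type(self), key, None), property):
            object.__setattr__(self, key, value)
        else:
            raise AttributeError(f"assignment of {key!r} is intercepted")

    @property
    def name(self):
        return object.__getattribute__(self, "_name")

    @name.setter
    def name(self, value):
        try:
            hash(value)
        except TypeError:
            raise TypeError("name must be hashable")
        # `self._name = value` would hit the overridden __setattr__
        # above and raise, so bypass it explicitly:
        object.__setattr__(self, "_name", value)

f = Frame()
f.name = "prices"
print(f.name)  # prices
```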
needs a test to see if this survives pickling
Got you. Should all be doable.
On pickling: there is already a test case in test_series around line 411:
def test_pickle_preserve_name(self):
unpickled = self._pickle_roundtrip_name(self.ts)
self.assertEqual(unpickled.name, self.ts.name)
So I don't see any need for another.
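That behavior can also be checked in isolation: default pickling restores instance state through __dict__, so a property-backed name survives the round trip (toy class below, not pandas itself):

```python
import pickle

class Named:
    """Minimal stand-in for a Series with a validated name."""

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        try:
            hash(value)
        except TypeError:
            raise TypeError("name must be hashable")
        self._name = value

s = Named()
s.name = "ts"
# The default pickle protocol stores __dict__ ({'_name': 'ts'}) and
# restores it directly, without going through the property setter.
restored = pickle.loads(pickle.dumps(s))
print(restored.name)  # ts
```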
On common.is_hashable: It returns False for NP.float64 (see below). This
breaks a couple of tests. I suppose this is a bug in is_hashable. If
not, we cannot use it to check Series.name for hashability. We allow
float64 for index labels after all.
In [3]: import numpy as NP
In [5]: f = NP.float64(3.14)
In [8]: hash(f)
Out[8]: <PHONE_NUMBER>
In [9]: import pandas
In [11]: from pandas.core.common import is_hashable
In [12]: is_hashable(f)
Out[12]: False
@dr-leo what version of numpy are you running? I'm seeing a different result on numpy 1.9.1
numpy 1.9.1, Python 3.4 (32-bit), on Windows 7 64-bit.
@dr-leo It's only a problem with Python 3. Just made a new issue: #9276
@dr-leo can you rebase on master and give this another try? We just fixed the is_hashable bug in #9473.
Great!
However, I am unfamiliar with rebase. To make things worse, I work with
Mercurial using hg-git. It does have a rebase extension but fiddling
with history is not one of my passions.
My hope was that you could simply merge my little PR branch into
master... That said, if you give me a hint I could try to help.
Leo
needs a perf check
closing pls reopen if/when updated
The Phenix Customer Portal gives portal users the ability to verify channel status, preview the current video & audio, and manage channels in real time.
Manage channels with Create & Delete
Replicate streams from one channel to another with Fork & Kill
Control streams within a channel with stream terminate
Check on channel details
View primary stream
View all streams
Find HLS & DASH links if enabled
Get channel properties such as channel Alias and channel ID
Resolution and frame rate in frames per second (FPS) shown for video previews in the Details view
Generate and copy permalinks (with EdgeAuth tokens) for publishing or subscribing using the page accessed via the "EdgeAuth" menu item from any Channel in the Channel list. You can select the duration and capabilities for the link. These links can be used as-is, embedded in web pages, or the tokens can be used as you would any EdgeAuth token.
On the Create Token page, you can copy either the entire permalink or copy only the individual token or tokens.
For permalinks containing multiple tokens, you can copy each token individually using the “Copy” button next to the token. For example, a permalink for publishing has both auth and publish tokens.
Use the “Debug Token” page to extract information from a token.
Paste a token (starting with DIGEST:) in the text box, click “Debug Token”, and information about the token is shown below the text box.
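For a sense of what such a debug step involves, here is a local sketch, assuming the portion after "DIGEST:" is base64-encoded JSON. That format is an assumption made for illustration only and is not stated in this document; the Debug Token page is the authoritative tool.

```python
import base64
import json

def peek_token(token: str) -> dict:
    # Assumption: everything after the "DIGEST:" prefix is
    # base64-encoded JSON. This is a guess at the wire format,
    # not Phenix documentation.
    prefix = "DIGEST:"
    if not token.startswith(prefix):
        raise ValueError("expected a token starting with DIGEST:")
    return json.loads(base64.b64decode(token[len(prefix):]))

# Round-trip demo with a fabricated payload (not a real Phenix token):
payload = base64.b64encode(
    json.dumps({"applicationId": "demo"}).encode()
).decode()
print(peek_token("DIGEST:" + payload))  # {'applicationId': 'demo'}
```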
Track status and usage as it happens. View usage information for current and past channels and viewers, including time to first frame, minutes published and viewed, viewer information and geographic breakdown.
Available reports include Summary, Current Activity, Usage, and Time to First Frame.
Select desired date ranges
Download Publishing & Viewing reports
Navigate directly to reports from channel details
Some reports can also be downloaded from the portal or via REST API to further analyze data.
Summary - overview of the previous 24 hours and 30 days of publishing and viewing
Current Activity - summary session and user data by country
Usage - publishing and usage numbers and trends. This report has been enhanced to allow portal users to select a time range for the report, and includes Minutes Per User and Minutes Per Stream
Time to First Frame - charts the TTFF for Real-Time, DASH Live, and HLS Live users.
Under the Analytics menu:
Publishing shows video and audio quality, duration of the published stream, peak concurrent views, and total views for the selected time period.
Viewing shows video and audio quality of both origin and viewed stream, viewer-specific information, and user agent for the selected time period.
Fork History (source and destination, status of request, etc.)
Concurrents (number of concurrent views for the Channel)
Ingest (underruns, in milliseconds, useful to see the connection quality, especially for RTMP ingest). Even if the underruns are too small to be seen on the graph, downloading the report may show a pattern of smaller underruns.