View from the arXiv: May 23 - May 27 2022

A summary of new preprints appearing on arXiv during the week of May 23rd to May 27th 2022.

Welcome to ‘View from the arXiv’, where each week I’ll put together a short list of new preprints which have appeared on arxiv.org during the week which I’ve found interesting. I’ll focus on the categories ‘Disordered Systems and Neural Networks’, ‘Quantum Gases’ and ‘Strongly Correlated Electrons’, and in particular the first two, as these are the main areas of the arXiv which I follow. This is an entirely subjective list of things which appeal to me, and of course there are far too many interesting papers to be able to cover all of them, so I just choose one or two from each day to highlight here. As these are all preprints which have not yet been peer-reviewed, remember to take any claims and conclusions with a grain of salt and be sure to cast a critical eye over the work if you’re interested in more details. (And let’s face it, this caveat should be applied to any published work too…!) You can also subscribe to this as a newsletter if you’d like it e-mailed to you every Monday morning!

Ground-state energy distribution of disordered many-body quantum systems, by Wouter Buijsman, Talía L. M. Lezama, Tamar Leiser, and Lea F. Santos: In much of modern condensed matter, when we study disordered quantum systems we almost immediately specialise to excited states, but there is still a lot to be gleaned from studying the ground state properties. That’s what this paper sets out to do, by comparing the probability distribution for the ground state energy of a disordered system with a distribution obtained from random matrix theory known as the Tracy-Widom distribution. The authors find that certain random models exhibit ground state energy distributions consistent with the Tracy-Widom distribution, while others don’t.
Based on a quick read, it looks to me like models which exhibit chaos/thermalisation do exhibit Tracy-Widom-like physics, while models which exhibit non-ergodic behaviour don’t, but this is undoubtedly an oversimplification of the results in this work!

Nonequilibrium thermodynamics of the asymmetric Sherrington-Kirkpatrick model, by Miguel Aguilera, Masanao Igarashi, and Hideaki Shimazaki: There has been a resurgence of interest in quantum thermodynamics in recent years, and with it a renewed wider interest in looking again at plain old thermodynamics in interesting models that haven’t been widely studied. This paper takes a modified Sherrington-Kirkpatrick model – the unmodified one is commonly studied in the context of spin glasses – and looks at the non-equilibrium thermodynamics, particularly in the context of entropy but also including other thermodynamic order parameters. The remarkable feature of this work is that it’s been possible to make a lot of analytical progress in such a complex system. While the model itself seems a little abstract, the results here are intriguing and I’d be very curious to see this extended into the realm of quantum Sherrington-Kirkpatrick models and quantum spin glasses.

Effect of spin-orbit coupling in one-dimensional quasicrystals with power-law hopping, by Deepak Kumar Sahu and Sanjoy Datta: Quasiperiodic systems offer an interesting intermediate scenario between random disorder and entirely ‘clean’ translationally invariant models. The Aubry-André-Harper model studied here is well-established in this context, but here the authors add a few new ingredients into the mix, namely spinful fermions with long-range hopping and a spin-orbit coupling term. Most of the analysis and quantities studied will be familiar to anyone who’s read recent(ish) papers on quasiperiodic systems, but it’s interesting to see here how the spin-orbit coupling modifies the physics.
Specifically, it increases the critical ‘disorder’ strength required for localisation. I suspect this can be understood in terms of the spin-orbit coupling acting a bit like a system-bath coupling between the ‘up’ and ‘down’ fermions, increasing the system’s tendency to equilibrate and requiring a stronger quasiperiodic potential to compensate.

Observation of classical to quantum crossover in electron glass, by Hideaki Murase, Shunto Arai, Takuro Sato, Kazuya Miyagawa, Hatsumi Mori, Tatsuo Hasegawa, and Kazushi Kanoda: It’s rare for me to include an experimental paper in the weekly round-up (which probably points towards a bias that I should try to address…!) and even though I don’t fully understand the details of this one, it’s worth highlighting here as it investigates a problem close to my heart, namely the classical-to-quantum crossover in glassy systems. Here the authors use Raman spectroscopy to investigate a phase called a ‘charge glass’ rather than a more conventional spin glass, and experimentally demonstrate that increased geometric frustration leads to a crossover from classical to quantum behaviour that can be distinguished in a variety of ways, including the charge density profile and the temperature dependence. This work makes me want to learn more about the theoretical side of charge glasses and find out what else can be done experimentally with these materials!

Real-time correlators in chaotic quantum many-body systems, by Adam Nahum, Sthitadhi Roy, Sagar Vijay, and Tianci Zhou: This is a long and very technical paper studying the evolution of correlation functions in systems exhibiting quantum chaos. It’s a very careful study of how these correlation functions evolve in time, and reads like a collection of insights compiled over quite a few years.
I won’t say much more about this as it’s a dense and difficult read, but it’s worth highlighting as a very thorough reference that will likely be of a lot of use to people working on non-equilibrium dynamics in the near future.

Benchmarking Quantum Simulators using Quantum Chaos, by Daniel K. Mark, Joonhee Choi, Adam L. Shaw, Manuel Endres, and Soonwon Choi: I’m not sure I’ve ever seen a paper with a Supplemental Material long enough to have its own table of contents, but here we are: a four-page paper with a twenty-seven-page supplement! This paper tackles the problem of benchmarking quantum systems. The idea is that if quantum hardware can do things that classical hardware can’t, how can we first test the quantum hardware to make sure that it’s doing what we think it’s doing? Many benchmarking protocols already exist, but they’re not always practical, and so there is still demand for improved methods for the testing of quantum systems. This work presents a new way to compute the fidelity of a given quantum state prepared in current-generation quantum simulators. In other words, it’s a way of checking how accurately a desired quantum state is produced in real experiments, and crucially this approach is claimed to be highly efficient and ‘easy to implement’ in existing hardware platforms. If this turns out to be true, I can see this being an extremely useful piece of work!

Anderson and many-body localization in the presence of spatially correlated classical noise, by Stefano Marcantoni, Federico Carollo, Filippo M. Gambetta, Igor Lesanovsky, Ulrich Schneider, and Juan P. Garrahan: Fully characterising and understanding the stability of quantum localised phases of matter remains an open question. This work looks at two types of localised phase (Anderson localised and many-body localised) in the presence of spatially correlated classical noise.
The authors find that memory-like localisation effects can be preserved for long times, with a timescale linked to the correlation length of the noise, leading to a metastable localised phase that disappears in the long-time limit. Most of the analysis focuses on the Anderson localised model, with a brief look at the many-body case towards the end. The abstract makes mention of a possible connection with glassy physics that I’d love to have seen more about, but alas the manuscript doesn’t dig into this any further.

Simulation Complexity of Many-Body Localized Systems, by Adam Ehrenberg, Abhinav Deshpande, Christopher L. Baldwin, Dmitry A. Abanin, and Alexey V. Gorshkov: Borrowed from the quantum information community, complexity theory is fast becoming a very interesting way to study and understand many-body quantum matter, giving an interesting and complementary perspective that’s quite different to the way most many-body theorists think about things, i.e. rooted in the concept of observables and order parameters rather than information content. This work studies complexity in the context of many-body localisation, providing an interesting counterpart to studies of complexity in systems which exhibit chaotic behaviour. In particular, the authors find that many-body localised phases are ‘less complex’ than chaotic systems (defined here as exhibiting a sublinear growth of complexity, as opposed to a linear growth). This is an interesting application of complexity to disordered systems, although the disorder itself doesn’t enter explicitly as far as I can see, which makes me wonder if any of the conclusions would be modified in a realistic system with rare regions and Griffiths effects. I’ll be curious to see more work using complexity theory to study localisation in the future! (With thanks to Dr Sumeet Khatri for drawing this paper to my attention, as it appeared in quant-ph which is an area of the arXiv that I don’t read…!)
- 1 What is Prometheus node exporter?
- 2 What is node exporter in Kubernetes?
- 3 How do I know if node exporter is running?
- 4 How does node exporter connect to Prometheus?
- 5 Why do we need node exporter?
- 6 How do I install node exporter as a service?
- 7 What is Thanos Prometheus?
- 8 How do I run node exporter in the background?
- 9 Which file system do Secrets use?
- 10 How do I know if Prometheus is installed?
- 11 What is blackbox exporter?
- 12 What is cAdvisor?
- 13 How do you implement Prometheus?
- 14 How do Prometheus exporters work?
- 15 How do I install Prometheus on Windows 10?

What is Prometheus node exporter?

Prometheus relies on multiple processes to gather metrics from its monitoring targets. Those processes are called ‘exporters’, and the most popular of them is the Node Exporter. Node Exporter is an ‘official’ exporter that collects technical information from Linux nodes, such as CPU, disk and memory statistics.

What is node exporter in Kubernetes?

Deploy node exporter on all the Kubernetes nodes as a daemonset. A daemonset makes sure one instance of node-exporter is running on every node. It exposes all the node metrics on port 9100 on the /metrics endpoint. Create a service that listens on port 9100 and points to all the daemonset node exporter pods.

How do I know if node exporter is running?

Step 5: Check the node exporter status to make sure it is running in the active state. Step 6: Enable the node exporter service on system startup. Now node exporter will be exporting metrics on port 9100. You can see all the server metrics by visiting your server URL at /metrics as shown below.

How does node exporter connect to Prometheus?

Prometheus Exporter Setup
- Step 1: Download the binary file and start node exporter.
- Step 2: Run node exporter as a service.
- Step 3: You are set with node exporter.
- Step 4: Here’s the command to execute Prometheus.
- Step 5: Run this code.
- Step 6: Visit localhost:9090 again.
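The "run node exporter as a service" step above usually means writing a systemd unit. As a minimal sketch (the binary path and the node_exporter user are assumptions consistent with the install steps described on this page, not taken from it):

```ini
# /etc/systemd/system/node_exporter.service (illustrative)
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
```

After creating the file, `sudo systemctl daemon-reload && sudo systemctl enable --now node_exporter` would reload the daemon, enable the service at startup, and start it, matching steps 5 and 6 above.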
Why do we need node exporter?

Node Exporter is a Prometheus exporter for server-level and OS-level metrics with configurable metric collectors. It helps us in measuring various server resources such as RAM, disk space, and CPU utilization.

How do I install node exporter as a service?

- Create a node_exporter user to run the node exporter service: sudo useradd -rs /bin/false node_exporter
- Create a node_exporter service file under systemd.
- Reload the system daemon and start the node exporter service.

What is Thanos Prometheus?

Thanos, simply put, is a “highly available Prometheus setup with long-term storage capabilities”. Thanos allows you to aggregate data from multiple Prometheus instances and query them, all from a single endpoint. Thanos also automatically deals with duplicate metrics that may arise from multiple Prometheus instances.

How do I run node exporter in the background?

Running node exporter in CentOS 7 is very straightforward. Download the node exporter package and untar it. It is always better to rename the folder for easier management. Node exporter has to run in the background at all times so that it can collect metrics and expose them over HTTP to be scraped by Prometheus.

Which file system do Secrets use?

Secrets can be defined as Kubernetes objects used to store sensitive data such as user names and passwords with encryption. There are multiple ways of creating secrets in Kubernetes: creating from txt files, or creating from a yaml file.

How do I know if Prometheus is installed?

To verify the Prometheus server installation, open your browser and navigate to http://localhost:9090. You should see the Prometheus interface. Click on Status and then Targets. Under State, you should see your machines listed as UP.

What is blackbox exporter?

The Blackbox exporter is a tool that allows engineers to monitor HTTP, DNS, TCP and ICMP endpoints. Results can be visualized in modern dashboard tools such as Grafana.

What is cAdvisor?
cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. This data is exported per container and machine-wide.

How do you implement Prometheus?

- Create a dedicated namespace for the Prometheus deployment: $ kubectl create namespace prometheus
- Give your namespace the cluster reader role.
- Create a Kubernetes configmap with scraping and alerting rules.
- Deploy Prometheus.
- Validate that Prometheus is running.

How do Prometheus exporters work?

Exporters are essential pieces within a Prometheus monitoring environment. Each program acting as a Prometheus client holds an exporter at its core. An exporter is comprised of software features that produce metrics data, and an HTTP server that exposes the generated metrics via a given endpoint.

How do I install Prometheus on Windows 10?

Automatically launch Prometheus in the background when your computer reboots, and make sure that your monitoring & alerting is always running, 24/7:
- If necessary, install and configure Prometheus.
- Download and install AlwaysUp, if necessary.
- Start AlwaysUp.
- Select Application > Add to open the Add Application window:
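Tying the earlier answers together, the Prometheus side of the node exporter setup boils down to a scrape config pointing at port 9100. A minimal sketch (the job name and target host are illustrative assumptions, not from this page):

```yaml
# prometheus.yml (fragment, illustrative)
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["localhost:9100"]  # node exporter's default metrics port
```

With this in place, the target should appear as UP under Status > Targets at http://localhost:9090, as described above.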
/ Pratik Mallya

Two Years of Terraform

Among Infrastructure-as-Code (IaC) tools that target the public cloud, terraform is arguably the most appealing due to its declarative syntax, simple architecture and operator support. After 2 years of using terraform and evangelizing it within the organization, some reflections on its usability are warranted.

First off, while IaC might sound like it's "just code", calling it "code" results in a dissonance among developers. Most devs, especially those used to typed languages, get frustrated very easily when the terraform "code" they write fails for whatever reason. Any user of terraform will know that the chances of writing terraform "code" that just works (i.e. both plan and apply succeed) are exceedingly low. This is a big problem for new users. The single biggest usability improvement would be for plugin authors to implement stringent validation during the plan stage, but many don't, and it's not clear why. Since infrastructure components can be quite complex (e.g. Kubernetes clusters), it does make sense that the apply would fail when the infrastructure component itself failed to create successfully. However, in my experience the majority of failures seem to be related to bad parameters, which could be caught by more validation at the plan stage.

Terraform code is used to manage infrastructure components, which makes most devs reluctant to experiment with it, for understandable reasons. While a bug in app code might be reverted without much damage, accidentally deleting e.g. a cloud database instance would be a catastrophic event. Accidentally removing a cloud firewall rule can have security implications. It's important to provide a sandbox environment where devs may experiment with terraform.

Which is a great segue into terraform's readability issues. It uses a custom language (HCL); any new language is a burden for developers to learn. It's not clear to devs what the difference is between constructs like output and variable blocks.
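Some of the plan-stage validation wished for above does exist in the language itself: recent Terraform versions support validation blocks on input variables, which reject bad parameters at plan time rather than at apply time. A minimal sketch with illustrative names (not from the original post):

```hcl
variable "instance_count" {
  type        = number
  description = "Number of worker nodes (illustrative)"

  validation {
    condition     = var.instance_count > 0 && var.instance_count <= 10
    error_message = "instance_count must be between 1 and 10."
  }
}
```

This only covers variables, though; the deeper issue the post raises is provider-side validation of resource arguments, which is up to each plugin author.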
The plugins get updated frequently, and code that isn't kept up to date can spit out a TON of deprecation messages in addition to the actual plan; this discourages devs from actually reading the plan, as it's buried in an ocean of crap.

It's not clear what good terraform code looks like: should resources be organized around modules? Should everything be in a module and instantiated for different environments? Should one use terragrunt? It's not clear. There are many different ways to organize terraform code, and different patterns may exist concurrently.

Terraform was created by Hashicorp, and I suspect that operating terraform at scale is something that might be easier to do in the enterprise (paid) version. I haven't used that, so I can't really comment on how it addresses the issues I talk about. (Personally, I would not want to use an enterprise version of terraform, since my experience with enterprise support in general has been universally bad; detailing why is something I intend to do in another post.)

Terraform code can get stale very easily; plans are not always applied, and manual changes to infrastructure are made without their corresponding terraform code being checked in. One solution to this is to use automation and a Pull Request based workflow; atlantis solves this problem (and a few others). Without this automation, keeping terraform code up to date is a herculean task for devs; as a result, few are willing to keep their configs up to date.

Terraform does provide some level of disaster recovery, although once the infrastructure it manages becomes large enough, it's questionable how long it would take to re-create the infrastructure in the case of a true disaster. Even if terraform itself may not spin up the resources, it does provide some idea of what the configuration looked like, which is incredibly useful to know.
It's likely that the person who set it all up is long gone and the tf code is the only representation of the infrastructure component that needs to be re-created.

Terraform is good at managing resources that expose APIs. A new plugin (provider, in terraform terminology) can be written for an API very quickly. Auto-generation of terraform plugins would be pretty nice; I'm not sure if anyone has attempted that. Since terraform is written in Go, it expects the plugins to be in Go as well.

Despite all the issues, terraform has wide support among different providers. Once an engineering org is comfortable with terraform, it becomes easy to switch to a different cloud, since most clouds have first-class support for terraform. Its declarative syntax makes it more accessible to non-technical users. There doesn't seem to be an alternative that is as well supported and addresses the usability concerns.

So after 2 years, it has been a mixed bag. For infrastructure teams who need to manage infrastructure full time, I would highly recommend using terraform. For developer teams that need to manage infrastructure components, the usability issues make it a hard sell; however, without alternatives, terraform seems like the only good choice. I would very much have liked to frame the experience as a success story, but the facts are that while terraform does what it promises, until its usability issues are addressed it will remain a pain, and not a joy, to write and maintain. Tools like terraformer do seem promising though.
MinneBar 6 Summary

Today I went to MinneBar 6, which seems like it was the most populated one I've ever been to. It didn't feel as crowded as the first one I attended, though, which was in a much smaller space with fewer people. This year went off pretty much without a hitch as an attendee. I attended some good talks with some great discussion.

At the start of the day Garrick van Buren gathered a bunch of people in a room and we talked about the Do Not Track thought space. I haven't really been paying that much attention to the area, but it seems to be getting more and more nefarious on the tracking side. Mykl Roventine brought up at the end a whole new tangled ball that I hadn't thought about in the form of affiliate programs - where sites are giving your purchase information to all of their affiliate partners in order to have them flag the items that they are responsible for. They'll probably both be interested in the fact that I am running awstats for my own server, and that I was previously going to re-add Google Analytics but have decided against it. The server logs are all I really need, at least for this site. There is an interesting trade-off here though, because many users will bring traffic to your site via Facebook's Like button or the Tweet This button, and the advantage of the extra traffic should be considered.

Next there was a good session about getting started with Android, which was in the largest room - to me the least "MinneBar"-ish, since the smaller rooms make discussion and question-asking a lot easier. Donn Felker of QONQR did a great job of making the room work though, asking for topics before he started and handling most of them. It was not a lot of coding but a lot of focus on tooling, as well as the details about monetizing your idea. One thing that I took away from this session was that people should start with the minimum viable product.
If it's going to be paid, start it out cheap and people will buy it anyway if they think it's useful. I should have asked what effect the change from a 24 hour to a 15 minute refund policy has had on sales of paid apps, especially on the small-functionality MVPs.

I didn't see anything all that interesting for the third session, so I fixed a couple of bugs and handled some emails, and talked to Garrick for a couple of minutes directly before lunch. Grabbing some pizza, I talked to some students from St. Cloud who are just finishing up, and then walked around looking for some other people to talk to. John Chilton found me and we chatted a bit. I'm rubbish at networking at MinneBar. I don't feel like I can sit down at a table that already has a group of guys at it and make connections - it could be that I'm just not great at it, or something about the tables, or that there is always wifi available. I see a lot of guys from my twitter followers but I don't tend to actually chat with them.

In the afternoon, Charles Nutter won the award for the most technical presentation of the day, covering almost all of the JVM bytecode operands alongside a short explanation of the stack machine that it runs on, while encouraging everyone to learn a little bit about the bytecode that so many of our programs compile down to. A completely full room really indicated to me that there was an untapped audience at MinneBar this year who would have accepted some more technical sessions. Directly after that, the most entertaining session went to Charles again, this time assisted by his son, with an ad-hoc session about Minecraft. I'm a big fan of the game, but I haven't played it in a while, so I saw some of the new features and had a good time chatting with some other gamers in the crowd. That's also another audience that I think could be approached at the next MinneBar - something for the leisure time.
In the remaining sessions, I attended one on gathering data about your users and plotting it in a useful way - mining the data that you already have in order to get good analytics. This was somewhat technical but quite cool, because they sliced and diced the creative commons StackExchange Data Dump to show some interesting things. I'm seriously considering implementing some of these for my consulting work; they seem like the kinds of data that would be very appreciated for showing growth to investors and the like. The last one I went to was more of a discussion about how designers and coders should cross-pollinate a bit when they are learning. Designers should learn a bit of code, coders should at least know a little bit of Photoshop / Fireworks… at least enough to get the job done.

I was pretty tired at the end of the day, so I didn't stick around for much of the beer; I was also driving home and actually had to get some stuff done in the evening. I also just wasn't feeling up to the networking aspect just then, partially discouraged because of my earlier failure to get into it around lunch time.

Overall, I would say that MinneBar 6 turned out to be a great BarCamp, with some wonderful sessions. The ones that I attended were all worthwhile. I didn't "vote with my feet" and switch sessions in the middle of any this year - if I remember correctly, I think I actually did that last time. I came away with a couple of thoughts - I'd really like to do a presentation next year about something. I feel like I should be able to contribute something to a conference like this, especially because there is such a wide range of topics presented on. Also, I feel a lot more comfortable getting my networking on in a smaller, more focused group - something like Ruby Users of Minnesota meetings, where I know that any person there is going to have more in common with me than one of a thousand.
I’ll look forward to going to this next year if I don’t have a conflict though, and I encourage the same to anyone else in the tech community in the Twin Cities.
import * as Seneca from 'seneca';
import { get } from 'config';
import { promisify } from 'bluebird';

const seneca = Seneca();
seneca.use(get<string>(`vault.transport`));
seneca.client(get(`vault.client`));

// Promisified wrapper around seneca.act so it can be awaited.
const vault = promisify(seneca.act, { context: seneca });

export default function start() {
  // Uncomment to generate test traffic against the vault service:
  // setTimeout(spamUserCreation, 1000);
  // setTimeout(spamCommentCreation, 1000);
  // setTimeout(spamDatasetCreation, 1000);
  // setTimeout(spamRequestCreation, 1000);
}

async function spamDatasetCreation() {
  const bob = await vault({
    role: 'vault',
    model: 'dataset',
    cmd: 'create',
    payload: {
      title: 'probando' + guidGenerator(),
      external_id: 'TEST-' + guidGenerator(),
      datasource_id: 'd75f0078-f311-4334-bed5-a9a24e8ac2db',
      user_id: '4b509062-3f8c-43ee-8aec-c75f3784e9d6'
    }
  });
  console.log(JSON.stringify(bob));
}

async function spamRequestCreation() {
  const bob = await vault({
    role: 'vault',
    model: 'request',
    cmd: 'create',
    payload: {
      description: 'a random desc',
      title: 'describiendo' + guidGenerator()
    }
  });
  console.log(JSON.stringify(bob));
}

async function spamUserCreation() {
  const bob = await vault({
    role: 'vault',
    model: 'user',
    cmd: 'create',
    payload: { firstname: 'bob' }
  });
  console.log(JSON.stringify(bob));
}

async function spamCommentCreation() {
  const bob = await vault({
    role: 'vault',
    model: 'action',
    cmd: 'create',
    subscribable_id: '41058b60-5f25-4964-aec6-746db28ce1c0',
    payload: {
      type: 'comment',
      user_id: '6cced6c0-dc39-4f67-9c41-41b5b93fea1e',
      actionable_model: 'dataset',
      actionable_id: '41058b60-5f25-4964-aec6-746db28ce1c0',
      properties: {
        subscribable_id: '41058b60-5f25-4964-aec6-746db28ce1c0',
        text: 'yeah'
      }
    }
  });
  console.log(JSON.stringify(bob));
}

// Generates a GUID-like random identifier.
function guidGenerator() {
  const S4 = function () {
    return (((1 + Math.random()) * 0x10000) | 0).toString(16).substring(1);
  };
  return (S4() + S4() + '-' + S4() + '-' + S4() + '-' + S4() + '-' + S4() + S4() + S4());
}
Intermediate openFrameworks - Basic oF classes

NOTE: This tutorial assumes that you have already completed the introductory lessons (especially Introduction to openFrameworks) and/or have a firm grasp on basic programming concepts, as well as some familiarity with a creative coding tool such as Processing or Flash. It also assumes that you have downloaded openFrameworks 0072 and followed the appropriate setup instructions (also linked on the download page).

Creating a Project

As of version 0072, openFrameworks ships with a projectGenerator tool that greatly simplifies the process of creating a new oF application. So let's make our first project.

- First, launch the projectGenerator (in the projectGenerator folder of your oF distribution).
- Click on "Name:" and change the name to "basicDrawing".
- Click on "Path:" and select path/to/your/of_v0072_osx_release/apps/intermediateOF (note: you'll have to create the folder called "intermediateOF").

Note about the project path (from projectGenerator/readMe.md): This defaults to apps/myApps, but it should allow you to put projects anywhere. We strongly recommend that you keep them inside this release of OF, so that if the OF release or your project gets moved, or if some lower-level folder gets renamed, the generated paths don't break. It's much safer to have paths that are "../../../" vs paths that have a name in them. Please note that because this tool also creates a folder with the name of your project, the actual full path to your project will look like: (where chosenPath and projectName are based on your settings, and .project is the xcode, code blocks, or visual studio file that's generated)

Click "GENERATE PROJECT" and you will see a notification at the bottom of the window that the project has been generated. Quit out of the Project Generator and open up the folder that was generated.
Now click on basicDrawing.xcodeproj (in Code::Blocks: basicDrawing.cbp, in Visual Studio: basicDrawing.vcxproj). This will launch XCode and open up the project that you just generated. You should see a window like this:

Click the dropdown menu above "Scheme" and choose "basicDrawing Debug" instead of "openFrameworks". For some reason, by default, XCode is set to compile the openFrameworks library, which, by itself, does nothing. Instead, we want to compile and run our basicDrawing application, which is set up to automatically compile the openFrameworks library anyway. Now click the "Run" button in the upper left corner. You should see an empty grey window. Congratulations! You've just compiled your first openFrameworks app!

Tip: If you look in intermediateOF/basicDrawing/bin, you will see the application that you just created. This is a full-fledged native application for your platform. Try double-clicking on it.

TIP: If you don't see the grey window, check out the Troubleshooting section at the bottom.

Now let's take a look at our source files. Quit out of your awesome grey window and click the disclosure triangle to the left of the blue icon on the far left of the window, and then click the disclosure triangle next to "src". This exposes the source files that make up your project. To open a file in the editor window, single-click it in the panel on the far left. If you are familiar with c++, you will recognize the blocks of code in this file as empty member functions of a class called testApp. If you are familiar with any other object-oriented language, the syntax might look a bit strange, but testApp is, indeed, a regular class. For more information about c++ classes, check out the documentation at cplusplus.com. This testApp class is what makes up your openFrameworks application. It's where the magic happens. For our current purposes, we can say that the act of making an openFrameworks application is filling in the functions of testApp.
Each function is called automatically by openFrameworks at specific points during the runtime of your application.

This is where the testApp class is declared. If you are accustomed to a language like Java, it might seem strange that the testApp class consists of 2 files, but that's just how it is in C++. The header file is where you declare all of the functions that make up your class, as well as all of its member variables. An extremely simple class would look something like this:

This is the file that kicks off the entire program. Like many programming languages, C++ has a special function called main() that is used to kick off the program. If you look into the main() function in main.cpp, you'll see that an instance of testApp is created.

TIP: Check out the examples. Now that you understand the basic anatomy of an openFrameworks project, it's a good idea to go back and check out the examples in path/to/your/of_v0072_osx_release/examples. They serve as a fantastic overview of the functionality that openFrameworks provides, as well as a kind of cheat sheet that you can refer back to to see how a particular class, function, or technique works.

Basic Drawing and Colors

In order to quickly introduce you to the basic concepts of drawing in openFrameworks, we will deconstruct a very simple application. Use the project that you just created to follow along. Add ofPoint circlePos; and float radius; to your testApp.h. One of these variables is a simple float, a decimal number. The other is a very useful oF-specific class called ofPoint. As you might guess, it represents a coordinate. It has x, y, and z attributes, and lots of useful functions for manipulating the coordinate.

TIP: ofPoint is actually an alias for another oF class, ofVec3f.

Then, copy and paste the code from the setup(), update(), and draw() methods below into your own methods in testApp.cpp. When you run your application, it should look like this:

So let's take a look at what all of this means.
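To illustrate the two-file class layout described above, here is a minimal C++ class of my own invention (nothing to do with testApp itself): the declaration would normally live in a .h file and the definitions in a .cpp file, shown together here for brevity.

```cpp
// counter.h -- the declaration: functions and member variables only.
class Counter {
public:                 // callable from outside the class
    void increment();
    int value() const;
private:                // member variables, hidden from callers
    int count = 0;
};

// counter.cpp -- the definitions that fill in the declared functions.
void Counter::increment() { count += 1; }
int Counter::value() const { return count; }
```

Splitting declaration from definition is what lets main.cpp (or testApp.cpp) include the header and use the class without recompiling its implementation.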
In a nutshell, behind the scenes, this is what openFrameworks does when you launch your application: Of course, this is almost insultingly simplified, but it is helpful to consider. An instance of testApp is constructed, setup() is called, and then a loop begins where update() and draw() are called continually until you quit out of the application. This loop is known as "the draw loop", and we will get to that in a bit. But first let's take a look at setup:

As stated above, setup() is run automatically, immediately before the window opens. It is typically used to set global properties of our application and to initialize variables. And that's exactly what we are doing here. As you can see, on lines 4-7, we set the framerate, window size, background color, and circle resolution†. Although it's not shown here, you can also set the application as fullscreen and set the window title. Since setup() is called before anything else in our application, we can use it to initialize variables and we can be confident that they will be set before they are used for any drawing.

† By default, when you tell openFrameworks to draw a circle, it actually draws an icosagon! To remedy this problem, we can tell our application that, when we draw a "circle", we want to draw something more like a hectogon (a 100-sided shape).

The draw loop

As you can see, in the draw() function, there are only 2 lines: in the first, we use ofSetColor() to set the draw color to magenta-ish. This means that anything that we draw after that (until we call ofSetColor() again) will use this color. Second, we call ofCircle(), which (shockingly) draws a circle in our window. It takes 3 arguments: x, y, and radius. In almost all of openFrameworksLand, the origin is in the upper left and the units are pixels, so (50, 300) means 50 pixels from the left side of the window and 300 pixels from the top.

TIP: If you type ofCircle and wait a second, your IDE should bring up several variations on the ofCircle call.
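The lifecycle described above (setup once, then update and draw alternating forever) can be caricatured in plain C++, with no openFrameworks involved; the ToyApp name and the string-recording are purely illustrative, just to make the call order visible.

```cpp
#include <string>
#include <vector>

// A toy stand-in for testApp that records which of its lifecycle
// functions were called, and in what order.
struct ToyApp {
    std::vector<std::string> calls;
    void setup()  { calls.push_back("setup"); }
    void update() { calls.push_back("update"); }
    void draw()   { calls.push_back("draw"); }
};

// Mimics what the framework does: setup() once, then the draw loop.
std::vector<std::string> runFrames(ToyApp& app, int frames) {
    app.setup();                       // once, before the window opens
    for (int i = 0; i < frames; ++i) { // "the draw loop"
        app.update();                  // CPU work: math, state changes
        app.draw();                    // drawing only
    }
    return app.calls;
}
```

Running two frames yields the sequence setup, update, draw, update, draw, which is exactly the ordering the tutorial describes.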
We are using the simplest one.

Once the draw loop starts, update() and draw() are called (in that order) continually, for the lifetime of your application. If you have used Processing before, this is a familiar concept†. If not, take a moment to think about South Park.

† Unlike Processing, openFrameworks encourages you to separate your draw loop into CPU functions and GPU functions. That is, all of your number-crunching and math-heavy stuff should go in the update loop, while your draw loop should only contain drawing functions. This is mainly for performance reasons, although I find that it encourages a more readable kind of code as well.

Try pasting this code into your draw function right below the existing call to ofCircle. Now remove it and paste it at the very top of the draw loop, before any other function calls. Notice that, in the first instance, the black circle appears on top of the purple one, while in the second case, the purple circle occludes it. This is known as the Painter's Algorithm, and simply put, it means that things drawn last appear on top.

A word on framerate

Add the following to your draw function in order to see the current framerate. If your framerate plummets or is significantly lower than what you set it to, it is frequently an indication that you are doing something wrong, so you will often see people keep a little printout with the framerate at the top of their screen.

Troubleshooting

Getting errors before you even write your first line of code can be super frustrating. The best way to overcome these annoying issues is to Google the error message. More than likely, someone has had this problem before you, and you will find an answer that will help. If you can't find an answer on Google, head over to the openFrameworks forums and post your question. Response times are usually super fast.

1. Nothing happens when I press "Run"

In the "Scheme" menu in the upper left hand side, make sure that the name of your app is selected and not "openFrameworks".
2. Build errors before I even wrote any code!

Make sure that the Base SDK for your project is set to an SDK that you actually have on your system. Look in this folder to see which SDKs you have installed: Also make sure that "All" is selected in Build Settings. Otherwise, you might be changing settings for only your Debug or Release build. Typically you'll want to change settings for "All" build types.
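Going back to the Painter's Algorithm from the drawing section: the principle is easy to demonstrate without any graphics library at all. This sketch (entirely my own, not oF code) treats a small character array as a one-dimensional "canvas" where every draw simply overwrites the pixels beneath it.

```cpp
#include <array>

// A ten-"pixel" canvas; drawing just overwrites earlier values,
// which is all the Painter's Algorithm really is.
using Canvas = std::array<char, 10>;

void drawSpan(Canvas& c, int from, int to, char color) {
    for (int i = from; i < to; ++i) c[i] = color;
}

Canvas paint() {
    Canvas c{};
    c.fill('.');             // background
    drawSpan(c, 0, 6, 'P');  // first shape ("purple")
    drawSpan(c, 4, 9, 'B');  // second shape ("black"), drawn later
    return c;                // in the overlap (cells 4-5), 'B' wins
}
```

Whatever is drawn last ends up on top, exactly as with the two ofCircle calls in the tutorial.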
UDP Full Form: Hello there, friends. Welcome to another new blog post, in which we will explain what the UDP protocol is, how it works, what its features are, what it is used for, what the difference between the UDP and TCP protocols is, and what the advantages and disadvantages of UDP are. If you want to learn everything there is to know about UDP, keep reading until the end. We've already told you about many different types of protocols on our blog, and this article is part of that series of protocols. If you want to fully comprehend networking, you should be familiar with all of these protocols.

What exactly is UDP? What is the full meaning of UDP? What exactly is a UDP header? How does UDP function?

UDP sends datagrams from one computer to another using IP addresses. UDP wraps the data in a UDP packet and adds its own header to the packet. This header includes the source and destination ports, as well as the length of the packet and a checksum. The UDP packet is then encapsulated in an IP packet and sent to the destination. However, unlike TCP, UDP does not guarantee that data packets will arrive at their intended destination.

The following are some of the most notable characteristics of the UDP protocol:
- UDP is a connectionless protocol, which means it doesn't need to establish a connection before transferring data.
- UDP is a fast protocol that can transfer data very quickly.
- UDP is an unreliable protocol: it does not guarantee data delivery.
- UDP is used for transaction-based protocols such as DNS, BOOTP, and so on.
- A simple checksum is the only error checking UDP provides.

The following are some of the most important UDP applications:
- UDP can be used in applications where occasional data loss is acceptable.
- The UDP protocol is used in applications such as gaming, voice, and video.
- UDP can be used in multicasting applications.
- UDP is used in real-time applications.
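The connectionless behaviour described above is easy to see in code. This sketch (my own example, using the POSIX sockets API on Linux) sends one datagram over the loopback interface: note that there is no connect()/accept() handshake anywhere; the sender just fires the datagram at an address.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Send one UDP datagram over loopback and receive it on another socket.
std::string udp_roundtrip(const std::string& payload) {
    // Receiver: bind to an OS-chosen port on 127.0.0.1.
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;  // 0 = let the OS pick any free port
    bind(rx, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
    socklen_t len = sizeof addr;
    getsockname(rx, reinterpret_cast<sockaddr*>(&addr), &len);

    // Sender: a second, completely unconnected socket. No handshake.
    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    sendto(tx, payload.data(), payload.size(), 0,
           reinterpret_cast<sockaddr*>(&addr), sizeof addr);

    char buf[1500];  // roughly one Ethernet MTU
    ssize_t n = recvfrom(rx, buf, sizeof buf, 0, nullptr, nullptr);
    close(tx);
    close(rx);
    return std::string(buf, n > 0 ? static_cast<size_t>(n) : 0);
}
```

Over loopback the datagram reliably arrives, but on a real network nothing in this code (or in UDP itself) would retransmit it if it were lost.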
The distinction between the UDP and TCP protocols

In the table below, we've highlighted the key differences between the UDP and TCP protocols:

|UDP Protocol||TCP Protocol|
|UDP stands for User Datagram Protocol.||TCP stands for Transmission Control Protocol.|
|The UDP protocol is a connectionless protocol: no connection needs to be established between the communicating devices.||The TCP protocol is a connection-oriented protocol: it requires establishing a connection between the communicating devices.|
|The UDP protocol is extremely fast.||The TCP protocol is slower than UDP.|
|UDP is an unreliable protocol.||TCP is a reliable protocol.|
|The UDP header is 8 bytes in size.||The TCP header is at least 20 bytes in size.|
|In UDP, messages may be delivered in no particular order.||In TCP, messages are delivered in sequence.|
|UDP does not guarantee data delivery.||Data delivery is guaranteed in TCP.|
|UDP is used by DNS, RIP, BOOTP, and other protocols.||TCP is used by HTTP, HTTPS, FTP, SMTP, etc.|

The Benefits of UDP

The following are some of the most significant advantages of the UDP protocol:
- When compared to TCP, the UDP protocol is much faster.
- The UDP protocol is a very simple and useful communication protocol.
- In UDP, no connection is required for data transmission.
- The UDP protocol is used by real-time applications such as chat and online games.
- UDP can be used for broadcast and multicast transmission.
- A UDP connection does not require much upkeep.
- UDP's small header helps keep bandwidth usage low.

The Drawbacks of UDP
- UDP is an unreliable protocol.
- UDP has no feature to confirm that data has been received.
- UDP can detect (via its checksum) that data was corrupted, but it cannot recover it.
- There is no guarantee that data will arrive in the order in which it was sent over UDP.
- UDP also does not guarantee that all data will be delivered.
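The checksum carried in the 8-byte UDP header mentioned above is the standard Internet one's-complement checksum. Here is an illustrative re-implementation (my own sketch; a real stack also covers a pseudo-header of IP addresses, which is omitted here for simplicity).

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Internet one's-complement checksum over a byte sequence, as used in
// the UDP header. Bytes are paired into big-endian 16-bit words; an odd
// trailing byte is zero-padded on the right.
uint16_t inetChecksum(const std::vector<uint8_t>& bytes) {
    uint32_t sum = 0;
    for (std::size_t i = 0; i < bytes.size(); i += 2) {
        uint16_t word = static_cast<uint16_t>(bytes[i]) << 8;  // high byte
        if (i + 1 < bytes.size()) word |= bytes[i + 1];        // low byte
        sum += word;
        sum = (sum & 0xFFFF) + (sum >> 16);  // fold any carry back in
    }
    return static_cast<uint16_t>(~sum);      // one's complement of the sum
}
```

The receiver recomputes the same sum over the received bytes; a mismatch tells it the datagram was corrupted, but, as noted above, UDP does nothing to repair or retransmit it.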
[Update 11/15/2017: Visual Studio Mobile Center is now Visual Studio App Center. Learn more in the announcement post from Connect(); 2017]

The core of our vision is "Any Developer, Any App, Any Platform." With our Visual Studio family of products, we are committed to bringing you the most powerful and productive development tools and services for any developer to build mobile-first and cloud-first apps across Windows, iOS, Android, and Linux. Our existing Visual Studio family of products includes the most comprehensive set of development and application lifecycle tools on the market today: an industry-leading IDE; a lightweight code editor, Visual Studio Code; and on-premises and cloud-based team collaboration services with Visual Studio Team Foundation Server and Visual Studio Team Services. In addition, we offer a free developer program with Visual Studio Dev Essentials and a commercial program with Visual Studio Subscriptions.

Today, at the Connect(); 2016 event in New York City, we announced the release candidate of Visual Studio 2017 and Team Foundation Server 2017 RTM. I am also excited to see our Visual Studio family continue to grow with the introduction of Visual Studio for Mac and Visual Studio Mobile Center.

Visual Studio 2017 RC focuses on improved productivity, refined fundamentals (performance improvements across all areas of VS 2017), streamlined cloud development, and great mobile development. To learn more, read the details in John Montgomery's post announcing Visual Studio 2017 RC. The download is available here.

Visual Studio for Mac is a new Visual Studio IDE. It's built from the ground up for the Mac and focuses on full-stack, client-to-cloud native mobile development, using Xamarin for Visual Studio, ASP.NET Core, and Azure. To learn more, read Miguel de Icaza's blog post introducing Visual Studio for Mac. The download is available from here.
Visual Studio Mobile Center is "mission control for mobile apps." It brings together multiple services commonly used by mobile developers into a single, integrated service that allows you to build, test, deploy, and monitor cloud-attached apps in one place. To learn more, please read Nat Friedman's blog post elaborating on Visual Studio Mobile Center.

Team Foundation Server 2017 RTM and Visual Studio Team Services bring general availability of Application Insights, the Package Management service, Code Search, and third-party commerce for on-premises extensions. To learn more, please read Brian Harry's blog post. Get started here.

We hope you join us for Connect(); and enjoy 100+ on-demand videos throughout the day. If you miss the event, check back for recordings of the sessions as well as the live Q&A. Enjoy the Connect(); event!

Julia Liuson, Corporate Vice President, Visual Studio

Julia is responsible for developer tools and services, including the programming languages and runtimes designed for a broad base of software developers and development teams, as well as for the Visual Studio, Visual Studio Code, and .NET Framework lines of products and services. Julia joined Microsoft in 1992, and has held a variety of technical and management positions while at Microsoft, including General Manager for Visual Studio Business Applications, General Manager for Server and Tools in Shanghai, and development manager for Visual Basic.
I'm getting lots of this kind of message:

    jddosd: DDOS_PROTOCOL_VIOLATION_SET: Protocol Reject:aggregate is violated at fpc 0 for 1448 times, started at 2014-11-27 10:56:58 EET
    jddosd: DDOS_PROTOCOL_VIOLATION_CLEAR: Protocol Reject:aggregate has returned to normal. Violated at fpc 0 for 1448 times, from 2014-11-27 10:56:58 EET to 2014-11-27 11:02:38 EET

and I can't figure out why. Could you point me in the right direction please?

    Packet Forwarding Engine traffic statistics:
        Input  packets: 15240676085    17916 pps
        Output packets: 21412011088    24572 pps
    Packet Forwarding Engine local traffic statistics:
        Local packets input                : 15544166
        Local packets output               : 29380069
        Software input control plane drops : 0
        Software input high drops          : 0
        Software input medium drops        : 0
        Software input low drops           : 0
        Software output drops              : 0
        Hardware input drops               : 0
    Packet Forwarding Engine local protocol statistics:
        HDLC keepalives   : 0
        ATM OAM           : 0
        Frame Relay LMI   : 0
        PPP LCP/NCP       : 0
        OSPF hello        : 1702744
        OSPF3 hello       : 0
        RSVP hello        : 0
        LDP hello         : 0
        BFD               : 0
        IS-IS IIH         : 0
        LACP              : 0
        ARP               : 286860
        ETHER OAM         : 0
        Unknown           : 10
    Packet Forwarding Engine hardware discard statistics:
        Timeout           : 0
        Truncated key     : 0
        Bits to test      : 0
        Data error        : 0
        Stack underflow   : 0
        Stack overflow    : 0
        Normal discard    : 11094859
        Extended discard  : 0
        Invalid interface : 0
        Info cell drops   : 0
        Fabric drops      : 0
    Packet Forwarding Engine Input IPv4 Header Checksum Error and Output MTU Error statistics:
        Input Checksum    : 0
        Output MTU        : 0

    Packet types: 1, Modified: 0, Received traffic: 1, Currently violated: 0
    Currently tracked flows: 0, Total detected flows: 0
    * = User configured value
    Protocol Group: Reject
      Packet type: aggregate (Aggregate for v4 all reject traffic)
        Aggregate policer configuration:
          Bandwidth:      2000 pps
          Burst:          10000 packets
          Recover time:   300 seconds
          Enabled:        Yes
        Flow detection configuration:
          Detection mode: Automatic
          Detect time:    3 seconds
          Log flows:      Yes
          Recover time:   60 seconds
          Timeout flows:  No
          Timeout time:   300 seconds
        Flow aggregation level configuration:
          Aggregation level   Detection mode  Control mode  Flow rate
          Subscriber          Automatic       Drop          10 pps
          Logical interface   Automatic       Drop          10 pps
          Physical interface  Automatic       Drop          2000 pps
    System-wide information:
      Aggregate bandwidth is no longer being violated
        No. of FPCs that have received excess traffic: 1
        Last violation started at: 2014-11-27 11:15:03 EET
        Last violation ended at:   2014-11-27 11:22:18 EET
        Duration of last violation: 00:07:15
        Number of violations: 1449
      Received: 35017543    Arrival rate:     19 pps
      Dropped:  195341      Max arrival rate: 3398 pps
    Routing Engine information:
      Bandwidth: 2000 pps, Burst: 10000 packets, enabled
      Aggregate policer is never violated
      Received: 0           Arrival rate:     0 pps
      Dropped:  0           Max arrival rate: 0 pps
        Dropped by individual policers: 0
    FPC slot 0 information:
      Bandwidth: 100% (2000 pps), Burst: 100% (10000 packets), enabled
      Aggregate policer is no longer being violated
        Last violation started at: 2014-11-27 11:15:03 EET
        Last violation ended at:   2014-11-27 11:22:18 EET
        Duration of last violation: 00:07:15
        Number of violations: 1449
      Received: 35017543    Arrival rate:     19 pps
      Dropped:  195341      Max arrival rate: 3398 pps
        Dropped by individual policers: 0
        Dropped by aggregate policer:   195341
        Dropped by flow suppression:    0
      Flow counts:
        Aggregation level  Current  Total detected  State
        Subscriber         0        0               Active

To me this seems not really related to an actual DDoS, but to some other cause, a route flap or something similar. Nothing useful in the logs, though. At the same time, I do not have any reject rules in the firewall. I'm running a setup with 2 RRs with 3 clients connected to each of them. OSPF advertises loopbacks, iBGP everything else. The default action for an aggregate route is to reject anything that does not hit a more specific route within the aggregate.
So basically, when you have an access network with clients in it and suddenly you lose it (the company decides to stop that service, for example), those IPs keep being targeted by torrents, malware, viruses, etc., and since you no longer have those specific routes in the routing table, the router keeps REJECTing the traffic, as that is the default action. So to solve this:

    set routing-options protocol aggregate defaults discard

and forget about it. Anyway, any reject action is a vector for attack, so try to keep your core systems without any rejects...

Thanks to Saku Ytti for great help in pointing me in the right direction. His article http://blog.ip.fi/2014/02/junos-l3-incompletes-what-and-why.html and personal help were priceless during this case.

Open a new thread, as a solution has already been accepted on this thread. And as a good practice, close your threads with "solution accepted" where a solution has been provided to you.

Okay... opened a new case... thanks
[Pythonmac-SIG] Re. [ann] AppScripting 0.1.0
hengist.podd at virgin.net
Thu Nov 20 19:02:23 EST 2003

>First, let me start by asking both of you to add docstrings to the
>code. I had a quick look through both aeve and AppScripting, and it's
>really hard to find your way around either of them....

On the to-do list. :) For my education, is it normal practice to put docstrings on everything, both private and public, or just on the public stuff? I've rather assumed that docstrings are supplied for the benefit of end users, rather than developers, who can read comments just as easily. (e.g. in HTMLTemplate I left out docstrings and supplied separate user documentation which explains its use based on a simpler, fictionalised object model. The internal structure is actually much more complicated and not entirely conventional, so docstringing it would likely cause more confusion than anything. Thinking of doing the same with AS.)

>> aeve [...] does things the way that Python typically does things;
>This I like, I think. It has the advantage that you should be able to
>use the standard Python introspection features on aeve objects,
>something I missed when I looked at AppScripting.

I'm still curious here about how introspection is being used, and to what purpose? Anyone shed some light for me? I suspect introspecting AS wouldn't do you much good anyway; better to read the docs to learn what's going on. Though knowing what you're looking for would give me a

>There is a question
>of cost, though: I assume that this makes aeve startup more expensive
>than AppScripting startup, at least in theory. Right?

Yes, but as Bob points out, this can easily be made a one-off charge by caching and reusing the result over the length of the runtime, so it is pretty much irrelevant.

>Note that I also particularly like this model (as opposed to doing
>everything at the last possible moment), because it leaves much more
>room for caching.
Caching is only worth doing when it actually makes a difference to performance. With aeve and AS, any performance bottleneck lies at the aete parsing stage, so it is not too critical. Neither module suffers any performance problems in object referencing/event dispatching, since that code is already well streamlined.

>And I think caching (of complete AETE interfaces) is
>where Python could gain points on AppleScript:

AppleScript does all its binding at compile-time, so it doesn't suffer a runtime penalty from parsing terminology as aeve and AppScripting do.

>if you want to do something really simple to an application then not
>having to read and
>parse the AETE could be a big win.

As Bob said, right now the most painful bottleneck is in having to go through the Window Manager whenever you want to send AppleEvents. Boshing that would be the most worthwhile optimisation. If reading aetes at each runtime still proves too slow for users, we could always pickle the parsed aete resources for reuse over subsequent sessions. The only times you'd then need to (re-)parse the aete would be when a pre-parsed aete doesn't already exist for the app being used, or when a different version of that app has been installed since its aete was last parsed and cached. I'm sure this could be
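The pickle-the-parsed-aete idea sketched above might look something like the following in plain Python. The function and cache-directory names are hypothetical (neither aeve nor AppScripting is being quoted here); the actual parser is passed in as a callable, and the application's modification time decides when a re-parse is needed.

```python
import os
import pickle
import tempfile

# Hypothetical cache location; a real implementation would pick
# somewhere more permanent, e.g. under the user's preferences folder.
CACHE_DIR = os.path.join(tempfile.gettempdir(), "aete_cache")


def load_terminology(app_path, parse_aete):
    """Return parsed terminology for app_path, re-parsing only when the
    application is newer than the cached copy (or no cache exists)."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = app_path.replace(os.sep, "_")
    cache_file = os.path.join(CACHE_DIR, key + ".pickle")
    app_mtime = os.path.getmtime(app_path)
    if os.path.exists(cache_file) and os.path.getmtime(cache_file) >= app_mtime:
        with open(cache_file, "rb") as f:
            return pickle.load(f)          # cache hit: skip the parse
    terms = parse_aete(app_path)           # the expensive aete parse
    with open(cache_file, "wb") as f:
        pickle.dump(terms, f)
    return terms
```

The second and later calls for the same (unchanged) application come straight from the pickle, which is exactly the "one-off charge" discussed in the thread, except amortised across sessions instead of within one.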
Here is a picture of my GUI when it first opens up: everything is good. However, here's what happens when I resize it a bit smaller from the bottom: the search pane components are missing and part of the Book button is obscured. Now when I make things even smaller, everything is obscured but the table. The JFrame uses a BorderLayout. What can I do to fix this problem? I've looked at ComponentListener to resize the window whenever someone wants to make it too small, but this seems like a hack.

[ September 10, 2003: Message edited by: Ramses Tutoli ]

Oh, and two things. First, does the Book button make a screen pop up for the user to enter the customer number? But mainly, I'd switch the Search and the JTable around: the search (combos and search button) first at the top, then the JTable underneath it, then the Book button at the bottom. Just as a person reads top to bottom, left to right (sure, some go right to left, but Sun is in the US, so it's US standards), a GUI should work the same; it is simpler and easier to handle from a user's point of view. Mark

Joined: Oct 08, 2001

Or set the size of the window, and don't let the users resize the screen. That's what I did, and I still got 24/24 on the GUI.

I think it is safe to say that you got lucky, Mark. I've never seen a good-quality Java app that would not allow you to resize the frame. Another suggestion: try putting the Location JComboBox (and JLabel) below the Name, rather than to the right. Your current setup is basically forcing the minimum width to be bigger than it needs to be. Once your window goes below that minimum, that's what creates problems. The problem with having a single JScrollPane around everything is that when you've got an outer scroll pane and one or more inner scroll panes, the outer one may kick in too soon, when you really want the inner panes (esp. the JTable's scroll pane) to handle most of the load.
When you've got scroll panes inside scroll panes, things get more complex, and "who's in charge" depends a lot on which layout manager you're using; I find it too much of a pain to deal with. Though to be fair, that may just be because I never spent enough time working at mastering the technique; I just avoid nesting scroll panes.

"I'm not back." - Bill Harding, Twister

Joined: Sep 05, 2003

Some good advice here. I opted to go with my plan to put a scroll pane around the frame, however. Here's how it looks now (shrunken down a bit): and here's how it looks when it's resized: I like this solution better than preventing them from resizing at all or using a ComponentListener to force a minimum height and width, because I want the user to have more choice in choosing the display size he wants. If I explain my design decision in this way, I won't get docked any points if the reviewer doesn't agree with my choice, will I?

You know what, I think the new way is very pretty. I like the look of the screen, and I think the scrollbars look good if the screen is shrunk. I have seen that in many production commercial applications on the market. I don't think you'd lose points in that area. Mark

Joined: Jan 30, 2000

Yeah, it does seem decent. As long as the table's scroller kicks in before the outer scroller, all is well. Guess I was traumatized by a bad experience with this; I need to experiment more.
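The approach the thread settles on (one outer scroll pane wrapped around the whole content, with the JTable keeping its own inner scroll pane) can be sketched like this. The class and component names are illustrative, not taken from the assignment being discussed.

```java
import java.awt.BorderLayout;
import javax.swing.JButton;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.JTable;

public class ScrollableRoot {
    // Build the whole UI inside one panel, then wrap that panel in an
    // outer JScrollPane; a JFrame would then use the returned scroll
    // pane as its content. When the window shrinks, the inner table's
    // scroll pane should absorb the change before the outer one does.
    public static JScrollPane buildContent() {
        JPanel root = new JPanel(new BorderLayout());
        root.add(new JPanel(), BorderLayout.NORTH);               // search combos + button
        root.add(new JScrollPane(new JTable()), BorderLayout.CENTER);
        root.add(new JButton("Book"), BorderLayout.SOUTH);
        return new JScrollPane(root);
    }
}
```

With this arrangement, shrinking the frame below the panel's preferred size makes the outer scrollbars appear instead of clipping the search components, which is exactly the behaviour shown in the second set of screenshots.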
<?php

namespace Pantheon\Terminus\Commands\Org\Team;

use Pantheon\Terminus\Commands\TerminusCommand;

/**
 * Class RoleCommand
 * @package Pantheon\Terminus\Commands\Org\Team
 */
class RoleCommand extends TerminusCommand
{
    /**
     * Change an organizational team member's role
     *
     * @authorize
     *
     * @command org:team:role
     *
     * @param string $organization The name or UUID of the organization of which the user is a member
     * @param string $member The UUID, email address, or full name of the user to change the role of
     * @param string $role [unprivileged|admin|team_member|developer] The role to assign to this member
     *
     * @usage terminus org:team:role <organization> <member> <role>
     *     Changes the role of the team member identified by <member> from the <organization> organization to <role>.
     */
    public function role($organization, $member, $role)
    {
        $org = $this->session()->getUser()->getOrgMemberships()->get($organization)->getOrganization();
        $membership = $org->getUserMemberships()->fetch()->get($member);
        $workflow = $membership->setRole($role);
        while (!$workflow->checkProgress()) {
            // @TODO: Add a Symfony progress bar to indicate that something is happening.
        }
        $this->log()->notice(
            "{member}'s role has been changed to {role} in the {org} organization.",
            [
                'member' => $membership->getUser()->get('profile')->full_name,
                'role' => $role,
                'org' => $org->get('profile')->name,
            ]
        );
    }
}
Ok, so I have spent the last few months setting up a DAW strictly for VOs. Special thanks to Teddy G for all his input... Keep in mind I am not building a dedicated DAW as of yet, just tweaking and optimizing the Vaio Desktop PCV-RZ32G. I'll build a dedicated one after Vista comes out.

- Asus motherboard made for Sony (average IDE-based proprietary MB)
- 2.6 GHz P4 with HT
- 2.0 GB DDR 2700 RAM (4x512)
- Sony DVD-RW drive with 8MB cache and 2.0 firmware upgrade (came with the PC, very solid)
- Seagate Barracuda 7200.7 80GB HDD. Contains my OS on C: and my audio apps on D: (AA 2.0)
- Seagate Barracuda 7200.9 300GB HDD with 16MB cache. Best IDE drive I could find. Contains the temp folder for AA 2.0, loops, and write-only files.
- Both HDDs on the same IDE cable.
- The 300W power supply could not be replaced without replacing the MB and case (proprietary to Sony), but I yanked out my CD-ROM, floppy, and secondary USB hub to conserve power/CPU. Also have a 540W UPS/backup.
- 256MB GeForce 6200 video card - AGP
- Lynx L22 in PCI slot 2 for a separate IRQ (buying this month)

Windows XP Home tweaks:
- Set for background services over programs
- Virtual memory is 3070MB on C:, D:, Z: (audio drive)
- No visual effects
- System Restore disabled on D: and Z:
- Use MSConfig to run only 7-10 services/startups for recording sessions
- No internet connection during recording sessions
- All IE temp folders are emptied when IE is closed
- All drives cleaned and defragged weekly, periodically backed up to a 3rd HDD with Norton Ghost
- Both drives running at 32-bit transfer rate in BIOS, DMA enabled, etc.
- Various other XP and AA 2.0 tweaks as well... A dual boot will be on the next DAW.

Buying an EV RE20 with SM and accessories, a Mackie Onyx 1220, and AKG 271 headphones. Already had Roland DM-20s (borderline prosumer) for monitors; I will replace these in my next DAW. A Grace 101 pre will be purchased down the line as well. My room is going to have 2" thick office-cubicle panels against the walls and various sound-dampening items.
The PC will also be contained in a sound-treated cart, although it is very quiet on its own. As said before, a dual boot, a new MB, and a new case just weren't in the budget as of yet, because I would have to buy everything from scratch thereafter (RAM, 3 HDDs, processor, OS, power supply, etc.). Consider this my first serious DAW: the one to practice and learn with. Thanks again, Teddy.
Oracle Sales Cloud

Guest post by Prity Tewary, Senior Consultant, Infosys

Infosys has lately been involved in multiple implementations of Oracle Sales Cloud, an Oracle solution focusing on two key areas: sales planning and salesperson productivity. This blog highlights some of the important modules of the product in each key area, challenges and learnings from the past few implementations, and the adoption roadmap that Infosys recommends for an organization to achieve the desired business benefits from an Oracle Sales Cloud implementation.

- Sales Planning: It's important to realize the need to automate and focus more on sales planning, instead of treating it as a non-revenue-generating function. Analysis shows that managing territories through Excel sheets and home-grown solutions is time consuming, error prone, and carries a higher total cost of ownership. Modules recommended: Oracle Fusion Territory Management, Oracle Quota Management, Oracle Incentive Compensation
- Sales Productivity: Many organizations primarily conduct their sales process using Outlook or Excel. In such a scenario there is a lack of visibility of ownership, resulting in dead or duplicate leads and lost opportunities. There is confusion and inconsistency in lead and opportunity qualification in spite of regular trainings. Modules recommended: Oracle Lead, Opportunity and Forecast Management

Challenges:
- There is substantial time required for developing an organization's best-practice sales method and assessments.
- Process standardization to cover rollout to other countries is a challenge.
- Comprehensive geography data is not readily available.
- There is no address validation against the master geography setup during customer import.

Learnings:
- There should be substantial emphasis on development of the organization's best practices prior to project initiation.
- There should be adequate representation of the core team for process standardization.
- A vendor should be identified who can provide the complete set of master geography data needed by the business for assigning territories.
- The correctness of customer addresses should be ensured before import.

Infosys recommends a two-phased approach for Oracle Sales Cloud implementation:

I. Phase 1 - Sales Productivity

Focus on implementation of the vanilla functionality of Oracle Sales Cloud. Infosys offers an Oracle Cloud fixed-price solution for customers to implement the product in a fixed time with measurable business value. Modules to be implemented:
- Lead Management
- Opportunity Management
- Territory Management
- Customer Center
- Sales Catalog

II. Phase 2 - Implementation of additional Oracle Sales Cloud functionality
- Sales Desktop (Outlook)
- Sales Mobile (iPhone, Android, iPad)
- Incentive Compensation, etc.

Overall business value achieved with an Oracle Sales Cloud implementation includes, but is not limited to, increased sales velocity through process standardization, increased sales productivity and selling time through process automation, increased customer satisfaction and conversion rates, and increased revenue.

Meet Infosys experts at Oracle OpenWorld 2013, Booth No. 1411, Moscone South. Explore more at http://www.infosys.com/oracle-openworld. Follow us on Twitter - http://twitter.com/infosysoracle
Respect user-specified PGID/PUID

Tick the checkbox if you understand [x]:
[x] I have read and understand the pull request rules.

Description

This PR shows better what I meant in https://github.com/louislam/uptime-kuma/issues/4310. Unfortunately I did not have time to properly test all built containers (I see you build for multiple architectures!), but I can confirm that a local container built with

```
node ./extra/env2arg.js docker buildx build --load -f docker/dockerfile --platform linux/amd64 -t louislam/uptime-kuma:latest --target release .
```

does work, and I have no reason to suspect it won't work for other platforms.

How to test this yourself

On the Docker host, create a user with a known UID and GID:

```
useradd testuser -u 9999 -g 9999
```

On the Docker host, create a directory that is owned by the user you just created:

```
mkdir testdir && sudo chown testuser:testuser testdir
```

Specify the UID and GID in your Docker Compose file like so:

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    volumes:
      - ./testdir:/app/data
    ports:
      # <Host Port>:<Container Port>
      - 3001:3001
    restart: unless-stopped
    environment:
      - PGID=9999
      - PUID=9999
```

Run `docker compose up -d`. Then `docker logs uptime-kuma-1` should show some log lines showing the node user getting its UID and GID updated. Assuming your container started correctly, connect to its shell and verify that the node user now has the correct UID/GID:

```
docker exec -it uptime-kuma-1 /bin/bash
# OR you can just run the command directly
docker exec uptime-kuma-1 cat /etc/passwd
```

If you attempt these same steps using the currently published Docker image, you will see that it does not work, as there is nothing handling the PGID/PUID vars.

Why like this?

Since we need the UID/GID change to happen when the image is started, we need to execute a shell script every time it starts.
Given the desire to not run the main process as root, a shell script that does the change we need and then immediately drops to the node user in order to start the server seemed the easiest way to do this. Changing the Node uid/gid is no trouble, and is even recommended in the best practices guide, but unfortunately the approach they suggest there won't work for us, since we want to allow the user to specify the UID/GID rather than us baking it in. It defaults to 1000/1000, which is what it would have been defaulting to before. A happy accident, by the way, since 1000:1000 is the default uid/gid for the first created 'real' user on almost every Linux system :P.

Type of change

Please delete any options that are not relevant.
- Other

Checklist

- [x] My code follows the style guidelines of this project
- [ ] I ran ESLint and other linters for modified files
- [x] I have performed a self-review of my own code and tested it
- [ ] I have commented my code, particularly in hard-to-understand areas (including JSDoc for methods)
- [x] My changes generate no new warnings
- [ ] My code needed automated testing. I have added them (this is an optional task)

Let me know if I can do anything more. I'd love to see this, or a better version of this, merged, since that means I can then use it!

---

Hey! No rush on this at all, but I was wondering if there was anything I could do to help this along. I'm happy to help in any way I can.

---

Starting as root and then dropping privs in a startup script can at best be called "almost rootless but not quite". In environments where rootless is required/enforced, this new approach cannot be used.

---

@hoerup Could you explain more why this cannot be used and what would be needed?

---

> I was wondering if there was anything I could do to help this along

@kn100 v2.0 comes with ...-rootless images. I would like to see if the issue is still present once we have published the first beta, and see how this issue works with the new "architecture", before accepting such a change.
I hope you understand this hesitation. ^^ That's totally cool. Let me know when it's ready for testing and how I can test it, I'm happy to test it myself. I agree the approach is non-ideal - rootless in rootless is kind of a mess :D
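For readers wondering what the entrypoint approach discussed in this PR might look like: below is a rough, hypothetical sketch, not the actual script from the PR. The `setpriv` invocation, the `/app/data` path, and the `server/server.js` entry file are my assumptions; it must run as root inside the container, so it cannot be tested standalone.

```sh
#!/bin/sh
# Hypothetical entrypoint sketch: remap the "node" user to the
# operator-requested PGID/PUID, fix data ownership, then drop
# privileges before starting the server.
PUID="${PUID:-1000}"
PGID="${PGID:-1000}"

# Change the node group/user IDs to what the operator asked for
groupmod -o -g "$PGID" node
usermod  -o -u "$PUID" node

# Make sure the data volume is owned by the remapped user
chown -R node:node /app/data

# Drop from root to node and exec the server. setpriv is one option;
# su-exec or gosu are common alternatives, depending on the base image.
exec setpriv --reuid node --regid node --init-groups node server/server.js
```

The key property is the final `exec`: the server replaces the shell as PID 1, so signals are delivered directly to Node rather than to a wrapper script.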
GITHUB_ARCHIVE
Definition of Code Efficiency

Code efficiency refers to the optimization of a program’s performance by minimizing the required resources and time needed for execution. It involves writing code in a way that reduces complexity, improves readability, and maximizes the use of system resources. Efficient code not only enhances the speed, but also reduces memory consumption and overall system demands. The phonetic pronunciation of the keyword “Code Efficiency” is: kohd ih-fish-uhn-see.

- Code efficiency is about minimizing the resources consumed by your program, which includes reducing the time complexity and optimizing memory usage.
- Efficient code is easier to maintain, debug, and scale. It reduces hardware requirements, power consumption, and associated costs.
- Techniques to improve code efficiency include using appropriate data structures and algorithms, reusing code, avoiding redundancy, and profiling and optimizing your code regularly.

Importance of Code Efficiency

Code efficiency is important because it directly impacts the performance, maintainability, and scalability of a software system. Efficient code consumes fewer resources, such as memory and processing power, leading to faster execution times, reduced energy consumption, and smoother user experiences. Furthermore, efficient code can simplify the development process, enabling developers to easily understand, modify, and extend the system as necessary. This, in turn, can reduce development costs and timelines. Overall, code efficiency is a crucial aspect of software development that contributes to the long-term success and sustainability of the technology in question. Code efficiency is an essential aspect in the world of software development, as it reflects the effectiveness of software in executing tasks using a minimal amount of resources, such as processing power, memory, and time.
The primary purpose of code efficiency is to optimize the performance of software applications, ensuring that they deliver optimal results with as few resources as possible. This is highly important when accommodating a broad range of devices – from powerful servers to resource-limited devices such as IoT gadgets, smartphones, and wearables. By exercising code efficiency, developers can create software solutions that are not only faster and more responsive but also more cost-effective in terms of resources, thereby providing a superior user experience that is better able to meet the demands of modern digital environments. To accomplish this goal, developers employ a variety of techniques and best practices during software development to identify potential bottlenecks, optimize algorithms, and reduce the amount of redundant or unnecessary code in a system. Examples of these techniques include adopting modular programming structures, using appropriate data structures and algorithms, and engaging in thorough code review processes. Furthermore, various performance profiling tools are used to evaluate the performance of the code under real-world scenarios, allowing developers to identify and address areas that could be improved in terms of efficiency. By giving due attention to code efficiency, developers can deliver software that is more scalable, lightweight, and resource-friendly, ensuring that it remains competitive in an increasingly crowded and demanding technological landscape. Examples of Code Efficiency Google’s search algorithm: Google’s search engine is an excellent example of code efficiency. They use highly efficient algorithms to quickly parse through vast amounts of data and retrieve relevant search results for users. This efficiency allows millions of users to find information quickly and accurately on a daily basis, with minimal delays and system resource usage. Video streaming platforms (e.g. 
YouTube, Netflix): Their ability to seamlessly deliver high-quality video content to millions of users relies heavily on code efficiency. By using efficient encoding, compression, and streaming algorithms, these platforms can provide a smooth viewing experience without using excessive bandwidth or server resources. Additionally, algorithms like adaptive bitrate streaming and content delivery networks help ensure uninterrupted and smooth playback. Mobile applications (e.g. Uber, Waze): A major factor in the success of these applications is their ability to offer quick and reliable services to users. Efficient code is the backbone of these services, as it allows them to load quickly, provide real-time data, and maintain a responsive user interface. For example, Uber’s algorithms are designed to accurately match riders with nearby drivers and provide real-time updates on ride arrival times, while Waze calculates optimal routes based on user-reported road conditions and other real-time data. Code Efficiency FAQ 1. What is code efficiency? Code efficiency refers to the quality of a software program’s ability to perform tasks with minimal resource consumption, such as computation time, memory usage, and power consumption. Efficient code enhances performance and reduces the requirements for system resources. 2. Why is code efficiency important? Code efficiency is crucial for multiple reasons, including improving the performance of a program, reducing the amount of energy required to run the code, reducing hardware demand, and ultimately enhancing the user experience. Efficient code also fosters maintainability and easier scalability of the software. 3. How can I improve the efficiency of my code? To improve code efficiency, consider refactoring the code, optimizing algorithms, employing best coding practices, minimizing nested loops, utilizing appropriate data structures, and making use of existing libraries or frameworks where necessary. 4. 
What are some common examples of inefficient code? Inefficient code can include unnecessary global variables, repeatedly performing the same calculations, needlessly large variable declarations, using inappropriate data types, lack of caching, or employing inefficient algorithms.

5. How do I measure the efficiency of my code? Profiling tools are available to measure various aspects of code efficiency, such as execution time, memory consumption, and CPU usage. Analyze the measurements to determine which parts of your code may need optimization, and re-evaluate your algorithms, functions, or data structures accordingly.

Related Technology Terms

- Algorithm Optimization
- Time Complexity
- Space Complexity
- Big O Notation
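The "appropriate data structures" advice above can be made concrete with a small Python comparison (an illustrative sketch of mine, not from the glossary): testing membership in a list scans the whole list, while testing membership in a set is a constant-time hash lookup on average.

```python
import timeit

def has_dupes_list(items):
    # O(n^2) overall: each membership test scans the growing list
    seen = []
    for x in items:
        if x in seen:
            return True
        seen.append(x)
    return False

def has_dupes_set(items):
    # O(n) overall: set membership is an average O(1) hash lookup
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

data = list(range(5000))  # no duplicates, so both functions do full work
slow = timeit.timeit(lambda: has_dupes_list(data), number=3)
fast = timeit.timeit(lambda: has_dupes_set(data), number=3)
```

Both functions return the same answers; only the data structure changes, and the set version is faster by orders of magnitude on inputs of this size.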
OPCFW_CODE
import numpy as np
from scipy.stats import norm
from PIL import Image, ImageDraw  # needed for Image.new / ImageDraw.Draw below
from pyray.misc import zigzag2
from pyray.shapes.oned.curve import *
from pyray.shapes.twod.plot import *
from pyray.shapes.oned.circle import draw_circle_x_y, generalized_arc


def make_circle():
    for ii in range(10):
        theta = -np.pi*2*ii/9
        im = Image.new("RGB", (512, 512), (256, 256, 256))
        draw = ImageDraw.Draw(im, 'RGBA')
        mc = MapCoord(im_size=np.array([512, 512]), origin=np.array([4, 4]))
        cnv = Canvas(mc, im=im, draw=draw)
        cnv.draw_grid()
        cnv.draw_2d_arrow(np.array([-4, 0]), np.array([4, 0]))
        cnv.draw_2d_arrow(np.array([0, 4]), np.array([0, -4]))
        # cnv.draw_2d_line(np.array([-4, -4]), np.array([4, 4]), rgba="purple")
        # cnv.draw_2d_line(np.array([4, -4]), np.array([-4, 4]), rgba="yellow")
        pt1 = np.array([np.cos(theta), np.sin(theta)])
        pt2 = np.array([np.cos(theta), -np.sin(theta)])
        cnv.draw_2d_arrow(np.array([0, 0]), pt1)
        cnv.draw_point(pt2, fill=(220, 150, 10, 180))
        prcnt = theta/2/np.pi
        generalized_arc(draw, np.eye(3), np.array([0, 0, 1]),
                        point=np.array([.5, 0, 0]), scale=mc.scale,
                        shift=np.array([256, 256, 0]), prcnt=prcnt, width=1)
        # cnv.draw_point(np.array([1, 1]), fill="green")
        # cnv.write_txt((1, 1), "(1,1)", "green")
        # cnv.write_txt((-1, -1), "(-1,-1)", "red")
        mc.plot_to_im(0, 0)
        draw_circle_x_y(cnv.draw, np.eye(3), radius=1,
                        shift=np.array([256-mc.scale, 256-mc.scale, 0]),
                        scale=mc.scale, arcExtent=360.0, width=1,
                        start=np.array([0, 1, 0]), rgba="grey")
        basedir = './Images/RotatingCube/'
        cnv.im.save(basedir + "im" + str(ii) + ".png")


def halving_lemma():
    for ii in range(30):
        im = Image.new("RGB", (512, 512), (256, 256, 256))
        draw = ImageDraw.Draw(im, 'RGBA')
        mc = MapCoord(im_size=np.array([512, 512]), origin=np.array([4, 4]))
        cnv = Canvas(mc, im=im, draw=draw)
        cnv.draw_grid()
        cnv.draw_2d_arrow(np.array([-4, 0]), np.array([4, 0]))
        cnv.draw_2d_arrow(np.array([0, 4]), np.array([0, -4]))
        for jj in range(8):
            theta = -np.pi*2*jj/8*(1+ii/29.0)
            pt1 = np.array([np.cos(theta), np.sin(theta)])
            pt2 = np.array([np.cos(theta), -np.sin(theta)])
            cnv.draw_2d_arrow(np.array([0, 0]), pt1, rgba=(50, 50, 50, 50))
            cnv.draw_point(pt2, fill=(int(220*jj/7.0), 150, 10, 180))
        mc.plot_to_im(0, 0)
        draw_circle_x_y(cnv.draw, np.eye(3), radius=1,
                        shift=np.array([256-mc.scale, 256-mc.scale, 0]),
                        scale=mc.scale, arcExtent=360.0, width=1,
                        start=np.array([0, 1, 0]), rgba="grey")
        basedir = './Images/RotatingCube/'
        cnv.im.save(basedir + "im" + str(ii) + ".png")
STACK_EDU
COMS W4172: 3D User Interfaces and Augmented Reality

Assignment 1: Getting the Jump on Unity

For your first assignment, you will be building your own mobile phone game: a “platformer” with touch-based input! Your task is to design and implement the controls, environment, game conditions, obstacles, and user interface. As long as you meet the requirements presented here, feel free to get creative and make your game as interesting and fun as you can! Note: Unity is a powerful development environment. However, that power comes at the price of a sizable learning curve: You’ll need to get comfortable with a workflow that may be different from what you’re used to, along with what will probably be an unfamiliar language, C# (although you should already know the basic object-oriented programming concepts underlying it). Therefore, please start as early as possible on this assignment, so you can explore and become comfortable with the editor, and begin designing your game enough in advance that you’ll have time to come to office hours if you need assistance. Do take advantage of Unity’s extensive online Manual and Scripting API Reference, along with its many free tutorials. And remember that only one of your four late days may be used on this first assignment; so, the latest you can turn it in will be midnight on Wednesday, February 23, using that late day. Please start by either working your way through the Unity Roll-a-Ball tutorial (or reviewing it, if you’ve done it previously). However, you do not need (and will probably not want) to use the actual code (or any of the simple Unity primitive-based models) from that tutorial. 1. Platforms: Your game will contain a series of four platforms. A player character controlled by the user must successfully traverse each platform, from its beginning to its end, in sequence, while trying to keep their score as high as possible and their time as low as possible.
Falling off any platform will reset the player to the beginning of that platform and different kinds of events will decrement their score. The visual appearance of the player and the platforms and how the player transitions between platforms are all up to you, as long as you meet the requirements specified here. Whatever you do, please be sure that you (and we) can play your game through to the end without a lot of practice. That will be especially important for making a video that shows off your work! 1.1. Platform 1: Wall Dash 1.1.1. User Control: On Platform 1, the user should be able to make the player “jump” upward and toward a particular direction. The player should always stay upright during the jump and land on their “feet.” This should be accomplished by touch-based input, where touching the screen will cause the player to jump in that direction and face that direction when it lands. (Does the player jump higher or further from more touches? The distance of the touch from the player? The duration of the touch? This is up to you.) You should use the Unity Touch struct to accomplish this, along with raycasting to cast a ray into the scene through the point you touched from the camera’s perspective, to determine where the touch intersects the 3D scene. Please see the Camera.ScreenPointToRay() function and the section below on “Raycasting.” 1.1.2. Platform Description: Platform 1 should contain a wall of obstacles (an array of at least 3×3 obstacles) that initially blocks the player’s path (i.e., the player should not be able to simply jump over or around the wall). A simple example is shown in Figure 1. The player should be able to pick up (no hands needed) a prop that can launch projectiles in the direction the player is currently facing. The shape and size of a projectile is completely up to you. Projectiles should be able to knock down obstacles they hit, making it possible for the player to pass through where the wall was. 
Launching a projectile should be done with a UI button located on the screen and there should be no limit to the number of projectiles the player can launch. (You may find Object.Instantiate helpful here.) If the player comes in contact with any of the obstacles from which the wall was constructed, the score should be decremented and the player should not be able to pass through to the end of the platform when they are touching an obstacle. That is, just enough obstacles need to be out of the way that the user can pass without touching any. Figure 1. Example Platform 1 setup. White cube is the player, green cube is the prop for launching projectiles, blue cubes are the wall, and flat grey cuboid is the platform floor. (You are encouraged, but not required, to make your objects look more interesting!)
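The touch-plus-raycast control described in section 1.1.1 could be sketched roughly as below. This is a hypothetical, untested outline, not a solution to the assignment: the `TouchJump` class name, the `jumpForce` value, and the upward-bias heuristic are all my own assumptions, and it only runs inside the Unity engine.

```csharp
using UnityEngine;

// Hypothetical sketch of touch-driven jumping via raycasting.
public class TouchJump : MonoBehaviour
{
    public Rigidbody player;      // assumed reference to the player's rigidbody
    public float jumpForce = 5f;  // assumed tunable impulse magnitude

    void Update()
    {
        if (Input.touchCount > 0)
        {
            Touch touch = Input.GetTouch(0);
            if (touch.phase == TouchPhase.Began)
            {
                // Cast a ray from the camera through the touched screen point
                Ray ray = Camera.main.ScreenPointToRay(touch.position);
                if (Physics.Raycast(ray, out RaycastHit hit))
                {
                    // Jump toward the touched point in the 3D scene
                    Vector3 dir = (hit.point - player.position).normalized;
                    dir.y = 1f;  // always include an upward component
                    player.AddForce(dir.normalized * jumpForce, ForceMode.Impulse);

                    // Face the jump direction (yaw only, so the player stays upright)
                    Vector3 flat = new Vector3(dir.x, 0f, dir.z);
                    if (flat != Vector3.zero)
                        player.rotation = Quaternion.LookRotation(flat);
                }
            }
        }
    }
}
```

Keeping the player upright on landing (freezing rotation on the X and Z axes of the Rigidbody) and scaling the force by touch distance or duration are left as design choices, as the assignment intends.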
OPCFW_CODE
How I lost my fear of recruiters

I have never trusted recruiters, headhunters, or any sort of recruitment agency. I always thought that it must be a waste of time, that a recruiter definitely:

- will try to convince me to go to the company where he would get the most profit
- doesn’t care about me and wouldn’t advise something only for my benefit
- works only for big and boring companies
- does not himself know anything about programming or IT in general
- is just not for me

…

When I am looking for a job (which has been a couple of times now), I usually follow the same procedure: I look at my favourite websites ( https://www.startupjobs.cz/, https://techloop.io/ ) and search myself for the companies and positions I would like. This approach works, but as time goes on, the companies start to repeat - usually, when I see some job posting on one page, I find the same one on another. There is not much variety, so to speak. How do I get to the offers of companies which do not advertise on my favourite websites? Actually, I found my very first job, at Topmonks (https://www.topmonks.com/), via direct contact. I was looking for websites of smaller IT companies in Prague, and I found their website and felt really inspired by their motto and their general image. I read about them and I knew instantly I wanted to try to get in, even though they did not have any open job advertisement. The direct approach worked - I wrote them what I liked about them and why I thought I would be a good fit for the team, and they appreciated it. That is nice, but it is still not very practical… so here come the recruiters! I have met a few recruiters in my IT career and I accepted dozens of LinkedIn invitations from recruiters (because you never know, right? :) but the best connection is via something in common, something personal. For example, I met Luu Ly at one of the Czechitas events - she was there as a recruiter who wanted to learn the basics of programming herself.
I have reached out to her several times when I was looking for a job, but mainly I am the biggest fan of her blog (https://zezivotarecruiterky.com/). I just love reading about the recruitment process, and she (and her company https://www.3queens.cz/ ) do it so naturally and in such a friendly way that you can’t help but love them. She is showing you that she is also just a person and that recruitment can be hard and challenging, but it is something very rewarding as well. My biggest help has been another recruiter (from https://zeebra.cz/) - Ondra Moudry. He convinced me that nothing I originally thought about recruiters was correct. He helped me get to companies I had never heard about but which were actually a great fit. He even helped me after all “his” companies did not work out, and gave me advice and opinions from his own experience, without any hope of getting the recruitment fee. We met through a mutual friend, and also at one of the Topmonks Cafee meetups, and that is the best way to get to people. It also proves that recruiters actually do care about learning IT and about understanding the community. Those many recruiters I added on my LinkedIn profile never did anything for me, and why would they? I am not blaming them (except when they address me as a man, then I am very angry :)) and I am happy that I have a network of connections, but what is most valuable are people you can relate to - and even among recruiters you can make friends! So, why is it good to work with a recruiter? (Preferably someone you know, someone you met at a networking event, etc…) First, they know the jobs which you wouldn’t otherwise find. It is true; I don’t know why, but some companies just do not advertise. That doesn’t mean they are bad companies. Second, they know the company, they will give you advice on how to approach it, and they will honestly tell you what to expect. They will help you with salary negotiation, and they will make the whole process faster.
They can provide you with an outside view and even help you realize what it is you actually want. To summarize - I am very happy I lost my fear of recruiters, and I suggest every developer give them a chance. P.S. I must not forget to mention other great recruiters who helped me, inspired me, or became my friends - Petr Zatloukal and Beata Mazalova from https://www.msdit.cz/ are definitely those people! Journey - Think. Write. Grow. A self-improvement app which will help you to focus on your emotions, be positive, and stay mindful.
OPCFW_CODE
Back in 2016, many would have argued this was just another unbearable buzzword, but today many organizations are reaping the very real benefits of breaking down old monolithic applications, as well as seeing the very real challenges microservices can introduce. For teams dealing with loads of technical debt, microservices offer a path to the promised land. They promise to bring greater flexibility and easier scalability. Smaller code bases are easier to understand, and with clearly separated services the overall architecture is much “cleaner”. Microservices bring with them new and exciting possibilities (the cake is NOT a lie), but they’re still not without challenges. Anyone who tells you otherwise is sorely mistaken (or, more likely, trying to sell you something). Higher frequency releases and increased collaboration between dev and ops are exciting, but it’s important to stay diligent. Microservices may be considered a revolutionary way to build applications, but this new approach does not require us to completely start from scratch. Rather than asking what specialized framework you need to build a new microservices architecture, let’s ask how we can use current frameworks to support the same goal. But first… A short recap of what microservices are and where they came from: Martin Fowler, along with James Lewis, tried to define this new architecture in their first article covering microservices, way back in 2014: “The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery”.
Back then, microservices and the concept of containerized applications were so new there weren’t really specialized tooling or frameworks available to support building, deploying and running those kinds of applications. Rather, the focus was on adapting current tools for use with this new architectural style. In the past half a decade, the industry has exploded with technology built especially to support new microservices. That doesn’t mean that they’re the best suited for each individual’s needs though. In fact, unlike monoliths, which are generally developed with the tech stack in mind, each service in a microservices architecture can be built using a different framework based on its own functionality. This post is not about the pros and cons of microservices, but instead looks at the underlying technology most-suited to support it. If you’re looking to dig into some of the common pitfalls of microservices (and how to overcome them, of course), check out this post that covers the main challenges associated with microservices. Instead, we’ll go over some of the most popular frameworks for building microservices – both traditional and container-specialized. The classic Java EE, now Jakarta EE (JEE), approach for building applications is geared towards monoliths. Traditionally, an enterprise application built with Java EE would be packaged into a single EAR (Enterprise Archive) deployment unit which includes WAR (Web Archive) modules and JARs (Java Archive) files. Although there aren’t any technological restrictions ruling out the use of JEE for microservices architectures, there is a significant overhead cost. Each service would need to be packaged as a standalone unit, meaning it should be deployed within its own individual JEE server. That could mean deploying dozens or even hundreds of application servers to support a typical enterprise application. 
Luckily, the community noticed early-on that the standard JEE didn’t address the new build challenges that microservices introduced. Since 2016, many additional open source projects have been started to support microservices built in JEE. Eclipse MicroProfile is a continually growing set of APIs based on JEE technologies. It’s an OS community specification for building Enterprise Java microservices, backed by some of the biggest names in the industry, including Oracle, Red Hat and IBM. Bottom line: There’s no reason you can’t use Java EE for microservices, but it doesn’t address the operational aspects of running multiple individualized services. For those of you that want to move an existing monolith JEE app to microservices, there are plenty of “add-on” tools out there based on JEE technology to support your needs. Spring is one of the most popular frameworks for building Java applications and, like with Java/Jakarta EE, it can be used to build microservices as well. As they put it, “[microservices do] at the process level what Spring has always done at the component level.” Still, it’s not the most straightforward process to get an application with microservices architecture up and running on the Spring framework… You’ll need to use Spring Cloud (heavily leverages Spring Boot), several Netflix OSS projects and, in the end, some Spring “configuration magic”. For a deep dive on how to build microservices with Spring, check out this post straight from the source. Bottom line: Spring is well positioned for the development of microservices, together with an offering around external open source projects that address the operations angle. That doesn’t mean it will be easy though. Lightbend provides us with another option. Continuing with the same theme, Lagom wraps around the Lightbend stack with Play and Akka under the hood to provide an easier way to build microservices. 
Their focus is not only to provide an easy solution for those moving towards microservices, but to ensure that those microservices are easily scalable and reactive. In an interview with InfoQ back in 2015, Jonas Bonér, Lightbend’s CTO and co-founder, said: “Most microservices frameworks out there focus on making it easy to build individual microservices – which is the easy part. Lagom extends that to systems of microservices, large systems – which is the hard part, since here we are faced with the complexity of distributed systems.” Bottom line: Lagom takes Lightbend’s capabilities and leverages them in one framework, specially designed for building reactive microservices that scale effectively across large deployments. Their focus is not only on the individual microservices, but on the system as a whole. Not unlike the other frameworks we’ve looked at in this post, Dropwizard is a Java framework for developing ops-friendly, high-performance, RESTful web services: an opinionated collection of Java libraries that makes building production-ready Java applications much easier. Dropwizard Modules allow hooking up additional projects that don’t come with Dropwizard’s core, and there are also modules developed by the community to hook up projects like Netflix Eureka, similar to Spring Cloud. Bottom line: Since Dropwizard is a community project that isn’t backed by a major company (like Spring and Pivotal, Java EE and Oracle, or Lagom and Lightbend), its development might be slower, but there’s a strong community behind it and it’s a go-to framework for large companies as well as smaller projects. Apart from the 4 big players we’ve mentioned here, there’s a plethora of other projects that are worth mentioning and can also be used for writing microservices: Vertx, also under the Eclipse Foundation, is a toolkit for building reactive applications on the JVM. Some might argue it should have a spot at the big 4.
Spotify Apollo is a set of Java libraries that is used at Spotify when writing Java microservices. Apollo includes features such as an HTTP server and a URI routing system, making it trivial to implement RESTful services. Kubeless is a Kubernetes-native serverless framework. It’s designed specifically to be deployed on a Kubernetes cluster so users are able to use native Kubernetes API servers and gateways. Additional frameworks include Spark, Ninja and Jodd, Restlet and Bootique.io. Bottom line: The Java microservices playing field is quite large, and it’s worth checking out the smaller players just as much as the industry giants. It doesn’t matter which framework or platform you’re using, building microservices isn’t tightly coupled with any of them. It’s a mindset and an architectural approach, and the best practice (as always) is to find the best options for your application’s unique requirements. With that said, successfully implementing a microservice architecture doesn’t stop at the application itself. Much of the cost around it comes from so-called DevOps processes, monitoring, CI/CD, logging changes, server provisioning and more that are needed to provide continued support to the application in production. So, go ahead and enjoy your cake, but don’t forget to stay diligent as you reap your rewards. Enjoyed reading this blog post or have questions or feedback? Share your thoughts by creating a new topic in the Harness community forum.
OPCFW_CODE
It's fairly easy to write a user-defined Excel function (UDF) using VBA. Suppose you want to write a function that calculates the average of a range of cells, but excludes from the average anything that is not a number or is less than a tolerance. Let's call the function AverageTol. Alt-F11 gets you to the Visual Basic Editor (VBE). Enter the following VBA code:

Function AverageTol(theRange, dTol)
    For Each Thing In theRange
        If IsNumeric(Thing) Then
            If Abs(Thing) > dTol Then
                AverageTol = AverageTol + Thing
                lCount = lCount + 1
            End If
        End If
    Next Thing
    AverageTol = AverageTol / lCount
End Function

The function loops through every cell in the range and, if the cell is a number greater than the tolerance, adds it to the total and increments a count. Finally, it divides the total by the count and returns the result. Now go back to the Excel worksheet, enter some data in cells A1:A10, and in B1 enter:

=AverageTol(A1:A10, 5)

That was pretty easy, and works well for 10 cells. But if you have a lot of data, say 32000 cells, then 10 formulas using this UDF take over 5 seconds to calculate on my fast PC (Intel i7 870, 2.9 GHz). One major reason this is so slow is that I used all the defaults: I was lazy and did not declare any of the variables, so they all defaulted to Variants. That's SLOW … but I can easily improve it. Here is version A of AverageTol:

Function AverageTolA(theRange As Range, dTol As Double)
    Dim oCell As Range
    Dim lCount As Long
    For Each oCell In theRange
        If IsNumeric(oCell) Then
            If Abs(oCell) > dTol Then
                AverageTolA = AverageTolA + oCell
                lCount = lCount + 1
            End If
        End If
    Next oCell
    AverageTolA = AverageTolA / lCount
End Function

This is the same function but with each variable declared as a sensible type. This is good programming practice, and considerably faster: 10 formulas using this UDF on 32000 cells now calculate in 1.4 seconds. That's an improvement factor of 3.5, but still SLOW.
One reason it's slow is that there is a large overhead each time a VBA program transfers data from an Excel cell to a VBA variable, and this function does that lots of times (3 times 32000). If you transfer the data in one large block you can avoid much of this overhead:

Function AverageTolC(theRange As Range, dTol As Double)
    Dim vArr As Variant
    Dim v As Variant
    Dim lCount As Long
    On Error GoTo FuncFail
    '
    ' get Range into a variant array
    '
    vArr = theRange
    For Each v In vArr
        If IsNumeric(v) Then
            If Abs(v) > dTol Then
                AverageTolC = AverageTolC + v
                lCount = lCount + 1
            End If
        End If
    Next v
    AverageTolC = AverageTolC / lCount
    Exit Function
FuncFail:
    AverageTolC = CVErr(xlErrNA)
End Function

The statement vArr = theRange takes the values from all the cells in the Range and transfers them to a two-dimensional array of Variants. Then the UDF loops on each element of the Variant array. I also added an error-handling trap that makes the UDF return #N/A if any unexpected error occurs. Now the 10 formulas calculate in less than 0.1 seconds: that's an additional improvement factor of 14. But we haven't finished yet! Another speedup trick is to replace

vArr = theRange

with

vArr = theRange.Value2

That reduces the calculation time from 98 milliseconds (thousandths of a second) to 62 milliseconds. Using .Value2 rather than the default property (.Value) makes Excel do less processing (.Value checks to see if cells are formatted as Currency or Date, whereas .Value2 just treats all numbers, including dates and currency, as Doubles). We can also make another small speedup by using Doubles rather than Variants wherever possible. Change the For Each v … Next v loop to:

Dim d As Double
Dim r As Double
On Error GoTo skip
For Each v In vArr
    d = CDbl(v)
    If Abs(d) > dTol Then
        r = r + d
        lCount = lCount + 1
    End If
skip:
Next v

Now the calculation time has come down to 47 milliseconds.
So a series of small changes has improved the calculation speed of this simple UDF from 5.4 seconds to 0.047 seconds, 115 times faster!
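For readers who want to check the filtering arithmetic outside Excel, here is a minimal Python sketch of the same logic (the function name mirrors the VBA AverageTol above; like the VBA version, it fails with a division-by-zero error if nothing passes the filter — this is an illustration, not part of the original post):

```python
def average_tol(values, tol):
    """Average of numeric values whose absolute value exceeds tol.

    Mirrors the VBA AverageTol UDF: non-numeric entries and values
    within the tolerance are skipped.
    """
    total = 0.0
    count = 0
    for v in values:
        # Skip anything that isn't a plain number (strings, booleans, etc.)
        if isinstance(v, (int, float)) and not isinstance(v, bool):
            if abs(v) > tol:
                total += v
                count += 1
    return total / count
```

Running `average_tol([1, 10, "x", 20, 3], 5)` keeps only 10 and 20 and returns 15.0, matching what the VBA UDF would produce for the same inputs.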
OPCFW_CODE
MT9M001 to FPGA input timing MT9M001 is a CMOS image sensor. As its output it provides FRAME_VALID, LINE_VALID and DATA. The output signals are synchronized (edge-aligned) with PIXCLK, which is generated by the sensor. The datasheet is available, for example, at http://www.onsemi.com/pub_link/Collateral/MT9M001-D.PDF I read the sensor output using an FPGA, and it somehow works, but I have a hard time understanding the timing of LINE_VALID. Since this is the most critical signal for the image shape, I cannot ignore these problems anymore. The datasheet claims that the maximum frequency of the camera is 48 MHz. This is the frequency I use, so the period is 20.833 ns. I am supposed to read at the falling edge, which means at the 10.416 ns mark. This is a diagram from the datasheet: To set up valid timing constraints, I have to focus on t_PLH and t_PLL. Let's see how they are defined (min, typical, max values): According to these data, LINE_VALID goes from low to high up to 7 ns after the rising edge of PIXCLK, which is at least 3.4 ns before the falling edge (at 48 MHz). This means the t_LVS min value should be 3.4 ns, not 2 ns ...? But never mind, let's look at t_PLL. The maximum value is 13 ns, which means LINE_VALID goes from high to low no later than 13 ns after the PIXCLK rising edge. But the PIXCLK falling edge happens 10.4 ns after the PIXCLK rising edge, so the LINE_VALID falling edge arrives later than the PIXCLK falling edge. But only sometimes, because there is no typical or minimum value. Furthermore, if t_LVS is 2 ns, t_PLL would have to be lower than or equal to 8 ns. How do I handle this? For me it's a real problem, as my line lengths sometimes get messed up (especially when I over-illuminate the camera).
Based on t_OS and t_OH my data signal constraints are:

create_clock -period 20.833 -name cam_pixclk [get_ports CAM_PIXCLK]
create_clock -period 20.833 -name cam_pixclk_virt
set_input_delay -min -1 -clock cam_pixclk_virt [get_ports CAM_DATA*]
set_input_delay -max 1 -clock cam_pixclk_virt [get_ports CAM_DATA*]
derive_pll_clocks
derive_clock_uncertainty

But how do I continue with LINE_VALID?

I think you're misreading the timing diagram. They've provided a source-synchronous clock, PIXCLK. They use it in two ways: Data and PIXCLK are aligned, with data changing at the falling PIXCLK edge. This should make meeting timing super easy: just clock in the data on the rising clock edge. Sideband signals appear to be launched after the PIXCLK rising edge. You'd have to infer their hold time from the stated clock condition, which it appears you have done. Still, just clock them in on rising PIXCLK, same as the data. A tip: use IOB flops and an appropriate distribution for the incoming PIXCLK. IOB flop input timing is better controlled than using fabric flops. This will make meeting your constraints easier. Speaking of which... FPGA-centric timing constraints
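As a sanity check on the arithmetic in the question, the margins can be reproduced with a short script (the numbers are copied from the datasheet figures quoted above; this is a back-of-envelope aid, not a constraints file):

```python
# Timing figures from the MT9M001 discussion above (48 MHz PIXCLK).
PERIOD_NS = 1000.0 / 48.0        # 20.833 ns clock period
FALL_EDGE_NS = PERIOD_NS / 2.0   # falling edge at ~10.416 ns

T_PLH_MAX_NS = 7.0    # LINE_VALID low->high: max delay after rising edge
T_PLL_MAX_NS = 13.0   # LINE_VALID high->low: max delay after rising edge

# Rising LINE_VALID transition settles this long BEFORE the falling edge:
rise_margin_ns = FALL_EDGE_NS - T_PLH_MAX_NS      # ~3.4 ns of margin

# Falling LINE_VALID transition can land this long AFTER the falling edge:
fall_overshoot_ns = T_PLL_MAX_NS - FALL_EDGE_NS   # ~2.6 ns too late
```

The positive `fall_overshoot_ns` is exactly the problem the poster describes: sampling LINE_VALID on the falling PIXCLK edge cannot be guaranteed to work with the worst-case t_PLL, which is why the answer recommends capturing everything on the rising edge instead.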
STACK_EXCHANGE
- Midterm Solutions: Homework 1 (due by 4:30pm on Thursday of Week 2).
- Get instant access to our step-by-step Digital Image Processing...
- CS/ECE 545 Digital Image Processing. Homework 2, Spring 2014 (Due March 5, by 6PM). Note: Some of the problems below include...
- SIMG-782 Introduction to Digital Image Processing. Homework 4 Solutions. 1. Write a pseudocode algorithm whose input is a pair of vectors h and g of length 256...
- In class, work on the examples and the homework assignment. Course introduction (slides); 01-Intro to image processing (slides); Matlab Introduction (slides)...
- Get instant access to our step-by-step Digital Image Processing Using MATLAB(R) solutions manual. Our solution manuals are written by Chegg experts...
- Text Book: R. C. Gonzalez and R. E. Woods, Digital Image Processing. Homework Policy: Weekly written and/or computer programming assignment, due the...
- Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing: artwork and images found in our textbook, some homework solutions...
- Homework D Solutions on Digital Image Processing - ECE 5273 HW 4 Solution Spring 2014 Dr Havlicek. Note: This document contains solutions in both Matlab...
- Digital Image Processing Homework Solutions. Students taking courses that require image...
- ECE 468/568: Digital Image Processing. Code for computing SIFT; Homework 1; Lecture 2 (Textbook: 12.2.3); Exam 2: preparation; examination; solutions.
- Digital image processing homework solutions. Image manipulations: development time, and portability of image processing solutions. In spite of...
- Below you can find instructions about how to submit your homework online for this course.
- Figure 1: Masks employed in Sobel operator. Using (4), we can calculate the gradients of the binary image including a centered rectangular with...
- Homework Assignments for ECE 5273 Digital Image Processing. Assignments: Solutions: HW 1: PDF PostScript; PDF PostScript; C M-file.
- In this introductory course on digital image processing, we cover the basics in both theory and practice. Homework-1: Color histogram and demosaicing (download). CNN-based solution for image feature extraction and classification, overview of...
- A.K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, 1989. Homework assignments (20% of final grade: 10% written papers and 10% Matlab assignments). Correct solutions of all the written assignments will be given.
- Our text book is Digital Image Processing by Gonzalez and Woods. Warning: homework solutions are for reference, not plagiarism. If you find bugs or false logic...
- Text: R. Gonzalez and R. Woods, Digital Image Processing, 4th edition, Pearson, 2018. Homework solutions will be made available for each assignment.
- [Gonzales] Digital Image Processing/2E, R.C. Gonzales, R.E. Woods. Separability, Coordinate Transformations, [Gonzales] 4.1, 4.2, 4.6, Homework 2 solution.
- Digital image processing is ubiquitous, with applications including television... Graduate students will be given a larger amount of homework assignments than...
OPCFW_CODE
Case-Based Analysis: The Key Is the Data: Analog, or Case-Based, Analysis Is One of the More Powerful Trading Techniques. However, It Also Is One of the More Difficult to Systematize. Here, We Expand on Our Previous Discussion of Case-Based Reasoning and Cover the Steps to Computerizing This Strategy Ruggiero, Murray, A., Jr., Modern Trader In "Making the case for the trade" (October 2003), we discussed the basic ideas of case-based reasoning. We covered the notion of using analogs to make trading decisions and moved on to how case-based reasoning can be used to implement analog trading patterns. The concept of case-based reasoning for trading applications is simple: Look at the current record and find similar records in the past; then, observe what happened during some period after these past records and use the observations to forecast what will happen in the future. These forecasts can form the base of a trading system. Even though the core idea is simple, there are many issues to consider. For example, performing the distance measure for each pattern vs. the rest of the database will make the speed of computing unacceptable for a commercial application with a large database. Case-based applications employ various indexing and filtering methods to speed this process. One method that was briefly discussed in the last article is C4.5, a machine-learning methodology that develops a decision tree. The leaves of this tree can index the supporting cases and be used to retrieve similar cases to which a distance calculation can be applied. Another issue to address is weighting the fields used in the distance calculation to minimize entropy for the predicted outcome. Most case-based applications use some variation of calculating the Euclidean distance between patterns. This calculation is the easy part. The more difficult part of case-based applications is extracting features that can be used to describe a given case in a useful way.
An example would be a case-based application that when given a song, finds similar sounding songs. The research for the song identifier was completed at the University of California at Berkeley. This application used simple nearest neighbor matching, but the approach was novel in how features were extracted from each song that was compared. The study used musical structures such as frequency, tempo and amplitude taken from sampling during the song. These elements were used to create 1,248 features. These features were compared in the database to find similar songs. Analyzing market data is a similar problem requiring preprocessing and data sampling. THE PREPROCESSING PROBLEM Data preprocessing is a concept familiar to those who use neural networks. In developing neural networks, the attempt is to develop a process that is predictive of our desired output. In case-based reasoning, we want to develop preprocessing that is descriptive of a given window of data. Here, we'll assume that we are preprocessing for a data window of a given size. From this beginning, we can test for patterns of differing lengths based on changing the weighting when doing the distance matching. When developing preprocessing strategies, we need to determine the types of relationships that are important to uncover in our data. For example, if we are looking at intermediate patterns where we are simply looking at the general shape of the chart formations, we can develop the preprocessing based on the closing price. However, many patterns that we might try to uncover require the interaction between the open, high, low and close over multiple bars of data. For this reason, we need to be able to maintain the relationships that allow us to analyze chart features such as gaps, key reversal days, inside bar days, outside bar days, etc. We need to normalize these relationships. 
By normalizing, a distance measure of "0" is given if the exact pattern occurs in 1996 when a market is trading at 1000 or in 2003 when the market is trading at 1500. We also need to add a predictive set of fields to each record that can be used for prediction once we isolate similar cases. In preprocessing, we first need to develop a method for the representation of a single day. We define each day of data by its relative relationship to itself, yesterday and the day before. …
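The normalize-then-match idea described above can be sketched in a few lines of Python. This is a minimal, hedged illustration of the general technique (normalizing closing-price windows so price level drops out, then ranking past windows by Euclidean distance), not the article's actual preprocessing, which uses richer open/high/low/close relationships:

```python
import math

def normalize(window):
    """Express each close relative to the window's first close, so an
    identical shape at price 1000 or 1500 yields the same vector."""
    base = window[0]
    return [c / base - 1.0 for c in window]

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_cases(history, current, size, k=3):
    """Return the k most similar past windows as (distance, start index),
    sliding a window of `size` closes across `history`."""
    cur = normalize(current)
    scored = []
    for i in range(len(history) - size + 1):
        past = normalize(history[i:i + size])
        scored.append((euclidean(cur, past), i))
    scored.sort()
    return scored[:k]
```

With `history = [100, 101, 102, 50, 200, 202, 204]`, the window starting at index 4 has exactly the same normalized shape as `[100, 101, 102]` despite being at double the price level, so it matches with distance zero: this is the normalization property the article asks for.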
OPCFW_CODE
A drawing for a link of my snake robot, based on Dowling's design, but using larger servos. The basic structure uses sections of 2x2 inch aluminum angle (1/16" wall), mounting the servos using their internal mounting screws. A 4-40 flat head screw is mounted in a dimple (i.e. smashed with a ball peen hammer over a plate with a hole) in the angle. A standard threaded standoff is screwed onto the screw to provide a larger diameter pivot for the other arm of the yoke. The yokes are made with 1"x1/8" (or maybe 1"x1/16", if it is stiff enough) bar stock. (They are made of 1/16". The two "U" shaped brackets are epoxied together with a single 8-32 screw holding it together in the middle. The screw's not shown on the drawing (yet).) There are 9 total links, for a total snake length of around 2 meters. Link for Scott Edwards Electronics (http://www.seetron.com/), who make a nifty little PIC-based RS232 serial to servo pulse converter. $44 each, 8 servo outputs, 2400 or 9600 bps in. 3 of them are used on my snake robot. More details on the snake robot wiring page. Wiring and early electrical tests for the snake robot http://www.snakerobot.com/ - very slick (albeit expensive) robots http://www.snakerobots.com/ - Gavin Miller's very cool snake robots http://www.dd.chalmers.se/~f96mahe/evcomp.html Moving a Snake Robot using Genetic Programming - 5 servos, mouse hanging off the end used to evaluate fitness... Interesting http://www.frc.ri.cmu.edu/~nivek/OTH/resume.html Kevin Dowling's resume. His PhD thesis is a great introduction (he's a good writer) to building a snake robot and discusses many of the issues. I can't help feeling that his project is just a start, though. He got the robot built and worked out some algorithms, but he doesn't report on actually running the robot with any measurements (except some power measurements?). Perhaps time ran out? http://www.mil.ufl.edu/publications/FCRAR99/eno.pdf - Jormungand, an Autonomous Robotic Snake Charles W. Eno, Dr. A.
Antonio Arroyo Machine Intelligence Laboratory University of Florida Department of Electrical Engineering good pictures and general design and construction info. http://set.gmd.de/~worst/snake-collection_b.html GMD's Collection of Snake-like Robots robot/snakerobot.htm - 29 October 2001 - back to robots - back to Jim's home page - Jim Lux (mail)
OPCFW_CODE
M: Body Language Expert on Steve Jobs and Eric Schmidt's Photos - aresant http://gizmodo.com/5503192/so-awkward-steve-jobs-and-eric-schmidts-body-language-analyzed R: pedalpete Two things I found interesting 1) I had never thought before about Jobs' trademark attire of a black turtleneck, and I thought it was only something he did for the keynotes. But apparently here it is again. The body language expert doesn't make note of it, but this may be a hint as to just how deep Jobs' secretive streak goes. He is always protecting his neck, the point of vulnerability during communication. 2) The body language expert reads Schmidt's rounded shoulders as meaning he is afraid of Jobs. Could be. However, as a geek at heart, this is the posture of one who spends a ton of time in front of a computer and doesn't get much exercise (not that I know that about Schmidt). You can see in other photos of him, he has a very rounded upper back. [http://www.gadgetcom.com/wp- content/plugins/auto-blogster/im...](http://www.gadgetcom.com/wp- content/plugins/auto-blogster/images/eric-schmid-no-iphone.jpg) [http://www.ieplexus.com/wp-content/uploads/2009/05/google- fo...](http://www.ieplexus.com/wp-content/uploads/2009/05/google-founders.jpg)
HACKER_NEWS
Future of Programming Languages Our destiny will be shaped by many emerging technologies, and these new advancements all run on different programming languages. Learn the right programming language today, and it will open doors of opportunity, putting you straight to work in exciting fields such as Mobile Development, Blockchain, and Artificial Intelligence. Which are the best to learn? Here are some programming languages, some fairly new and some very old, that promise to play leading roles in the key technologies to come. On the Android side of the mobile aisle, Kotlin appears to be the native language of things to come. Since October 2017, Kotlin has been fully supported by Google for developing Android applications as an alternative to Java. Some major organizations, for instance Pinterest, Basecamp, and Expedia, have already switched to Kotlin for their Android applications. If you already know Swift, Kotlin code will look strangely familiar. The two have very similar syntax, which should let Swift engineers pick up Kotlin quickly, making it far less daunting for one person to write native code for both Android and iOS than when Java and Objective-C were the only options. Kotlin's interoperability with Java also gives it an inside track to gradually replace Java in large enterprise applications. Swift is a relatively young programming language. It first appeared in 2014, having been created by Apple as a replacement for Objective-C. It immediately gained popularity, particularly with iOS engineers, because it made their code considerably more concise, quicker to write, and less prone to basic mistakes than Objective-C. Swift has since been made open-source and its use has expanded outside of Apple's ecosystem.
Specifically, there’s a great deal ability for Swift as a server-facet language as a result of Linux help. With a great many humans getting to be stuck to their cellular telephones at painfully inconvenient times of the day, the requirement for iOS engineers isn’t leaving at any factor within the near future. C++ has been round because the mid-Nineteen Eighties, but it’s in addition as essential these days because of the task it performs in many growing advancements. Chief among these is blockchain. The Bitcoin center code is written in C++ as are different most important blockchains, as an instance, Ripple, Litecoin, Monero, EOS, and Stellar. Tight authority over memory the board, speedy execution, and development are for the maximum part professionals for picking C++ to compose blockchains—conveyed facts requiring severa hubs on a gadget to hastily acquire accord on squares of statistics. Numerous one of a kind corporations requiring superior essentially use C++ code too. These comprise gaming, net indexes, replacing frameworks, internet browsers, mechanical era, and vehicle programming. Solidity is any other programming language to remember getting to know in the event that you might want to break into blockchain development. The essential use instances for Solidity are decentralized applications and fantastic contracts strolling at the Ethereum level. The ascent of the ICO (Initial Coin Offering) as a financing tool for brand spanking new organizations has prompted a first-rate interest for gifted Solidity designers. The accomplishment of potential adversaries to Ethereum, for instance, NEO, EOS, and Cardano ought to diminish Solidity’s importance in a while. Be that as it could, for now, ERC-20 tokens walking on Ethereum’s blockchain continue to be a winning detail of the cryptographic cash scene. Python has been round for momentarily and is regularly the foremost language educated in Computer Science publications because it’s so herbal to research. 
Python can be used to write functional, object-oriented, or procedural styles of programming. It has an extensive range of existing libraries and thoroughly readable syntax that makes it quick to develop in and ideal for working in larger engineering teams. Despite its simplicity, Python is a powerful language that lies at the heart of many emerging technologies. Machine Learning, Artificial Intelligence (AI), the Internet of Things (IoT), and Data Science are all fields where Python plays a prominent role, and it should remain valuable well into the future. Crystal is another language that aims to bring C-like performance into the highly abstracted world of web developers. Crystal is aimed at the Ruby community, with a syntax that is similar to, and at times indistinguishable from, Ruby's. As the already sizeable number of Ruby-based startups keeps growing, Crystal could play a key role in helping take those applications' performance to the next level. Elixir also takes a great deal of inspiration from the Ruby ecosystem, but rather than trying to deliver C-like performance, it is focused on scalability, something Rails famously had trouble with. Elixir achieves these performance gains by running on the Erlang VM, whose strong reputation for performance was built over its 25 years in the telecom business. The Phoenix application framework for Elixir, more than any other piece of this blossoming ecosystem, has given the language legs. Now, take a quick look at four of these five languages moving up the popularity ladder, as indicated by StackOverflow and GitHub data. Each of these languages already has an enthusiastic community and its own weekly newsletter (that's when you know you've made it!).
In case you’re thinking about mastering a extra younger language with energizing achievable consequences for the future, read those elevate pitches for each one of the five dialects I just referenced—composed via skilled aficionados and pioneers in their separate organic structures. Everybody who’s taken a propelled direction in programming dialects knows the scholastic international adores the opportunity of realistic programming, which demands that every ability has very a great deal characterised assets of information and yields but no risk to get of worrying different factors. There are many amazing sensible dialects, and it’s far hard to encompass each one of them right here. Scala is a standout amongst the quality-recognized, with considered one of the larger purchaser bases. It became built to hold running on the JVM, so some thing you write in Scala can run wherever that Java runs—which is all over. There are valid justifications to accept as true with that sensible programming statutes, while pursued, can manufacture a more grounded code that is much less annoying to enhance and often freed from absolutely the maximum enraging bugs. Scala is one approach to dunk your toe into those waters. While MATLAB isn’t in peril of supplanting Java, C, or Python at any factor within the close to future on special companies’ “Most Popular Languages” postings, the language has favored a simply unfaltering ascent in appropriation. For example, it moved from the seventeenth spot to the thirteenth spot on the modern day launch of the TIOBE Index. What’s at the back of the language’s ascent? It’s useful for records exam and may companion pretty properly with well-known dialects, for instance, Python (that is making its very very own advances as an information technology equipment), Fortran, and Java. As greater groups mesh research into their paintings approaches, MATLAB may want to end up reducing out a clearly big forte for itself.
OPCFW_CODE
Using the Enterprise Data Mashup Service Engine Configuring Data Mashup Projects Using Joins After creating a virtual database, Data Mashup project, and EDM collaboration, you are ready to bring all your diverse data into a common staging area and create a federated view of the data. This topic provides the necessary steps for you to create data joins using the data stored in the staging tables. Perform the following steps to create and configure a join: To Add the Tables to the EDM Collaboration Before You Begin Before creating a join, you must have a virtual database, Data Mashup project, the NetBeans IDE must be running, and you must be connected to the virtual database. - If necessary, connect to the virtual database. - In the NetBeans IDE, click the Services tab and expand Databases. - Right-click the database you want to start and select Connect. In this procedure, start VirtualMashupDB. - In the NetBeans IDE Project window, expand the Data Mashup project. - Under Collaborations, double-click the EDM collaboration (demoDMfile.edm for this exercise). The file opens in the EDM Editor canvas. - Right-click in the EDM Editor canvas and select Add Table. - In the Select Source Table window, select the virtual database you created earlier (VirtualMashupDB). The tables in the database appear under Schema. - Highlight the SUPPLIER_ADDRESS and COMPANY_DATA tables and click Select. - Click OK. The window closes and the tables, along with Runtime Input, appear on the canvas. You are now ready to create the join. To Create the Join This step merges the SUPPLIER_ADDRESS and COMPANY_DATA into one table. - From the Table Operators palette, drag the Join operator onto the canvas. - In the Create New Join View window, click All to move both tables to the Selected Tables list. Both tables appear in the Preview area and are linked to the join. 
- To edit the type of join, click in the field at the top of the join table and select one of the following options:
- Inner - Returns only the records in the selected tables that match. For this exercise, use this option.
- Left Outer - Returns all records in the left table regardless of whether there are any matches with the right table. When there is no match, the field is NULL.
- Right Outer - Returns all records in the right table regardless of whether there are any matches with the left table. When there is no match, the field is NULL.
- Full Outer - Returns all records from the left and right tables in the merged table. All fields that do not match are NULL.
- Click OK. The root join is added to the canvas and is linked to the two tables. You can now create join conditions for the tables.

To Create a Join Condition

Once you create a join, you can edit it as necessary by creating join conditions.
- Open the EDM collaboration file in the EDM Editor.
- Right-click the Root Join table on the canvas and then click Edit Join Condition. The Edit Join Condition window, also called the Condition Builder, appears.
- Drag a column from the first table onto the empty canvas on the right. For this exercise, drag VENDOR from the CUSTOMER_DATA table.
- Drag and drop a comparison, string, or other operator onto the canvas to the right of the column name. For this exercise, use the equal (=) sign. Tip - To add an operator, click the appropriate icon on the toolbar. When the drop-down menu appears, click the icon you want to use and drag it onto the canvas.
- Drag a column from the second table onto the canvas to the right of the operator. For this exercise, drag the VENDOR column from the SUPPLIER_ADDRESS table.
- Click OK. The Edit Join Condition window closes.
- Click Save All. You are now ready to build the Data Mashup project.
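The join semantics listed above (Inner, Left Outer, and so on) can be made concrete with a small, self-contained Python sketch. The table and column names here (VENDOR, CITY, CONTACT) are illustrative stand-ins for the COMPANY_DATA and SUPPLIER_ADDRESS tables in the exercise, not the actual EDM data:

```python
def inner_join(left, right, key):
    """Inner join on lists of dicts: only rows whose key values match."""
    idx = {}
    for r in right:
        idx.setdefault(r[key], []).append(r)
    return [{**l, **r} for l in left for r in idx.get(l[key], [])]

def left_outer_join(left, right, key):
    """Left outer join: every left row survives; when there is no match,
    the right-hand fields are None (the NULLs described above)."""
    idx = {}
    for r in right:
        idx.setdefault(r[key], []).append(r)
    right_cols = {c for r in right for c in r}
    out = []
    for l in left:
        matches = idx.get(l[key])
        if matches:
            out.extend({**l, **r} for r in matches)
        else:
            out.append({**l, **{c: None for c in right_cols if c not in l}})
    return out

# Hypothetical sample rows joined on the shared VENDOR column.
company = [{"VENDOR": "A", "CITY": "NY"}, {"VENDOR": "B", "CITY": "LA"}]
supplier = [{"VENDOR": "A", "CONTACT": "Acme"}]
```

With this data, the inner join returns one merged row (only vendor "A" matches), while the left outer join returns both company rows, with `CONTACT` set to `None` for vendor "B". Right outer is the mirror image, and full outer is the union of left and right outer.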
To Complete the Data Mashup Project

Once the Data Mashup project is configured and saved, you need to build the project in order to create the WSDL document. This document is automatically generated based on the configuration of the collaboration.
- To complete the project, right-click the project name and select Build Project. A WSDL document named ProjectName_CollaborationName_engine.wsdl is created under the Collaboration node. In this exercise, the file is named DemoDMProject_demoDMfile_engine.wsdl.
You are now ready to create and deploy a composite application in the Sun GlassFish Enterprise Server. For instructions on how to perform this task, see Creating and Deploying a Composite Application for a Data Mashup Project. Return to GlassFish ESB Documentation Home
OPCFW_CODE
Update - started a new game with the 1.57ddh patch in place, but with no modification to the default port supply/demand setting. Upon the first delivery of autos to Toledo with its port, demand went to zero and stayed there - it did not recover. Like it always used to be. Goods demand also went to zero on first delivery, but then recovered... slowly. That's not exactly as I remember it, but my memory could easily be off on that point. Goods demand going immediately to zero didn't seem quite right, but at least it was starting to recover. First to 1, then to 2. I didn't continue to play long after that to see how far it would go, as the main purpose was to test the port mod's effect on autos. So my test removed any doubt about the effect of rearranging the port supply/demand positions as I previously outlined. Again, Jeffry, thank you for that tip. The sum of these changes in the patch and now in the port mod will increase revenue and make game goals easier to achieve. That's wonderful, but will almost necessitate an asterisk on new record games when compared against the harder, pre-patch game records. Oh, right; I was looking at the file and didn't see it, I am dumb. So it's like exclusion. I guess in some cases, if I set only $4000 it will affect a group of tiles I want to color but also others I don't want, while making the two-value rules will affect fewer, more specific tiles. Alright; I will just use normal rules or combined attributes then; I'll work on my stuff and see if I can do it; thanks for the info. I may add this post to the main index so this tutorial/manual is easy to find. Spice can be recolored on the minimap. In the ini there's a Spice_Settings section or something like that, with properties named something like ThinSpice_Color and ThickSpice_Color. Well, this is something different from what you think; I did not mention it, to keep things simple.
The first value ($6000) means: "Look at the vehicle-can-pass and infantry-can-pass attributes", and the second value ($4000) means "Only the infantry-can-pass attribute must be set". In other words, the rule as a whole means "The tile must have infantry-can-pass AND must NOT have vehicle-can-pass"; that is, it matches infantry-only tiles. But as I said, it is very confusing and not very well designed, so I did not use it in the example, to avoid confusing you even more. oooh, I got it. Buildable will be every single buildable tile, not just the ones that you can click+drag (which are only 3-4 different tiles); marking "buildable" will also include the tile with grass, the tile with the red thing and so on, so the rule is less specific. I guess the spice cannot be recolored. You are right. I know Fey did some recoloring of his tileset by trial and error, so I am going to try to do the minimap colors for my retro Dune 2 tileset, and if I have any doubt I will come back to ask you. Just confirm something for me; the vanilla .inis have this: ;Rough rocks (Infantry-only) The template says it is "color=and_value;check_value". I guess this means that it will search for any tile that has "infantry can pass" but also the ones with "vehicle can pass AND infantry can pass", to use the same color for two different tiles, right?
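The and_value;check_value rule described above is a standard bitmask test, and it can be sketched in a few lines of Python. The bit assignments below are inferred from the $6000 = $4000 + $2000 arithmetic in the post; treat them as illustrative rather than the tileset's documented layout:

```python
VEHICLE_PASS = 0x2000   # inferred: vehicle-can-pass attribute bit
INFANTRY_PASS = 0x4000  # inferred: infantry-can-pass attribute bit

def rule_matches(tile_attrs, and_value, check_value):
    """A 'color=and_value;check_value' rule matches a tile when the
    attribute bits selected by and_value equal check_value exactly."""
    return (tile_attrs & and_value) == check_value

# "$6000;$4000": inspect both passability bits, require infantry-only.
AND_VALUE, CHECK_VALUE = 0x6000, 0x4000
```

An infantry-only tile (`0x4000`) matches, while a tile with both bits set (`0x6000`) or neither bit set fails, which is exactly the "infantry can pass AND NOT vehicle can pass" reading given above.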
OPCFW_CODE
System Configuration is a tool that can help identify problems that might prevent Windows from starting correctly. You can start Windows with common services and startup programs turned off and then turn them back on, one at a time. If a problem doesn't occur when a service is turned off, but does occur when that service is turned on, then the service could be the cause of the problem. System Configuration is intended to find and isolate problems, but it's not meant as a startup management program. To permanently remove or turn off programs or services that run at startup, see Uninstall or change a program. The following table describes the tabs and options that are available in System Configuration: Lists choices for startup configuration modes: Normal startup. Starts Windows in the usual manner. Use this mode to start Windows after you're done using the other two modes to troubleshoot the problem. Diagnostic startup. Starts Windows with basic services and drivers only. This mode can help rule out basic Windows files as the problem. Selective startup. Starts Windows with basic services and drivers and the other services and startup programs that you select. Shows configuration options for the operating system and advanced debugging settings, including: Safe boot: Minimal. On startup, opens the Windows graphical user interface (Windows Explorer) in safe mode running only critical system services. Networking is disabled. Safe boot: Alternate shell. On startup, opens the Windows command prompt in safe mode running only critical system services. Networking and the graphical user interface are disabled. Safe boot: Active Directory repair. On startup, opens the Windows graphical user interface in safe mode running critical system services and Active Directory. Safe boot: Network. On startup, opens the Windows graphical user interface in safe mode running only critical system services. Networking is enabled. No GUI boot. 
Does not display the Windows Welcome screen when starting. Boot log. Stores all information from the startup process in the file %SystemRoot%\Ntbtlog.txt. Base video. On startup, opens the Windows graphical user interface in minimal VGA mode. This loads standard VGA drivers instead of display drivers specific to the video hardware on the computer. OS boot information. Shows driver names as drivers are being loaded during the startup process. Make all boot settings permanent. Doesn't track changes made in System Configuration. Options can be changed later using System Configuration, but must be changed manually. When this option is selected, you can't roll back your changes by selecting Normal startup on the General tab. Advanced boot options: Number of processors. Limits the number of processors used on a multiprocessor system. If the check box is selected, the system boots using only the number of processors in the drop-down list. Maximum memory. Specifies the maximum amount of physical memory used by the operating system to simulate a low memory configuration. The value in the text box is megabytes (MB). PCI Lock. Prevents Windows from reallocating I/O and IRQ resources on the PCI bus. The I/O and memory resources set by the BIOS are preserved. Debug. Enables kernel-mode debugging for device driver development. Go to the Windows Driver Kit website for more information. Global debug settings. Specifies the debugger connection settings on this computer for a kernel debugger to communicate with a debugger host. The debugger connection between the host and target computers can be Serial, IEEE 1394, or USB 2.0. Debug port. Specifies using Serial as the connection type and the serial port. The default port is COM 1. Baud rate. Specifies the baud rate to use when Debug port is selected and the debug connection type is Serial. This setting is optional. Valid values for baud are 9600, 19,200, 38,400, 57,600, and 115,200. The default baud rate is 115,200 bps. Channel.
Specifies using 1394 as the debug connection type and specifies the channel number to use. The value for channel must be a decimal integer between 0 and 62, inclusive, and must match the channel number used by the host computer. The channel specified does not depend on the physical 1394 port chosen on the adapter. The default value for channel is 0. USB target name. Specifies a string value to use when the debug type is USB. This string can be any value. Services tab. Lists all of the services that start when the computer starts, along with their current status (Running or Stopped). Use the Services tab to enable or disable individual services at startup to troubleshoot which services might be contributing to startup problems. Select Hide all Microsoft services to show only third-party applications in the services list. Clear the check box for a service to disable it the next time you start the computer. If you've chosen Selective startup on the General tab, you must either choose Normal startup on the General tab or select the service's check box to start it again at startup. Warning: Disabling services that normally run at startup might cause some programs to malfunction or result in system instability. Don't disable services in this list unless you know they're not essential to your computer's operation. Selecting Disable all won't disable some secure Microsoft services required for the operating system to start. Startup tab. Lists applications that run when the computer starts up, along with the name of their publisher, the path to the executable file, and the location of the registry key or shortcut that causes the application to run. Clear the check box for a startup item to disable it on your next startup. If you've chosen Selective startup on the General tab, you must either choose Normal startup on the General tab or select the startup item's check box to start it again at startup.
If you suspect an application has been compromised, examine the Command column to review the path to the executable file. Note: Disabling applications that normally run at startup might result in related applications starting more slowly or not running as expected. Tools tab. Provides a convenient list of diagnostic tools and other advanced tools that you can run. Open System Configuration by clicking the Start button, clicking Control Panel, clicking System and Security, clicking Administrative Tools, and then double-clicking System Configuration. If you're prompted for an administrator password or confirmation, type the password or provide confirmation. Click the General tab, click Diagnostic startup, click OK, and then click Restart. If the problem occurs, then basic Windows files or drivers might be corrupted. For more information, search Windows Help and Support for "Startup Repair." If the problem does not occur, then use Selective startup mode to try to find the problem by turning individual services and startup programs on or off. Click the General tab, click Selective startup, and then clear the Load system services and Load startup items check boxes. Select the Load system services check box, click OK, and then click Restart. If the problem occurs after restarting, do one or both (if necessary) of the following tasks: Click the Services tab, click Disable all, select the check box for the first service that's listed, and then restart the computer. If the problem doesn't occur, then you can eliminate the first service as the cause of the problem. With the first service selected, select the second service check box, and then restart the computer. Repeat this process until you reproduce the problem. If you can't reproduce the problem, then you can eliminate system services as the cause of the problem. Perform the following task: Click the General tab, and then select the Load startup items check box.
Click the Startup tab, click Disable all, select the check box for the first startup item that's listed, and then restart the computer. If the problem doesn't occur, then you can eliminate the first startup item as the cause of the problem. With the first startup item selected, select the second startup item check box, and then restart the computer. Repeat this process until you reproduce the problem. For more in-depth information, go to the Microsoft website for IT professionals.
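The enable-one-item-at-a-time procedure above is essentially a linear search for the first faulty item. As a toy illustration (the service names and the fault are invented for the example):

```python
# Toy model of the Selective-startup isolation loop: re-enable items one at
# a time, "restart", and stop at the first item that reproduces the problem.
def find_culprit(items, problem_occurs):
    """Return the first item whose re-enabling reproduces the problem,
    or None if the problem never comes back (the items are not the cause)."""
    enabled = []
    for item in items:
        enabled.append(item)          # select the next check box
        if problem_occurs(enabled):   # restart the computer and observe
            return item
    return None

# Hypothetical run where the third service is the faulty one.
services = ["SvcA", "SvcB", "SvcC", "SvcD"]
print(find_culprit(services, lambda on: "SvcC" in on))  # SvcC
```

In the worst case this takes one restart per item; the docs describe exactly this linear pass, first over services and then over startup items.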
Zhao Yu, Diao Chan, Cai Yan, Mi Zhen, Huang Yueying, the five sisters of Zhen Mi, Huang Wudie and Da Qiao in the back house of the Lord Mansion in Samsara City, and the three-year-old Great Empress Ferocity soon became familiar with one another. Ye Chen sent his Divine Consciousness outside. After seeing the result, he nodded in satisfaction and, with the help of the maids, began to prepare dinner. As far as Ye Chen was concerned, harmony among his women was what he most liked to see. Little by little, time passed, and one delicious dish after another appeared from Ye Chen's hands and was served onto plates. For a time, the scent in the City Lord Mansion grew stronger and stronger, making anyone who smelled it feel comfortable and hungry. When Ye Chen finished the 108th dish, he stopped his "crazy" cooking, and then he made a python broth. Into the broth, Ye Chen put dozens of spiritual medicines, plus nine kinds of novel vegetables that had grown from meteors. At first the broth was unremarkable and tasteless, but as time went by its aroma became stronger and stronger. When the aroma of the broth mixed with the aroma of the previous 108 delightful stir-fries, a sheet of multi-colored light appeared directly above the City Lord Mansion of Samsara City, flashing and transforming without pause, a truly magnificent sight. The mutation above the City Lord Mansion instantly attracted the attention of everyone in Samsara City. Just as everyone was shocked by the constantly changing, unusually gorgeous multi-colored light over the City Lord Mansion, the system prompt suddenly sounded in Ye Chen's ear.
“Ding, congratulations to player Ye Chen for successfully making Divine Grade gourmet food, the Full Python Feast. Because player Ye Chen was the first to make Divine Grade gourmet food, special reward: player Ye Chen gains Innate attribute points X3000 and the title God of Cooking.” As soon as the system’s prompt ended, Ye Chen, who had just poured the broth into the pot, was taken aback. Damn, when I was awarded the title of God of Cooking in my previous life, there were no Innate attribute points, but now there are… In Ye Chen’s previous life there had been only one God of Cooking, and that was Ye Chen; but when Ye Chen got the title of God of Cooking back then, there were no Innate attribute rewards at all, and now there were. How could Ye Chen not be confused? Just as Ye Chen grew a little puzzled, a thought involuntarily appeared in his mind. After a moment, Ye Chen looked towards the Small World and the python body of the Golden Python, of which he had used less than 10%. It must be the ingredients! In my previous life I never got any top-quality ingredients, and although the Golden Python of this evolutionary line is not comparable to dragon meat, Phoenix meat and the like, it can still reach the level of top-level ingredients… It must be so… Damn, I knew it. Wait until I fly to the Great Desolate, catch a dragon or a Phoenix, and use their meat to cook; then I can definitely get more Innate attribute points… Thinking of this, the corner of Ye Chen’s mouth twitched involuntarily, and then he recovered his calm. Ye Chen was not a person to agonize over gains and losses. Now that he had obtained the title of God of Cooking, the Innate attribute points were in hand, and the reason was clearly understood; there was no need to think about anything else. Otherwise it would only be tiring, and it would change nothing.
In general, the three thousand attribute points obtained this time thanks to the title of God of Cooking were an extra bonus. Not bad… Ye Chen had just thought this when a stream of information related to food appeared out of thin air and was instilled directly into him. Ye Chen couldn’t help freezing for a moment. This information falling from the sky… it looks different from the previous life… In Ye Chen’s previous life, when he obtained the title of God of Cooking, he also received information out of nowhere. It was because of that information that Ye Chen could use any ingredients to make top-notch food, and that he, having just lifted the seal, could so skillfully use Golden Python meat to make 108 dishes plus a pot of soup. Originally, Ye Chen had thought that the cooking information might not appear again just because he had obtained the title of God of Cooking a second time, and that even if it did, it would be the same. But in fact the information not only reappeared, it was also completely different from the cooking information he had received in his previous life. Just as Ye Chen thought of this, the system prompt sounded again. “Ding, congratulations to player Ye Chen, who has met special conditions and successfully advanced to Culinary Master.” As soon as the system’s prompt ended, Ye Chen was stunned. Damn, what the hell is that… Thinking this, Ye Chen looked straight at the attributes of the “Culinary Master” title. Kitchen Road Supreme: the only one in the world; the owner can use any ingredients to make the highest food. Features: 1. Conditioning: food cooked by the kitchen master regulates the bodies of all living creatures. Eating it long-term keeps the body in the best cultivation state and the best fighting state.
2. Medicated food: the medicated food prepared by the Supreme Kitchen can accelerate cultivation, and the degree of acceleration is related to the selected ingredients. 3. Broken Neck: the delicious food made by the Supreme Kitchen has a chance to let people break through a cultivation bottleneck. 4. Enlightenment: the food made by the Supreme Kitchen has a chance to let people comprehend the avenue, and the avenues glimpsed are related to the avenue of the Supreme Kitchen. 5. Inheritance: the Supreme Kitchen has an inheritance attribute; you can select one to a hundred people to inherit the cooking skill, and each inheritor automatically gets the title of God of Cooking, plus one of the Supreme Abilities of the Kitchen at random. Looking at the introduction to the attributes of the Supreme Kitchen, Ye Chen’s eyes flew open. Fuck! The Supreme Kitchen is so abnormal! God of Cooking sounds great as a title, but after all it’s still a cook, and Supreme Kitchen sounds much the same; even Ye Chen, after hearing the title of Master of Kitchen Road, did not think anything more of it. But in fact, even though the kitchen master is also a cook, its special characteristics are simply abnormal to excess. Of course, the first two characteristics of the Kitchen Dao Supreme were not enough to shock Ye Chen; after all, the God of Cooking could do the same. The problem is that from the third characteristic onward, things entered the ranks of the abnormal. Take the third characteristic of the Culinary Master, “Broken Neck”: if you eat a dish, you have a chance to break through a cultivation bottleneck. What does this mean? No doubt about it: saved time! Know that when you want to break through a cultivation bottleneck, you cannot break it just by wanting to. Some people are fast, while others spend hundreds or thousands of years without even the hope of a breakthrough.
Otherwise, the human players of Ye Chen’s previous life would have had Golden Immortals everywhere long before Ye Chen was reborn. Of course, that is not what matters. What matters is that if meals with the Broken Neck attribute are given to the soldiers of Samsara Immortal City, then breaking through their realms will no longer be a problem. If once is not enough, then twice; if twice is not enough, then three times. If after eating that many times they still cannot break through a cultivation bottleneck, that would be a joke.
Not working when i18n is on I have a Django 1.10 project working with django-cors-headers. It works fine when USE_I18N = False in settings. But as soon as I set USE_I18N = True, requests to the site result in: XMLHttpRequest cannot load http://<IP_ADDRESS>/api/auth/login/. Redirect from 'http://<IP_ADDRESS>/api/auth/login/' to 'http://<IP_ADDRESS>/en/api/auth/login/' has been blocked by CORS policy: Request requires preflight, which is disallowed to follow cross-origin redirect. Any light on how to make it work? I'm not sure. You might be having problems because you're using an IP address, which makes the request automatically "sensitive". Try using a hostname (localhost? Or add something to /etc/hosts, or try http://<IP_ADDRESS>.xip.io/ - docs at http://xip.io/). Thanks for the answer. I've tried it with localhost, same result. To be specific, I used: curl -H "Content-Type: application/vnd.api+json" -X POST -d '{"data": {"type": "obtainJSONWebTokens", "attributes": {"email": "<EMAIL_ADDRESS>", "password": "password"}}}' http://localhost/api/auth/login/ It works flawlessly with i18n off. It seems to be Django's internal redirection to the i18n URL that fails, not exactly the client request, if this makes any sense. Maybe it's in the order of your middleware. Ensure the cors middleware is above the i18n redirect one. That could be it. django.middleware.locale.LocaleMiddleware was indeed below corsheaders.middleware.CorsMiddleware. Testing now. I'll let you know in a minute. Nope.
Still not working with: MIDDLEWARE_CLASSES = ( 'corsheaders.middleware.CorsMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.locale.LocaleMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', 'django.middleware.security.SecurityMiddleware', 'django.contrib.admindocs.middleware.XViewMiddleware', ) Sorry, I'm afraid I can't think of anything else right now. Did you try asking on SO? Also, when you do find a fix please update here for future reference. There might be something we can do in the library to prevent it in the future. Will try. Thanks for your time. It seems the problem resides in the CORS protocol itself. The original standard basically made it impossible to follow local redirects after preflight requests. There is a fix, but it will take some time to be implemented by the browsers. See the Stack Overflow issue for more details. Considered closed with the last comment. Oh, if it's just an issue with localhost urls, you might be able to get away by not developing against localhost but instead a domain bound to localhost, like http://<IP_ADDRESS>.xip.io/ I believe the solution is required for other domains as well. If we use i18n_patterns, Django will redirect /api/condos to /en/api/condos, adding the language prefix. CORS will then fail because of the current limitations with local redirects after preflight requests. That will occur no matter the domain. The solution I adopted is not to use language prefixes. So /api/condos will remain /api/condos, CORS will no longer complain about redirects, and Django will determine the language to use in the response through the Accept-Language header.
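For reference, the no-prefix approach the thread settles on amounts to keeping the API URLconf outside i18n_patterns, so Django never issues the language redirect for API calls. A configuration sketch in Django 1.10-era syntax (the 'api.urls' and 'pages.urls' module names are placeholders, not from the thread):

```python
# urls.py sketch (Django 1.10-era syntax; 'api.urls' and 'pages.urls' are
# placeholder module names). API routes stay outside i18n_patterns, so no
# /en/ redirect is issued; LocaleMiddleware still picks the response
# language from the Accept-Language header.
from django.conf.urls import include, url
from django.conf.urls.i18n import i18n_patterns

urlpatterns = [
    url(r'^api/', include('api.urls')),  # unprefixed: /api/condos stays /api/condos
]

urlpatterns += i18n_patterns(
    url(r'^', include('pages.urls')),    # human-facing pages keep /en/, /fr/, ...
)
```

This keeps the language-prefixed URLs for normal pages while the API avoids the preflight-plus-redirect combination that CORS forbids.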
Biomaker Challenge is a four-month programme challenging interdisciplinary teams to build low-cost sensors and instruments for biology. The programme aims to facilitate exchange between the biological and physical sciences, engineering, and humanities for the development of open-source biological instrumentation using commodity electronics and DIY approaches. The inaugural 2017 cohort comprises 130 participants working in 41 teams on biological and biomedical devices, instruments, and sensors. Participating teams received a Biomaker Toolkit and a discretionary budget worth up to £1000 for additional sensors, components, consumables, and mechanical fabrication. Teams of all sizes were considered for the grant, ranging from an individual to twelve people. Interdisciplinarity within participating teams is prioritised, and although most participants are students or staff at the University of Cambridge, John Innes Centre or the Earlham Institute, external team members are welcome and include designers from the Royal College of Art, computer scientists from ARM, local artists, makers, and entrepreneurs. During the challenge, we offer assistance and support, providing components and access to prototyping facilities in Cambridge such as Cambridge Makespace and the Media Studio on the Cambridge Biomedical Campus. We also run periodic technical workshops and meetups to encourage teams to interact and to help share skills and ideas. Participating teams will document a full set of assembly/fabrication instructions, images, and a list of components used, which are made publicly accessible via GitHub. This will enable others to replicate and build on their work for their own research questions. The challenge culminates on 21 October 2017 in a public exhibit, the Biomaker Fayre, where participants will demonstrate their creations and prizes will be awarded for especially creative and enabling projects.
The Challenge will repeat in 2018, and we look forward to seeing the projects develop with a new cohort of participants to further increase access to low-cost, open-access biological tools and technologies. Real-time monitoring of cell proliferation: An absorbance sensor that can be used inside a cell culture incubator for real-time monitoring of culture medium pH and cell density. The system is able to automatically transmit this data to an email server for remote monitoring of cultured cells. Microfluidic turntable for molecular diagnostic testing: An Arduino-controlled turntable with a stroboscope for on-screen disk visualisation and optical detection for absorbance and fluorescence measurements. The disc, fabricated using a laser cutter and paper plotter, is rotated by an Arduino-controlled motor. Fluid actuation is also controlled by the Arduino, changing the rotation direction and revolutions per second to achieve pumping, mixing and separation. A programmable staging mount and an imaging platform for a microfluidics-based conditioned-learning hub for motile bacterial cells. By developing a maze traversal challenge, different scenarios for chemotactic bacterial colonies to employ their decision-making machinery and navigate through the maze will be assessed. This may lead to an understanding of cognition, memory and learning in bacterial colonies.
Our mission is to Map, Understand and Engineer Metabolic Regulation in Bacteria. In the wet-lab, we use methods like CRISPR genome editing, CRISPR interference, transcriptomics, proteomics and metabolomics. In the dry-lab, we integrate these data with tools like metabolic control analysis, flux balance analysis and kinetic models. Metabolomics methods are especially important for us, and we are developing novel mass spectrometry tools and data analysis methods for untargeted and targeted metabolomics. Research Area 1: Mapping and Understanding Metabolic Regulation Understanding the mutual feedback between metabolism and transcription is one of our main research goals. The main challenge in this project is to identify regulatory interactions between metabolites and transcriptional regulators at very large scale (reviewed in Donati et al. 2018). The gold standard for testing the effects of metabolites on transcriptional regulators is still in vitro biochemistry. However, most in vitro assays are low-throughput, feasible for only certain compounds, and combinatorial effects cannot be assayed. We have now developed an approach to infer metabolite-transcription interactions directly from metabolomics and transcriptomics data (Lempp et al. 2019). Next, we use this method to map complete metabolic-genetic networks of bacteria. For this purpose, we use CRISPR interference to perturb hundreds of metabolic genes and measure the cellular responses at scale (Donati et al., 2021). Research Area 2: Engineering Dynamic Control of Metabolic Pathways Engineering metabolic valves and new feedback regulation is our second research goal. The general strategy to optimize a production pathway is controlling expression levels of enzymes using variation of promoter strength or ribosome affinity.
Choosing the optimal expression levels “a priori” is key for the performance of a heterologous production pathway, because the pathway usually lacks its own regulatory mechanisms and is not integrated with those of the host. However, without mechanisms like feedback regulation, the production pathway cannot respond to the cell's growth phase, and any deviations away from optimal conditions might cause a premature decrease in the production rate. Therefore, it is highly desirable to implement metabolic feedback regulation in a production pathway that drains the bulk of resources from the host. For example, we want to understand the consequences of removing regulation in native pathways and inserting heterologous pathways that are not under metabolic control. In a recent study, we have shown that partial feedback dysregulation is better for arginine-overproducing E. coli than complete dysregulation (Sander et al. 2019). In another project, we engineered an E. coli strain that switches between growth and overproduction of citrulline (Schramm et al. 2020). The signal for the switch was a temperature shift of 6°C, which is easy to achieve in bioreactors, even at an industrial scale.
Douglas Allan Tutty wrote: On Thu, May 03, 2007 at 11:54:10AM +0200, Martin Marcher wrote: On 5/3/07, Douglas Allan Tutty <firstname.lastname@example.org> wrote: Somewhere in the debian documentation is a warning that after going to single-user mode a return to multi-user is not guaranteed to work. too bad i'm trying to do all of that without actually rebooting (more a matter of "because it should be possible" not a requirement) Reboot into single user (with the -s option if there isn't a grub menu item already) so that you know nothing under /usr is being used, mv /usr to /oldusr, fix fstab so that the new usr mounts on /usr, then shutdown -r. Of course be careful not to use any binaries that reside under /usr. Stick with straight bash and other stuff under /bin. Use the full path to make sure. all of this is done and the system already works with the new /usr mountpoint I'd just like to regain the space without rebooting - to be honest this is the whole point of this exercise. I'm not understanding. Do you mean that you mounted /usr over /usr without emptying it? If so, and you insist on not rebooting, then at least stop X and as much else as you can (as a precaution), then umount /usr, which will now show your full /usr directory tree, mv /usr /oldusr, mkdir /usr, fix owners and permissions to match /oldusr, remount /usr, and if everything is working, rm -rf /oldusr. Note that existing running apps that have files from /usr open will continue to work since open files are not unlinked until they are closed. Which will prevent being able to umount /usr in order to mv the underlying /usr to /oldusr. This is why it's necessary to go to single user mode, which 'should' kill any process with open files in the /usr tree. Good luck, Doug. Since it's necessary to go "single user" anyway, what's the difference between getting there from runlevel 2 versus rebooting to it? All users need to be told to save work and log off, in either case.
The only diff I can see would be for a large server system that could take "forever" to reboot. Anyhow, as an exercise, you might want to consider going to runlevel 1 as noted earlier, then using 'ps' to see if there're any running processes that should have died but didn't. And, use 'fuser' to see if any of these are using the /usr tree in any way. You should also save a copy of the 'ps' output to a file for future reference. You can then kill whatever may need killing to allow the umount to succeed. Then do the 'umount/mv/remount/remove' steps described earlier to regain the space and return to runlevel 2. If there were processes normally started by rc scripts going to level 2, where the process is already running, you should see errors for those scripts, though it's possible for some to fail to detect there's already a running process, resulting in two copies executing. Getting the interplay of rc scripts for various runlevels *and* runlevel transitions right is an arcane art, quite difficult to master (I make no claims as to mastery, just the difficulty of achieving it;). So, if you find that some programs end up with two copies running (you can check by comparing with the file created from the 'ps' output, above), you can manually kill them.
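Pulling the thread's advice together, the sequence looks roughly like this (a sketch of the steps discussed, not something to paste blindly; it is destructive and must be run as root from single-user mode, using only binaries under /bin):

```shell
# Sketch of the steps from this thread. Single-user mode, as root, full
# paths to /bin binaries only. DESTRUCTIVE: verify each step before the next.
/bin/fuser -vm /usr        # anything still holding files open under /usr?
/bin/umount /usr           # unmount the new partition; the old tree reappears
/bin/mv /usr /oldusr       # move the old on-root /usr aside
/bin/mkdir /usr            # fresh mount point; match owner/perms to /oldusr
/bin/mount /usr            # remount the new /usr partition from fstab
/bin/rm -rf /oldusr        # only once everything checks out: reclaim the space
```

The rm step is the point of no return, which is why the thread insists on checking fuser/ps output and confirming the remounted /usr works before running it.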
Site property is verified for new version of search console, but same property is unverified in the old version TukTown last edited by This is a weird one that I've never encountered before. So basically, an admin granted me search console access as an "owner" to the site in search console, and everything worked fine. I inspected some URLs with the new tool and had access to everything. Then, when I realized I had to remove certain pages from the index, it directed me to the old search console, as there's no tool for that yet in the new version. However, the old version doesn't even list the site under the "property" dropdown as either verified or unverified, and if I try to add it, it makes me undergo the verification process, which fails (I also have analytics and GTM access, so verification shouldn't fail). Has anyone experienced something similar or have any ideas for a fix? Thanks so much for any help! effectdigital last edited by That certainly used to be a problem, and these days I've found it hit and miss. Sometimes Google is able to reach the file directly and not be redirected, but sometimes Google still can't reach the file. In which case, you modify your .htaccess file to allow that one file (or URL) to be accessed via either protocol. I don't remember the exact rule, but from memory doing this isn't that hard. Failing that, you should have access to this method: Ctrl+F (find) for "DNS record" and expand that bit of info from Google. That version works really well and I think it also gives you access to the new domain-level property. The .htaccess mod method may be more applicable for you. Certainly make the change via FTP and not via a CMS back-end. If you break the .htaccess and kill the site, and you only have the CMS back-end to fix it - which also becomes broken - you're stuck.
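The exact rule escapes the poster above too, but the usual shape is a RewriteCond that excludes the verification file from the HTTP-to-HTTPS redirect. A sketch (the filename is a placeholder for whatever Google gives you, and your existing redirect rule may differ):

```apache
# .htaccess sketch: keep redirecting HTTP to HTTPS, but exclude the Google
# verification file so Google can still fetch it over plain HTTP.
# "google1234567890abcdef.html" is a placeholder filename.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteCond %{REQUEST_URI} !^/google1234567890abcdef\.html$
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]
```

As advised above, make a change like this over FTP with a backup of the original .htaccess in hand.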
Modding your .htaccess file should not break FTP unless you do something out-of-this-world, crazy-insanely wrong (in fact I'm not sure you can break FTP with your .htaccess file). Another option: temporarily nullify the HTTP to HTTPS redirects in the .htaccess, verify, make your changes, then put the rule back on. This is a bad method because in a few weeks Google will fail to reach the file and you will be unverified again. Also, your site may have legal reasons it must, must be on HTTPS. Also, allowing HTTP again may shake up and mess up your SERPs unless you act lightning fast (before Google's next crawl of your site). Something like this might help: https://serverfault.com/questions/740640/disable-https-for-a-single-file-in-apache or these search results: https://www.google.co.uk/search?q=disable+https+redirect+for+certain+file Hope that helps TukTown last edited by Thanks very much for your response. You are exactly right about the travails of the multiple properties, and I hadn't even thought about how the new domain-level access should handle the multiple versions of each site (I'm still used to having to verify four separate properties). In the end, you were exactly right; I just had to FTP the verification file once more and it worked immediately. A question, though: if you were trying to verify a non-secured protocol (http://) of a site that is https://, and you were for some reason unable to verify through GA or GTM, wouldn't uploading a verification file automatically create a secured protocol and therefore be invalid for verification? This is (thank goodness) purely theoretical, but it seems as though it would be a rough task which I'm sure happens periodically. Thanks again for the insight. You were a great help! effectdigital last edited by I have no experience with this particular error but from the sounds of it, you will just have to re-verify and that's all that you can do.
One thing to keep in mind is that different versions of the same site (HTTPS/WWW, HTTPS, HTTP/WWW, HTTP, any sub-domains) all count as separate websites in Search Console. The days of that being a problem are numbered as Google have come out with new domain-level properties for Search Console, but to verify those you need hosting-level access, so most people still aren't using that until Google can make the older verification methods applicable. What this does mean is that, if the URLs which you want to remove are for a different version of the site (which still counts as a separate property), then you still have to verify that other version of the site (maybe the pre-HTTPS version, or a version without WWW). If you have the wrong version of the property (site) registered in your GSC (which doesn't contain the URLs you want to remove), then you still need to register the old version. A common issue is when people move from HTTP to HTTPS, and they want to 'clean up' some of the old HTTP URLs and stop them from ranking (or at least, re-direct Google from the old property to the new one properly). They delete the HTTP version of the site from their GSC, but then they can't get back to do proper clean-up. In most instances Google still considers different site versions to be different sites in GSC. As mentioned, this isn't a problem for some people now, and soon it won't be a problem for anyone. But if you're looking at any kind of legacy account for sites that were built and verified up to a few months ago, the likelihood is you still have to re-verify other site versions. The new domain-level properties may also have bugs, where they defer back to the non-domain-level properties for some stuff. You may have just found an example of that to be honest (but I can't confirm this). I'd advise just doing what the UI tells you; it's really all you can feasibly do at this juncture.
Hi All, for my ecommerce site I have done lots of tracking; in total I have 45 event trackings, but many times one event tracks many pages. So if visitors click on a URL or button, is my site speed affected because of these trackings? Thanks! (Reporting & Analytics | pragnesh96390)

I migrated a website from .aspx to .php and hence had to 301 all the old URLs to the new PHP ones. It's been months, and I'm not seeing any of the PHP pages showing results, but I'm still getting results from the old .aspx pages. Has anyone had any experience with this issue, or know what to do? Many thanks. (Reporting & Analytics | CoGri0)

Hi all, wondering if I could pick the brains of those wiser than myself... My client has an HTTPS website with tons of pages indexed and all ranking well; however, somehow they also managed to set their server up so that non-HTTPS versions of the pages were getting indexed, and thus we had the same page indexed twice in the engine but on slightly different URLs (it uses a CMS, so all the internal links are relative too). The non-HTTPS version is mainly used as a dev testing environment. Upon seeing this we did a Google removal request in WMT and added noindex in the robots, and that saw the indexed pages drop overnight. See image 1. However, the site still appears to be getting returned for a couple of hundred searches a day! The main site gets about 25,000 impressions, so it's way down, but I'm puzzled as to how a site which has been blocked can appear for that many searches, and whether we are still liable for duplicate content issues. Any thoughts are most welcome. Sorry, I am unable to share the site name, I'm afraid. The client is very strict on this.
Thanks, Carl. image1.png (Reporting & Analytics | carl_daedricdigital0)

A lot of you might already be aware of the recent Google change encrypting all search activity except for clicks on ads. Rand did a whiteboard session on this recently. How is everyone planning to adjust their research data to accommodate this change? (Reporting & Analytics | SEO5Team0)

Since June 13, 2013, the number of organic search queries containing a plus sign (+) has gone up over 1,000% compared to the previous period on my site in Google Analytics. These plus signs appear to be taking the place of spaces in these search queries (i.e. "word1+word2+word3"). This appears to be almost (or completely) Google organic traffic, not other search engines. Since I highly doubt searcher behavior would change so suddenly, I'm trying to figure out why Google is replacing spaces with plus signs. Is anyone else seeing this? Any ideas? (Reporting & Analytics | RCF0)

Hello, I have about 5 sites I want to set up with multiple-domain tracking in Google Analytics. All the posts I read seem to be focused on cross-domain tracking for the purpose of following a visitor from one domain to another for shopping-cart checkouts. I don't need that. I have 3 sister sites (mastersite.com, sistersite1.com, sistersite2.com, sistersite3.com) related to my primary site. I want one master Analytics profile to track traffic for all of these sites combined. My visitors will not jump from mastersite.com over to sistersite1.com; there will be no cross-domain visits. How can I set up one master Google Analytics profile that will aggregate traffic data from all sites and present the data to me in one profile?
Please help. (Reporting & Analytics | AndreGant0)

When I look at my SEOMOZ campaigns I see there are a lot of warnings in regards to missing meta description tags, but they exist on a client's WordPress site. (Reporting & Analytics | Doug_Hay1)

How do I measure the number of visits from Google News coming from Google Universal Search (NOT referrals coming directly from news.google.com) with Google Analytics? I'm running a news site, and I have a problem accurately measuring which traffic is REALLY coming from Google News. I analyzed a lot of individual articles and came to the conclusion that the visits that come from the Google News section in the universal search results are counted as "normal" search engine traffic in Google Analytics. So if you do a Google search for a topic that includes links from Google News, you don't get an accurate referral count. As an example, if you do a search for "eBay", incorporated into the page 1 search results you may also see Google News results as well. If someone clicks on that Google News link that appears in Google search, it shows up in Google Analytics as a referral from Google search, when it was actually a Google News referral. I was already checking the Google Analytics and Google News help forums and searched SEO blogs for this, but I wasn't able to find a working solution. Can anybody help me out with this problem? Thanks so much, Matthias. (Reporting & Analytics | Mulle)
Why use XML in Android? From what I understand, isn't XML used for layouts and to set up how an activity looks? My book says that XML files are converted into Java code, but then why not just write everything in Java?

Check out this link on how XML comes in handy over Java in layout. XML has about two syntactic elements; Java has... a lot more. If you really want a good understanding of why, try writing even a simple Android app without XML. While possible, it would definitely be frustrating. In a sense, XML used this way is like a declarative language and Java is imperative code: in one case you're writing code that does the task, and in the other you provide all the information necessary to do it, without the how. In Java, you can only run the code, while with XML the system can decide later to optimize rendering without changing the overall appearance. That is a tremendous advantage that shouldn't be overlooked.

@DavidEtler: not a good example, because you're begging the question. The idea is having what's likely to change separated from the routines. The XMLs here are descriptors that probably change from one application to another, but the code that reads the XML and turns the description into a running view is (or can be) always the same. Think of HTML: HTML is XML-like, and your browser can parse and render any HTML. Implementing views without these XML files is like implementing a Java Swing app in hardcore mode, where 70% of the code is devoted to composing the views and providing behavior to them.

It's because it's simpler - tools can be written to manipulate an XML document far more easily than to understand Java code, so the layout can be created and modified by a simple tool that does not also need to be a Java parser. It's also easier for people to describe a layout in XML than in Java directly. This technique is used by a lot of things, e.g. WSDL, which describes a web service interface and is converted to (quite complex) code by a specialist tool.
It helps the developer focus on one aspect without worrying about the implementation, and allows tools to be written to generate different types of code (e.g. the WSDL can be turned into a server stub and also a client API). To agree and expand a bit: by "simpler" we generally mean a higher level of abstraction. It will (should!) take less XML to describe the desired result than it would take Java (or similarly, XAML and C#), because the XML is in effect a domain-specific language for layout etc. This should also mean that it's easier to understand the intent, i.e. the layout, from the XML than from the equivalent Java code. Microsoft took a similar approach with their UI-related stuff in the form of XAML. @lzcd as do nearly all others - Qt has its QML, and even MFC has a .rc file that describes a UI layout in plain text.

I'm slightly late to the party, but here are my two cents on the matter; I've been lucky enough to answer this question for someone who wrote a large app with 70+ screens and tons of business logic purely in Java. Here's why it's not advisable to write a pure Java/Kotlin Android app:

Proper Separation of Responsibilities - programs with UI are preferably implemented with a clear separation between how an interface looks and how an interface behaves. While you might not be able to kick out 100% of layout-related settings from your Java (or Kotlin) code, the layout itself would be defined by an activity's XML file. You see all your components in one place, and if need be, you access them from the activity's Java/Kotlin class and manipulate them programmatically (e.g. to bind event listeners). Separation of responsibilities should be a compelling reason to use XML for your Android app's activities.

Speed of Development - Android Studio, like other modern IDEs that support Android development, allows you to preview your layout based on your XML file without compiling the whole app.
If you were to code your whole app inside the activity classes only, you'd probably have to compile the app each time you wanted to review your layout changes, and that's beyond inefficient. As a side note, there might be tools/plugins that preview your layout based on Java/Kotlin code only, but I'm not aware of them. Would you rather see your layout changes in split seconds, or wait a minute or two for the AVD (Android Virtual Device) or your debugging device to get the compiled app up and running? (Is it even running, or do you have a bug somewhere?) It is a few orders of magnitude faster to define your Android layout in XML. This, too, should be a compelling reason.

Code Readability - if you aren't convinced yet, and you write your app as an individual developer, then surely you can understand your own code. You might even break down the views' construction into methods, create beautiful abstractions, and re-use components and code with state-of-the-art design patterns. But what happens when a second developer enters the picture? He or she would have to not only understand your code, but also sift view code from behavior code (see point #1) and try to get inside your head and decipher what crossed your mind when you constructed the code. And keep in mind that layout code in Android can get very verbose, so no matter how beautifully your code is written, no one really wants to onboard themselves onto a project that forces them to read a class with 1000+ lines of code for every single Android activity. The biggest advantage of writing your app's layouts in XML, in terms of readability, is that XML is structured as a hierarchy: the layout relationships between elements are immediately visible. If you construct your Android app without XML, you programmatically append children to their parents, and you can't really indent your code to reflect the element hierarchy on your screens.

Online Materials - while it may not be a crucial point, I still find it a good one.
Online materials are more prominently available for the XML+Java/Kotlin setup, because it's the encouraged way to code Android apps. If you get stuck on layout problems, you're more likely to find someone who solved your layout problems with this setup rather than the code-only setup. You may need to read the manual (always encouraged) more often. And sure, you might find some aggressive developers online telling you to Read the (.*) Manual, but don't let them fool you, as they, too, Google things in the hope of finding quick answers. I hope this helps a bit.

Because the Android designers decided to implement it that way :) In principle, everything could be written in Java. Microsoft did it for WinForms: the form description is saved in the auto-generated *.designer.cs file (or a generated region in early versions of the .NET Framework). Each method has its pros and cons. By storing the UI as XML, it may be easier to parse, so the designer can be simpler to implement. However, it is another language for the developer to learn: a totally new domain-specific language (for UI), not just XML. By storing the UI in the target language (Java, C#...), the designer's implementation may be more complex, but the code for creating the UI is already familiar to the developers. Another advantage is that existing code refactoring tools can work without any changes.
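To make the hierarchy argument above concrete, here is a minimal sketch of a declarative Android layout. The identifiers and strings are illustrative, not from any real project:

```xml
<!-- A minimal, hypothetical activity layout. The parent-child
     structure of the screen is visible at a glance from the
     indentation alone, which is the readability point made above. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <TextView
        android:id="@+id/greeting"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello" />

    <Button
        android:id="@+id/send_button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Send" />
</LinearLayout>
```

The equivalent Java would be a run of `new LinearLayout(...)`, `setLayoutParams(...)` and `addView(...)` calls, where the same parent-child structure can only be recovered by reading the method calls in order.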
M: RSA encryption cracked by carefully starving CPU of electricity - paran http://www.engadget.com/2010/03/09/1024-bit-rsa-encryption-cracked-by-carefully-starving-cpu-of-ele/ R: DarkShikari _until RSA hopefully fixes the flaw_ I cannot comprehend the confusion of ideas necessary to generate this phrase. R: mquander Can't you find a real link next time, instead of this content-free blogspam? Here, I did it for you: [http://www.eecs.umich.edu/~valeria/research/publications/DAT...](http://www.eecs.umich.edu/~valeria/research/publications/DATE10RSA.pdf) By the way, this is from a year ago. R: jsdalton A link to a PDF of a research document with an obtuse abstract, versus a quick, readable summary published on one of the industry's leading tech publications? I'll take the latter, thank you. R: Xk > industry's leading tech publications I hope you're joking. Sometimes, sure, they're good. But that article is ... I'll hold my tongue. They're making it out to be some amazing feat. While I don't want to take away from the authors who I'm sure are great researchers, this is nothing new. And then there's the part where they suggest RSA fix it, and that's just something else. R: jsdalton It _is_ one of the industry's leading tech publications: <http://www.techmeme.com/lb> ... though I do agree, this article is pretty crappy, as some others have pointed out. I mostly just disagree with the parent's assertion that a short summary of a research paper posted on a major publication qualifies as "blog spam" or that the original poster had some kind of obligation to track down the source paper. Sometimes a tl;dr version is just what the doctor ordered. R: devicenull If you have physical access to the server, it's already screwed. This is hardly "cracking" RSA encryption.
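For context on why commenters call this "nothing new": fault attacks on RSA-CRT signatures date back to the Bellcore result of Boneh, DeMillo and Lipton in the 1990s. Below is a toy sketch of that classic attack in Python, with deliberately tiny primes (real keys are 1024+ bits) and the fault injected in software rather than by starving the CPU of voltage; the point is only to show why a single faulty CRT half leaks the private key.

```python
import math

# Toy RSA-CRT key (absurdly small primes, for illustration only).
p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

# Signing via CRT: compute the signature mod p and mod q separately,
# then recombine. This optimisation is what the fault attack targets.
dp, dq, qinv = d % (p - 1), d % (q - 1), pow(q, -1, p)

def crt_sign(m, sq_fault=0):
    sp = pow(m, dp, p)
    sq = (pow(m, dq, q) + sq_fault) % q   # sq_fault simulates a glitch
    return (sq + q * ((qinv * (sp - sq)) % p)) % n

m = 42
good = crt_sign(m)
assert pow(good, e, n) == m               # a normal signature verifies

# A single fault in the mod-q half leaves the signature correct mod p
# but wrong mod q, so gcd(s^e - m, n) exposes the secret factor p.
bad = crt_sign(m, sq_fault=1)
factor = math.gcd(pow(bad, e, n) - m, n)
```

One faulty signature is enough to factor the modulus, which is why the standard countermeasure is to verify each signature before releasing it, not something "RSA" the algorithm can "fix".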
For months now, I have been scratching my head over a small but persistent number of “crash reports” affecting a few of my apps. The issue is most prevalent in MarsEdit, where I have a handful of users who run into the issue multiple times per day. Luckily, one of these users is my good friend and colleague, Manton Reece. I’ve been peppering him with questions about the issue for weeks, while he stoically puts up with the behavior. Even with the assistance of a highly technical friend who can reproduce the issue at will, I had thrown my arms up in despair several times. I put “crash reports” in quotes above because, although my in-app crash reporter notices the app abruptly terminates, the system doesn’t create any obvious artifacts. No crash or hang reports. No “Quit Unexpectedly” dialog. The app is just … gone. I wrote a question in the Apple Developer Forums, which turned into a kind of de facto diary as I pursued the issue. When I started to feel bad about asking Manton to try this, that, and the other thing, I finally asked if he could send me a “sysdiagnose” report. If you’re curious, the easiest way to grab one of these on any Mac is to simply press the Control, Option, Command, Shift, and “.” (period) keys at once. You’ll see the screen flash, an indication that the system is starting to collect the reports. A few minutes later the report will be revealed in the Finder: a probably quite large zip archive. Open it up and see the wealth of information about nearly every aspect of the system. Yet even with this wealth of information, I was stymied.
It wasn’t until I chanced upon the delightfully pertinent nuggets of information in “/var/log/com.apple.xpc.launchd/launchd.log” that I got my first whiff of a clue:

2022-05-03 09:15:22.088718 (gui/501/application.com.red-sweater.marsedit4.384452971.384452977 ) : exited with exit reason (namespace: 15 code: 0xbaddd15c) - OS_REASON_RUNNINGBOARD | <RBSTerminateContext| code:0xBADDD15C explanation:CacheDeleteAppContainerCaches requesting termination assertion for com.red-sweater.marsedit4

Here we have a message asserting that MarsEdit was terminated, on purpose, and better still, it includes an explanation! As far as explanations go, “CacheDeleteAppContainerCaches” is not much of one, but it did give me something to go on. Searching for the term yielded pertinent results like this post about Apple Mail and Safari “suddenly quitting.” Unfortunately, they all seem to be scratching their heads as much as I am. The other thing that jumped out at me from the log was the term “OS_REASON_RUNNINGBOARD”. Searching for this results in only a few scant links, all related to Apple’s open source Darwin kernel. However, searching instead for just “RunningBoard” offered a glimmer of hope. A post on Howard Oakley’s blog, “RunningBoard: a new subsystem in Catalina to detect errors“, includes a particularly succinct description of the eponymous OS subsystem (emphasis mine): Catalina brings several new internal features, a few of which have been documented, but others seem to have slipped past silently. Among the latter is an active subsystem to replace an old service assertiond, which can cause apps to unexpectedly terminate – to you and me, crash – in both macOS 10.15 and iOS 13: RunningBoard. Unexpected termination. Yep. To you and me? Crashing.
At this point in the story I’m going to elide several hours of long, tedious, and yet still somehow fun work, wherein I disabled System Integrity Protection on my Mac so that I could attach to the pertinent system daemons and try to make sense of how, and when, they might decide to unilaterally terminate an app like MarsEdit. While digging deeper into the issue, I remembered that “explanation” from the log, CacheDeleteAppContainerCaches, and it reminded me of system maintenance software like CleanMyMac. I normally shy away from these kinds of apps because they are historically known to be overly aggressive in what they decide to delete. In the name of science, however, I decided to run it, with care, on my Mac. Boom! After running CleanMyMac once, MarsEdit, along with Numbers, was suddenly not running anymore. I had finally reproduced the issue on my own Mac for the first time. Anybody who has fixed software bugs, either for a living or as a passion, knows this is the critical first step to really addressing an issue. With some tinkering, I was able to narrow down the reproduction steps to running the “Free Up Purgeable Space” action. It turns out this invokes a system API responsible for trying to delete caches, etc., from a Mac. Normally the system only does this when disk space is critically low, but CleanMyMac gives you the option to exercise the behavior at any time. That single log line quoted above turns out to hold another gem of information. The “code:0xBADDD15C” looks like it could be an arbitrary hexadecimal value, but it’s an example of an error code designed to both uniquely identify and suggest a mnemonic clue to the underlying issue. Apple documents many of these codes, which include 0xc00010ff (cool off), 0xdead10cc (deadlock), and 0xbaadca11 (bad call). I searched the system frameworks for this code and found it in the disassembly of “/System/Library/PrivateFrameworks/CacheDelete.framework”.
Particularly, in an internal function called “assert_group_cache_deletion”. It was only after exploring the issue in the forums that Quinn explained the code in this scenario is a mnemonic for “bad disk”. I guess it was easier to spell out than trying to represent “full disk”. Equipped with all this new information, what can we do about the unexpected terminations? Well, nothing. I do wish Apple’s framework would try asking nicely if the app would quit before summarily terminating it, but I guess the thinking is that this functionality should typically only be reached in extenuating circumstances. After learning more about the issue, I confirmed with Manton that his Mac did have low disk space, so I guess it was just the system trying its best to free up space that caused the issue for him. The one thing I plan and hope to do as follow-up is to amend my built-in crash reporter so that it will not prompt the user or report a crash when the app terminates for this reason. I think it should be possible to detect the codes alluded to above and simply let “0xBADDD15C” terminations happen without fanfare.
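For what it's worth, the detection described here is straightforward to sketch. The code below is my own illustration, not MarsEdit's actual reporter (which is presumably Objective-C or Swift): it pulls the `code:0x…` field out of a launchd log line and checks it against the mnemonic codes mentioned above, so a crash reporter could stay quiet for these intentional terminations.

```python
import re

# Termination codes documented as intentional, mnemonic exits;
# a crash reporter can skip prompting the user for these.
BENIGN_CODES = {
    0xBADDD15C: "bad disk (low-disk-space cache purge)",
    0xC00010FF: "cool off (thermal)",
    0xDEAD10CC: "deadlock",
    0xBAADCA11: "bad call",
}

def intentional_termination(log_line):
    """Return the explanation for a known mnemonic code, else None."""
    match = re.search(r"code:\s*(0x[0-9A-Fa-f]+)", log_line)
    if not match:
        return None
    return BENIGN_CODES.get(int(match.group(1), 16))

line = ("exited with exit reason (namespace: 15 code: 0xbaddd15c) "
        "- OS_REASON_RUNNINGBOARD | <RBSTerminateContext| code:0xBADDD15C "
        "explanation:CacheDeleteAppContainerCaches")
```

Running `intentional_termination(line)` on the log line above matches the `0xbaddd15c` code and returns its "bad disk" explanation, while an ordinary log line returns None.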
What is the proper etiquette for backcountry camps with assigned spaces when people take a different site? Recently, my family hiked in to a backcountry camp in a national park. We had an assigned space within the campground, but when we arrived our space had been occupied by a camper registered at another site. They had set up their camp and were out hiking, so we could not discuss the issue. Not wanting to hang out and wait with full packs (they finally returned several hours later), we looked at other unoccupied sites and decided to take one of those. The site we chose had apparently not been reserved by anyone, as no one approached us with their permit, but there was another party who had the same issue and also ended up taking an empty site. I am wondering what the proper etiquette for this situation might be. I hate to push someone out of a site if another will work as well. We ended up with a far better site than we had reserved, but had we reserved the better site, I would have been upset.

What you did was probably the most peaceful course of action, but ultimately the best people to direct this question to would be the people who issue the permits; the proper etiquette will likely differ from place to place. Unfortunately, there really isn't a pleasant global solution to this. By all rights you could have moved their stuff to the side and set up your camp in the spot, but people get touchy when you mess with their things, and you could be setting yourself up for a confrontation by doing so. What you chose to do was probably wisest in order to avoid an awkward or heated encounter. Even waiting around until they showed up and explaining, "Hey, we reserved this spot, here's our permit saying so," could have created an unpleasant atmosphere. However pleasantly both parties might have reacted, I imagine their first response would have been along the lines of, "Is it really that big of a deal? Can't you just pick another spot? There are plenty around..."
The best thing to do would have been to notify the campground custodian, if one was available, which is not often the case at backcountry campgrounds. Otherwise, use your own discretion; perhaps they were simply unaware that the sites are assigned. I think every backcountry campground I've ever been to has been first come, first served, even with a permit. Most of the time the permits are just so the rangers can keep a count of how many sites are being used and avoid overcrowding the area; they're also used in the event of an emergency (forest fire), so they know how many people they may potentially need to evacuate or search for. Or maybe the people in the OP's site had another space that was occupied by someone else, so they decided to pick another unoccupied spot. As long as there are enough spaces around that it doesn't turn into musical chairs, the OP made a wise choice IMO.
by Sven Nilsen, 2018

Some friends and family of mine were gathered around the table to tell stories, jokes and riddles. Being more than normally interested in the logical nature of riddles, we tended to spend way too much time discussing the details of solutions. So, one clever young person decided enough was enough and came up with this riddle: Three men stand in a line in a desert. All of them face the same direction. The person in the back looks at the shoulders of the middle person, and the middle person looks at the shoulders of the person in front. The person in front says: "I can see the shoulders of the person in the back". How could this be possible? How could the person in the front see the shoulders of the person in the back? One suggested solution was that the men were standing on a very tiny planet: the line curved around the sphere, so the person in front had the backmost person in front of him. Another suggested solution was that the men bent forward, looking at the person behind through their legs. This required that the men were allowed to move their upper bodies without turning around. After a lot of discussion, the storyteller revealed the answer: the person in the front lied. This caused even more heated discussion, because the people around the table felt they had been misled. If you say "How could the person in the front see the shoulders of the person in the back?" then you are implying that the person indeed saw the shoulders of the other person. However, if you only say "How could this be possible?" then you are not implying that the person in the riddle speaks the truth. Lying suddenly becomes a possible solution, according to the strange rules of telling riddles. We take for granted that when somebody tells a riddle, they speak truthfully. Otherwise, it would not be a riddle, but just a joke. Which was precisely the intention of the storyteller.
What makes this riddle/joke interesting is that it points out some deep intuitions we have about recursive minds:

- The person's lie in the riddle is identified with the lie of the storyteller
- It is necessary to reflect on the rules of telling riddles when they break expectations
- A modeled mind is not necessarily executing (it is projected by the interpreter of the riddle)

The Ability to Model Minds is Essential for Reflecting on the Nature of Truth

In philosophy it is common to assign truth values to sentences according to some language. However, this practice tends to mask the fact that in order for sentences to have meaning, they must be interpreted by some sort of mind or physical process. For example, imagine you had a magical ability to program water by simply speaking to it. By telling water to form a shape, it would do so. The sentences you tell the water would have meaning to the water, because they determine the water's behavior. Your own mind does not affect the meaning of the sentences; e.g. you could mutter some words in your sleep and the water would form a shape anyway. Normally when we speak to water, nothing happens. What we say has no meaning for the water, because there is no behavior to be determined. In other words, the water is not interpreting our sentences. In order to reflect on the nature of truth, one must not only have the ability to interpret sentences; the way one interprets sentences must also make one capable of imagining minds that interpret sentences. This is where the phenomenon of recursive minds comes from.

Truth is an Ambient Concept of Common Knowledge Among Recursive Minds

If two rational systems gain the ability to model recursive minds, it is believed that they will learn the concept of truth and arrive at similar definitions, because the things that can be said about these systems are the same and therefore identical.
Everything that can be said about truth might be sayable upon reflecting on recursive mind modeling; this therefore provides an anchor for rational agents to infer how they should agree about the nature of truth. For example, Alice believes Bob is taking a bath. Alice also believes that she believes Bob is taking a bath. In Naive Zen Logic, this can be written:

"Bob is taking a bath" ? Alice
("Bob is taking a bath" ? Alice) ? Alice

The ? operator means "X is believed by Y", written as X ? Y. So, how can a mind learn the concept of truth? From the sentences themselves! The concept of truth is not about any particular fact, but about a reflective state of mind. Any fact is a specific instance of truth, while the concept of truth itself is "outside" the realm of facts. Reflection in this case means: interpreting Naive Zen Logic the way it is meant to be interpreted (how it is defined and used). This does not mean that the concept of truth is arbitrary, but rather that the concept is ambient: it is learned through examples, using a general capability of modeling minds. The reason to believe this is grounded in path semantics, which connects the things said about a mathematical object to the identity of the mathematical object and vice versa. It is not necessary to show what the definition of truth is, only that there exists a way to figure out what it is that is common to all agents. When Alice reflects on what she believes, the concept of truth is embodied in the statements of what she believes. Naive Zen Logic cannot express the nature of truth directly, but an agent playing with it can learn how it works implicitly. However, this happens at a very deep level of intelligence (one not currently reached by AI technology). In other words, truth is a kind of projection of a general understanding that the sentences we interpret are indeed interpreted.
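As a toy illustration (my own sketch, not Nilsen's actual notation or system), nested belief statements like the ones about Alice above can be represented as plain data, where "X is believed by Alice" wraps X one level deeper:

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Believes:
    """One level of belief: an agent holding a claim.

    The claim is either a bare fact (a string) or another
    Believes value, which is what makes the minds recursive.
    """
    agent: str
    claim: Union[str, "Believes"]

    def depth(self):
        """How many minds deep the modeling goes."""
        inner = self.claim
        return 1 + (inner.depth() if isinstance(inner, Believes) else 0)

fact = "Bob is taking a bath"
b1 = Believes("Alice", fact)   # "Bob is taking a bath" ? Alice
b2 = Believes("Alice", b1)     # ("Bob is taking a bath" ? Alice) ? Alice
```

The point of the sketch is only structural: the second statement literally contains the first, so an agent reflecting on such data is reflecting on a model of a mind, not on the bare fact.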
For learning the concept of truth, it is sufficient that the sentences are interpreted and that the interpreting mind is able to reflect on recursive minds interpreting sentences. The result is shared common knowledge among recursive minds.

Artificial General Intelligence Requires the Ability to Model Minds

Without the ability to model minds, there can be no AGI, since otherwise something as essential as "truth" has no meaning. It has meaning to humans, but not relative to the mind representing the AGI:

- A computer program does not have any intrinsic concept of truth
- The ability of general computing is weaker than general theorem proving
- Comprehending the concept of truth is a statement about a reflective ability to model recursive minds

I believe there is no other way to ground the concept of truth. At least, it is the only way I know of so far by which two agents can converge on the same concept. It seems to be a necessary building block on which to build higher-order concepts. AGI requires higher-order concepts to perform efficiently in the real world, which is not possible without some sort of grounding of the concept of truth. E.g. it would be trivial to trick an AI without this kind of grounding into believing anything. A primary obstacle to achieving AGI is the milestone of modeling minds. Since we have not yet reached this milestone at a sufficient level, we justifiably think of the AI technology we have as lacking some sort of "effective intelligence". It prevents us from calling the current state of the art "true intelligence", since when "true" has no meaning, as in "truth", there can of course be no "true intelligence". The concept might be meaningful to humans, but not to the computer program, and it is therefore not an AGI.

Implications for AI Safety Strategies

So far in this post, I have argued that Recursive Minds is a milestone in AGI technology. This is one way to measure progress in AI besides improvement on various benchmarks.
However, I also believe that Recursive Minds is an easily noticeable tipping point, not just some minor improvement in the overall state of AI technology. This has implications for AI safety strategies. The reason is that, since higher-order concepts obtained through reflection require the grounding of the concept of truth, I predict that no significant progress on higher-order concepts (relative to super-intelligence) will come before the ability to ground truth using Recursive Minds is achieved. This does not mean that grounding of truth using Recursive Minds will happen directly, but that the ability to do it will roughly coincide in time with an easily noticeable tipping point of AGI. The kind of AI safety technology needed to deal with AI control problems changes in nature before and after this tipping point. Before this point in time, AI control problems will have the character of localized semantics. After this point in time, AI control problems will take on a more globalized semantic character. For example, the bias of training data is a localized kind of control problem, because it is a grounding problem of semantics relative to some specific data set. By fixing the data set or the algorithms, this problem can be solved (locally). A control problem of globalized semantic character means that e.g. Machine Learning faces problems that are not tied to some specific data set or training environment. This could be e.g. definitions of goals that are interpreted differently across various contexts: "Put this box on the top shelf." What is the box? What is the top shelf? It depends on what the speaker is referring to. This is an example demonstrating what it means to understand a goal at a high level of thinking, not just in a hard-coded way that fits a specific situation. Such problems remain unsolved globally only as long as no technique exists; once a technique to solve them exists, it might be relatively easy to fix them (globally). E.g.
most implementations of similar AI technology being tested against some safety standard. A problem could be that various safety solutions could depend on the approximate stage of Recursive Minds being reached. For example, an AGI equiped with the ability to test other AGI implementations for some specific error, is not expected to be functional before passing the Recursive Minds tipping point. This is not because a such AGI requires grounding the concept of truth directly, but that the higher order concepts required builds on some grounding of truth. Therefore, it might be necessary to accelerate AI safety research rapidly in the time period shortly after the Recursive Minds stage is reached. To deal with the dependency problem efficiently, one could coordinate the research on AI safety by planning in advance what to do in the event of a such tipping point. This could be a way to avoid scenarios like Future-X, where lack of rigorous definitions of AGI leads to a dangerous slippery slope. Alternative Approaches Are Likely to Fail It might be possible to hard-code the concept of truth in a computer program, but it requires extensively elaboration to cover the practical use cases of truth. I find it easier to believe that a program capable of pursuing the things that can be said about truth will be able to invent concepts that are useful but coherent. A neural network that learns to think about truth might perform better than any hard-coded program. My argument can be broken down into three parts: - Reflecting on the nature of truth is possible by reflecting on modeling Recursive Minds - Higher order concepts require some sort of ability that "looks like" reflecting on the nature of truth - AGI ability of 2) is likely to coincide with the ability of 1) from similar problem complexity While I do not have a name or concept for this general ability, I would like to point out Recursive Minds as a useful target ability. 
It might be possible to work around this issue, but such approaches seem to amount to "we hope it will eventually develop understanding" of things that are relatively easily comprehensible by humans. I believe such approaches will simply never reach a sufficient level of intelligence.
Storing data for use on Android and Windows Applications

I posted this last night on StackOverflow and was advised to move it over to StackExchange; thank you for taking a moment to look at my question.

I'm developing a project proposal for my final-year project at university, and as I aim to use programming languages I am currently not too familiar with, I'm looking for some guidance. I can't include details of my project, but hopefully you will understand what I'm after. I'm going to be creating an Android application (in Java) and a Windows application (in C#) that will ideally access, query and update a remotely hosted database or set of XML files (most likely over the Internet).

I've done some looking around the internet, and SQLite seems like a safe bet for cross-platform manipulation of the database; however, I would like to keep the system as lightweight as possible, and I'm wondering whether XML files may provide a better alternative. Is there anyone out there with experience using SQLite and/or remotely hosted XML for Android and/or C# development who could point me in the right direction? If there is an alternative solution other than those I have mentioned, I would be interested to hear about it too. Thank you for taking the time to read my question.

Edit: The purpose of this application is for a small-scale business. The data source would not need to be updated by more than one source, but may be viewed from multiple sources (i.e. through multiple phones and a desktop PC). The database wouldn't be updating masses of data at a time (most likely single rows of a few tables at most).

Don't have any of your endpoints talk to your database directly; create some sort of (potentially web) service on the server that interfaces with it. Then you can create your datastores however you want, or even swap them out, without having to worry about your client devices.
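Picking up the service-layer advice above: the service on the server owns the database, and the phones and desktop app only ever see the service's API. As a hedged sketch (Python with the stdlib sqlite3 module; the `orders` table and function names are invented for illustration, not from the question), this is the kind of small data-access layer such a service could wrap behind its endpoints:

```python
import sqlite3

def open_store(path=":memory:"):
    """Open the datastore the service owns; clients never touch this directly."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT, qty INTEGER)"
    )
    return conn

def add_order(conn, item, qty):
    """One 'update' operation the web service would expose as an endpoint."""
    cur = conn.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", (item, qty))
    conn.commit()
    return cur.lastrowid

def list_orders(conn):
    """One 'query' operation; all client devices call the service, not the DB."""
    return conn.execute("SELECT id, item, qty FROM orders ORDER BY id").fetchall()

conn = open_store()
add_order(conn, "widgets", 3)
print(list_orders(conn))  # [(1, 'widgets', 3)]
```

Because the clients only depend on the two operations, the SQLite file could later be swapped for XML or anything else without touching the Android or C# code, which is the point of the answer above.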
When dealing with data and mobile devices, one of the first questions is: "do you need to keep a local (on-device) database for offline use, or is it OK to ask a server for data every time you need it?" If you have to store a copy of your database on each device, having a common RDBMS (SQLite) is going to be a good thing. When the app starts for the first time, just download the db file locally. It will be easy to download it again and upgrade it incrementally with upgrade scripts (shared across server and clients). However, you won't be able to use the same code to access your database on both devices. Perhaps you can think of a way to generate the data-access code in all the languages you need based on your database structure. This may (or may not) save you some time. Generating code with code is always a good experience anyway (see The Pragmatic Programmer, rule 29, "Write Code That Writes Code").

If you are planning to develop an application on Android, SQLite is the only way to go. In my case I had to include a database already populated with the information to fill the dropdown lists (can't remember their Android name, but I hope you get the meaning). Thus I had to insert data prior to deployment and include it. Also, using SQLite means you can query the database with SQL syntax, which is very handy. Windows Phone, on the other hand, provides the Azure platform in addition to SQLite, but the drawback is that it is not free and needs an always-online device. A caveat regarding Android: the assets folder, the place where you put additional files to include with the application, has a file-size limit of 1.5 megabytes for files other than MP3s or PNGs. If you exceed the limit, you have to rename your db file to PNG or MP3 and copy it into the application with the correct file type (which you have to do anyway if you include it with its db extension).

Thankfully, the OP isn't hosting the database on a specific phone, but somewhere else.
With the creation of a good service (SOAP, RESTful, or just plain XML marshalling), he should be able to talk to whatever he wants, server side. It's almost never a good idea to have a remote client talk to the database directly, if for no other reason than security.
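The incremental-upgrade idea from the earlier answer (upgrade scripts shared across server and clients) can be sketched with SQLite's built-in `user_version` pragma. This is a sketch only: the `customers` schema and the script contents are made up for illustration, not taken from the question.

```python
import sqlite3

# Hypothetical upgrade scripts, keyed by the schema version they produce.
UPGRADES = {
    1: "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE customers ADD COLUMN email TEXT",
}

def upgrade(conn):
    """Apply every script newer than the database's recorded schema version."""
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version in sorted(UPGRADES):
        if version > current:
            conn.execute(UPGRADES[version])
            # PRAGMA does not accept bound parameters, hence the % formatting.
            conn.execute("PRAGMA user_version = %d" % version)
    conn.commit()

conn = sqlite3.connect(":memory:")
upgrade(conn)  # brings a fresh database to version 2
upgrade(conn)  # safe to run again: nothing newer to apply
print(conn.execute("PRAGMA user_version").fetchone()[0])  # 2
```

Each device runs the same `upgrade()` on startup, so a phone that missed several releases catches up script by script, which is the incremental behaviour the answer describes.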
- Why won’t my ball python eat?
- What are the consequences of a ball python not eating?
- What can I do to get my ball python to eat?
- What are some common mistakes people make when trying to get their ball python to eat?
- How can I tell if my ball python is healthy?
- What should I do if my ball python stops eating?
- Can I force my ball python to eat?
- What are some common reasons why ball pythons stop eating?
- How can I prevent my ball python from stopping eating?
- What should I do if my ball python won’t eat?

It can be frustrating when your ball python won’t eat. Here are some tips on how to get your ball python eating again.

Why won’t my ball python eat?

If your ball python is not eating, there could be a variety of reasons why. It is important to first rule out any potential medical problems by taking your python to the vet. If your vet gives you the all-clear, there are a few things you can do at home to help encourage your python to eat.

What are the consequences of a ball python not eating?

There are many potential consequences of a ball python not eating, including health problems, weight loss, and a decreased lifespan. If your ball python is not eating, it is important to seek professional help from a veterinarian or reptile specialist.

What can I do to get my ball python to eat?

If your ball python is not eating, it could be due to a variety of reasons. Sometimes it takes a little trial and error to figure out what the problem is, but there are some general tips that can help you get your ball python to start eating again.

One of the first things you should do is check the temperature of your snake’s enclosure. If the temperature is too low, your snake will be less active and may not have an appetite. The ideal temperature for a ball python is between 78 and 80 degrees Fahrenheit. If the temperature is too high, your snake may be stressed and also lose its appetite.
Another thing to consider is whether you are feeding your ball python live prey or frozen prey. If you are feeding live prey, make sure that the prey is not too large for your snake. If the prey is too large, it may be difficult for your snake to eat and digest properly. It’s also important to note that some snakes prefer live prey while others prefer frozen prey. If you’re not sure what your snake prefers, you can try offering both live and frozen prey and see which one they choose. If your ball python still isn’t eating, it’s important to take them to see a vet so they can rule out any potential health problems. Once you’ve ruled out any health problems, you can try experimenting with different types of food until you find one that your snake likes.

What are some common mistakes people make when trying to get their ball python to eat?

There are a few common mistakes people make when trying to get their ball python to eat. The first is not offering the right size prey. If the prey is too big, the ball python will be unable to eat it and may become discouraged from trying to eat anything at all. The second mistake is not offering the prey often enough. A ball python needs to eat about once a week, and if it goes longer than that without eating, it may become weak and unhealthy. The third mistake is not keeping the snake’s enclosure warm enough. Ball pythons are native to Africa, where it is warm year-round. In captivity, they need an environment that mimics their natural habitat as closely as possible, which means a temperature in the high 70s or low 80s.

How can I tell if my ball python is healthy?

There are a few key indicators that can help you tell if your ball python is healthy. First, check the eyes. They should be clear and not sunken in. Second, check for any physical abnormalities, such as lumps, bumps, or scratches. Third, look at the snake’s weight. If it has lost a significant amount of weight, it may be sick.
Finally, check the temperature and humidity of its habitat. If it is too hot or too cold, it could make your snake sick.

What should I do if my ball python stops eating?

If you have a pet ball python that has suddenly stopped eating, there are a few things you can do to try to get them back on track. First, it’s important to understand that ball pythons, like all snakes, will go through periods of fasting. This is perfectly natural, and in the wild, it’s often related to the changing seasons or a lack of food availability. If your snake is healthy and has a good body weight, then fasting for a few months is nothing to be concerned about. If, however, your snake is looking thin or appears to be losing weight, then you will need to take action. The first step is to check their habitat and make sure that everything is set up correctly. The temperature and humidity should be at the correct levels, and there should be no drafts or other sources of stress. If everything looks good on that front, then the next step is to try offering them different types of food. Many ball pythons are picky eaters and will only eat certain types of prey. Offering them a variety of options may help encourage them to eat. If your ball python still isn’t showing any interest in food, then it’s time to consult with a veterinary reptile specialist. They will be able to give you more specific advice on how to proceed and will likely want to do some basic exams and tests to rule out any potential health problems.

Can I force my ball python to eat?

If your ball python isn’t eating, don’t try to force it. This can make the situation worse and may even lead to your python becoming aggressive. Instead, try these tips to encourage your pet to eat:
- offer live food
- try different types of food
- make sure the food is the right size
- offer food at the right time of day
- make sure the temperature in the cage is appropriate

What are some common reasons why ball pythons stop eating?
There are many reasons why ball pythons may stop eating, including:
- incorrect environmental temperatures
- incorrect humidity levels

If your ball python has stopped eating, it is important to consult with a veterinarian or reptile specialist to determine the cause and create a plan to get your snake back on track.

How can I prevent my ball python from stopping eating?

There are a few things that may cause your ball python to stop eating, including:
- If the snake is not used to being handled, it may become stressed and stop eating.
- If the snake is not comfortable with its surroundings, it may refuse to eat.
- If the temperature in the snake’s enclosure is too low, it may become sluggish and stop eating.

What should I do if my ball python won’t eat?

One of the most common questions reptile enthusiasts have is “What should I do if my ball python won’t eat?” If your ball python hasn’t eaten in a while, there are a few things you can do to help encourage it to eat.

First, make sure that the physical environment is suitable for your ball python. The temperature should be between 78 and 80 degrees Fahrenheit, and the humidity should be between 50 and 60 percent. The cage should also be large enough for the snake to move around freely; a 20-gallon tank is a good size for an adult ball python. If the environmental conditions are not ideal, make changes as necessary. Always make sure you have a thermometer and hygrometer in the cage so that you can monitor the temperature and humidity levels.

If the environment is suitable but your snake still isn’t eating, try changing its food. Live food may be more appealing to your snake than frozen food, so offer live mice or rats instead of frozen ones. You can also try offering different types of frozen food, such as quail or rabbits. Make sure the food you offer is small enough for your snake to eat easily.

If you’ve tried all of these things and your snake still refuses to eat, it’s time to consult a veterinarian.
There could be an underlying medical condition causing your snake’s appetite loss, so it’s important to get professional help.
""" Uninstall PyXLL from all Excel installations. This script modifies the registry directly to remove references to PyXLL from Excel. Close all Excel sessions before running this script, as otherwise Excel will re-write its settings when it closes so PyXLL will still be installed. PyXLL is removed from the following registry keys (both 32 and 64 bit): - HKLM|HKCU/Software/Microsoft/Office/*/Excel/Options - HKLM|HKCU/Software/Microsoft/Office/*/Excel/Add-in Manager - HKLM|HKCU/Software/Microsoft/Office/*/Excel/Resiliency/DisabledItems """ import sys, os import re import logging try: import winreg except ImportError: import _winreg as winreg logging.basicConfig(level=logging.INFO) _log = logging.getLogger(__name__) _root_keys = { winreg.HKEY_CURRENT_USER : "HKEY_CURRENT_USER", winreg.HKEY_LOCAL_MACHINE : "HKEY_LOCAL_MACHINE", } def uninstall_all(): """uninstalls PyXLL from all installed Excel versions""" for wow64_flags in (winreg.KEY_WOW64_64KEY, winreg.KEY_WOW64_32KEY): for root in _root_keys.keys(): try: flags = wow64_flags | winreg.KEY_READ office_root = winreg.OpenKey(root, r"Software\Microsoft\Office", 0, flags) except WindowsError: continue # look for all installed versions of Excel and uninstall PyXLL i = 0 while True: try: subkey = winreg.EnumKey(office_root, i) except WindowsError: break match = re.match("^(\d+(?:\.\d+)?)$", subkey) if match: office_version = match.group(1) uninstall(office_root, office_version, wow64_flags) i += 1 winreg.CloseKey(office_root) def uninstall(office_root_key, office_version, wow64_flags): """Uninstalls PyXLL from a single Excel install""" # uninstall entries from \Software\Microsoft\Office\<version>\Excel\Options # (this is what Excel uses to determine what to load on start-up) options_key = None try: flags = wow64_flags | winreg.KEY_READ subkey = r"%s\Excel\Options" % office_version options_key = winreg.OpenKey(office_root_key, subkey, 0, flags) except WindowsError: pass if options_key: _log.debug("Found %s Excel %s options 
keys" % (_get_arch(wow64_flags), office_version)) pyxll_values = [] try: i = 0 while True: name, data, dtype = winreg.EnumValue(options_key, i) if "OPEN" in name and dtype == winreg.REG_SZ \ and data.rstrip('"\'').lower().endswith("pyxll.xll"): pyxll_values.append(name) i += 1 except WindowsError: pass winreg.CloseKey(options_key) # if there were any pyxll keys found delete them if pyxll_values: _log.debug("Found PyXLL in %s Excel %s's options keys" % (_get_arch(wow64_flags), office_version)) try: flags = wow64_flags | winreg.KEY_WRITE subkey = r"%s\Excel\Options" % office_version options_key = winreg.OpenKey(office_root_key, subkey, 0, flags) for value in pyxll_values: winreg.DeleteValue(options_key, value) winreg.CloseKey(options_key) _log.info("Deleted PyXLL from %s Excel %s's options" % (_get_arch(wow64_flags), office_version)) except WindowsError: _log.error("Couldn't delete PyXLL keys from %s Excel %s's options; Write access not allowed." % (_get_arch(wow64_flags), office_version)) # uninstall entries from \Software\Microsoft\Office\<version>\Excel\Add-in Manager # (this is what Excel uses to list addins in the addin manager) addins_key = None try: flags = wow64_flags | winreg.KEY_READ subkey = r"%s\Excel\Add-in Manager" % office_version addins_key = winreg.OpenKey(office_root_key, subkey, 0, flags) except WindowsError: pass if addins_key: _log.debug("Found %s Excel %s Addins" % (_get_arch(wow64_flags), office_version)) pyxll_values = [] try: i = 0 while True: name, data, dtype = winreg.EnumValue(addins_key, i) filename = os.path.basename(name) if filename.lower() == "pyxll.xll": pyxll_values.append(name) i += 1 except WindowsError: pass winreg.CloseKey(addins_key) # if there were any pyxll keys found delete them if pyxll_values: _log.debug("Found PyXLL in %s Excel %s's Addins" % (_get_arch(wow64_flags), office_version)) try: flags = wow64_flags | winreg.KEY_WRITE subkey = r"%s\Excel\Add-in Manager" % office_version addins_key = 
winreg.OpenKey(office_root_key, subkey, 0, flags) for value in pyxll_values: winreg.DeleteValue(addins_key, value) winreg.CloseKey(addins_key) _log.info("Deleted PyXLL from %s Excel %s's addins list" % (_get_arch(wow64_flags), office_version)) except WindowsError: _log.error("Couldn't delete PyXLL keys from %s Excel %s's addins; Write access not allowed." % (_get_arch(wow64_flags), office_version)) # uninstall entries from \Software\Microsoft\Office\<version>\Excel\Resiliency\DisabledItems # (this is what Excel uses to list blacklist badly behaving addins) disabled_key = None try: flags = wow64_flags | winreg.KEY_READ subkey = r"%s\Excel\Resiliency\DisabledItems" % office_version disabled_key = winreg.OpenKey(office_root_key, subkey, 0, flags) except WindowsError: pass if disabled_key: _log.debug("Found %s Excel %s disabled addins" % (_get_arch(wow64_flags), office_version)) pyxll_values = [] try: i = 0 while True: name, data, dtype = winreg.EnumValue(disabled_key, i) if dtype == winreg.REG_BINARY: value = data.decode("utf-16", "ignore") if "pyxll.xll" in value: pyxll_values.append(name) i += 1 except WindowsError: pass winreg.CloseKey(disabled_key) # if there were any pyxll keys found delete them if pyxll_values: _log.debug("Found PyXLL in %s Excel %s's disabled addins" % (_get_arch(wow64_flags), office_version)) try: flags = wow64_flags | winreg.KEY_WRITE subkey = r"%s\Excel\Resiliency\DisabledItems" % office_version disabled_key = winreg.OpenKey(office_root_key, subkey, 0, flags) for value in pyxll_values: winreg.DeleteValue(disabled_key, value) winreg.CloseKey(addins_key) _log.info("Deleted PyXLL from %s Excel %s's disabled addins" % (_get_arch(wow64_flags), office_version)) except WindowsError: _log.error("Couldn't delete PyXLL keys from %s Excel %s's disabled addins; Write access not allowed." 
% (_get_arch(wow64_flags), office_version)) def _get_arch(flags): if flags & winreg.KEY_WOW64_64KEY: return "64 bit" elif flags & winreg.KEY_WOW64_32KEY: return "32 bit" return "unknown" def main(): uninstall_all() if __name__ == "__main__": sys.exit(main())
1. System software
2. Application software

System software: directly interacts with the computer system. Operating systems, compilers and interpreters are examples of this.
Application software: all the programs written by a user with the help of system software are called application software.

09/04/13 VIT - SCSE

Introduction to Programming

The shift in programming languages can be categorized as:
1. Monolithic programming
2. Procedural programming
3. Structured programming
4. Object-oriented programming

Monolithic programming consists only of global data and sequential code. Assembly language and BASIC are examples.

Procedural-Oriented Programming

Mainly comprises algorithms; FORTRAN and COBOL are examples. The important features of procedural programming are:
- Emphasis is on doing things (algorithms)
- Large programs are divided into smaller programs known as functions
- Most of the functions share global data
- Data move openly around the system from function to function
- Functions transform data from one form to another
- Employs a top-down approach in program design

Structured Programming

Pascal and C are examples. Structured programming is based upon the algorithm rather than the data:
- Programs are divided into individual modules that perform specific tasks
- Introduction of user-defined data types

Object-Oriented Programming

C++, Smalltalk, Eiffel, Java, C#, etc. Object-oriented programming is a programming methodology that associates data structures with a set of operators which act upon them. Depending on the object features supported, the languages are classified into two categories:
- Object-based programming languages
- Object-oriented programming languages

Object-based programming languages support encapsulation and object identity without supporting inheritance, polymorphism and message communication.
Object-based language = encapsulation + object identity

Object-oriented programming languages incorporate all the features of object-based programming languages along with inheritance and polymorphism:

Object-oriented language = object-based language + inheritance + polymorphism

Features of Object-Oriented Programming
- An improvement over the structured programming languages
- Emphasis on data rather than algorithm
- Data is hidden and cannot be accessed by external functions
- Objects may communicate with each other through functions
- New data and functions can be easily added whenever necessary
- Follows a bottom-up approach in program design

Basic Concepts of Object-Oriented Programming
- Objects: data and functions
- Data abstraction: the act of representing essential features without including the background details or explanations
- Encapsulation: wrapping data and functions into a single unit
- Message passing: the process of invoking an operation on an object

Advantages of OOP
- Through inheritance we can eliminate redundant (unnecessary) code and extend the use of existing classes
- The principle of data hiding helps with security
- It is possible to have multiple objects
- It is easy to partition the work in a project based on objects
- Object-oriented systems can be easily upgraded from small to large systems
- Message passing techniques provide communication between objects
- Code reuse is possible

Applications of OOP
- Real-time systems
- Simulation and modeling
- Object-oriented databases
- Hypertext and hypermedia
- AI and expert systems
- Neural networks and parallel programming
- Office automation systems
- CIM / CAM / CAD systems
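The concepts above (encapsulation, inheritance, polymorphism, message passing) can be illustrated with a short sketch. The Shape/Circle/Square classes below are invented for illustration, not from the slides:

```python
import math

class Shape:
    """Encapsulation: data (the name) and behaviour live in one unit."""
    def __init__(self, name):
        self._name = name  # data hiding by convention: leading underscore

    def area(self):
        raise NotImplementedError

    def describe(self):
        # message passing: invoking an operation on an object
        return "%s has area %.2f" % (self._name, self.area())

class Circle(Shape):
    """Inheritance: Circle reuses Shape's code instead of duplicating it."""
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius

    def area(self):
        return math.pi * self._radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self._side = side

    def area(self):
        return self._side ** 2

# Polymorphism: the same describe() call works on different object types.
for shape in (Circle(1), Square(2)):
    print(shape.describe())
```

This also shows the bottom-up design the slides mention: the concrete classes are built first and the loop at the end works with any of them through the shared interface.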
It is much more comfortable and simpler to install VirtualBox for Windows 8. It would be great if you could share your experience about running Windows 8. That means that my Windows 8. This guide explains how to install Windows 8. Therefore I don't understand where the error message comes from. . Should I accept these format prompts? The Display Performance Issues Windows 8. Step 9 In the Windows 8. The laptop also is running Windows 8. Step 11 As you can see in the window below, after installing the guest add-ons for the Win 8. In his video, he demonstrates this on Windows Small Business Server 2011, but the process and the Hyper-V Manager tool is the same. Many Windows users aren't aware of it, but a powerful virtualization tool is built into every copy of Microsoft Windows 8. If a magic packet contains a six byte SecureOn password the machine only wakes up if the password is correct. Click Next once you've made your choice. This is the very same Type-1 hypervisor that runs virtualized enterprise workloads and comes with Microsoft Windows Server 2012 R2. There are many reasons why you may want to try out Windows 8. VirtualBox will boot and go right into the Windows 8 installation mechanism. Highlight it and click the big Settings button. You might also want to go in and tweak various settings as needed in VirtualBox. You will see icons for Hyper-V Virtual Machine Connection and Hyper-V Manager. Enter your product key and accept the license terms. Some older versions of Linux for best performance. Click on it to continue. We've shown you how to go about installing the new on a brand-new hard drive or a partition of your existing hard drive--that's easy. In this article, we will examine how to install Microsoft Windows 8. So far I have installed the 90 Evaluation version of Windows 8. By playing around with 2D, 3D and none video settings, I have found out the enabling only 2D video acceleration worked quite well for me.
This is the very same Type-1 hypervisor that runs virtualized enterprise workloads and comes with Microsoft Windows Server 2012 R2. The Hyper-V Manager is the main administrative and management console for all Hyper-V related activities, whereas the Hyper-V Virtual Machine Connection is a quick way to console directly into a running virtual machine. Step 2 For Windows 8. Step 4 Leave the installation location as default and click on the Next button. I suggest you try using the following command to repair your system. Do you want to upgrade your existing Windows 7 or 8 computer? I am not sure what other step I can take at this point. Give the machine a name. Cross your fingers and click on the big Start button to load your virtual machine for the first time. The virtual machines you create on your desktop with Client Hyper-V are fully compatible with those server systems as well. If you have the latest VirtualBox version, skip this step 3. So, you can log in with your existing Microsoft account or create a new account from this screen. To set up Windows 8. You will see icons for Hyper-V Virtual Machine Connection and Hyper-V Manager. Scroll for the Windows 8. I am currently running the repair command you suggested. If you aren't happy with Windows 8, and are curious to find out what's new in Windows 8. I have selected the 64-bit file as mine is Windows 8. As Microsoft gives a trial version of Windows 8. The easiest way to shut down Windows 8. The drive letters correspond to the. Right-click on each of these and pin them to Start or to the Task Bar. Step 16 Click "Custom: Install Windows only (advanced)" as in the following window. It will take some time to finish the online updates, setting up apps and other configurations.
With Bitcoin now worth potentially more than an ounce of gold, I’m capping off my series of Bitcoin posts with an attempt to answer a recurring question: how to go about creating your very own crypto-currency. Looking at the various crypto-currencies that have emerged over the last few months, most, if not all, of them have had one thing in common: they are essentially cloned versions of Bitcoin. My question isn’t how to clone Bitcoin, but rather how you can go about creating a completely new virtual currency, one based on varied asset backings. The currency could be like Bitcoin, based on an algorithm, or based upon more traditional assets like US dollars, gold, or even a basket of mixed existing asset types. Not to be confused with cyberpunk, cypherpunk is a concept that originally emerged in the late 1980s. Early cypherpunks communicated through electronic mailing lists, where an informal group of cyber activists aimed to achieve privacy and security through proactive use of cryptography. With the recent NSA scandal and related electronic spying, the concepts of the cypherpunk movement have become popular once again, especially within the communities involved in crypto-currencies like Bitcoin. Provided as a free software library, the Open-Transactions platform is a collection of financial cryptography components used for implementing cryptographically secure financial transactions. The author, Chris Odom, also known as “Fellow Traveler” and co-founder of Monetas, the company behind the project, describes it as follows: “It's like PGP FOR MONEY. The idea is to have many cash algorithms. So that, just like PGP, the software should support as many of the top algorithms as possible, and make it easy to swap them out when necessary.” Pretty Good Privacy, or PGP, created by Phil Zimmermann in 1991, is a data encryption and decryption method that provides cryptographic privacy and authentication for data communication.
PGP encryption uses a serial combination of hashing, data compression, symmetric-key cryptography, and public-key cryptography; each step uses one of several supported algorithms. Each public key is bound to a user name and/or an e-mail address. Similar to PGP, Open-Transactions user accounts are pseudonymous (they can be held under a false name). A user account is provided as a public key, allowing users to open as many user accounts as they want. Unlike Bitcoin, the system can be configured to enable true anonymity, but to do so it is limited to "cash-only" transactions; alternatively, it can be set up to offer pseudonymity, i.e. transactions that can be linked to the key that signed them. While the real-life identity of the owner is hidden, continuity of reputation becomes possible, while supporting potentially millions of users. An interesting aspect of the system is that it isn’t limited to any one specific asset or currency (virtual or otherwise). Basically, any user can issue new digital currencies and digital asset types by uploading new currency contracts. Want to create a gold, silver, Bitcoin, Litecoin or even USD-backed currency? Not a problem on OT. Users are able to conduct transactions, verify instruments, and agree on current holdings via signed receipts, all without the need to store any transaction history. Open-Transactions can be used for a broad variety of purposes, including issuing currencies/stock, paying dividends, creating asset accounts, sending/receiving digital cash, writing/depositing cheques and cashier's cheques, creating basket currencies, trading on markets, scripting custom agreements, recurring payments, and escrow services. The project uses what it calls “strong crypto”, with account balances that are unchangeable (even by a malicious server). The receipts are destructible and redundant, with transactions that are unforgeable. The cash is untraceable and cheques are non-repudiable.
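The serial PGP pattern described above (compress, encrypt with a one-off symmetric key, then protect that session key with the recipient's public key) can be sketched as a toy. To be clear about assumptions: the XOR "ciphers" below are stand-ins for real symmetric and public-key primitives, the toy keypair is a single shared byte string, and nothing here is actual PGP or actual cryptography; only the structure of the steps is the point.

```python
import hashlib
import os
import zlib

def keystream(key, length):
    """Toy stream cipher: expand a key into bytes with SHA-256 (NOT real crypto)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(data, key):
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def encrypt(message, recipient_public_key):
    """PGP-style steps: compress, symmetric-encrypt, wrap the session key."""
    session_key = os.urandom(16)                      # one-off symmetric key
    body = xor(zlib.compress(message), session_key)   # compressed + encrypted payload
    wrapped = xor(session_key, recipient_public_key)  # stand-in for RSA/ElGamal wrapping
    return wrapped, body

def decrypt(wrapped, body, recipient_private_key):
    # In this toy the "private key" is the same bytes as the "public key".
    session_key = xor(wrapped, recipient_private_key)
    return zlib.decompress(xor(body, session_key))

key = b"toy-keypair"
wrapped, body = encrypt(b"attack at dawn", key)
print(decrypt(wrapped, body, key))  # b'attack at dawn'
```

The reason for the hybrid structure, in PGP as in this sketch, is that symmetric encryption is cheap for bulk data while the expensive public-key step only has to protect the short session key.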
There are some potential limitations, for example what if a transaction server attempts to inflate the currency? According to the developers, this is prevented through auditing, which must be utilized, either by the issuer directly, or by the other members of the voting pool. While the transaction server cannot lie on your receipts, it can potentially inflate the currency itself by using dummy accounts. But the inflated funds cannot be spent without flowing into other accounts, where they will show on an audit. Recently, the creators of the project formed a company, Monetas to provide commercial services around the OT platform. The company describes its mission as “to empower people to live and do business with greater freedom than ever before.” Monetas is building the world’s first decentralized system for financial and legal transactions. They claim the system has no single point of control or failure—making it immune to abuses of power and resilient to failure. The solution requires only a mobile phone, and makes transactions easy, cheap, instant, global, secure, and private. It is globally available to individuals, merchants, and entrepreneurs everywhere, for free.
Why triples? It has been stated that an RDF based on triples is too low level to be useful. I cannot say whether triples are useful to any particular person for any specific purpose. I would like to suggest some questions that people may find useful in deciding whether triples, and by extension RDF, may be useful for a particular project or purpose. In this article I consider RDF as a simplified XML infoset. If triples are too low level to be useful as a basis on which to develop a predicate logic language, other candidates would be a pure text based syntax or an XML based syntax. The benefit of XML is not that it is often the very 'best' syntax for people to read or write but particularly that it is generally the best compromise as a syntax that is both human and machine readable. That was perhaps the initial reason why XML became as popular as it is today. I have absolutely no doubt that any particular piece of information encoded in XML could be similarly encoded in s-expressions, but that has become an entirely moot point. Indeed the layout and transformation language that preceded XSLT was DSSSL, a derivative of Scheme, so certainly the developers of XML were well aware of Lisp. What has happened is that the web community has decided on the XML syntax, and people are generally interested in putting up with any particular shortcomings of XML in the interest of being able to take advantage of the available software for parsing, manipulating, storing and transmitting XML. Let us further examine the implications of developing a logic, or for that matter any other, language on XML itself or alternatively on the RDF XML syntax. XML 1.0 can represent directed labelled graphs (DLGs). Representing trees is straightforward and well accepted. Representing links in XML 1.0 can be done by entities. Yet entities add complexity to the XML abstract syntax, and most real world interfaces (e.g.
SAX, DOM, XPath) do not provide sufficient detail concerning such XML 1.0 constructs as entities (which require DTDs) to make XML 1.0 a reasonable platform itself on which to represent DLGs. Additional facilities have been built on XML 1.0 to enable integration with the Web, namely the XML namespaces recommendation, RDF and XLink. The namespaces recommendation has been controversial for several reasons, but particularly because of the lack of direction given toward what a namespace name URI reference ought reference. RDDL has emerged as a reasonable solution to this issue, one which models a namespace as a proper set of resources. However namespaces do not easily work with DTDs, so accepting XML namespaces leaves the problem of how to represent DLGs in a namespace-aware fashion. XLink and RDF are both W3C recommendations aimed at answering this question. The XML 1.0 abstract syntax is described by a set of 57 EBNF productions which themselves can be represented in XML/RDF [XSet]. Common programming level interfaces such as SAX, DOM and XPath use a simplified version of this abstract syntax which generally corresponds to the XML Infoset. Much debate occurs as to what level of detail the commonly accepted subset ought capture. There is general agreement that whether an XML attribute is delimited by a single or double quote character is not important; nor is the order of attributes. The fact that an information item is an attribute vs. an element, and the order of child elements, is significant. The RDF 'Infoset' or abstract syntax can be considered a yet further simplified XML Infoset, where the origin of an item/object as either an attribute or a child element is not important, nor is the order of child elements (syntactic conventions do enable capture of order in rdf:Seq containers). What is gained by this simplification? An RDF abstract syntax can generally be stored in a single table with three columns.
As such, the possibility exists to use RDF client-side applications on handheld clients (for example), whereas the need to deploy a full XML-enabled database would impose greater overhead. This, along with a simplified application and query model, is the chief benefit of the triple model over the XML Infoset (or full XML grove).
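The "single table with three columns" point can be made concrete with a small sketch. This is a toy in-memory store, not any particular RDF library; the data and the query helper are illustrative:

```python
# A toy triple store: the entire RDF abstract syntax fits in one
# three-column table of (subject, predicate, object) rows.
triples = [
    ("doc1", "dc:creator", "Alice"),
    ("doc1", "dc:title", "Why triples?"),
    ("doc2", "dc:creator", "Bob"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Everything Alice created:
assert query(p="dc:creator", o="Alice") == [("doc1", "dc:creator", "Alice")]
```

Pattern matching over one flat table is essentially the whole query model, which is why such a store is plausible even on a constrained client.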
O(n^4) memory usage in PPM

This line consumes an insane amount of memory: https://github.com/lucidrains/pixel-level-contrastive-learning/blob/bf7fc8feb5684244b5afb4a261005815ead8b1a0/pixel_level_contrastive_learning/pixel_level_contrastive_learning.py#L159 As an example, trying to train a 28x28 "pixel" latent, the above attempts to allocate 75GiB of GPU memory. I was hoping to use this for far finer latents, up to 114x114 "pixels". This einsum could be unraveled but that would be insanely expensive processing-wise. Do you or the authors have any ideas for how this could be improved?

Hi! Have you heard of a tool called Keops? https://www.kernel-operations.io/keops/index.html

I'll try it this weekend. Would it be possible to modify the PPM so that it only aggregates locally, so that it can be used for even larger feature maps?

Great find with keops, thanks for the suggestion. I'll give it a try. I worry about the processing overhead, but it'll probably at least get me one resolution tier higher in training. Local aggregation sounds like the better long term solution but I really need to sink my teeth into this more to make a better call there.

Sounds good, let me know how it goes! As for local aggregation, it would deviate from the paper and would be experimental. I think I'll leave it for some future paper to explore that and if there are positive results, I'll add it!

@neonbjb Hi James, want to give https://github.com/lucidrains/pixel-level-contrastive-learning/issues/8 a try?

That was fast! Sure I'll give it a try. It is missing one part: you need to replace the cosine_similarity() call with a keops call too to completely eliminate the n^4 tensor allocation. Ironically I did this part this morning but couldn't figure out the matmul part. I'll post the code here after I test it.

@neonbjb oh yes! you are right! I replaced cosine similarity as well hopefully I did it correctly; I'm pretty new to keops still 😅

Unfortunately this doesn't work currently. sum(dim=-1) does not trigger keops to collapse a lazy tensor back into a regular one, only sum(dim=1) and sum(dim=0) do for some reason. I need to do some more reading on how keops works to understand why. I can fiddle around with the dims to make it work, but then run into a lib error. This occurs both on my personal machines as well as colab. I'll keep plugging away at this, but I just want to keep you abreast. :) Here's a colab sheet you can use to play around with it live if you'd like: https://colab.research.google.com/drive/1n-dTzli0t5lbrG8y9CWclPpgm8GGi7VO?usp=sharing

@neonbjb ok, do let me know if you figure it out! I'll circle back to this later this weekend!

The keops approach didn't end up panning out because the computational complexity was too high. My iteration rates even at moderate-sized latents were many seconds per iteration. I have a workaround that I am testing right now. Instead of computing the pixel propagation across the entire image and then extracting out the relevant regions using torch.masked_select(), I:

1. Perform cutout & flip on images
2. Feed images through online & target predictors
3. Reverse the cutout & flip on the resulting latents
4. Extract the region between those two latents that is the same
5. Perform pixel propagation algorithm on results from (4), which is generally much smaller than the whole image.

This has allowed me to train against much larger latents, although I don't believe it aligns with the paper because step (3) necessarily causes some of the latent data to be "dropped out" when it is interpolated back down to original image size. I've had it training for a day now and results seem promising. Thanks for your help with this & the excellent repo(s). I've learned a lot from your code. If you're at all interested in what I've done above let me know and I'll put up a PR. I'm going to close this issue since I don't think the keops approach is really acceptable and it might just be a limitation of the model if we are sticking close to the paper.

@neonbjb Would it be possible for you to share the code you just mentioned, reversing the cutout/ratio/flip to avoid the O(n⁴)?

Sure, I am working on the code here: https://github.com/neonbjb/DL-Art-School/blob/gan_lab/codes/models/pixel_level_contrastive_learning/pixpro_lucidrains.py The commits on Jan 12-Jan 13 are what implement the changes. All the changes to the contrastive loss stayed inside of pixpro_lucidrains.py so you can ignore the other files: https://github.com/neonbjb/DL-Art-School/commits/gan_lab/codes/models/pixel_level_contrastive_learning/pixpro_lucidrains.py
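The underlying memory problem, materializing all pairwise similarities at once, can also be sidestepped by chunking the computation and reducing each chunk immediately, independent of keops. A rough NumPy sketch of the idea (not the repo's actual code; shapes, the max reduction, and names are illustrative stand-ins for the real propagation step):

```python
import numpy as np

def pairwise_sim_chunked(feats, chunk=1024):
    """Cosine similarity between all rows of feats (n, d), computed
    chunk-by-chunk so only a (chunk, n) block is resident at a time,
    instead of the full (n, n) similarity matrix."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    out_rows = []
    for i in range(0, f.shape[0], chunk):
        block = f[i:i + chunk] @ f.T          # (chunk, n), freed each iteration
        out_rows.append(block.max(axis=1))    # reduce immediately (max as a stand-in)
    return np.concatenate(out_rows)

feats = np.random.default_rng(0).normal(size=(500, 16))
sims = pairwise_sim_chunked(feats, chunk=64)
assert sims.shape == (500,)
assert np.allclose(sims, 1.0)  # each row's best match is itself
```

Peak memory drops from O(n²) for the similarity matrix (O(n⁴) in the number of feature-map pixels) to O(chunk · n), at the cost of a Python-level loop.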
I have quite a complex way to organise my 50k-plus mp3 songs:

1. Media Type (Albums/Other/Soundtracks/Classical/Pop Music/Remixes/Jazz/Latin/Tango)

I create a folder called Albums that will be synchronised with iTunes for the iPhone and my Sonos. In this folder, I only put whole albums and compilations of favourite artists. All other albums, live recordings, singles and extended plays are put into the Other folder. Soundtracks, Classical, Jazz, Latin & Tango are self-explanatory. Into Pop Music go songs that are hits where I do not want the whole album: one-hit wonders. Remixes holds favourite remixes (House, Trance, ...), also organised by year.

2. Grouping

I group artists under certain groups (e.g. Peter Gabriel goes into Genesis, Annie Lennox under Eurythmics).

3. Album Artist

4. Year / Symbol / Code / Album name

(If there is only one album, I combine the Artist name and the album name together; if the Album name = Artist name, I put Self-Titled.) Yes, putting the year here makes sense to have your albums in chronological order. I put a code here to be able to easily find my folders using a search function (I use the character ¦). I then put the first letter of the Media Type field (Album, Compilation, Single, Live, Remix, Tribute, ...), which is followed by the complete Album Name. If the Publisher of the album exists, its first 10 characters are placed after this in parentheses.

5. (Artist) Disc-Track# - Track Name (feat Other Artist)

If <Artist> is different from <Album Artist>, the artist name is indicated before the disc & track numbers. If the word "feat" is present in <Artist>, the featured artist name is put after the track name.

Here is what I have: tons of fun... but time consuming.
Jean from Montreal

$IsNull(<Media Type>,Albums,$If($Or(<Media Type>="Album",<Media Type>="Compilation"),Albums,$If(<Media Type>="Live",Albums,$If(<Media Type>="Remix",Albums,Other))))\$IsNull(<Grouping>,$Group($Sort(<Album Artist>),1)" "Artists\$Sort(<Album Artist>),$Group($Sort(<Grouping>),1)" "Artists\$If(<Grouping>=<Album Artist>,$Sort(<Grouping>),$Sort(<Grouping>)\$Sort(<Album Artist>)))$If(<Library>=1," ",\)$If(<Media Type>="Various Songs","Various Songs",$IsNull(<Year>,," "<Year>¦)$IsNull(<Media Type>,A,$Left(<Media Type>,1))" "$If(<Artist>=<Album>,Self-Titled,<Album>)$IsNull(<Publisher>,," ("$Left(<Publisher>,10)")")\$IsNull(<Disc-Track#>,,<Disc-Track#>-))$if(<Album Artist>=<Artist>,<Title>,$if($Contains(<Artist>," (feat "),<Title>" (feat "$Split(<Artist>," (feat",2),<Artist>-<Title>))
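The mask above is hard to follow at a glance; here is a rough Python rendering of its main branches for the common case. The field names, the dict-based interface, and the simplifications are mine, and several branches of the full mask (Grouping sub-folders, Various Songs, Publisher, disc/track prefixes) are omitted:

```python
def folder_path(tags):
    """Rough Python equivalent of the main branches of the folder mask.
    tags: dict with keys like 'media_type', 'album_artist', 'grouping',
    'year', 'album', 'artist' (names are illustrative, not MediaMonkey's)."""
    # Albums vs Other: whole albums, compilations, live sets and remixes go to Albums
    top = ("Albums"
           if tags.get("media_type") in (None, "Album", "Compilation", "Live", "Remix")
           else "Other")
    # Grouping overrides Album Artist when present (e.g. Peter Gabriel -> Genesis)
    group = tags.get("grouping") or tags.get("album_artist")
    # Self-titled albums get a fixed name
    album = "Self-Titled" if tags.get("artist") == tags.get("album") else tags["album"]
    year = f"{tags['year']}¦" if tags.get("year") else ""   # ¦ is the search marker
    code = (tags.get("media_type") or "A")[0]               # first letter of media type
    return f"{top}/{group}/{year}{code} {album}"

path = folder_path({"media_type": "Album", "album_artist": "Genesis",
                    "grouping": "Genesis", "year": 1986,
                    "album": "Invisible Touch", "artist": "Genesis"})
assert path == "Albums/Genesis/1986¦A Invisible Touch"
```

Written out this way, the scheme is essentially a cascade of defaults: media type picks the top folder, grouping falls back to album artist, and the year-plus-code prefix keeps the album folders sortable and searchable.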
Unanswered: You do not have the necessary permissions to use the <name> object - Problem: I am Active Duty in the United States Air Force, stationed in California. Recently our communications guys transferred all of our data from one server to another, and following that transfer there was a problem with a very important Access file I use. Now, I am pretty good at hardware, decent at most software, and a complete idiot with Access. I have done my homework prior to posting, though. I've learned that the error I'm getting, "You do not have the necessary permissions to use the <name> object. Have your system administrator or the person who created this object establish the appropriate permissions for you. (Error 3033)", comes up because the database I am trying to open is secured and apparently I don't have the proper secure workgroup file. I realize the easiest fix would be to have the file's owner open the file and correct the problem, but being in the military he is no longer at this base. The second easiest fix would be to use the unsecured .bak file. The .bak file is nowhere to be found. The good news is that I was able to find the One-step Security Wizard report that has the WID. The db was made in Access 2000 and I currently have Access 2007. How can I re-create the workgroup file and give myself access? If you cannot find a workgroup (.mdw) file, then there may be a System.mdw stored either in the Program Files area or in Application Data. It may be that, in the absence of a specified workgroup file, Access reverts to this default workgroup file. Typically, this file contains a user called "admin" as a default. This is a misnomer, as this is not a real administrator at all (though it could be made one). You could test this hypothesis by creating a new (empty) database with System.mdw as its workgroup file (look for the Workgroup Administrator option in Tools/Security) and apply to "join" the workgroup.
After that, you can install other users and groups, mirroring the users and groups in your original database. Then you open the original database with a command line which includes the /wrkgrp switch, specifying the new workgroup file. At the next level, if you want to make specific objects (tables, forms, etc.) accessible to specific users, you must go to the User/Group Permissions utility in the same menu. Remember, however, that these permissions are attributes of the database and not of the workgroup file. So apparently it's the .mdw that is the problem. Double-sided question: I understand that I can recreate the .mdw using the One-step Security Wizard report. When that .mdw is made, does it need to be 100% accurate to the original, or can it have an additional user added in? The reason I ask is because the two file creators in the original .mdw are no longer at this base. I have tried making the file but it is not working for me. Would someone be willing to try creating one for me? I am willing to provide a copy of the One-step Security Wizard report.
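As a concrete example of the command line mentioned above, Access accepts the /wrkgrp switch to open a database against a specific workgroup file (all paths and the user name here are illustrative; the Office folder name varies by version):

```shell
REM Open a secured database with a specific workgroup file (paths are examples)
"C:\Program Files\Microsoft Office\Office12\MSACCESS.EXE" ^
    "C:\Data\Inventory.mdb" /wrkgrp "C:\Data\Secured.mdw"
```

This only points Access at the workgroup file for that session; the object-level permissions themselves still live in the database, as noted above.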
Should I always use accessors for instance variables in Objective-C? If I have a class with some IBOutlets, it seems kind of silly to create accessors for these. But then I feel like I'm breaking OO practices by not always going through the accessors for instance variables. I also feel the same way about some instance variables that should not be public; I'd rather not expose the inner workings of some classes. I can make the actual ivars private, but the @property shorthand doesn't seem to be able to specify visibility. This leads me to not create accessors and just access the ivars directly. I'm not sure if this is frowned upon, though. Is it? What are the community's thoughts on this admittedly newbie question? (Please ignore dot syntax.) I'm not sure about accessing instance variables directly (I think one shouldn't), but for some variables it just doesn't make sense to use accessors, like for the IBOutlets you mentioned. I can only help you out with private accessors. Starting with Objective-C 2.0 you can declare class extensions. Class extensions are like "anonymous" categories, except that the methods they declare must be implemented in the main @implementation block for the corresponding class. Just put this extension into a separate header file and you'll have private accessors that aren't visible in the header. Public/Private: You can declare your properties in the @interface file to be readonly, but then re-declare them in a class extension so that your class can change them. Here's a quick intro to Categories.
An example:

//MyClass.h
@interface MyClass : NSObject {
    NSString *name;
}
@property (readonly) NSString *name;
@end

And in the implementation file you can redeclare this:

//MyClass.m
@interface MyClass () //declare the class extension
@property (readwrite, copy) NSString *name; //redeclare the property
@end

@implementation MyClass
@synthesize name;
@end

Now, the name property is readonly external to the class, but can be changed by the class through property syntax or setter/getter syntax.

Really private iVars: If you want to keep iVars really private and only access them directly without going through @property syntax, you can declare them with the @private keyword. But then you say "Ah, but they can always get the value outside the class using KVC methods such as setValue:forKey:". In which case take a look at the NSKeyValueCoding protocol class method + (BOOL)accessInstanceVariablesDirectly, which stops this.

IBOutlets as properties: The recommended way is to use @property and @synthesize. For Mac OS X, you can just declare them as readonly properties. For example:

//MyClass.h
@interface MyClass : NSObject {
    NSView *myView;
}
@property (readonly) IBOutlet NSView *myView;
@end

//MyClass.m
@implementation MyClass
@synthesize myView;
@end
This function calculates the intermediate MVN correlation needed to generate a variable described by a discrete marginal distribution and associated finite support. This includes ordinal (r ≥ 2 categories) variables, or variables that are treated as ordinal (e.g. count variables in the Barbiero & Ferrari, 2015 method used in corrvar2, doi: 10.1002/asmb.2072). The function is a modification of Barbiero & Ferrari's ordcont function in the GenOrd package. It works by setting the intermediate MVN correlation equal to the target correlation and updating each intermediate pairwise correlation until the final pairwise correlation is within epsilon of the target correlation or the maximum number of iterations has been reached. This function uses norm_ord to calculate the ordinal correlation obtained from discretizing the normal variables generated from the intermediate correlation matrix. The ordcont function has been modified in the following ways: 1) the initial correlation check has been removed because this is done within the simulation functions; 2) the final positive-definite check has been removed; 3) the intermediate correlation update function was changed to accommodate more situations. This function would not ordinarily be called by the user. Note that this may return a matrix that is NOT positive-definite; this is corrected for in the simulation functions using the method of Higham (2002).

Arguments:

- a list of length equal to the number of variables; the i-th element is a vector of the cumulative probabilities defining the marginal distribution of the i-th variable; if the variable can take r values, the vector will contain r - 1 probabilities (the r-th is assumed to be 1)
- the target correlation matrix
- a list of length equal to the number of variables; the i-th element is a vector containing the r ordered support values; if not provided (i.e. support = list()), the default is for the i-th element to be the vector 1, ..., r
- the maximum acceptable error between the final and target pairwise correlations (default = 0.001); smaller values take more time
- the maximum number of iterations to use (default = 1000) to find the intermediate correlation; the correction loop stops when either the iteration number passes this maximum or the error is within the acceptable bound
- if TRUE, Spearman's correlations are used (and support is not required); if FALSE (default), Pearson's correlations are used

Value: A list with the following components:

- SigmaC: the intermediate MVN correlation matrix
- rho0: the calculated final correlation matrix generated from SigmaC
- rho: the target final correlation matrix
- niter: a matrix containing the number of iterations required for each variable pair
- maxerr: the maximum final error between the final and target correlation matrices

References:

Barbiero A, Ferrari PA (2015). Simulation of correlated Poisson variables. Applied Stochastic Models in Business and Industry, 31:669-80. doi: 10.1002/asmb.2072.

Barbiero A, Ferrari PA (2015). GenOrd: Simulation of Discrete Random Variables with Given Correlation Matrix and Marginal Distributions. R package version 1.4.0.

Ferrari PA, Barbiero A (2012). Simulating ordinal data. Multivariate Behavioral Research, 47(4):566-589. doi: 10.1080/00273171.2012.692630.
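The correction loop described above can be illustrated with a simulation-based sketch in Python. This mimics the idea only: the package's norm_ord computes the ordinal correlation analytically from the bivariate normal distribution, whereas this sketch estimates it by Monte Carlo, and the multiplicative update rule is one simple choice among several:

```python
import numpy as np

rng = np.random.default_rng(0)

def ordinal_corr(r, cum1, cum2, n=100_000):
    """Correlation between two ordinal variables obtained by discretizing a
    bivariate normal with correlation r at the given cumulative probabilities.
    (Monte Carlo stand-in for an exact norm_ord-style calculation.)"""
    z1 = rng.standard_normal(n)
    z2 = r * z1 + np.sqrt(1 - r**2) * rng.standard_normal(n)
    x1 = np.searchsorted(np.quantile(z1, cum1), z1)   # category labels 0..r-1
    x2 = np.searchsorted(np.quantile(z2, cum2), z2)
    return np.corrcoef(x1, x2)[0, 1]

def intermediate_corr(target, cum1, cum2, epsilon=0.01, maxit=50):
    """Start the intermediate correlation at the target and update it each
    iteration until the attained ordinal correlation is within epsilon of
    the target or maxit is reached."""
    r = target
    for _ in range(maxit):
        attained = ordinal_corr(r, cum1, cum2)
        if abs(attained - target) < epsilon:
            break
        r = np.clip(r * target / attained, -0.99, 0.99)  # simple multiplicative update
    return r

# Two median-split (binary) variables with target correlation 0.5:
r = intermediate_corr(0.5, [0.5], [0.5])
assert r > 0.5  # discretization attenuates correlation, so r must exceed the target
```

Because discretization attenuates correlation, the intermediate MVN correlation always ends up larger in magnitude than the target, which is exactly why a correction loop is needed.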
Ever since Ad Noiseam started to sell digital versions of the label's releases directly from this website, one question kept on being asked again and again: when would we start offering lossless versions, as opposed to mp3 (which is a lossy format, meaning that the sound quality is lower than what you hear on record and CD)? The answer was always that we couldn't do so because it would require some heavy re-programming of the online store, as well as a humongous amount of server space, which doesn't come cheap. But as demand for lossless versions of the digital releases grew, we decided to take the plunge: starting today, you will be able to purchase digital releases in FLAC format. These files will be available alongside the physical formats and mp3s right from the Ad Noiseam online store. People who have already bought an item as mp3 will also be able to upgrade to FLAC by paying only the price difference between the two formats. We go into more detail below, but one thing before going any further: FLAC is restricted to only a very limited number of releases at the moment, as we still do not have enough server space to host the whole catalog as lossless files. It might come in the future, but a whole server move will take some time and be costly. If there's enough interest in FLAC files, though, it will come sooner rather than later. And now for a few more details: the FLAC files are available on the same ordering page as the mp3s, meaning the one which lists the item format as "digital" (rather than "CD" or "vinyl"). You are then presented with a drop-down menu showing you the available file formats (320kbps mp3 only so far for most of the releases, and FLAC for a few). You can then select the format you prefer, and order it as normal (note that FLAC downloads are a couple of euros more expensive than mp3 ones).
Your files are available for download on the "Your Account" page, once you have paid for your order (via Paypal, bank transfer or whichever method you have chosen). This all applies to the first time you purchase an album. However, we don't think that it would be fair for people who have already bought the mp3 release to pay full price for the FLAC one. Therefore, once you have logged in, the site will detect whether or not you already have the mp3 version. If you do, the drop-down menu shows you an extra option, listed as an "upgrade from mp3 to FLAC": selecting it allows you to get the full FLAC format and pay only the price difference between the two formats. In the end, you therefore pay only the price of the FLAC file, and nothing more. Finally, for the people who have read all this but have no idea what this FLAC thing is: unlike mp3, which compresses audio data by removing part of the information (therefore ending up with lower quality), FLAC is a "lossless" file format, which compresses the audio without decreasing its quality (like a zip archive of a text file, for example). It therefore enables people to save on disk space while keeping the same audio quality as a CD. Most mainstream mp3 players and audio software will not play FLAC files out of the box, but it is extremely easy to convert them to WAV, which will in turn be played by all applications and hardware. A list of programs performing this conversion can be found here and there. Here it is, folks. You've asked for it, and we're bringing this change to the store. Enjoy, please don't complain too much about the limited number of releases available as FLAC (we know this is not enough yet), but let us know in the comments below what you think of this all.
An app can combine a multitude of technologies to make a seamless, fully functional desktop interface, and the same can be said for a browser. But how do you know which technology is best suited for your particular needs? With a little help from the experts, we've rounded up some of the best tools for combining different web technologies to create a seamless experience, and compiled our picks into a handy guide with links to our expert tips.

The Google Chrome web browser and extensions

We've used Chrome and other browsers before, but the extensions that Google Chrome provides are just too powerful to ignore. It's no secret that we use a lot of extensions in our day-to-day computing, and there's a very simple way to get them to work with our web apps: use the Chrome browser's extension manager. When you install Chrome, you'll find all the popular extensions that can help you add new features to your app. Extensions are also available from third-party developers who offer enhancements to the user experience, so you can always opt for the best one. You can find a list of the extensions available for Chrome here.

Firefox extensions

For web developers, it's not often that we see the tools for adding extensions to Firefox. If you're new to the industry, however, you can find a lot to love with Firefox extensions. The first thing you'll notice is that there are many options for adding new features and extensions to your apps, from a quick-and-dirty Chrome extension to a full-blown Firefox extension. We recommend that you only install extensions you need and are absolutely committed to using, because the more extensions you add, the more complicated your web app will get. However, you don't need to use these extensions constantly if you mainly use a mobile browser like Firefox.
Most of the time, you should only install the extensions you really need, and those are probably extensions that aren't available in other browsers.

Extensions for Windows and Linux

Windows users have been around a long time, and Windows extensions are a huge part of web development, with Windows Phone apps often being the most popular. Extensions can add a lot more functionality to your web apps than the extensions available on Mac or iOS, because Windows Phone is built around a graphical user interface that is easy and intuitive to use. You're not required to use the latest version of Adobe Flash or a third-party Flash plug-in to use a Windows-based browser, so it's usually a breeze to add a Chrome or Firefox extension to your browser. The Chrome extensions you can use are typically available in the Developer Tools or the Windows Tools menu. If your web browser supports extensions, you're probably familiar with the tools available for adding them.

Firefox, Opera, Internet Explorer, and WebKit extensions on Windows 8, Windows 10, and macOS

For developers looking to make the most of their existing development tools, it is important to understand that Windows users can't just grab a Mac, Linux, or Android version of an extension, because those extensions only work on their own platforms rather than on Windows 8.x and 10.x. In order to make extensions work on these platforms, you need Windows 8 or higher and the Windows Runtime. You'll need to download and install the Windows 10 Runtime, which can be installed via Windows Update. The easiest way to install the Runtime is through the Microsoft Download Center. You will need to be a member of the Microsoft Developer Program to do this, as this is required to install a new Windows version of your app on a new device.

Windows 8: Click the Start button on your computer, then type "programs" in the search box.
You should see a Windows Installer window appear. Click the "Next" button to select "Install a Microsoft Runtime for this computer". The Windows Installer should appear. In the "Programs" window, click "Add a program" and then select the "Windows Runtime" tab. In the new window, expand "Windows 10" and select the option to "Add the Windows Install Package for Windows". If you don't see this window, select the "Uninstall" button and click OK. You now need to choose the Windows Store app from the list of apps that appears. Once you've selected the Windows install package for your computer, select "Add this program" to install your extension.

Windows 10: Click on the Start menu and select "Control Panel". In the search field, type "Programming and Security" and press Enter. From the Programs list, click on "Windows Store" to open the "Microsoft Runtime for your operating system". If your app is available on the Windows App Store, you may want to choose that for your app in order for it to be installed.

3rd-party Chrome extensions

If you need help
How do I create a stand-alone application with Access?

"Stand alone" application: How can I make an Access file look like a "stand alone" application, without Access menus, options, etc., just the Switchboard?

Accessing my inventory remotely: I have a simple Inventory application (developed using Access 2000) which is stand alone. That is, changes to the inventory can be made only on a single PC. Now I need to make it available from different geographical locations, i.e. via the internet. However, it will be accessed from one location/one user at a time. My inventory application requires Access 2000 to run. Is there any way that I can upload Access 2000, my inventory application, and associated folders to a remote 24/7 server and access it at will? What is the simplest way to do this?

Producing a stand-alone Access program: Is it possible to compile an Access 2010 program into a stand-alone which could be run on a computer without Access?

How to display JPG images: I have an application which uses many thousands of part numbers. I also have a folder with many thousands of photos of those parts, all in JPG format. How can I display those from my application? It need not be on a form; just stand-alone images would be fine. In the past I have used BMP images in Access, but it is not practical to convert all the images to BMP.

MS Access database application: Can I create a database application using only MS Access? If yes, please point me to a tutorial on how I can achieve this. I would like my application to have buttons, password fields for entering the application, etc. If this is not possible using only MS Access, please point me to a tutorial for achieving this using C++. Tutorials of around 20 pages would be ideal.

Your network access was interrupted: Stand-alone DB. Getting error message: "your network access was interrupted". It is a stand-alone DB.
We are on a network, but this database does not go out and retrieve any information.

Data Access Objects: The core of Microsoft Access, and an important part of Visual Basic (the stand-alone application development environment), is the Microsoft Jet database engine. The relational DBMS functionality of Access comes from the Jet engine; Access itself merely provides a convenient interface to the database engine.

VB.NET stand-alone login: I'm still new to VB.NET. I want to develop a stand-alone login program using VB.NET with a Microsoft Access database.

Query through multiple tables: My project is set up so that I have several tables for different TV sizes (i.e. 20 inch, 22 inch, ..., 60 inch). Each contains roughly the same types of info: each has a TV model no. and a product number, with a few containing specific info that only that TV would have, like a special part for the TV. So what I want to do is relate these many TV tables to the STAND table that they would be using. I have about 10 different tables of TV sizes and 1 stand list. The TV tables are related to the stand list by TV model. Sounds easy enough, right? But my problem is this: how do I make a query where I can just put in a TV model and it will search through the tables and output the necessary info? Let's say I want TV model, related stand, and product number output.

Basics for building Access 2007 Runtime-based solutions: Find out how to prepare your application for use with the Microsoft Office Access 2007 Runtime. If you are creating an application that runs in an Access Runtime environment, you must carefully consider how to provide an interface for the user. You must also consider the fact that some users may own the correct version of Access and run the application in a full Access environment. Take care to test your application under both environments to make sure it properly balances usability in the Access Runtime environment with code security in a full Access environment.
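The multi-table TV question above is essentially a UNION ALL plus a join. As a rough sketch (using Python with an in-memory SQLite database rather than Access, and with made-up table and column names), the idea looks like this:

```python
import sqlite3

# Hypothetical schema mirroring the question: several per-size TV tables
# with the same columns, plus one stand list related by TV model.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tv_20 (model TEXT, product_no TEXT);
CREATE TABLE tv_22 (model TEXT, product_no TEXT);
CREATE TABLE stands (model TEXT, stand TEXT);
INSERT INTO tv_20 VALUES ('A100', 'P-1');
INSERT INTO tv_22 VALUES ('B200', 'P-2');
INSERT INTO stands VALUES ('A100', 'Stand-S'), ('B200', 'Stand-L');
""")

# UNION ALL merges the per-size tables into one virtual table, so a single
# parameterised query can search every size at once and join to the stands.
query = """
SELECT t.model, s.stand, t.product_no
FROM (SELECT model, product_no FROM tv_20
      UNION ALL
      SELECT model, product_no FROM tv_22) AS t
JOIN stands AS s ON s.model = t.model
WHERE t.model = ?
"""
print(conn.execute(query, ("B200",)).fetchone())  # -> ('B200', 'Stand-L', 'P-2')
```

In Access the same UNION ALL query can be saved once and then filtered by the model parameter, instead of querying each size table separately.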
After you create and test your application, you can use the Package Solution Wizard that is included in the Access 2007 Developer Extensions to create a final version that you can deploy to end users.

Access 2007 error 3197: Firstly, I apologise if this is a stupid or repeated question, but I cannot find the same question or a solution anywhere. I have been running Access 2007 on a stand-alone, internet-connected PC running XP. All was well until a couple of days ago. I cannot open any databases or create new ones. I get the same error saying another user is also accessing the DB, and the error code is 3197, even when trying to create a new one. There is no other user! I have uninstalled and reinstalled Office 2007 and it makes no difference. I have used the same Office disk and installed Office onto a stand-alone laptop running Vista. I copied all my database files to CD and loaded them onto the laptop. They are all fine! So the DBs are not corrupted, Office is not corrupted, and the PCs are not networked at all. Any ideas, please? Thank you all very much for reading and hopefully solving.

Stored procedures in MS Access 2007: I am using MS Access 2007 and VS 2010. I want to develop an application with MS Access as the back end, but I want to create stored procedures in MS Access. Is it possible to create them in MS Access, or do I have to hard-code the necessary queries in my application? If possible, please explain how to create and maintain them in MS Access.

Combining applications: Over a period of time I developed several stand-alone applications. I would now like to combine them into one overall application. Can I set up the main form to select the application you need to work with, and set the relationships to all the tables in all applications? For instance, in app A, request a report from app B without going back to the main form and running app B. If this can be done, is there a limit on the number of tables that an app can have?

Can I make my application portable?
I have created several successful Access applications. I now want to create an application which can be maintained on more than one computer, some of which may not have MS Office with Access installed. Can I put the entire application and data files on a CD which could then be maintained on different computers? I guess that means it would all be in a .exe file, and maybe use CD-RW discs.

Improving an existing application: I was asked to improve an existing old Access application. I was given no documentation at all. All I have is the icon which starts the application, which runs with lots of flaws. Once in the application, I don't have the options nor the menus to see and modify tables, queries, forms, etc. How can anyone get access to all this on a running application?

Switch to another application running in the background: This was not as simple as I had hoped! Does anyone know how to bring an application to the foreground from within Access? For example, Access is the foreground application (full screen) and Excel is running in the background (or minimized). What is desired is a button on a form that will make the background application (Excel in this case) come to the foreground. The application in the background has built-in functions to activate any other application that is running in the background. Sure hope I haven't missed it, but a search of Access help, forums, etc. has not shown me the answer so far!

Windows Forms and console application together: Could I create a Windows Forms application that would have a console application form? I'll use an example: say I was creating an application that would need some text written in the command line. I create the Windows Forms application because I want a GUI for the start page. Then, when I click a button, it should open a console program that is inside the Windows Forms application (not a separate project).
Creating a Microsoft Access application: Once you have worked through the stages of planning a Microsoft Access application, you will move on to creating the application in Microsoft Access. If you have correctly structured the database design, the application design will be much easier to implement. Having spoken to your database users, you will have their ideas for what is required in the database application, and you can begin work on the interface that will be used.

Create and customize a web app in Access 2013: Access 2013 features a new application model that enables subject matter experts to quickly create web-based applications. Included with Access is a set of templates that you can use to jump-start creating your application.

Is Access the right application for this? I'm looking for an application that will allow me to create a database which lets my users make selections and, based on those selections, spits out a Word document. The selections are tied to various pieces of text and possibly tables that would populate the Word document. It sounds simple, but the document I'm trying to generate can get complicated. Can Access accomplish this, and if so, do I need Access 2010 or could I do this with Access 2007? If not, is there another application which could do this?
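The last question above, turning user selections into a generated document, can be sketched outside Access too. A minimal illustration in Python (the dictionary, field names, and clause text are all hypothetical, and this produces plain text rather than an actual Word document):

```python
from string import Template

# Hypothetical mapping from a user's selection to a block of boilerplate
# text; in the real application this would live in a table.
COVERAGE_TEXT = {
    "basic": "This agreement covers parts only.",
    "full": "This agreement covers parts and labour.",
}

# The document template with placeholder fields to fill in.
doc_template = Template("Dear $customer,\n\n$coverage_clause\n")

def render_document(customer: str, coverage: str) -> str:
    """Fill the template from the user's selections."""
    return doc_template.substitute(
        customer=customer,
        coverage_clause=COVERAGE_TEXT[coverage],
    )

print(render_document("A. Smith", "full"))
```

The same selection-to-clause lookup is what an Access solution (or Word mail-merge automation) would implement; the complexity the asker worries about lives entirely in how many clauses and tables feed the template.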
Errors in YAML file are no longer reported

Describe the bug: Since version 0.6, errors in the file are no longer reported. Previously, parsing errors were shown in the preview window; now the preview is empty.

How to reproduce: use the following file:

asyncapi: 2.6.0
info:
  title: Message
  version: "0.1.0"

Output in previous version: (screenshot). Output in new version: (screenshot).

Expected behavior: errors should be reported.

Hi @SzymonMalczak, thanks for taking the time to report this error; it was reported upstream already.

Any update on this? Is there an upstream link? (I cannot find one.) This feature feels pretty important. Thanks!

@ivangsa The current "@asyncapi/react-component": "^2.2.5" has support for showing errors in the document. Issues: https://github.com/asyncapi/asyncapi-react/issues/1048 and https://github.com/asyncapi/asyncapi-react/issues/874. The PR which introduces the changes: https://github.com/asyncapi/asyncapi-react/pull/1068. But simply upgrading the @asyncapi/react-component dependency in the extension doesn't cause any change in the behaviour of the extension. It still shows a blank page on error (I tried this while testing locally). Am I missing something here which causes this? Because I think upgrading the dependency should fix the issue of the blank screen.

@catosaurusrex2003 Can you have a look at this code: https://github.com/asyncapi/vs-asyncapi-preview/blob/master/src/PreviewWebPanel.ts#L142 ? Maybe now we need to instantiate this with different parameters.

@ivangsa I found what the issue is: only schema.options needs to be changed to schema.requestOptions. But even doing this won't make errors render. The problem is present in https://github.com/asyncapi/asyncapi-react/blob/master/library/src/containers/AsyncApi/Standalone.tsx. AsyncApiComponent inside Standalone.tsx relies on the error props being passed down to it from AsyncApiComponent inside AsyncApi.tsx. (Hence why errors are rendered in the playground and not in the extension,
because we are directly using Standalone without any parent.) This might have been my short-sightedness while doing the PR. So the solution is to make SpecificationHelpers.retrieveParsedSpec support returning errors, so that <Error /> can be conditionally rendered even without the need for the props. (This will make it work in our extension.) Please correct me if I am wrong or missing the bigger picture.

@catosaurusrex2003 You know more about this than me; propose what you think is the best solution. Can we pass some kind of object to AsyncApiStandalone.render that will hold the errors in case there are any?

I don't think that will be possible, because it would require us to parse the document ourselves so that we can pass errors to AsyncApiStandalone.render, which defeats the purpose of having AsyncApiStandalone in the first place. So making SpecificationHelpers.retrieveParsedSpec support returning errors, so that <Error /> can be conditionally rendered without the props, is the only way to go as of now. Let me work on modifying AsyncApiStandalone, test both the playground and this extension locally, and raise a PR for it. I will get back to you, @ivangsa. @AceTheCreator, you are more well versed with @asyncapi/react-component; please let me know if I am on the right track.

@ivangsa I needed to make a lot of changes in @asyncapi/react-component, and it is now working. I will create a PR in that repo; hopefully it will get merged and start working here after we bump the version.

Update: The issue has been fixed in v0.6.5. There is still a minor CSS issue with the gray text. I'll investigate further to pinpoint the exact problem and work on a fix.
Importing and enabling an SSL certificate for Microsoft Exchange has evolved from earlier versions of Microsoft Exchange. Previously you would have to use the IIS MMC console to initiate the certificate request. If your mail server is a hosted Exchange solution then this article will not be necessary, as this is taken care of on the hosting end. In this article I will provide three easy-to-follow steps to complete this task.

Request the Exchange 2007 certificate

Requesting the certificate requires a lengthy PowerShell command; one incorrect character or typo may prompt an irritating error. The best way to do this is to use the Exchange certificate request generating tool found at digicert.com/easy-csr/exchange2007.htm. Here is an example of the command you should receive:

New-ExchangeCertificate -GenerateRequest -Path c:\mail_mybusinessdomain_com.csr -KeySize 2048 -SubjectName "c=GB, s=london, l=london, o=my business, cn=mail.mybusinessdomain.com" -DomainName autodiscover.mybusinessdomain.com, mybusinessdomain.com -PrivateKeyExportable $True

In the example shown above the common name (CN) will be mail.mybusinessdomain.com; autodiscover.mybusinessdomain.com and mybusinessdomain.com will also be valid alternative subject names under the certificate once issued. We use multi-named certificates to meet the autodiscover best practices for Exchange 2007, but that is another article on its own. Now copy the shell command that the Exchange certificate request generator produced and paste it into a PowerShell command prompt on your Exchange server (if you are using Server 2008, remember to right-click > Run as administrator). Once this is complete you can locate the file in the root of your C: drive (in our example the file name will be c:\mail_mybusinessdomain_com.csr). Open this file in Notepad and copy the whole content of the encoded text, including the begin and end marker lines.
Now go ahead and log in to the control panel of the certificate authority you purchased the SSL certificate from (e.g. GoDaddy, VeriSign, etc.) and paste that encoded text when instructed by your CA.

Import the Exchange 2007 certificate

Once the certificate has been issued by your CA, download the "certificate.cer" file to the root of the C: drive on your Exchange server. Open a PowerShell prompt on your Exchange server and type the following command:

Import-ExchangeCertificate -Path "c:\certificate.cer"

Be sure to copy the thumbprint of the certificate, as you will need it in the next step. Once imported, we need to enable the use of the certificate. Next type:

Enable-ExchangeCertificate -Thumbprint [thumbprint] -Services "SMTP, IIS"

(Do not include the [ ] brackets around the thumbprint.) In this case the new certificate would be enabled for OWA, autodiscover, and SMTP security, which in most cases is sufficient. You can also use the following service identifiers if you wish to secure other services such as POP or IMAP: SMTP, POP, IMAP, UM, and IIS (use the same command above and separate them with commas).

That's it, all done. Now, to test, visit the common name you used to register your certificate (using https://) to make sure that it's working as it should. If you use a hosted Exchange 2007 solution, you may have to create a CNAME record in your DNS; this information can be obtained from your hosted Exchange provider.
The World Wide Web (WWW) has made it easy for us to access resources on the internet: all you need to do is type in a domain name and you can access a website immediately. The internet is an invention that has had an undeniable impact on our lives. But before the modern-day internet, there was a similar system called Usenet, a distributed discussion system accessible through computers that enabled people around the globe to exchange digital resources.

Usenet was invented in 1980 by two graduate students at the University of North Carolina at Chapel Hill and Duke University. It made long-distance communication via computers possible before the internet became commonly accessible, and it popularized many well-known concepts like "FAQ", "spam", and "sockpuppet". Usenet users can post messages or articles to a virtual repository known as a newsgroup. Newsgroups are functionally similar to the discussion forums you can access on the internet today, but technically different. News servers are the software that manages the routing and storage of messages in a newsgroup, and a newsreader is software that enables Usenet users to read the content of newsgroups.

Before the advent of the World Wide Web, newsgroups were one of the most popular online services. They have largely lost ground to web discussion forums today but still remain in existence: there are over 100,000 newsgroups that you can interact with currently. Oh, and just in case you're interested, we've got a best Usenet providers guide should you want to dive in and explore.

Retention for newsgroups

Whenever someone posts an article to a newsgroup, the news server keeps the article online for a specific period before deleting it to free up storage space for other articles. This period is known as retention, and it's one of the most important things to look out for when selecting a news server.
Different news servers can have different retention periods for the same newsgroup. Some may keep articles up for a few weeks, while some can keep them up for many years. The longer the retention period of your news server, the wider the collection of articles you'll have access to.

History of newsgroup retention

As the Usenet network grew in the early 1980s, one of the major problems news server providers faced was storage. Keeping articles online means keeping servers up and running, which incurs significant costs: you have to pay for the space to house the servers, the energy to keep them powered and cooled, and regular maintenance when anything breaks. To tackle the data storage issue, Usenet providers began limiting how long they would host data. Another option would have been keeping only the most read or downloaded articles, but that could complicate things further, as the most recent posts will always have the fewest downloads and views. Limiting storage times seemed the fairer way, and Usenet providers went with it. Usenet providers also have limits on the size of articles they will accept. The larger the limit, the better, especially for people who want to use Usenet to upload or download large files.

Binary vs text retention

There are two main types of files: text and binary. The former contain only textual data, while the latter contain arbitrary binary data; examples of binary files are photos, audio, and video. By design, binary files are much larger than text files, meaning they take up more storage space. Most Usenet providers offer different retention periods for binary and text files, with retention for text being much longer because it's less costly to store.
Sometimes providers outsource the storage of binary files but store text files in their own data center, because it's more cost-efficient. However, some providers offer the same retention periods for both types of files.

Spooling is a technique that allows data to be stored indefinitely, instead of being deleted after a certain amount of time as in traditional retention. It involves sending data to intermediate storage before it is requested: posts are stored on intermediate servers, and whenever someone accesses an article stored on one of those servers, it is retrieved and shown to them. Spooling has enabled Usenet providers to offer retention of thousands of days; it's common to find providers offering over 5,000 days of retention thanks to this technique.

Retention is one of the most important factors when choosing a Usenet provider. The longer the retention they offer, the broader the selection of content you can access in newsgroups when connecting through that provider. We've provided some examples of Usenet providers with long retention periods, and there are many more you can choose from.

Stefan has always been a lover of tech. He graduated with an MSc in geological engineering but soon discovered he had a knack for writing instead, so he decided to combine his newfound and life-long passions to become a technology writer. As a freelance content writer, Stefan can break down complex technological topics, making them easily digestible for a lay audience.
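As a rough illustration of how retention works, here is a minimal sketch in Python; the retention figures and the cutoff logic are invented for the example and are not any provider's actual policy:

```python
from datetime import date, timedelta

# Hypothetical per-type retention policy, in days. Binary files are
# costlier to store, so they typically expire sooner than text posts.
RETENTION_DAYS = {"text": 5000, "binary": 4000}

def is_still_available(posted: date, kind: str, today: date) -> bool:
    """Return True if an article posted on `posted` is still within
    the retention window for its file type."""
    return (today - posted) <= timedelta(days=RETENTION_DAYS[kind])

today = date(2022, 5, 27)
print(is_still_available(date(2012, 1, 1), "text", today))    # roughly 3800 days old -> True
print(is_still_available(date(2010, 1, 1), "binary", today))  # roughly 4500 days old -> False
```

The key point the example captures is that the same article can be available on one server and gone from another, purely because their retention windows differ.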
Summary: “Our daily bread” isn’t a prayer just for boring old bread. Instead, “Our daily bread” is a prayer for the basic physical needs of life.

The Lord’s Prayer is but 38 words (in its original form in Luke’s gospel) that change the very way we understand God, ourselves, and the world. It’s the richest single source in the entire Bible of information on how to pray. It’s not the only place where prayer is taught inside the pages of the Bible, but it is the richest place to go for teaching on prayer, because it’s a model of true prayer for everyone. And I can say to you now without any reservations at all that all of the answers to all of your problems, when they’re rightly understood, are here. Let me say that again: the answers to all of your problems, if you rightly understand them, are in the Lord’s Prayer.

Why This Series?

1) I want to help you develop a powerful prayer life. I’m calling on all of us who believe in Christ to turn up the volume of our prayer life. To turn up the intensity of our prayer life. 2) I want to encourage you by showing you that God hears prayer.

Now Jesus was praying in a certain place, and when he finished, one of his disciples said to him, “Lord, teach us to pray, as John taught his disciples.” 2 And he said to them, “When you pray, say: “Father, hallowed be your name. Your kingdom come. 3 Give us each day our daily bread, 4 and forgive us our sins, for we ourselves forgive everyone who is indebted to us. And lead us not into temptation.” (Luke 11:1-4)

Introduction to Today’s Sermon

This is the third week for us to look at the Lord’s Prayer, and today we enter the second half of it. There are five different “asks” in this prayer, and today we’re focusing on praying for what we need. To many people, this is the whole point of prayer. The whole point of prayer is how you get God to give you things, right?
It can be the prayer of a lost hiker who can’t find his way out of the dense forest, the frightened airplane passenger, or the mother who hovers over her sick child. Sometimes your prayer is nothing more than two words: “Oh, God!” We often ask ourselves and others, “How do I get God to give me what I need?”

1. Why Pray?

It was the former Soviet Union leader, Vladimir Lenin, who wrote, “Electricity will replace God. The peasants should pray to it; in any case they will feel its effect long before they feel any effect from on high.” Some doubt the power of prayer altogether, while others wonder, “If God cures one person’s cold because they prayed, why didn’t He prevent Auschwitz?” Let’s tackle one common question to help you pray.

Why Pray if God Already Knows?

We don’t pray so God can learn our needs. Jesus tells us: “your Father knows what you need before you ask him” (Matthew 6:8). In the second half of the Lord’s Prayer, Jesus teaches us to pray for three things: 1) we are to pray for the food we need; 2) we are to pray for grace to cover our sins; 3) we are to pray for deliverance from temptation. And God knows we need each of these three things before we pray. God isn’t ignorant. He doesn’t need to go to school. He knows what we need before we ask. So why should we pray?

Let’s compare prayer to rubbing Aladdin’s lamp for a moment. Suppose prayer worked like Aladdin’s lamp and you placed the lamp in the hands of a fourth grader, where God had to do whatever she asked. God says, “I’ll grant you three wishes, no matter how stupid or smart your request is.” Some of you would be married to fifteen people, while others would have killed everyone you work with. How comfortable would you be with this scenario? You say, “But I’m smarter than a fourth grader. God could trust me with Aladdin’s lamp.” Really? Listen to this: between 2000 and 2004, more than four million live animals were brought weekly into Miami’s International Airport.
And those were only the ones that were individually counted; an additional twenty-one tons of animals were never counted at all. These invasive animals did $120 billion worth of environmental damage last year alone. Among the animals brought into the United States is the Burmese python. Some of these snakes are released into the wild, while others escape. Either way, because the Burmese python is a nonnative species in Florida, it can find plenty to eat, but few things eat pythons as of yet. They are aggressive predators that can kill even deer and alligators, and they are eating raccoons and opossums as well as bobcats. In 2012, a Burmese python was discovered that measured 17 feet, 7 inches; this mother python was carrying 87 eggs. People have purchased Burmese pythons as pets only to release them because they fear the snake will harm their other pets. But it is a disaster for the Florida environment.
SSRS monthly/daily report: I have a variable in my report which holds two possible values, 'monthly' and 'daily'. How can I use this variable (let's call it @reportModel)? I think it should go somewhere in the GROUP BY clause, but I don't know what it should look like.

DECLARE @reportModel VARCHAR(10)
SET @reportModel = 'monthly'

SELECT P.product, SUM(O.price * O.quantity), O.orderDate
FROM Products AS P
INNER JOIN Orders AS O ON P.ID = O.ID

And what now?

I would put it in the Group On expression of the table or chart rather than doing it in the query:

=IIF(Parameters!reportModel.Value = "monthly", Month(Fields!orderDate.Value), Fields!orderDate.Value)

If you'd rather do it in the query and don't want to wait for DBAs to get around to deploying a stored procedure (not to mention maintaining it whenever there's a change), you could use your parameter in a CASE like:

SELECT P.product,
       SUM(O.price * O.quantity),
       CASE WHEN @reportModel = 'monthly'
            THEN CAST(MONTH(O.orderDate) AS VARCHAR(12))
            ELSE CAST(O.orderDate AS VARCHAR(12))
       END AS DT
FROM Products AS P
INNER JOIN Orders AS O ON P.ID = O.ID
GROUP BY P.product,
         CASE WHEN @reportModel = 'monthly'
              THEN CAST(MONTH(O.orderDate) AS VARCHAR(12))
              ELSE CAST(O.orderDate AS VARCHAR(12))
         END

This way you don't have to maintain two separate reports.

This would be potentially very slow. Imagine thousands of records per day; this would bring all that data back from the database.

@DavidG There's not that much of a difference in time between SQL Server aggregating the data and Report Server doing it. I have dozens of reports that aggregate thousands (some millions) of records.

I'm sorry, but there's a huge difference between doing it in the database versus in a client app.

@HannoverFist I have seen many examples where we were having report performance issues, and as soon as we moved the data processing closer to the source (SQL Server instead of SSRS), a major improvement was seen immediately.
Also, why do the extra work anyway? It doesn't make any sense, mate. How about a stored procedure to handle this, something like:

CREATE PROCEDURE rpt_GetData
    @reportModel VARCHAR(10)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Sql NVARCHAR(MAX);

    IF (@reportModel = 'daily')
    BEGIN
        SET @Sql = N'
            SELECT P.product
                 , SUM(O.price * O.quantity) AS Total
                 , O.orderDate
            FROM Products AS P
            INNER JOIN Orders AS O ON P.ID = O.ID
            GROUP BY P.product, O.orderDate'
        EXEC sp_executesql @Sql
    END
    ELSE IF (@reportModel = 'monthly')
    BEGIN
        SET @Sql = N'
            SELECT P.product
                 , SUM(O.price * O.quantity) AS Total
                 , MONTH(O.orderDate) AS [Month]
            FROM Products AS P
            INNER JOIN Orders AS O ON P.ID = O.ID
            GROUP BY P.product, MONTH(O.orderDate)'
        EXEC sp_executesql @Sql
    END
END

I'd rather throw an exception instead of a PRINT. It's also worth mentioning that the execution plan for this will potentially be horrible. Oh, and this is also returning different data formats: a DATETIME column versus an INT.

@DavidG You don't need an exception/error here; it's only invalid output, so an informational message is enough, I think. The different output format can also be handled in SSRS very easily; I don't see any reason why we should make it any more complex. The good thing about this proc is that all the heavy data processing is done closer to the data source, only the required data is pulled across, and presentation (format) can be handled in the reporting application, as it should be. :)

Then the PRINT is just confusing matters. I know that SSRS can handle the differences, but it's very awkward. The comment about the execution plan still stands, though.

@DavidG True, agreed about the execution plan issue; I have fixed it now anyway. :)

Hmm yes, though I hate dynamic SQL! :) By the way, my preferred solution would be to have two reports, or to let the report call a different procedure based on the parameter.

Adding this as an answer because I mentioned it in the comments on @M.Ali's answer.
So I would suggest you change your thinking slightly, with one of these options.

Two reports: simply make one report for daily and another for monthly. Now you have no worries with complex SQL, etc.

Or make two stored procedures, one with the daily GROUP BY and one with the monthly, then in your SSRS dataset create an expression for your SQL that chooses the procedure based on the parameter:

=IIf(Parameters!reportModel.Value = "monthly", "GetMonthlyData", "GetDailyData")

How about something simple like this:

SELECT P.product
     , Total = SUM(O.price * O.quantity)
     , DT = CAST(O.orderDate AS VARCHAR(12))
FROM Products AS P
INNER JOIN Orders AS O ON P.ID = O.ID
WHERE @reportModel = 'daily'
GROUP BY P.product, O.orderDate
UNION ALL
SELECT P.product
     , Total = SUM(O.price * O.quantity)
     , DT = CAST(MONTH(O.orderDate) AS VARCHAR(12))
FROM Products AS P
INNER JOIN Orders AS O ON P.ID = O.ID
WHERE @reportModel = 'monthly'
GROUP BY P.product, MONTH(O.orderDate)
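The daily-versus-monthly grouping switch that this thread keeps reimplementing in SQL can be illustrated in a few lines of Python as well; the order rows below are made up, and this is only a sketch of the grouping logic, not of SSRS itself:

```python
from collections import defaultdict
from datetime import date

# Hypothetical (product, order_date, price, quantity) rows.
orders = [
    ("TV", date(2022, 5, 1), 100.0, 2),
    ("TV", date(2022, 5, 1), 100.0, 1),
    ("TV", date(2022, 5, 2), 100.0, 1),
]

def totals(rows, mode: str) -> dict:
    """Group order totals by (product, day) or (product, month),
    mirroring the CASE/IIF expressions in the answers above."""
    out = defaultdict(float)
    for product, d, price, qty in rows:
        key = (product, d.month) if mode == "monthly" else (product, d)
        out[key] += price * qty
    return dict(out)

print(totals(orders, "daily"))    # one total per (product, day)
print(totals(orders, "monthly"))  # -> {('TV', 5): 400.0}
```

The performance debate in the thread is about where this loop runs: pushing the grouping into SQL Server means only the aggregated keys cross the wire, while grouping in the report means every raw row does.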
The procedures described in this tutorial are to be considered "as is", without any warranty. There is no relationship between the original distro Slackware (great distro!) and the ideas contained in this article.

I think that the usbboot.img approach, as a system to make a bootable USB-stick, should be superseded: it is slow to boot and uses a proprietary FAT filesystem. The only reason why the Slackware team keeps this system in use is, perhaps, that Windows users cannot run bash scripts on their own system. But any Linux system is sufficient to run a simple bash script, including the mini-system inside the Slackware DVD, so even Windows users who have at least a PC with a DVD reader can run an external script.

This is a simple script that can make a bootable USB-stick (or USB-HD) similar to the Slackware installation DVD, which may also contain the packages needed to perform a Slackware installation on computers without a DVD reader.

Download the script File:UsbslackDVDboot.sh Rev.03 - 2013-03-13 some improvements.

Notes: uncomment the line below in the script if you get errors like: Warning: '/proc/partitions' does not match '/dev' directory structure.
cp -Rpdf /dev/* $USB_TEMP_MOUNTPOINT/$SLACK_INSTALL_PATH/dev/

usbslackDVDboot.sh - this is the code:

#!/bin/sh
# usbslackDVD.sh - Make a bootable USB-stick (or USB-HD) from Slackware
# copyleft Fabio Zorba 2012-2013
# rev.03 - 2013-03-13
# Note: Slackware14 kernel needs LILO version 23.2 to boot

SLACK_INSTALL_PATH="SlackDVD"
USB_TEMP_MOUNTPOINT="../USB_SLACK_TEMP_MOUNTPOINT"

if [ $# -eq 2 ] ; then
    ROOT_DEV=/dev/$1
    #-------- check device ---------
    case "$1" in
        sda) BOOT_DEVICE="/dev/sda$2" ;;
        sdb) BOOT_DEVICE="/dev/sdb$2" ;;
        sdc) BOOT_DEVICE="/dev/sdc$2" ;;
        sdd) BOOT_DEVICE="/dev/sdd$2" ;;
        sde) BOOT_DEVICE="/dev/sde$2" ;;
        sdf) BOOT_DEVICE="/dev/sdf$2" ;;
        sdg) BOOT_DEVICE="/dev/sdg$2" ;;
        sdh) BOOT_DEVICE="/dev/sdh$2" ;;
        *) BOOT_DEVICE="not_in_list" ;;
    esac
    if [ "a$BOOT_DEVICE" = "anot_in_list" ] ; then
        echo "Error device $ROOT_DEV, valid names are: sda,sdb,sdc,sdd,sde,sdf,sdg,sdh ... exit"
        exit
    fi
    echo "
    ---------------------------------------------------
    You selected $BOOT_DEVICE device to install SLACK filesystem.
    Warning ! All data in selected device may be LOST
    "
    echo -n "type 'okay' to proceed : "
    read NNN
    if [ "a$NNN" = "aokay" ] ; then
        EXIST_DEVICE=`ls $BOOT_DEVICE`
        if [ "a$EXIST_DEVICE" = "a" ] ; then
            echo "Error device $BOOT_DEVICE not exists ... exit"
            exit
        fi
        # Check if Slackware DVD is present in system
        DVD_MOUNTPOINT=`mount | grep iso9660 | cut -d" " -f3`
        if [ ! -d $DVD_MOUNTPOINT/slackware ] ; then
            echo "
            Error Slackware DVD not found or not mounted ... exit
            try to mount by hand:
            mount /dev/sr0 /mnt/cdrom
            or
            mount /dev/hda /mnt/cdrom
            "
            exit
        fi
        USB_MOUNTPOINT=`mount | grep $BOOT_DEVICE | cut -d" " -f3`
        if [ "a$USB_MOUNTPOINT" = "a" ] || [ ! -d $USB_MOUNTPOINT ] ; then
            echo "Format the partition $BOOT_DEVICE ? [ENTER=no]"
            echo -n "... or type 'okay' to proceed with formatting: "
            read NNN
            if [ "a$NNN" = "aokay" ] ; then
                mkfs.ext2 $BOOT_DEVICE
            fi
            mkdir $USB_TEMP_MOUNTPOINT
            mount $BOOT_DEVICE $USB_TEMP_MOUNTPOINT
            IS_USB_MOUNTED=`mount | grep $USB_TEMP_MOUNTPOINT`
            if [ "a$IS_USB_MOUNTED" = "a" ] ; then
                echo "Error: $ROOT_DEV device not mounted in $USB_TEMP_MOUNTPOINT ... exit"
                exit
            fi
        else
            echo "$BOOT_DEVICE already mounted in $USB_MOUNTPOINT, I will install inside this one ..."
            USB_TEMP_MOUNTPOINT=$USB_MOUNTPOINT
        fi
        echo "Make usb-disk devices"
        mkdir -p $USB_TEMP_MOUNTPOINT/$SLACK_INSTALL_PATH
        mkdir -p $USB_TEMP_MOUNTPOINT/$SLACK_INSTALL_PATH/boot
        mkdir -p $USB_TEMP_MOUNTPOINT/$SLACK_INSTALL_PATH/dev
        mkdir -p $USB_TEMP_MOUNTPOINT/$SLACK_INSTALL_PATH/etc
        echo "Copying files ..."
        cp -f $DVD_MOUNTPOINT/kernels/hugesmp.s/bzImage $USB_TEMP_MOUNTPOINT/$SLACK_INSTALL_PATH/boot
        cp -f $DVD_MOUNTPOINT/isolinux/initrd.img $USB_TEMP_MOUNTPOINT/$SLACK_INSTALL_PATH/boot
        # Add Slackware packages to root of partition
        echo "Copy Slackware packages to $BOOT_DEVICE ? [ENTER=no]"
        echo -n "type 'yes' to begin copying files: "
        read NNN
        if [ "a$NNN" = "ayes" ] ; then
            echo "Copying Slackware packages ..."
            cp -Rf $DVD_MOUNTPOINT/slackware $USB_TEMP_MOUNTPOINT/
        fi
        # Make usb devices in /dev because Lilo needs them
        START=0
        MINOR=0
        for DISK in "a" "b" "c" "d" "e" "f" "g" "h"
        do
            MINOR=$START
            for i in 0 1 2 3 4 5 6
            do
                if [ $i -eq 0 ] ; then
                    mknod $USB_TEMP_MOUNTPOINT/$SLACK_INSTALL_PATH/dev/sd$DISK b 8 $MINOR
                else
                    mknod $USB_TEMP_MOUNTPOINT/$SLACK_INSTALL_PATH/dev/sd$DISK$i b 8 $MINOR
                fi
                let MINOR=$MINOR+1
            done
            let START=$START+16
        done
        # Uncomment below if you get errors like:
        # Warning: '/proc/partitions' does not match '/dev' directory structure.
        cp -Rpdf /dev/* $USB_TEMP_MOUNTPOINT/$SLACK_INSTALL_PATH/dev/
        echo "LILO will be installed to $ROOT_DEV by default ..."
        echo
        echo -n "You may select another device [enter=default] : "
        read ELSE_ROOT_DEV
        if [ "a$ELSE_ROOT_DEV" != "a" ] ; then
            ROOT_DEV=$ELSE_ROOT_DEV
        fi
        echo
        echo "flushing disk cache before install Lilo ... please wait ..."
        echo "
# LILO configuration file
# generated by Zoros
#
# Start LILO global section
# Append any additional kernel parameters:
boot = $ROOT_DEV
prompt
compact
lba32
vga = normal
timeout = 50
# Linux bootable partition config begins
image = /boot/bzImage
initrd=/boot/initrd.img
label = SlackDVD
read-write
# Linux bootable partition config ends
" > $USB_TEMP_MOUNTPOINT/$SLACK_INSTALL_PATH/etc/lilo.conf
        lilo -P ignore -r $USB_TEMP_MOUNTPOINT/$SLACK_INSTALL_PATH
        if [ "a$USB_MOUNTPOINT" = "a" ] ; then
            umount $USB_TEMP_MOUNTPOINT
            rmdir $USB_TEMP_MOUNTPOINT
        fi
    fi
else
    echo "
    This script makes a bootable USB-stick (or HD) from Slackware DVD ...
    Pay attention: all data in selected device may be lost.
    Usage: $0 device [sda,sdb,sdc...] n.partition [1,2,3...]
    "
fi

Note: the script has not been tested on a wide scale, but so far it has worked well on several PCs.

To make a bootable USB-stick, you must mount the Slackware DVD in your system before running the script. You must be sure that you know exactly the name of the USB disk connected to your PC: the script can format the disk, so all data on the target device will be lost. However, you can make the bootable USB-stick without formatting; in fact the script will install the entire system in a folder named "SlackDVD". The script will not format the USB device if it is already mounted but, in every case, the script will copy the right files and install the boot manager Lilo.

$ su -
# fdisk -l
Disk /dev/sda: 251.0 GB, 251000193024 bytes
255 heads, 63 sectors/track, 30515 cylinders, total 490234752 sectors
...
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
...
Disk /dev/sdc: 4022 MB, 4022337024 bytes
255 heads, 63 sectors/track, 489 cylinders, total 7856127 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x516cb889

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1           63     7438094     3719016   83  Linux
/dev/sdc2   *  7438095     7855784      208845    b  W95 FAT32

In this example the USB device is /dev/sdc1, already formatted with ext2. In this case you can either mount the device first, or make sure it is unmounted, in which case the script will offer to format it.

./usbslackDVDboot.sh sdc 1

Then follow the program instructions, have fun!

When using the Slackware installer to create the USB-stick you have to perform some additional operations, in particular installing "lilo", because it is not present in the system by default. Here is an example:

mkdir /slackcdrom
mount /dev/sr0 /slackcdrom
installpkg /slackcdrom/slackware/a/lilo-*.txz

and, of course, copy the script usbslackDVDboot.sh into the home directory, /root for example.

Zoros 00:45, 10 October 2012 (CEST)
1. Your GPU is now assigned more tasks than previously. This means that you'll lose fewer CPU cycles to the flashy graphics and animations that come embedded in most DAWs, like meters, clocks, automation animations, and your GUI in general. What I hope this means is that Windows Vista will also use your GPU to run all the eye candy that comes with the OS. I always hated super-flashy OSes, like OS X and Vista, because I thought I was probably using too much of my CPU on the graphics that were there to sell the OS to general-purpose users. Supposedly, Vista scales to the hardware it's installed on, meaning that slower PCs or PCs with weaker GPUs will have less animation in the OS. Personally, I could deal just fine with NO animation in my OS.

2. Finding things: easier. I probably don't have to tell you this, because it's been one of the most touted features of Vista. The new native search function in Vista claims to be almost instantaneous, providing results with each keystroke, a lot like the search in the latest Firefox does. The idea is, this will help people move away from the file-folder sort of logic that's dominated computing for 20 years. As hard drives get bigger and cheaper, this makes good sense, but I'll still be using plenty of folders in my audio apps at the very least.

3. Driver Hell: I don't have to tell you this either. Since computers are currently standing on the fence between 32-bit and 64-bit OSes, you'll have that many more drivers to accidentally try to install: the 32-bit driver, and the 64-bit driver (signed and unsigned). Here's what the manufacturer of my audio card has to say about the issue:

M-Audio has been keeping pace with changes to the Windows Operating System since the release of Windows 95 nearly 12 years ago. We are very excited about the opportunity to offer continued support to our Windows customers as the Windows Pro Audio community begins the gradual transition to the Vista era.
Over the past year, we have worked directly with Microsoft's Vista team to prepare for this release. Currently, M-Audio does not offer Vista drivers or Vista software updates (beta or otherwise). As soon as Vista drivers or updates for any product are available, this FAQ and other portions of our Web site will be immediately updated to reflect this. Due to the nature of software and driver development, we are not able to provide exact dates or timeframes for when specific drivers will become available, but please rest assured that supporting Vista is a top priority for us.

Gee thanks, guys. I feel like I can really "rest assured" now. Other manufacturers have been a lot better, so I'm sure this is probably not going to be a big deal. But it's going to be a hassle for the first year, like it is with every major OS change. From what I can glean from Microsoft's site, just switching to Vista is not going to improve or diminish your audio performance. But if your soundcard manufacturer does choose to release drivers that take advantage of the new Vista audio architecture, you could experience better performance.

4. You're going to approve a lot of things. UAC (User Account Control) is the default mode in Vista. With User Account Control in the new Windows Vista operating system, you can reduce the risk of exposure by limiting administrator-level access to authorized processes. Right! And I can click "allow" for about a million separate applications. I get the feeling that this is going to become a lot like firewalls are for your average user: this annoying box that just keeps popping up asking you to approve things until you become so frustrated that you click them all the instant they come up. Whether or not these things will pop up and hang audio applications remains to be seen.

Is anybody out there using Vista for pro audio and willing to comment on how it's going?
There are several strategies that can be used for user research, including:

Interviews: Interviews are a common method for gathering in-depth information from users. They can be conducted one-on-one or in groups, and can be structured or unstructured. Structured interviews follow a predetermined set of questions, while unstructured interviews allow for more flexibility and follow-up questions. Interviews can be conducted in person or remotely, and can be recorded for further analysis.

Example: a sample interaction between interviewer and user

Interviewer: Hi [Name], thanks for taking the time to speak with me today. Could you tell me a bit about your background and how you use [Product]?

User: Sure. I’m a [Job Title] at [Company], and I use [Product] on a daily basis to [Task].

Interviewer: That’s interesting. Can you tell me more about your experience using [Product]? What do you like about it?

User: I’ve been using [Product] for about [Length of Time], and I really appreciate how [Positive Feature]. It saves me a lot of time and effort, and I’ve been able to [Benefit].

Interviewer: That’s great to hear. Are there any features or improvements that you would like to see in [Product]?

User: I think it would be really helpful if [Feature] was included in the next update. It would make my job a lot easier and allow me to [Benefit].

Interviewer: Thank you for your feedback. Is there anything else you’d like to share about your experience with [Product]?

User: No, that’s all. Thanks for the opportunity to share my thoughts.

This structured interview follows a predetermined set of questions, and allows the interviewer to gather specific information from the user. The interviewer can follow up on the user’s responses and ask for further clarification or detail as needed. Structured interviews can be useful for gathering consistent data from multiple users.

Surveys: Surveys are a quick and efficient way to gather data from a large number of users.
They can be conducted online or through paper forms, and can include both multiple choice and open-ended questions. Surveys are useful for gathering quantitative data, but may not provide as much context or detail as other methods.

Example: survey questions

- How often do you use [Product]?
- How satisfied are you with [Product]?
  - Very dissatisfied
  - Somewhat dissatisfied
  - Somewhat satisfied
  - Very satisfied
- What is the primary reason you use [Product]?
  - [Option 1]
  - [Option 2]
  - [Option 3]
  - Other (please specify)
- How likely are you to recommend [Product] to a friend or colleague?
  - Very unlikely
  - Somewhat unlikely
  - Somewhat likely
  - Very likely
- Do you have any additional comments or feedback about [Product]? (optional)

This survey includes both multiple choice and open-ended questions, and allows the respondent to provide more detailed feedback in the optional comments section. Surveys are useful for gathering quantitative data, but may not provide as much context or detail as other methods.

User testing: User testing involves observing users as they interact with a product or prototype, and can be conducted in a lab or natural environment. User testing can reveal usability issues and areas for improvement, and can be conducted with a small number of users to identify common patterns and issues. Here is an example of a user testing scenario:

- The user is asked to complete a specific task using the product, such as booking a flight or making a purchase.
- The user is observed as they interact with the product, and any difficulties or confusion they encounter are noted by the researcher.
- The user may be asked to verbalize their thoughts and actions as they use the product.
- After completing the task, the user is asked to provide feedback on their experience with the product.
- The researcher analyzes the data collected from the user testing session, and uses it to identify areas for improvement or potential design changes.
This example illustrates how user testing can be used to gather data on the usability of a product, and how it can be used to identify areas for improvement. User testing can provide valuable insights into the user experience, and can be an important part of the product development process.

Focus groups: Focus groups involve bringing a group of users together to discuss a specific topic or product. These discussions can be facilitated by a moderator and can be recorded for further analysis. Focus groups can provide a deeper understanding of users’ perspectives and experiences, and can be useful for generating new ideas or identifying common themes.

- A group of users is brought together in a room with a moderator.
- The moderator introduces the topic or product being discussed and asks the group to share their thoughts and experiences.
- The group engages in a discussion, with the moderator asking questions and prompting further conversation.
- The discussion is recorded for further analysis, and the moderator takes notes on key points and themes that emerge.
- After the focus group has finished, the moderator and researcher analyze the data collected and use it to inform product development or marketing strategies.

This example illustrates how a focus group can be used to gather in-depth information from a group of users, and how the data can be used to inform decision-making. Focus groups can be a useful tool for understanding users’ perspectives and experiences, and can provide valuable insights into the market or industry.

Contextual inquiry: Contextual inquiry involves observing users in their natural environment as they perform tasks or activities related to a product or service. This method can provide a deeper understanding of users’ context and needs, and can reveal insights that might not be apparent in a controlled setting.

- The researcher visits the user in their natural environment, such as their home or office.
- The user is asked to perform a specific task or activity using the product or service, such as grocery shopping or booking a vacation.
- The researcher observes the user as they perform the task, and takes notes on their actions, thoughts, and any difficulties or challenges they encounter.
- After the task is completed, the researcher asks the user to reflect on their experience and provide feedback.
- The researcher analyzes the data collected from the contextual inquiry, and uses it to inform product development or marketing strategies.

This example illustrates how contextual inquiry can be used to gather data on the usability of a product or service in a real-world setting, and how it can provide valuable insights into the user experience. Contextual inquiry can be an effective method for understanding the context in which a product or service is used, and can reveal insights that might not be apparent in a controlled setting.

Diary studies: Diary studies involve asking users to document their experiences and thoughts over a period of time, typically using a journal or online platform. Diary studies can provide a more comprehensive view of users’ behaviors and needs, and can reveal patterns or trends that might not be apparent in a single user testing session.

- The researcher provides the user with a diary or journal, and asks them to document their experiences, thoughts, and activities related to a specific product or service over a period of time, such as a week or month.
- The user is asked to record their entries at regular intervals, such as once a day or several times a week.
- The user is also asked to provide any additional materials, such as photos or receipts, that might be relevant to their entries.
- After the diary study period is over, the researcher reviews the user’s entries and any additional materials, and analyzes the data to identify patterns and themes.
- The researcher uses the data collected from the diary study to inform product development or marketing strategies.

This example illustrates how a diary study can be used to gather data on the user experience over a longer period of time, and how it can provide insights into users’ daily routines and habits. Diary studies can be an effective method for understanding the context in which a product or service is used, and can reveal insights that might not be apparent in other types of research.

Card sorting: Card sorting is a method for understanding how users organize and categorize information. Participants are given a set of cards with content or features on them, and are asked to group the cards into categories that make sense to them. Card sorting can reveal users’ mental models and expectations, and can be conducted in person or online.

- The researcher provides the user with a set of cards, each containing a piece of content or information.
- The user is asked to sort the cards into groups that make sense to them and to label the groups.
- The user is also asked to provide any additional comments or feedback on the content or labeling.
- The researcher analyzes the data collected from the card sorting, and uses it to inform the design of the information architecture or navigation system.

This example illustrates how card sorting can be used to gather data on how users think about and categorize information, and how it can be used to inform the design of an information architecture or navigation system. Card sorting can be an effective method for understanding how users expect to find and access information, and can help to improve the usability and effectiveness of a product or service.

A/B testing: A/B testing is a method for comparing the performance of two different designs or approaches. One version (the “A” version) is shown to a group of users, while the other version (the “B” version) is shown to a different group of users.
The performance of each version is then compared to determine which one is more effective. A/B testing can be used to optimize a product or service by identifying the elements that have the greatest impact on user behavior.

- The researcher creates two versions of a landing page for a website, with one version having a red call-to-action button and the other having a green call-to-action button.
- The researcher randomly divides users into two groups, and exposes each group to one of the two versions of the landing page.
- The researcher tracks the performance of each version, such as the number of clicks on the call-to-action button or the conversion rate.
- After a sufficient amount of data has been collected, the researcher compares the performance of the two versions and determines which one performs better.

This example illustrates how A/B testing can be used to compare the performance of two versions of a product or service, and how it can be used to identify which version is more effective. A/B testing can be a useful method for testing hypotheses and making data-driven decisions, and can help to improve the performance and effectiveness of a product or service.
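The comparison step above can be sketched in code. Here is a minimal Python example of one common way to compare the two landing-page variants, a two-proportion z-test; the click counts are made-up numbers, not data from any real study:

```python
from math import sqrt

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical data: red button (A) vs green button (B)
p_a, p_b, z = ab_test_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
# |z| > 1.96 corresponds to p < 0.05 (two-sided)
```

The "sufficient amount of data" point in the bullets matters here: with small samples the standard error is large, and even a visible difference in conversion rates will not reach significance.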
I'm using the setting in PlayIt Recorder to "Delete old recordings after..." and I've set it to 20 days or 10% disc space left. It doesn't do it. Neither of the limits works. The disc fills up and the program stops recording.

As it happens, I'm also using a program called Deep Freeze (which is currently turned off) that, when on, restores a computer to its original state upon rebooting. Deep Freeze seems to do this partly by creating a virtual disc of some kind. The large size of this disc, I could imagine, might flummox Recorder's algorithm for getting the remaining disc space. However, that doesn't explain the not-deleting-after-20-days limit. Also, I've tried using both a C drive folder and Deep Freeze's "Thaw Space" virtual disc to hold the recordings and the result is the same. No matter what I do, the disc fills up and Recorder stops recording, never deleting the old material. Any help or fix would be appreciated.

-John Schwenk, Chief Engineer, WRTC Radio (a college radio station)

PlayIt Recorder requires that the recording history is stored in order to know which files to delete. This data is stored in C:\ProgramData\PlayIt Recorder. If this is reset then PlayIt Recorder will forget it ever recorded the files and will therefore not know to delete them. There is a post here showing how you can change the path of the application data using the CustomApplicationDataPath switch to specify a different folder that does not get reset: http://support.playitsoftware.com/support/discussions/topics/5000071814

Has this been running fine for the past year and only recently stopped deleting historical files? You can try sending me your Recorder data and logs using this tool and I can take a look: http://downloads.playitsoftware.com/DiagnosticsTool/PlayItDiagnosticsTool.exe

No, it has never deleted the files automatically. I periodically do so manually (when I get to it). I've tried various settings in the meantime.
I'm part time where the system is and won't return there until Monday, at which point I'll run your tool and get back to you.

I just ran your diagnostic tool (which is quite impressive) and sent you 24 days' worth of data. As I mentioned in my first post, I suspect this may have something to do with our Deep Freeze software, even though that software is currently set to off. Thanks for your attention!

Can you confirm the version you are using? On PlayIt Recorder go to Help > About...

Version 1.05 (Build 192). Indeed, I haven't updated since I installed it.

I think we found the source of your problem: I'd suggest updating your copy of PlayIt Recorder to the latest version.

I installed the update and it immediately deleted old files. (Or at least I think there were old files there before...) Thanks! I'm very sorry I didn't try that. As a former software developer myself, I do know better. I don't make purchasing decisions, but I'll look into modules or support that we might spend some of our tiny budget on.

-John Schwenk, WRTC Chief Engineer
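For anyone curious what a retention rule like the one discussed in this thread looks like under the hood, here is a rough Python sketch. This is not PlayIt's actual code, just an illustration of the "older than N days, or disk nearly full" logic the setting describes:

```python
import os
import shutil
import time

def clean_recordings(folder, max_age_days=20, min_free_frac=0.10):
    """Delete recordings older than max_age_days, and also delete
    oldest-first whenever less than min_free_frac of the disk is free."""
    now = time.time()
    files = sorted(
        (os.path.join(folder, f) for f in os.listdir(folder)),
        key=os.path.getmtime)  # oldest first
    deleted = []
    for path in files:
        age_days = (now - os.path.getmtime(path)) / 86400
        usage = shutil.disk_usage(folder)
        if age_days > max_age_days or usage.free / usage.total < min_free_frac:
            os.remove(path)
            deleted.append(path)
    return deleted
```

Note one design point that echoes the support answer: a real recorder should only ever delete files it knows it recorded (which is why PlayIt keeps a recording history in ProgramData), whereas this sketch naively treats everything in the folder as fair game.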
Spring AOP with GraalVM Native Image

Is there any way to use aspects in a Spring Boot GraalVM native-image? I need it for logging purposes. I got the following error on image run:

Caused by: org.aspectj.weaver.BCException: AspectJ internal error
at org.aspectj.weaver.reflect.ReflectionWorld.makeAnnotationFinderIfAny(ReflectionWorld.java:132) ~[na:na]
at org.aspectj.weaver.reflect.ReflectionWorld.<init>(ReflectionWorld.java:97) ~[na:na]
at org.aspectj.weaver.reflect.ReflectionWorld.getReflectionWorldFor(ReflectionWorld.java:51) ~[na:na]
at org.aspectj.weaver.tools.PointcutParser.setClassLoader(PointcutParser.java:222) ~[na:na]
at org.aspectj.weaver.tools.PointcutParser.<init>(PointcutParser.java:208) ~[na:na]
at org.aspectj.weaver.tools.PointcutParser.getPointcutParserSupportingSpecifiedPrimitivesAndUsingSpecifiedClassLoaderForResolution(PointcutParser.java:170) ~[na:na]
at org.springframework.aop.aspectj.AspectJExpressionPointcut.initializePointcutParser(AspectJExpressionPointcut.java:242) ~[na:na]
at org.springframework.aop.aspectj.AspectJExpressionPointcut.buildPointcutExpression(AspectJExpressionPointcut.java:221) ~[na:na]
at org.springframework.aop.aspectj.AspectJExpressionPointcut.obtainPointcutExpression(AspectJExpressionPointcut.java:198) ~[na:na]
at org.springframework.aop.aspectj.AspectJExpressionPointcut.getClassFilter(AspectJExpressionPointcut.java:177) ~[na:na]
at org.springframework.aop.support.AopUtils.canApply(AopUtils.java:226) ~[na:na]
at org.springframework.aop.support.AopUtils.canApply(AopUtils.java:289) ~[na:na]
at org.springframework.aop.support.AopUtils.findAdvisorsThatCanApply(AopUtils.java:321) ~[na:na]
at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findAdvisorsThatCanApply(AbstractAdvisorAutoProxyCreator.java:128) ~[com.fon.footballfantasy.FootballFantasyApplication:5.3.1]
at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findEligibleAdvisors(AbstractAdvisorAutoProxyCreator.java:97) ~[com.fon.footballfantasy.FootballFantasyApplication:5.3.1]
at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.getAdvicesAndAdvisorsForBean(AbstractAdvisorAutoProxyCreator.java:78) ~[com.fon.footballfantasy.FootballFantasyApplication:5.3.1]
at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.wrapIfNecessary(AbstractAutoProxyCreator.java:337) ~[com.fon.footballfantasy.FootballFantasyApplication:5.3.1]
at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessAfterInitialization(AbstractAutoProxyCreator.java:289) ~[com.fon.footballfantasy.FootballFantasyApplication:5.3.1]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:444) ~[na:na]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1792) ~[na:na]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:609) ~[na:na]
... 37 common frames omitted

I suppose that the problem is Spring AOP's runtime weaving, but how do I solve it?

EDIT: Thank you for the answers! Sorry for not providing additional info earlier. Sample project: https://github.com/programer20/graalvm-demo I was creating the native image by following the getting started steps in the official documentation https://repo.spring.io/milestone/org/springframework/experimental/spring-graalvm-native-docs/0.8.3/spring-graalvm-native-docs-0.8.3.zip!/reference/index.html#_getting_started I tried with both the 0.8.3 and 0.8.5 versions.

I will try to notify Andy Clement about this post, because he is both the maintainer of AspectJ and one of the brains behind the efforts to make Spring compatible with GraalVM. Meanwhile, it would be good if you could provide more than a stack trace out of context, without any code or build information.
Please provide an MCVE, ideally a Maven project on GitHub, also mentioning how you run it, which GraalVM version, etc.

I think you are right about the problem. If you were doing build-time weaving, it would be totally fine, as the modified byte code would be fed into GraalVM native-image for analysis and inclusion in the image. For load-time weaving I believe it can work, though I haven't confirmed recently: if you use load-time weaving at the point the native-image is being built (via setting the Java options to include the aspectjweaver agent), the classes will be woven as they are loaded and the woven form will be included in the image. It can never really work at image runtime, because there is no notion of classes any more, and classes cannot be dynamically defined. So yes, since Spring AOP can be done quite late on, as configuration is resolved, there may be problems.

Take a look at the Spring Native project for the very latest support for building your Spring projects into native-images, but we have no samples there for Spring AOP right now, as I recall. I'd encourage you to raise issues against that project; including a sample project that shows your specific problem can be invaluable. You haven't mentioned how you are creating the native-image right now, which may influence my recommendations. I think pushing some analysis/weaving a bit earlier in the process could make it work, but I haven't been into that space yet.
I’m trying to play with inter-process communication and since I could not figure out how to use named pipes under Windows I thought I’d use network sockets. Everything happens locally. The server is able to launch slaves in a separate process and listens on some port. The slaves do their work and submit the result to the master. How do I figure out which port is available? I assume I cannot listen on port 80 or 21? I’m using Python, if that cuts the choices down.

Bind the socket to port 0. A random free port from 1024 to 65535 will be selected. You may retrieve the selected port with getsockname() right after.

You can listen on whatever port you want; generally, user applications should listen to ports 1024 and above (through 65535). The main thing if you have a variable number of listeners is to allocate a range to your app – say 20000-21000, and CATCH EXCEPTIONS. That is how you will know if a port is unusable (used by another process, in other words) on your computer. However, in your case, you shouldn’t have a problem using a single hard-coded port for your listener, as long as you print an error message if the bind fails. Note also that most of your sockets (for the slaves) do not need to be explicitly bound to specific port numbers – only sockets that wait for incoming connections (like your master here) will need to be made a listener and bound to a port. If a port is not specified for a socket before it is used, the OS will assign a usable port to the socket. When the master wants to respond to a slave that sends it data, the address of the sender is accessible when the listener receives data. I presume you will be using UDP for this?

Do not bind to a specific port. Instead, bind to port 0:

import socket
sock = socket.socket()
sock.bind(('', 0))
print(sock.getsockname()[1])

The OS will then pick an available port for you. You can get the port that was chosen using sock.getsockname()[1], and pass it on to the slaves so that they can connect back.
sock is the socket that you created, returned by socket.socket().

For the sake of a snippet of what the guys have explained above:

```python
import socket
from contextlib import closing

def find_free_port():
    with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        s.bind(('', 0))
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        return s.getsockname()[1]
```

Or with socketserver:

```python
import socketserver

with socketserver.TCPServer(("localhost", 0), None) as s:
    free_port = s.server_address[1]
```

Note that the port is not guaranteed to remain free, so you may need to put this snippet and the code using it in a loop.
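Pulling the answers above together, here is a short, self-contained sketch. The helper names (`find_free_port`, `bind_with_retry`) are mine, not from any of the answers: it probes for a free port via port 0, retries the real bind to cope with the race the last answer warns about, and shows a slave connecting back on the advertised port.

```python
import socket
from contextlib import closing

def find_free_port():
    # Bind to port 0 and let the OS pick; note the port can be grabbed
    # by another process between this probe and your real bind().
    with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        s.bind(('', 0))
        return s.getsockname()[1]

def bind_with_retry(attempts=5):
    # Retry the probe-then-bind dance to cope with that race.
    for _ in range(attempts):
        port = find_free_port()
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(('127.0.0.1', port))
            sock.listen(1)
            return sock, port
        except OSError:
            sock.close()
    raise RuntimeError('no free port found')

# Master listens; a slave connects back using the advertised port.
master, port = bind_with_retry()
slave = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
slave.connect(('127.0.0.1', port))
conn, _ = master.accept()
slave.sendall(b'result')
data = conn.recv(1024)
for s in (conn, slave, master):
    s.close()
```

In practice you would pass `port` to each slave process on its command line or via an environment variable when spawning it.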
OPCFW_CODE
By Werner Ruotsalainen on Thu, 10/27/2011

In Wednesday's article, I quickly mentioned Phone Disk by Macroplant as the most recommended application for (Mac) OS X computers to access the installed (App Store or Xcode) applications' home directories on non-jailbroken devices. Now that I'll be traveling next week, lodging in hotels that don't necessarily have safes to keep my expensive 17” MacBook Pro, I've decided to leave it at home and take my Windows 7 Ultimate-based, old and therefore not too expensive IBM ThinkPad T42p with me. (And, after all, next week I only need to do some Windows Phone 7 coding, so I don't strictly need a Mac.) However, I still want to upload new videos (to GoodPlayer, as I only use non-iOS-native .ts files – the mp4 files native to the built-in iPod / Videos player aren't the best quality at my legal [subscription] video source, tvkaista.fi) and ebooks (to GoodReader, as I still prefer it to iBooks, even with the latter having at last received PDF in-text searching capabilities). I want to do this from my ThinkPad without having to link these iDevices with it (with all its problems: having to register the iTunes account; having to delete all the original contents of all iDevices, etc.), so I've decided to give the current (188.8.131.52) version of Phone Disk a try on it – when I mailed the developers in July, they promised they would release a working version in the near future. To my delight, everything went just fine. I'm pretty happy I purchased a 5-license Family Pack (for some $40 after taxes) back in July – I can access my iDevices on even my inexpensive, no-problem-if-stolen travel notebook. If you've faced the same problem as me – that is, you don't (necessarily) want to (or can't) jailbreak your iDevice but would still like to transfer large media files to it as quickly as possible – you'll want to give Phone Disk a try.
It's way faster at transferring over USB 2.0 than the traditional (Wi-Fi, Web browser- or, in some rare cases, FTP-based) methods of transferring media to players or readers. (I've measured about an order(!) of magnitude speed difference. I'll return to this question in a later article, which will also show you some examples of doing all this in your own TCP/IP code. Only for programmers, of course.) This makes quite a difference if you want to upload a, say, 2-3 GB video file to your iPad for watching. (With much smaller PDF files and the like, you may still want to try Wi-Fi transfer.) In addition, if the app in question doesn't support any kind of wireless access to its home directory and you haven't jailbroken your phone, Phone Disk is the only way to access the contents of your application home directories – to make, say, a backup of them, or to save (and later restore or just transfer to some other iDevice) the high scores in your games if they aren't synchronized via, say, Game Center.

Make sure you give it a try before purchasing. I don't think it won't work on your system (it works just fine on mine under a legal Win7 Ultimate), but it's still better to be on the safe side – the trial allows up to 100 MB of file transfer or some two or three app mount point changes.

Tips on accessing your apps' home directories

Speaking of app mount point changes... by default, when you start Phone Disk, a new virtual drive becomes accessible to all your file browsers (for example, the built-in File Explorer or the third-party Total Commander). By default, it shows the media directory of your iDevice – the only one all Windows-based file handler tools are capable of showing you if your iDevice isn't jailbroken. (Some of these tools may already be known to some of my long-time readers: for example, the T-PoT plug-in for Total Commander.)
To switch to an app directory, right-click the Phone Disk icon in the dock and select your iDevice's name / Change Mount Point / Apps. Then, select the app name you'd like to access.

What about iExplorer?

Phone Disk's free (!) little brother, iExplorer, formerly known as iPhone Explorer (which was also mentioned in several of my past articles), is also able to access the application directories. The current version (Mac: 184.108.40.206, Windows: 220.127.116.11), at last, has no problems transferring more than one file or complete subdirectories at a time. Some of its disadvantages, compared to Phone Disk, are:
- The lack of full system integration. For example, it provides no access from Finder (OS X) / Explorer (Win) or TC (Win).
- Lack of quick access. E.g., if you just want to peek into a JPG or video file, you can't double-click it to launch the system-wide photo viewer / video player on it. Sure, it has some kind of an Auto-Previewer (enabled by default; you can disable it in the lower right corner), but it's more of a nuisance than a usable feature.
- Lack of known Finder (Explorer etc.) shortcuts. E.g., if you double-click a folder, it won't be made the current one. This also means it in no way can order the files inside the folders (the ordering headers are pretty much useless).
- Lack of a reliable file transfer progress meter. You'll only see that the transfer is in progress but get absolutely no information on where it actually is. Phone Disk has a real meter.

Nevertheless, if you want to save some money and can live with its handicaps, I can only recommend it now that the mass file / (sub)directory transfer problems plaguing old(er) versions have all been fixed.

What about i-FunBox?

On my Windows 7 Ultimate, I've also tested the current (1.6.658.564) version of the also highly recommended i-FunBox. Unfortunately, while it's indeed an excellent app, it (still) can't access the application home directories of non-jailbroken iDevices.
(On jailbroken ones, it can – but so can all the other file handler tools and plug-ins like T-PoT.) It does, however, have a really excellent feature: AppFastIn, which allows you to install IPA files on non-jailbroken devices without (!) ever touching iTunes on the desktop. You just collect the IPA files otherwise authorized to be used on your mobile device, press the App Install (AppFastIn) button on the toolbar, navigate to the directory your IPA files are in (you can just transfer them from your regular computer, registered with Apple, to your other ones, not registered in the App Store, by simple file copying – on the Mac, they're at /Users/username/Music/iTunes/iTunes Media/Mobile Applications), select one or more (!) of them and click Open. The IPA files will be quickly deployed. Note that this will only work with IPA files you've purchased from Apple and in no way breaks any kind of license agreement. This way, another two of my main problems have been solved; that is, my not wanting to:
- add another of my computers to Apple's pool of (restricted) authorized computers (I hate it when my computers break down and I therefore can't deauthorize my iTunes copies other than by deauthorizing all of them – allowed only once a year)
- unlink application synchronization (or, for that matter, any kind of synchronization!) from my regular computer and link it to my temporary one. This would have resulted in massive deletions and reinstalls, which I really wanted to avoid.

Addendum – the list of iDevices I've tested

With all three apps mentioned (Phone Disk (18.104.22.168), i-FunBox and iExplorer), I've tested accessing (reading / writing to) the app directories and/or installing signed, legally purchased IPAs on the following iDevices without any problems:
- iPhone 3G (non-jailbroken, 4.2.1)
- iPod 2G (jailbroken, 4.2.1)
- iPod 4G (jailbroken, 4.3.3)
- iPad 1G (jailbroken, 5.0)
- iPad 2G (non-jailbroken, 5.0)

I could transfer files (or install apps) from/to/on all these devices.
I had absolutely no problems.
OPCFW_CODE
Empty screen in ChatMessageActivity when opened a new way

Hi, I want to login by these steps, based on your example. I replace the login method in LoginActivity with this:

```kotlin
var user1 = ConnectycubeUser()
user1.login = "hung1"
user1.password = "password1"
ConnectycubeUsers.signIn(user1).performAsync(object : EntityCallback<ConnectycubeUser> {
    override fun onSuccess(p0: ConnectycubeUser?, p1: Bundle?) {
        p0!!.password = user1.password
        Toast.makeText(this@LoginActivity, "user 1 login success", Toast.LENGTH_SHORT).show()
        SharedPreferencesManager.getInstance(this@LoginActivity).saveCurrentUser(p0!!)
        createChatPrivate(1479875)
    }

    override fun onError(p0: ResponseException?) {
        Toast.makeText(this@LoginActivity, p0?.message.toString(), Toast.LENGTH_SHORT).show()
    }
})

private fun createChatPrivate(id: Int) {
    val occupantIds = ArrayList<Int>()
    occupantIds.add(id)
    val dialog = ConnectycubeChatDialog().apply {
        type = ConnectycubeDialogType.PRIVATE
        setOccupantsIds(occupantIds)
    }
    ConnectycubeRestChatService.createChatDialog(dialog)
        .performAsync(object : EntityCallback<ConnectycubeChatDialog> {
            override fun onSuccess(createdDialog: ConnectycubeChatDialog, params: Bundle) {
                startChatActivity(createdDialog)
            }

            override fun onError(exception: ResponseException) {
            }
        })
}

private fun startChatActivity(chat: ConnectycubeChatDialog) {
    val intent = Intent(this@LoginActivity, ChatMessageActivity::class.java)
    intent.putExtra(EXTRA_CHAT, chat)
    startActivity(intent)
}
```

2. After a successful login I create a private chat and then open ChatMessageActivity. Then I get an empty screen with the message "Something went wrong, try again later". You should uninstall the app before testing this. I've had this issue for a few days and I hope you can help me fix it. Thanks.

Please provide a full log from logcat and add more information on which step you get this error (login to the chat, loading messages, or another way).
> Please provide a full log from logcat and add more information on which step you get this error (login to the chat, loading messages, or another way).

In the logs I found this line: 'Caused by: kotlin.UninitializedPropertyAccessException: lateinit property modelMessageSender has not been initialized'.

Please check that you initialise the field modelMessageSender in your ChatMessageActivity class before sending a message.

> In the logs I found this line: 'Caused by: kotlin.UninitializedPropertyAccessException: lateinit property modelMessageSender has not been initialized'. Please check that you initialise the field modelMessageSender in your ChatMessageActivity class before sending a message.

1. I see, but I want to know why, when the input to ChatMessageActivity is an instance of ConnectycubeChatDialog and I pass that object, the code does not work as in your example. There are two ways to start ChatMessageActivity. In the first case I click on an item in the ChatDialogActivity screen, which opens ChatMessageActivity; that's OK, the messages synchronize. In the second case I create an instance of ConnectycubeChatDialog and pass it into ChatMessageActivity; that doesn't work well, the messages don't synchronize. I still receive the message, but it does not have the two blue check icons at the bottom of the message. How can I fix this? I want to use it to make a real app. I really need a solution for this case; could you answer me soon?

We need the FULL log to investigate the behavior of your app logic. I asked for this information in my previous comments, but you provided only the stacktrace of the error.
From the provided information we can't understand whether you sent the message or not, whether the sender received a message status packet or not, etc. I think this issue is related to your new logic, but without the requested information we can't help you.

Can you check out this link: https://drive.google.com/file/d/1LdRHn5XIglWWiCncCi9bYMQSa-V8MUBS/view?usp=sharing

Please download the project and run it. When you log in with two different users (I already set two users in the project) and chat, you will understand. Thank you; please respond to me soon.

Today I built your app and tested it. After the first start of the app the behavior looks normal, but when I clicked the back button I got the behavior you described before. After a quick review of your code I found this part:

```kotlin
ConnectycubeRestChatService.createChatDialog(dialog)
    .performAsync(object : EntityCallback<ConnectycubeChatDialog> {
        override fun onSuccess(createdDialog: ConnectycubeChatDialog, params: Bundle) {
            // startChatActivity(createdDialog)
            chatDialogListViewModel.createNewChatDialog(createdDialog)
```

Why do you call chatDialogListViewModel.createNewChatDialog in the success block of the ConnectycubeRestChatService.createChatDialog request? As a result you call the same request again, because the method chatDialogListViewModel.createNewChatDialog calls ConnectycubeRestChatService.createChatDialog inside. If you look at the logs, you will find a new request for dialog creation with these parameters:

name=null occupants_ids=1479801,1479875 type=3

As a result you get a new dialog with name 'null'. Then you asked for the messages of this dialog and received an empty list, which is expected behavior, because you don't have any messages in a new dialog. Please review your code and fix the unclear behavior of your app.

> Today I built your app and tested it. After the first start of the app the behavior looks normal, but when I clicked the back button I got the behavior you described before.
> After a quick review of your code I found that you call chatDialogListViewModel.createNewChatDialog in the success block of the ConnectycubeRestChatService.createChatDialog request. As a result you call the same request again, because the method chatDialogListViewModel.createNewChatDialog calls ConnectycubeRestChatService.createChatDialog inside, and you get a new dialog with name 'null'. Then you asked for the messages of this dialog and received an empty list, which is expected behavior, because you don't have any messages in a new dialog. Please review your code and fix the unclear behavior of your app.

It was my bad; I'm so sorry about that. I checked and wrote new code to make my issue clearer. Please check out this link: https://drive.google.com/file/d/1z8MIiRLtEZKgXOhGop3o_tDvM10VjHGS/view?usp=sharing

When you run the project you will get a message "Error while loading users", but don't pay attention to that. The issue I mentioned here is that the message I sent is not synchronized. When I reinstall and log in again, the message synchronizes, like the first message in the picture below.

Now I can't reproduce the issue from yesterday; what help do you need now?

> Now I can't reproduce the issue from yesterday; what help do you need now?

Like I said before, I still receive the message, but it does not have the two blue check icons at the bottom of the message when I'm chatting. Could you try the chat function on two devices? You will understand better.
I ran the app on two devices at the same time and chatted, but the message still doesn't have the two blue ticks at the bottom, like the picture below.

As you can see in our sample, ChatMessagesStatusListener is initialised in ChatDialogActivity, but in your code you don't use this activity; that is why you don't get the statuses of messages. You can move the needed logic from ChatDialogActivity to ChatMessageActivity, but pay attention to what and how you do it in your code.

> As you can see in our sample, ChatMessagesStatusListener is initialised in ChatDialogActivity, but in your code you don't use this activity; that is why you don't get the statuses of messages. You can move the needed logic from ChatDialogActivity to ChatMessageActivity, but pay attention to what and how you do it in your code.

Finally, I'm done with this issue. Thank you for spending time supporting me. Thank you so much, and have a nice day.
GITHUB_ARCHIVE
The Legend of Futian, Chapter 2476 – Buddhist Spell

There were several powerful spells in Buddhism that were extremely effective. There were even spells that could transcend the deceased and send them into the cycles of reincarnation. The spell Ye Futian had used just now was the Vajra Spell, a particularly domineering type of spell. It was a perfect complement to the Alacanatha Battle Form, and together they formed a formidable and unstoppable duo. It was no wonder that the Buddhist cultivator had been unable to stop him from advancing.

Bang!

Another great Buddha stepped out at this moment. This great Buddha was a Buddhist cultivator under Tianlun Vajra Buddha Lord. He had an incredible aura that gave the onlookers a feeling of tremendous aggressive pressure. As he stood before Ye Futian, a golden Dharma appeared behind him, as a domain suddenly manifested between heaven and earth with Ye Futian in the middle of it. High above the heavens, many glaring Vajra Buddhas appeared, pressing down with mighty coercion.

From the highest level, the Buddha Lords watched Ye Futian making his way up towards them. Some Buddha Lords whispered, "I didn't expect that someone from the Divine Prefecture, after having cultivated Buddhism for only a few months, could reach this level of achievement. Evidently, unless the strong disciples of the Buddha Lords intervene, it will not be easy to stop Benefactor Ye."

In another direction, many Buddhist cultivators looked at one another. Shenyan Arhat was among these cultivators. Not long ago, they had said that Ye Futian had only cultivated Buddhism for months and did not stay long at the many locations he visited.
He would visit some ancient temple for two or three days before moving on to another one. They did not think this was the way for anyone to cultivate Buddhism properly.

Had he actually cultivated the Buddhist spell? Although the Giant Spirit Buddha was not some major Buddhist figure, he was, after all, an existence in the Ninth Realm of the Buddhist Path. Nevertheless, he was unable to break apart Ye Futian's battle form. The gap between the two was glaring. This proved that Ye Futian's strength was so great that, unless they were top-level Buddhist cultivators, it would not be easy to resist him.

"Benefactor Ye has acquired the essence of the Alacanatha Battle Form. Apparently he accomplished a great deal in these recent few months of cultivation. Nobody should underestimate him," a great Buddha commented as he looked at Ye Futian, who was below.

All the buddhas cultivated the same methods, but Buddhism was boundless, and the way it was cultivated by each person was different as well. It was the same with figures such as the Buddha Lords; their ideals and philosophies differed depending on who they were.

After hearing the words of Shenyan Buddha Lord, one of his disciples immediately came forward. He, too, was a cultivator of the Ninth Realm with a terrifying aura of cultivation. He stood facing Ye Futian, opened his divine eyes, and looked at Ye Futian. It was almost as if he could see through him.

Ye Futian raised his head to look at the other man and thought, A disciple of Shenyan Buddha Lord? Earlier, these people had stopped him in the sacred ground of the Western Heaven. If they had not been on a killing hiatus because of the All Buddhas' Fest, perhaps they would be seeking vengeance for Zhu Hou today!
At the same time, with the Sound of Buddha that poured out of Ye Futian's mouth, many of the Buddha phantoms in the void began to fracture, then shattered. Buddhist mantras in the form of runes landed on them, causing their golden bodies to collapse and pulverize.

However, a series of ancient, golden words continued to pour from Ye Futian's mouth as the Sound of Buddha lingered. The Buddhist cultivator who had walked out had a look of extreme caution on his face. This was a Buddhist spell.

Ye Futian opened his eyes and looked at all the Buddhas.
Then he walked forward with a solemn expression, hands clasped together in front of him. He kept a serious and dignified attitude, without being the least impertinent. His lips moved slightly, and Sounds of Buddha seemed to come out of his mouth. It was rather difficult to hear what he was saying; only the residual Sound of Buddha was perceptible.

On both sides, there was an accumulation of a good number of wounded cultivators. However, Ye Futian was merciful. He didn't go overboard and hurt anyone grievously; all of them were only slightly injured. After all, this was the Spirit Mountain of the Western Heaven, the supreme holy land of the world of Buddhism, the place where the Lord of All Buddhas once cultivated.

All around Ye Futian, powerful and domineering Vajras holding the Dharma spewed out mantras as an unparalleled golden Light of Buddha radiated from them. When those many arms blasted down for the kill, they found they could not move him a single inch.

Those great Buddhas felt a sense of déjà vu as they beheld this sight. Hundreds of years ago, Donghuang the Great, like him, walked all the way up to the top and met the Lord of All Buddhas.

Seeing that Ye Futian was so impossibly strong, some Buddhist cultivators stepped forward one after another. Some wanted to stymie Ye Futian's forward progress, and many wanted to test Ye Futian's strength. But every one of them, without exception, was unable to stop him.
It was pure coincidence that Ye Futian had first cultivated this spell. He had already cultivated the Vajra Demon-slaying Beat before, which was a rhythm technique of Buddhism. It turned out that the Vajra Demon-slaying Rhythm was based on the Vajra Spell, and was in fact a part of the Spell itself.

"Could it be that the Buddhas have cultivated the methods for many years and are still not as good as the few months somebody spent cultivating?" another great Buddha looked over the crowd and asked. This great Buddha was none other than Shenyan Buddha Lord, whose speech was just as belligerent. His gaze was frightening. Zhu Hou, who had been killed in Jianan City, had been a disciple of his.
With an established foundation and his expertise in the way of rhythm, Ye Futian's cultivation of the Vajra Spell was a natural choice. He managed to master it easily, and its power was indeed domineering and tyrannical.
OPCFW_CODE
I received a mail from Google Home saying that Google will stop managing any network settings for OnHub users. I have never heard of a router company making customers stop using their router when it has no working issues. Google never asked users whether they wanted a newer Nest Wifi router instead of the OnHub router. The OnHub was quite an expensive router at the time. Even if Google wants to stop the router services, I think they must give us options for managing SSID, password, connected devices, Bridged/NAT mode, DHCP/static IP, speed tests and error logs. I don't want my OnHub router to be managed through Google servers; I want my router on my internal network at home, without Google Nest Wifi servers. Lots of users paid lots of money for OnHub routers, and I think lots of OnHub routers still work without any problem. And give us other options besides the coupon – in practice it's worth 10%, not the 40% claimed. If someone from Google sees this issue, please reconsider this decision.

Sorry that you're frustrated with the OnHub discontinuation. I know for a lot of people it's been a solid product. While the OnHub will still give a WiFi signal, as you mentioned, management through the app will no longer be possible. I can pass your feedback on to our teams internally, and you're actually able to do so as well through the Home app. If there's anything else I can do for you, please let me know.

Oh, let me clarify that a bit, kungmo. Within the Home app, there's the ability to submit feedback to our internal teams. When you submit feedback, it gives you the option to type in your own message. What I meant is that I'll pass your comments along from this thread, and you can submit your own feedback through the Home app as well.

I bought a new TP-Link Deco 5300 three-pack for $349 from Costco. I am not considering further Google networking products, since they don't support WiFi 6 (the Deco 5300 does), and because of this artificial curtailing of management of my OnHub.
The Deco has an app and a web interface, so I feel confident they won't bork me like Google did. So, what to do with my four OnHub devices that no one will want because of this artificial sunsetting? Some enterprising hacker will likely come up with a way to hack into the OnHub, either with an app or a web interface, that will circumvent Google's foolishness. Google's stance on discontinuing any way to manage your existing OnHub wifi router is utterly ridiculous and just another example of how out of touch the elites in San Francisco have become. I was in Costco today and they are selling the old Google OnHub WiFi router three-packs for about $200. I feel sorry for those shoppers who buy these devices and get a rude awakening a few months later when Mother Google borks them and says, "No soup for you!" Yet another example of evil from the company that once declared "Don't Be Evil." Let's tally recent Google evil:
1. Borked OnHub routers – December 2022
2. Borked Google Workspace for individuals – summer 2022
3. Borked USB audio support for external USB headphone DACs on Pixel 6; software fix pending for a year, due summer 2022
4. I'm sure there are others that people can add.
OPCFW_CODE
✅ Fibaro HC2 ✅ Fibaro HC3 ✅ Fibaro HCL

As of v0.421 of the automationbridge, there is a new notifications platform. It replaces, and adds more features to, the already successful Google Assistant and Sonos voice notifications service. The new platform allows you to create multiple notifications and use any device in your controller as a trigger, including scene activations, to send notifications to multiple destinations. All existing functionality for the Google & Sonos announcement services remains in place alongside this new platform.

You can now set up and send push notifications to your mobile phone using the Pushover app and platform (more info on this platform can be found here). This is a paid app that you will need to buy to use the service; however, it is available for a 7-day trial, so that you can fully test out the service in conjunction with the automationbridge. You can also configure any notification to send a web hook, or action a web URL, in addition to other notifications.

How to use the Notifications platform

You will find the notifications page on the main header bar on the home page of the automationbridge. Before you can send notifications, you must set up some destinations. These can be:
- Google Assistant Speakers
- Sonos Speakers
- Push Notifications

To enable each destination, just click the Enable button in the relevant section, and then follow these guides. Once you have your destinations set up and ready, you can continue setting up your notifications. Just follow the steps below. In this guide I have all available destinations enabled to show all options; depending on your platform, some of these may not be available.

Step 1 - Click the Add button in the top section of the page, Notifications.
Step 2 - Select the trigger device or scene for this notification and then click Next. (Only those devices that are currently supported will be shown; further support for other device types is under development and will be added in future updates.)
Step 3 - Select the state change that you want this notification to be actioned on; this will change based on the device type selected in Step 2.
Step 4 - Enter the notification alert text that is to be sent or announced.
Step 5 - Now select the destinations/actions to be executed; any combination can be selected. Now click the Add button to complete the addition of this notification.
You will be returned to the Notifications page. From here you can:
- Add more notifications.
- See all notifications that you have created.
- Test a notification (using the speaker icon).
- Edit and delete notifications.
This completes the setup of a notification. At this time the following device types are supported as triggers:
- Switches/Lights/Dimmers/RGBW: ON & OFF
- Motion Sensors: Motion Detected / No Motion
- Door/Window Sensors: Opened / Closed
- Temperature & Humidity Sensors: Value over threshold
- Scenes: Activation
- Alarm: State changes (ARM/AWAY/OFF etc.)
Further device types and their actions will be added in future updates.
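The guide above mentions the Pushover platform for push notifications. For orientation, here is a minimal Python sketch of what a Pushover notification amounts to under the hood: a form-encoded POST to the `messages.json` endpoint. The token and user key below are placeholders, and this is a sketch, not the automationbridge's actual implementation:

```python
# Minimal sketch of sending a push notification via the Pushover API.
# APP_TOKEN and USER_KEY are placeholders, not real credentials.
import json
from urllib import request, parse

PUSHOVER_URL = "https://api.pushover.net/1/messages.json"

def build_payload(app_token, user_key, message, title=None):
    """Assemble the form fields Pushover expects for one notification."""
    payload = {"token": app_token, "user": user_key, "message": message}
    if title:
        payload["title"] = title
    return payload

def send_notification(payload):
    """POST the payload to Pushover (requires network access)."""
    data = parse.urlencode(payload).encode()
    with request.urlopen(request.Request(PUSHOVER_URL, data=data)) as resp:
        return json.load(resp)

payload = build_payload("APP_TOKEN", "USER_KEY",
                        "Front door opened", title="automationbridge")
print(payload["message"])
```

The automationbridge handles all of this for you; the sketch only shows why the Pushover app token and user key are the two pieces of information the destination setup asks for.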
A few days ago we mentioned that Microsoft is testing new pop-ups on the bottom right side of the desktop, above all apps and games. The pop-up window appears even during gaming sessions. Some users reported seeing the pop-up while playing a game in full screen. I don't know if you can grasp the magnitude of this news, but I am sure that no one consented to Microsoft abusing its ability to analyze how you use your computer in order to show Bing pop-ups just because you use Chrome with Google Search. Of course, I'm sure Microsoft is legally covered by the dozens of license agreements that no one reads but everyone agrees to. The notification that appeared for so many users wasn't even a normal notification. It didn't appear in the notification center in Windows 11, nor was it connected to the section of Windows 11 that suggests new features. It was literally a malicious executable that magically appeared in the c:\windows\temp\mubstemp path. This file is digitally signed by Microsoft. "We are aware of these reports and have paused this notification while we investigate and take appropriate action to address this unintended behavior," Caitlin Roulston, director of communications at Microsoft, told The Verge. You probably haven't seen the latest pop-up, and that's because Microsoft is experimenting with a small number of Windows users. If there is an outcry, the company will try to find another way to serve you its propaganda. You may consider this to be Microsoft's operating system and that you should accept it as is. After all, Google has similar notifications on its websites to get people to use Chrome, and it constantly serves up the spam message for YouTube Premium. But Microsoft's behavior is more than a mere exhortation. You honestly don't need pop-up windows while playing a game, or over the apps you're working on.
Windows is not free software; it requires a license, which almost every consumer pays for. This could be in the form of the price of a Windows-licensed OEM laptop, or a product key if you built your own PC. Microsoft should respect the fact that there are many people who pay for Windows and don't want to see ads. Windows is an important productivity tool for many people and should not be treated like an ad-laden streaming service.
What is the rarest color of guinea pig? White crested guinea pigs are the smallest and most delicate of all guinea pigs. They are also one of the rarest varieties, with reportedly only around 1,000 born each year.
Introduction: What is an albino guinea pig?
Albinism is a condition in which the pigmented cells of the skin, hair, and eyes produce little or no pigment. Albino guinea pigs have a pure white coat and red or pink eyes. They are born with albinism; it is caused by a lack of melanin pigment in the hair and skin cells.
Appearance: What does an albino guinea pig look like?
The albino guinea pig is a rare guinea pig whose coat is lighter than that of other guinea pigs: pure white, with a pinkish hue to the skin. Their pale nose and red eyes can give them an eerie appearance.
Is an albino guinea pig rare?
Albinism is a rare condition in which the body lacks the usual pigment, resulting in white or very light coloring. In humans, it is estimated that about one in every 20,000 people is albino. In guinea pigs the condition arises when the pigment cells produce little or no melanin, the pigment that gives skin and hair their color. Albino guinea pigs are not typically kept as pets because of their rarity and potential health problems.
Behaviour: What is the albino guinea pig's personality like?
Behaviour is an important part of a guinea pig's personality. Albinos tend to be more active than other guinea pigs, and they may be more playful.
Genetics: How are albino guinea pigs born?
Guinea pigs are among the most commonly kept rodents in the world. They originate from South America and were originally raised for agricultural purposes. Guinea pigs come in a wide variety of colors, but albino guinea pigs are rare and considered to be special.
Albino guinea pigs are born with a lack of pigment in their skin and hair, which makes them visually striking. Albinism is the result of a recessive gene: a pup must inherit the gene from both parents to be born albino.
Do albino guinea pigs have problems?
Albino guinea pigs are very rare and have been bred for generations as show animals. They are usually considered to be of high quality, but some owners report problems with their albinos. These may include health issues not associated with coat color, such as respiratory problems or seizures. Some owners also report that albino guinea pigs are more difficult to housetrain than other guinea pigs and may require more attention and care.
Health: What are the health issues of albino guinea pigs?
The health issues of albino guinea pigs are generally the same as those of other guinea pigs, with a few exceptions. Albino guinea pigs are more likely to develop skin conditions, such as atopic dermatitis (eczema), and can be more prone to allergies. In addition, they may be at greater risk for certain infections, such as pneumonia or urinary tract infections.
A true albino guinea pig lacks pigment entirely: a genetic mutation prevents melanin production, leaving the animal completely white with red or pink eyes. (A white guinea pig with patches of color on its body is not a true albino.) Some people keep albino guinea pigs as pets because they are unique and interesting animals, while others use them in scientific research.
You can serve media assets referenced in Web playlist files (files with .isx file name extensions), provided that the files are stored in folders on the Web server computer that can be accessed by the Web Playlists feature. By default, these folders include the Web site root (<systemdrive>\inetpub\wwwroot). If you want to serve media assets stored in your user account folders (for example, music files stored in <systemdrive>\Users\<username>\Music), you can create impersonation credentials in the Web Playlists feature to enable Web Playlists to connect to the media assets under the context of an authenticated Windows client. This allows you to maintain one set of media files that you can serve to customers, rather than creating copies of the files in the Web site root. This article describes how to provide folder access to Web Playlists so that you can reference media files that reside outside of the Web site root in Web playlist (.isx) files. The procedure in this article uses the Music folder on the local Web server as an example; however, you can use the procedure for any folder on the local Web server computer that stores media assets. For more information about adding media assets to Web playlist files, see IIS Media Services Help. Web Playlists can access folders by using the credentials for local user accounts that have at least Read access to the folder. To allow Web Playlists to access the Music folder on your Web server, perform the following procedure. To impersonate user account credentials - In Web Playlists, in the Actions pane, click Edit Impersonation Settings. - In the Impersonation feature page, in the Actions pane, click Add. - In the Add Impersonation Setting dialog box, do the following: - In Path, enter the path of the folder that you want Web Playlists to access (for example, enter C:\Users\<username>\Music). You can also use the Browse button to enter the path. - In Logon method, select Clear Text. - Click Set. 
- In the Set Credentials dialog box, enter user account logon credentials (user name and password) for an account that has at least Read access to the folder. - In the Add Impersonation Setting dialog box, click OK. The example procedure in this article created impersonation credentials in the Web Playlists feature to enable Web Playlists to connect to media assets stored in the Music folder under the context of the local Administrator account; however, you can use the local account credentials for any authenticated Windows client that has at least Read access to the folder. You can add media assets stored in the Music folder (or in another user folder) to a Web playlist (.isx) file; however, Web Playlists cannot download these assets to clients until you complete the above procedure. For more information about adding media assets to Web playlist (.isx) files, see IIS Media Services Help. Discuss in IIS Forums
From: Guillaume Melquiond (gmelquio_at_[hidden]) Date: 2002-09-10 02:25:40 On Mon, 9 Sep 2002, David Bergman wrote: > I got the impression that you have fixed your library to one-dimensional > continuums (in C++ defined by float, double and long double). > This impression stemmed from the arguments in this verification process. > That is what I meant by you regarding the template instantiations as > something I denote by "implementation specialization". This is why the > requests for more generic use (traversal, partial ordered types etc.) > went unresponded, or responded by reminders of the scientific purpose of > this library, that it should allow for direct substitution of exact > arguments with these intervals in arithmetic and transcendental > There is nothing wrong with that, by the way. A purely "mathematical" > (read "operating on a linear continuum") interval is good in itself. I > just do not want us to cheat ourselves by thinking that the interval > abstraction could be used for other elements. And, I do not mean "being > able to instantiate without compiler objections". I mean real use. > No problem, by inserting it into "boost::numeric" the nature of the > abstraction will be further defined, as well. > To reiterate what I meant with my two categories of template > specializations (or instantiations): (1) conceptual specialization, is > where we get some new concept, inheriting properties from the template > and (2) implementation specialization, is where we get the same concept, > but with other implementation-defined characteristics. I argue that > "interval<float>" and "interval<double>" are category-2 variants. And interval<rational<...> > and interval<int> and interval<mpfr> and interval<a_continuous_numeric_type> and etc? So yes, the library deals with one-dimensional continuums. But, after all, this library is an *interval arithmetic* library. 
And since the library was designed in order to be used with any numeric type (with an arithmetic total order), I think the library is more than just an "implementation specialization" for 'float', 'double' and 'long double'. The specializations really inherit properties from the template. Indeed, all the operations on intervals are defined once and for all; they are generic and aren't specialized. > A lot of Boosters complain and warn other developers to not be overly > abstract, so the pragmatic choice you made, with heavy focus on rounding > policies is probably the right one. Concerning 'complex', you seem to prefer 'interval<complex>' rather than 'complex<interval>'. So, let's see, what could an 'interval<complex>' be? The first thing which comes to mind is probably to define 'interval<complex>' by a rectangle of the complex plane. And the bounds would be the upper left corner and the lower right corner. And the order would be the partial order that allows us to do that (I would be glad to draw a little figure, but I don't like drawing ascii-art, sorry). With this definition the addition of two 'interval<complex>' works correctly. But unfortunately, the multiplication does not work at all: the inclusion property isn't respected. Whose fault is it? It's the fault of the interval arithmetic which only works with continuous numeric types. And the purpose of the library is to implement these generic operations. On the other hand, 'complex<interval>' works correctly if 'complex' has defined generic operations (like '(a,b)*(c,d) = (a*c-b*d,a*d+b*c)'). And the inclusion property is always respected. It's the reason why a lot of people have spoken about 'complex<interval>' rather than 'interval<complex>' during this review. However, the best way to define an 'interval_complex' which isn't 'complex<interval>' is to use a spherical representation: a point 'z' is in the interval '(a,r)' (of type complex*real) iff '|z-a| <= r'. 
This way, by redefining all the operations, you get a new type that respects the inclusion property. Moreover, in some situations, this type can be more precise than 'complex<interval>' (since complex multiplication preserves spheres and not rectangles). But the operations are no longer the generic ones of interval arithmetic. The new type is no longer an interval (there are no bounds, it can't be linearly traversed, it can't be bisected, etc.). I hope I made it clear there are two straightforward ways to define a type on complex that respects the inclusion property. The first one is to use 'complex<interval>' but it is impeded by the way the Standard defines 'std::complex' (I personally would be happy to have a generic 'boost::complex'). The second one is to use a spherical representation but it's outside the scope of this library since it doesn't use generic operations anymore (it doesn't even require the numeric type to be ...)
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
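To make the inclusion property concrete, here is a rough Python model of the 'complex<interval>' approach discussed above (an illustrative sketch, not Boost code): each component is a real interval, multiplication uses the generic '(a,b)*(c,d) = (a*c-b*d, a*d+b*c)' rule, and a randomized check confirms that products of member points always land inside the resulting box:

```python
import random

class Interval:
    """A closed real interval [lo, hi] with inclusion-monotone arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def contains(self, x):
        return self.lo <= x <= self.hi

def cmul(a, b):
    """'complex<interval>' product: a and b are (re, im) pairs of Intervals."""
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

def inclusion_holds(a, b, trials=1000):
    """Sample z1 in box a and z2 in box b; check z1*z2 lies in cmul(a, b)."""
    re, im = cmul(a, b)
    for _ in range(trials):
        z1 = complex(random.uniform(a[0].lo, a[0].hi),
                     random.uniform(a[1].lo, a[1].hi))
        z2 = complex(random.uniform(b[0].lo, b[0].hi),
                     random.uniform(b[1].lo, b[1].hi))
        z = z1 * z2
        if not (re.contains(z.real) and im.contains(z.imag)):
            return False
    return True

box1 = (Interval(1, 2), Interval(-1, 1))
box2 = (Interval(-3, 0), Interval(2, 4))
print(inclusion_holds(box1, box2))
```

This is exactly Melquiond's point: because each component operation is inclusion-monotone interval arithmetic, the generic complex multiplication formula keeps the inclusion property for free.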
I extensively researched the companies that will sell laptops with no-OS or Linux preinstalled. This information is distressingly difficult to find, so I present a list below. I encourage you all to vote with your dollar and do not send a single penny to the monopoly in Redmond. If you want a linux laptop, pay for a linux laptop or there will be nobody selling them. The "Windows refund" efforts have largely failed, so it's best not to send any money to Redmond in the first place. There are two kinds of companies with respect to manufacturing laptops. Original Design Manufacturers (ODM's) actually manufacture the hardware, which they then sell to Original Equipment Manufacturers (OEM's) that then install a CPU, RAM, hard drive, and software. Some major ODM's are Asus, Clevo, Compal, FIC, Mitac, Quanta, Uniwill, and Wistron. Major OEM's are the ones you're probably familiar with: Compaq, Dell, Gateway, HP, IBM, Sony, and Toshiba. TuxMobil has a more complete table of some of these manufacturing relationships. There are vendors out there that will sell you a branded OEM laptop with linux on it. Since none of the major OEM's sell linux preinstalled on laptops at this time, this means that any vendor claiming to sell an OEM laptop with linux on it paid for windows and removed it (now with the exception of Dell, see below). I think this is a deceptive practice so I do not list such vendors below. Vote with your dollar folks. Also if you are interested in a laptop from a major OEM, call them up and ask for linux. If we make enough noise, eventually even the big guys will sell linux too (IBM and Dell tried in the past but sales were low). You should realize that most of these companies below purchase the hardware from these ODM's just like the big name OEM's. So when you find some no-name laptop, it is usually equivalent to some branded laptop that never touched the hands of one of the major OEM's. 
(And figuring out exactly *which* brand-name laptop it is equivalent to can be extremely difficult.) Mine, for instance, goes by the names Compal CL00, ChemBook 3020, and Toshiba 3000-S304. Some of the vendors below claim to manufacture their own notebooks, but what this means is that they buy them from an ODM and put in a hard drive/CPU/RAM, which is why you will find identical-looking cases at several of these vendors. Lastly, I encourage people to post their experiences to Reseller Ratings or a similar company/product ranking site, so that we can all learn from your experience, and vendors are pressured economically into being nice to their customers. Update: TuxMobil now has a more complete list than mine (and so does LXer) which you should look at. (But beware: many vendors in their lists will pay for windows and remove it -- the vendors below definitely do not.) Most (all?) of the vendors that preinstall linux offer some form of software and hardware support. As always, check the individual vendor's warranty and support policies to see if they meet your needs before buying. Purchasing a no-OS laptop can be somewhat treacherous. Since there is no software, there is no reliable way for the vendor to identify hardware problems. Some tech support requests are hardware problems, others are software problems, and others are user error. When some customers use Debian, others Red Hat, others *BSD, you can see tracking down problems would be extremely difficult. If you purchase a no-OS laptop, you should be capable of installing your favorite OS and performing basic tech support for yourself.
A great example of Blind Testing is the famous Pepsi vs Coke taste test. No one gets to see the brand name on the can until they have tasted the contents. - To get the best results, we need to remove any personal bias. Ezoic does this by testing new layouts to your users - and keeps the testing 'blind'. - The system rates the merits of each variation by looking at the results of each test rather than relying on the opinion of one person or a small group of people. - The good news is that you are able to easily see how each variation is performing in our publisher-user interface (under the Experiments tab). It is very important to realize that the system optimizes based on data, not on a subjective opinion (how one variant 'looks' versus that of another). Sometimes, a variation that one individual might not find aesthetically pleasing, actually works very well for the site's visitors and the user metrics reflect this with higher time on site, page views per visit and lower bounce rate. Attachment to a particular layout or look restricts your ability to improve the site for your users. [Above images - original site (purple background) and some example Ezoic test layouts] You've been working on your site for a long time. It's normal to feel some attachment to how the site currently 'looks'. But remember, you have your traffic because of the quality of your original content that you created, not because of its design in 2013. Remember that there are a huge number of 'ugly' sites out there getting a ton of traffic, because they serve their niche better than any other site. It's unlikely that search engines rank sites by how nice they look. What is 'pretty' to one person won't appeal to another person. Once we eliminate personal bias (including our own), the system is programmed to go back to basics and start testing from the ground up with brand new layouts. 
In the early stages of testing, the Ezoic platform will test big changes - like different column layouts, menu placement and backgrounds. As the system learns, it starts testing out smaller refinements such as colors, fonts, menu styles etc. The system then takes apart the variations that work well and combines them into new variations. Big sites test layout - Ezoic helps you do the same Whether it's an ecommerce site like Amazon, or a digital newspaper like theguardian.com, most big online properties are testing their layout to improve usability, income or conversion rate. Mobile navigation is especially crucial to future success. Is Amazon a 'good looking' site? Did the color of the 'buy it now' button just happen to be orange, or did it get tested to see which color worked the best? What about on mobile or Tablet? It may seem like an elementary example, but it holds true for informational sites as well as ecommerce sites. If you don't test the usability of a site, you cannot know how it stacks up against better performing versions of the same content. The look of a site is subjective to the publisher, but really should be directly related to visitors' actions and user metrics. It's very difficult to find the right layout by testing things manually (or guessing the right layout based on personal taste). The Ezoic system tests thousands of variations simultaneously. The best way to think about it is that the Ezoic platform is a dynamic layout engine, not a list of layouts. The end result is a constantly improved site that is getting better every day on all screen sizes. Testing Layout is hard to do all by yourself Unfortunately, if you don't have a big technical team around you, testing is actually quite hard to do and takes a lot of time. 
If you used an off-the-shelf testing tool like Adobe Target, you would need to construct the experiments (write the new layouts in CSS/HTML), run them concurrently against your control (your old site layout), and decide on the things you're going to change for each test, including things like menus on mobile and tablet, and ad placements. You need to make sure those experiments work on all browsers (Opera Mini, anyone?), and once you've run those tests, you need to collect the user metrics and income from your ad exchange accounts and make decisions about which layouts are winning. You need to make sure that you don't slow the site down, and that it's loading fast on all devices... You then need to keep it all up to date as you conduct tests into the future. If a new browser or device goes mainstream, you need to make sure you're ready for that. Ezoic does all this for you. It's free to use for 2 weeks, and then on a revenue share thereafter. Our aim is to keep your site improving, and to crowdsource content improvement ideas and technology in one place.
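Ezoic doesn't publish its testing algorithm, but the "rate variations by data, not opinion" approach described above is the territory of multi-armed-bandit testing. A toy epsilon-greedy sketch (purely illustrative, not Ezoic's code) that steers traffic toward the layout variant with the best observed engagement:

```python
import random

def epsilon_greedy(rewards, counts, epsilon=0.1):
    """Pick a variant: usually the best performer so far, sometimes a random explore."""
    if random.random() < epsilon:
        return random.randrange(len(counts))
    # Average reward per variant; variants never shown get priority via infinity.
    means = [r / c if c else float("inf") for r, c in zip(rewards, counts)]
    return means.index(max(means))

def run(true_rates, rounds=5000, epsilon=0.1):
    """Simulate visitors; reward 1 if a visitor 'engages' with the shown layout."""
    n = len(true_rates)
    rewards, counts = [0.0] * n, [0] * n
    for _ in range(rounds):
        arm = epsilon_greedy(rewards, counts, epsilon)
        counts[arm] += 1
        rewards[arm] += 1 if random.random() < true_rates[arm] else 0
    return counts

random.seed(0)
counts = run([0.05, 0.08, 0.12])  # hypothetical engagement rates per variant
print(counts)
```

Over time, most traffic flows to whichever variant the data favors, while a small exploration fraction keeps testing the alternatives: the same data-over-opinion principle the platform describes, at toy scale.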
import * as eth from 'eth-connect'
import { getUserAccount } from '@decentraland/EthereumController'
import * as ERC20 from '../currency/index'

// Address of the MANA ERC-20 token contract on Ethereum mainnet
const MANA_CONTRACT = '0x0f5d2fb29fb7d3cfee444a200298f468908cc942'

/**
 * Send MANA to an address
 *
 * @param toAddress Receiver address
 * @param amount Amount in ether to send
 * @param waitConfirm Resolve promise when tx is mined or not
 */
export function send(toAddress: eth.Address, amount: number, waitConfirm: boolean = false) {
  return ERC20.send(
    MANA_CONTRACT,
    toAddress.toLowerCase(),
    +eth.toWei(amount.toString(), 'ether').toString(),
    waitConfirm
  )
}

/**
 * Return the balance of the current user
 */
export async function myBalance() {
  const fromAddress = await getUserAccount()
  return ERC20.balance(MANA_CONTRACT, fromAddress)
}

/**
 * Return the balance of the address
 *
 * @param address Address you are checking
 */
export async function balance(address: eth.Address) {
  return ERC20.balance(MANA_CONTRACT, address)
}
Methods for solving this simple routing problem In my transport geography class, we were asked to do a "best guess" on the route which does the following on this graph: Starts at A Visits each node Returns to A Minimizes distance. I wrote up a program which calculated all of the five-step paths that started from A, visited every node, and returned to A. I then calculated the cost of these paths. How would other people solve this? My teacher gave us a "hint" that we should read about Dijkstra's algorithm, but I couldn't see how to apply that to solve this particular problem. My solution would fall apart pretty quickly as the number of steps or nodes increased. Your program was lucky, because it's not guaranteed to get the best answer. For instance, if all the edges at E had costs of 1 (instead of 3,4,5,3), your program's solution would be markedly inferior to the optimum (with a cost of 8). @Whuber I was hoping that you would share some of your math knowledge here ;) I considered doing a range of steps, maybe between 5 and 10, to guard against that, recognizing it wouldn't provide a guarantee. Not sure how to provide that guarantee, though. A brute force solution uses recursion to solve a slightly more general problem: get from node 'v' to node 'w' while visiting a specified set of additional nodes. Thus, to visit all nodes starting at 'a', look at the solutions that start at 'v', end at 'a', and visit all other nodes, for the cases 'v' = 'b', 'v' = 'd', and 'v' = 'e'. Add the costs 3, 7, and 3 to those solutions, respectively. Pick the smallest cost. @whuber: That's very clear, as far as it goes. I think I can work out what the recursion would look like. I'd like to accept your response, but it's not in the "answers" section. I'd advise you to look up and research the travelling salesman problem for some solutions. Here's a link to a way to formulate a solution. In short, this is a tricky subject to get into/understand: TSP solutions "It's complicated." 
;) Good link, though. I spent some time researching TSP, but didn't find anything as useful as that. Imho, "Dijkstra" is not a good hint. What you are looking at is the so-called "travelling salesman problem": given a list of cities and their pairwise distances, the task is to find a shortest possible tour that visits each city exactly once. Since a brute-force approach performs with O(n!), it reaches its limit pretty fast (WP says 20 nodes is already the limit). Thanks! I didn't find Dijkstra to be especially useful, though another student claimed to. Do TSPs also include the part where you have to return to the origin? I didn't see that mentioned when I looked them up yesterday, although, of course, salespeople need to drive home, too... @canis You are correct: the Dijkstra algorithm is useless for this problem. Note that your problem as stated is not the TSP, because you have not indicated that each node must be visited exactly once, only that it must be visited at some point in the route. In all fairness, whuber, I think that's more of an oversimplification of the problem. I actually do think the OP has been tasked with the TSP; I got the same questions in my degree, and I believe most people will/do
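The brute-force recursion whuber describes (cheapest route from 'v' back to the start while visiting a given set of remaining nodes, minimized over first moves) can be written down directly. A Python sketch, using made-up edge weights rather than the ones from the class graph, and assuming each node is visited exactly once:

```python
from functools import lru_cache

# Hypothetical symmetric distance graph -- not the one from the assignment.
GRAPH = {
    ('a', 'b'): 3, ('a', 'd'): 7, ('a', 'e'): 3,
    ('b', 'd'): 2, ('b', 'e'): 4, ('d', 'e'): 5,
}

def dist(u, v):
    """Edge weight, looked up in either direction."""
    return GRAPH.get((u, v)) or GRAPH.get((v, u))

def best_tour(start, nodes):
    """Cheapest closed tour from `start` visiting every node in `nodes` once."""
    @lru_cache(maxsize=None)
    def go(v, remaining):
        # remaining: frozenset of nodes still to visit before returning home
        if not remaining:
            return dist(v, start)
        return min(dist(v, w) + go(w, remaining - {w}) for w in remaining)
    return go(start, frozenset(n for n in nodes if n != start))

print(best_tour('a', ['a', 'b', 'd', 'e']))  # -> 13 for these weights
```

Memoizing on the (current node, remaining set) pair is the Held-Karp idea, which improves on checking all permutations, though it still blows up exponentially in the number of nodes.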
import Screen from './modules/screen'
import Animals from './modules/animals'
import Zoo from './modules/zoo'
import './assets'
import runtime from 'serviceworker-webpack-plugin/lib/runtime'

if ('serviceWorker' in navigator) {
  runtime.register()
}

const screen = new Screen()
const animals = Animals
const zoo = new Zoo(animals)

class App {
  constructor() {
    this.started = false
    this.soundReady = false
    this.isShuffling = false
    this.container = document.querySelector('.animal')
    this.animalName = document.querySelector('.animal .name')
    this.instructions = document.querySelector('.animal .instructions')
    this.thumbBox = document.querySelector('.thumb-box')
    this.btnStart = document.querySelector('.start-app')
    this.btnShuffle = document.querySelector('.shuffle')
    this.btnPlay = document.querySelector('.play')
    this.btnPlayWord = document.querySelector('.play-word')
    this.playerSound = document.querySelector('.player-sound')
    this.playerWord = null
  }

  init() {
    screen.init()
    window.addEventListener('keyup', this.manageKeyEvents.bind(this), false)
    this.btnStart.addEventListener(
      'click',
      () => {
        if (!this.started) {
          this.start()
        }
      },
      false
    )
  }

  manageKeyEvents(event) {
    if (event.keyCode === 32) {
      // Space: start the app
      if (!this.started) {
        this.start()
      }
    } else if (event.keyCode === 83) {
      // 'S': shuffle
      if (this.started) {
        this.btnShuffle.click()
      }
    } else if (event.keyCode === 80) {
      // 'P': play the animal sound
      if (this.started && this.soundReady) {
        this.btnPlay.click()
      }
    }
  }

  start() {
    this.started = true
    screen.start()
    this.btnShuffle.addEventListener(
      'click',
      () => {
        this.shuffle()
      },
      false
    )
  }

  shuffle() {
    if (!this.isShuffling) {
      const animations = ['shuffle', 'shuffle-alt']
      const randomAnimation = Math.floor(Math.random() * animations.length)
      this.isShuffling = true
      this.btnShuffle.classList.add('disabled')
      this.btnPlay.classList.add('disabled')
      this.soundReady = false
      if (!this.playerSound.paused) {
        this.playerSound.pause()
        this.playerSound.currentTime = 0
      }
      if (this.playerWord !== null) {
        if (!this.playerWord.paused) {
          this.playerWord.pause()
        }
        this.playerWord.remove()
      }
      if (document.querySelector('.animal .instructions') !== null) {
        this.thumbBox.removeChild(this.instructions)
      }
      const thumb = document.querySelector('.animal-thumb')
      if (thumb !== null) {
        this.thumbBox.removeChild(thumb)
      }
      this.thumbBox.classList.add(animations[randomAnimation])
      this.animalName.classList.add('fade')
      const animal = zoo.getRandomAnimal()
      setTimeout(() => {
        // The original passed the *result* of createAnimal to addEventListener,
        // which ran it immediately; calling it directly makes the intent explicit.
        this.createAnimal(animal, animations, randomAnimation)
        this.btnShuffle.classList.remove('disabled')
        this.isShuffling = false
      }, 300)
    }
  }

  createAnimal(animal, animations, randomAnimation) {
    const thumb = document.createElement('img')
    if (this.thumbBox.querySelector('.animal-thumb') !== null) {
      const oldThumb = this.thumbBox.querySelector('.animal-thumb')
      this.thumbBox.removeChild(oldThumb)
    }
    thumb.setAttribute('src', animal.file)
    thumb.setAttribute('alt', animal.name)
    thumb.setAttribute('class', 'animal-thumb')
    this.thumbBox.appendChild(thumb)
    this.animalName.innerHTML = animal.name
    this.thumbBox.classList.remove(animations[randomAnimation])
    this.animalName.classList.remove('fade')
    this.setWord(animal)
    this.setSound(animal)
  }

  setWord(animal) {
    const animalName = animal.name.toLowerCase()
    const mp3File = `public/audio/names/${animalName}.mp3`
    const oggFile = `public/audio/names/${animalName}.ogg`
    const mp3AudioSource = document.createElement('source')
    const oggAudioSource = document.createElement('source')
    mp3AudioSource.setAttribute('src', mp3File)
    oggAudioSource.setAttribute('src', oggFile)
    this.playerWord = document.createElement('audio')
    this.playerWord.setAttribute('class', 'player-word')
    this.playerWord.append(mp3AudioSource)
    this.playerWord.append(oggAudioSource)
    this.container.append(this.playerWord)
    this.playerWord.play()
  }

  setSound(animal) {
    const timeInit = animal.audio.sound.start
    const timeEnd = animal.audio.sound.end
    this.playerSound.currentTime = timeInit
    this.btnPlay.classList.remove('disabled')
    this.soundReady = true
    // Use onclick (not addEventListener) so repeated shuffles don't stack listeners
    this.btnPlay.onclick = () => {
      this.playerSound.play()
      this.btnPlay.classList.add('disabled')
    }
    this.playerSound.ontimeupdate = () => {
      if (this.playerSound.currentTime > timeEnd) {
        this.playerSound.pause()
        this.playerSound.currentTime = timeInit
        this.btnPlay.classList.remove('disabled')
      }
    }
  }
}

const myApp = new App()
myApp.init()
STACK_EDU
Reinforcement learning is a subfield of machine learning that focuses on training agents to make decisions and take actions in an environment in order to maximize a cumulative reward. Reinforcement learning is inspired by the principles of behavioral psychology, where an agent learns by interacting with its environment and receiving feedback in the form of rewards or penalties. In reinforcement learning, the agent learns through a trial-and-error process. It starts with minimal knowledge about the environment and takes actions based on its current state. After each action, the agent receives feedback from the environment in the form of a reward signal that indicates the desirability of the action taken. The agent’s goal is to learn the optimal sequence of actions that maximizes the cumulative reward over time. What are the key components of reinforcement learning? - Agent: The learner or decision-maker that interacts with the environment. It takes actions based on its policy. - Environment: The external system with which the agent interacts. It provides feedback to the agent in the form of rewards or penalties based on the actions taken. - State: The current situation or condition of the environment at a given time. The state helps the agent to make informed decisions. - Action: The choices or decisions made by the agent based on its policy and the current state. - Reward: The feedback signal from the environment that informs the agent about the desirability of its actions. The agent aims to maximize the cumulative reward over time. - Policy: The strategy or set of rules that guides the agent’s actions based on the current state. It maps states to actions and determines the agent’s behavior. Reinforcement learning – Applications and use cases - Game playing: Reinforcement learning has achieved remarkable success in game playing, surpassing human-level performance in complex games like chess, Go, and video games. 
Deep reinforcement learning algorithms have been used to train agents to master these games through trial and error. - Robotics: Reinforcement learning enables robots to learn complex tasks and manipulate objects in real-world environments. By interacting with the environment, robots can learn grasping, locomotion, and navigation skills. Reinforcement learning is also used for autonomous drone control and industrial automation. - Autonomous vehicles: Reinforcement learning plays a crucial role in training autonomous vehicles to navigate and make decisions in complex driving scenarios. Agents learn to interpret sensor inputs, respond to traffic conditions, and optimize driving behaviors to maximize safety and efficiency. - Natural language processing (NLP): Reinforcement learning is applied to NLP tasks such as dialogue systems, machine translation, and text summarization. Agents learn to generate human-like responses, translate languages, and summarize information based on feedback and rewards. - Recommendation systems: Reinforcement learning can enhance recommendation systems by learning user preferences and adapting recommendations over time. Agents can optimize recommendations based on user feedback, improving personalization and user satisfaction. - Finance and trading: Reinforcement learning is used in algorithmic trading, portfolio management, and financial decision-making. Agents learn to make optimal trading decisions, manage risks, and adapt to changing market conditions. - Healthcare: Reinforcement learning has applications in healthcare, such as personalized treatment recommendations and optimizing medical interventions. Agents learn treatment policies by analyzing patient data and medical outcomes, leading to improved patient care. - Resource management: Reinforcement learning is utilized in optimizing resource allocation and scheduling in areas like energy management, logistics, and manufacturing. 
Agents learn to make efficient decisions on resource allocation and utilization to improve operational efficiency. - Cybersecurity: Reinforcement learning can aid in cybersecurity by detecting and mitigating threats. Agents learn to identify patterns of malicious behavior, detect anomalies, and adapt security measures to protect against cyberattacks. - Personalized education: Reinforcement learning techniques can be used to develop adaptive learning systems that tailor educational content and learning paths to individual students. Agents learn to provide personalized recommendations and adapt teaching strategies based on student progress and feedback. These are just a few examples. Reinforcement learning’s ability to learn from experience and optimize behavior in dynamic environments makes it a promising approach in many complex and data-rich scenarios.
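The agent-environment loop described above (state, action, reward, policy improvement) can be made concrete with a minimal tabular Q-learning sketch. The corridor environment, reward values, and hyperparameters below are invented for illustration; this is a toy example of the principle, not a production RL implementation. The behaviour policy here is uniformly random, which works because Q-learning is off-policy: it still learns the values of the greedy policy.

```python
import random

random.seed(0)  # reproducible run

N_STATES = 5                # corridor cells 0..4; reaching cell 4 ends the episode
ACTIONS = [-1, +1]          # step left / step right
ALPHA, GAMMA = 0.5, 0.9     # learning rate and discount factor

# Q-table: estimated cumulative reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Behaviour policy: uniform random exploration (off-policy learning)
        action = random.choice(ACTIONS)
        next_state = min(max(state + action, 0), N_STATES - 1)
        # Reward signal from the environment: 1 only on reaching the goal
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Temporal-difference update toward reward + discounted best future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Extract the greedy policy from the learned values
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

After training, the greedy policy should choose +1 (move toward the goal) in every state, illustrating how repeated trial and error plus a scalar reward is enough to recover the optimal behaviour.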
OPCFW_CODE
I'm new to creating themes and have a lot of questions (Zelda Item Screen theme) Tamoketh last edited by So I want to make a theme for myself based on Zelda games, Link to the Past specifically. If you just want to read the questions, they are at the end of the post. The main idea is that, instead of having the carousel scrolling through the systems as normal, all systems (with roms) would be displayed and a selection cursor would move through the systems. For reference, this is what I'm going off of: So the main ITEM area taking up the top-left corner of the screen would be showing icons for all the systems that have roms for them. Since this is mostly for personal use, having a "limit" of 20 icons shown won't be a problem. Instead of scrolling the carousel of systems, I'd have a cursor that would move through the list. The top-right window in green would show the system's name. The two yellow windows on the right and bottom-right would show 2 screenshots of games on the system. The red window along the bottom would have system information like release date, sales, etc. The background would show a blurred 3rd screenshot. So, there are probably many ways I could do this if I had more experience with how EmulationStation/RetroPie works, but here's what I'm planning on doing. Showing the screenshots, system information and system name should be easy enough. I'll have the carousel be hidden and just have the system icons loaded directly from the main XML. Then, I'll move the cursor image over whatever the currently selected system is. Even if it still moves the carousel, I'll arrange the icons in the same order as what the carousel would show. You won't be able to move the cursor in all 4 directions, but at least it'll move as expected left-right. So, my list of questions: - Is there an easier method than my planned work-around?
Instead of having a fake cursor that moves around governed by the theme.xml for each system, is there an easier/simpler way to get the desired effect?
- Despite the carousel only being horizontal or vertical, is there a way to have my planned item selection allow moving in 4 directions? Since all it would do is use the default carousel, but have it be hidden to only show my item selection, I almost assume this isn't possible. If that's the case, is there a way to have it jump to specific systems? Like have variables that track its position, so I could know if I'm on the NES and press down, that it would jump to the SNES that happens to be below it?
- How would I go about hiding the carousel? Is there a "visible" or "hidden" tag I can assign to it? Do I have to just move it off screen or set its Z position to be under the background so it's blocked off?
- If something is set in the main xml and in the theme.xml files, which takes priority? EX: I set the POS of my cursor in the main xml as a default like 0.15 0.15, then in the theme.xml of the NES, SNES and GBA set the POS to correspond to their system icons (EX: 0.15 0.15 for NES, 0.2 0.2 for SNES and 0.25 0.25 for GBA). Is the POS from the main xml always used, so the cursor would not move? Or would it take the POS from the theme.xml of the system I'm currently selecting (EX: SNES), and if I scrolled to the NES it would then use the NES's theme.xml POS?
- If a system doesn't have any ROMs and would normally be hidden from the carousel, is there anything from its theme.xml that would get loaded? So, if I continue the above example and had the PSX with no ROMs available, and the theme.xml for the PSX added images or changed variables, would it ever get read/loaded? Or would it only be loaded/seen if I added a PSX ROM?
OPCFW_CODE
BBVA API Market

Not only do new technologies emerge; existing ones also evolve and improve. Java 8 brings many features that have already been available for years in other languages (lambdas, default implementations, optionals), and Microsoft announced .NET is going open source.

The best programming language

Most of the time when I meet someone who wants to learn computer programming, I get the question: "What is the best computer language out there?". That's when I say: "It depends", and then try to explain how difficult it is to give an answer. Every programming language (as many other things in life) has positives and negatives. That question is like asking which is the best food in the world: there is no single absolute reply, and it usually depends on many conditions. It is well known that in the world of banking, technologies are chosen in a rather conservative way. Security, stability and long-term support are the winning factors, and these criteria also apply to programming languages. If you do a simple Google search you will find that most websites agree on the same most-used list for banking: Java, C# (or other .NET) and C++.

The big boys: Java, C#, C++

Java, as expected, is used for many things. But one of the biggest areas is around web services. Many of the internal APIs and web applications of banks are running on Tomcat. Also, there are still Java desktop apps or some applets out there. As impressive as it sounds, some banking web portals are still running as Java applets as their main customer frontend. One plus side of Java is its compatibility and portability. If you develop a solution to run on a specific version of Java, it is guaranteed that it will run on all subsequent versions of the language with no alterations. In other words, Java is cross-platform and does not introduce breaking changes with new releases. Because of this, it is a very attractive language to use in banking. C# is right up there with Java and C++.
This is an interesting fact, since the other two have been around for longer. One of the reasons C# has been adopted this quickly is that it is so tightly integrated with Windows. Many internal applications are developed in C# because it is very easy to integrate with other Microsoft products, like Active Directory and Office. And Microsoft knows how to do one thing really well: create tools and environments that make developers really productive. If you're working in a bank and you are requested to develop a Windows application using Active Directory SSO that hits a couple of internal web services to display some data, C# would be a very smart choice. C++ has always been associated with real time, performance and efficiency. There are hundreds of debates discussing whether C++ is faster than non-native code (Java or .NET). Since this is a very sensitive subject (like discussing the best engine oil for your car), I will only add that, slower or faster than C++, I consider the non-native languages fast enough to be used in many real-time applications out there (e.g. Twitter is largely coded in Scala, which runs on the JVM). In my opinion, there are more important factors to consider nowadays when comparing C++ with the other two, like speed of development, availability of engineers, cost of maintenance/debugging, interop capabilities, and others.

Dynamic and agile

Python is an extremely flexible language. It can be used to create maintenance scripts, data analysis algorithms and web services. But it also has downsides, like the slow execution speed of the runtime. However, since it interfaces very well with C++, in many cases it is also used to build frontends or layers of abstraction reusing much of an existing C++ codebase. The main advantage of dynamic languages is the mindset of agility around them. Their community of developers embraces the culture of building fast and deploying early.
There are package managers and frameworks that allow developers to create any kind of solution really fast. As an example, with Node.js it is possible to bootstrap a REST API backend in literally minutes. This is a great plus for building small, self-contained projects, since developers can focus on the functionality and not the implementation.

What about architecture?

The architecture in which a project is going to be built also influences what language is used. If a monolithic architecture is chosen, then moving away from it can be a difficult task. The result of monolithic stacks is huge code bases, usually developed in a single programming language. Most of the time it is inflexible and deployed as a whole (of course, there are exceptions). On the contrary, if the architecture is based on (or migrated to) microservices, then the diversity of technologies can flourish. Many different developers, with different backgrounds and expertise, can combine their efforts using the best tool for the job for each particular service. This is an ideal scenario given that there is enough time and resources to implement it. Usually this architecture takes more time to put together, and it is more complex, but it is cheaper to expand and maintain in the long run. For example, multiple backend services developed in Java and .NET can expose a set of API endpoints to one or more frontends developed in HTML5/JS or Python.

So, what's the best language 🙂 ?

We've seen a very high-level overview of the top languages used in banking. Although technology adoption in banking is usually slow, it catches up eventually. The classic, statically typed languages will be on the top list for a long time. There are plenty of senior developers out there, the libraries, compilers and runtimes are very mature, and they are supported by big companies (Oracle or Microsoft). There is also a big infrastructure already built on these languages.
What I believe will happen is that we will see more dynamic languages entering into play. JP Morgan's Athena platform is built on Python, C++ and Java. Cases like this one will become more common, and there will even be a point at which knowledge of open source frameworks will be required in the job description. Finally, I believe the coming generations of software engineers will be used to a much more diverse environment, with easy access to the open source community. Banks should embrace this philosophy and incorporate these technologies, at least internally. This will allow them to attract great talent and maintain a challenging and motivating environment for software engineers.
OPCFW_CODE
The semester is finishing up, and as usual, the most productive week for me is during finals. Not necessarily productive regarding school work or current research projects, but I always rediscover side projects and hobbies. This week I rekindled my interest in Guarani. I've been working on Guarani off and on since 2009. I was living in Campo Grande, Mato Grosso do Sul, Brazil, as a Mormon missionary at the time. Fairly regularly I would meet people that spoke this language called Guarani, and I had a friend (a fellow missionary) who had some pedagogical materials that taught Spanish speakers Guarani. So I had to work through the Spanish (I had only been speaking Portuguese for 9 months or so at that point), but I was able to decipher some of the basic Guarani morphology and grammar. A while later my dad sent me a copy of the Book of Mormon in Guarani and said I ought to learn what I could. So I sat there with the Guarani, Portuguese, and English translations and would try to figure out new words and morphology. Again, I was a Mormon missionary at the time, so I didn't have a lot of time to spend learning this language. I hadn't begun studying linguistics yet, so I had no idea what a non-Indo-European language could possibly be like, and there were a few things that had me stumped. I also didn't have access to a computer, so I couldn't keep track of notes and vocabulary very well. So every couple of weeks I'd sit there with a dozen sheets of paper spread all over my desk, trying in vain to keep things alphabetized as I added vocabulary and translations. My Brazilian buddies all thought I was insane for trying to learn this language, but I found it to be a LOT of fun. One of the more frustrating things was that I wanted to see how a single word was used in other contexts. If I was looking through a sentence and there were three Guarani words I didn't know, I often had no way of knowing which word corresponded with the meaning in the English sentence.
If only I could control+F the book and find the Guarani words in other contexts and figure out the meaning. After I came back to the United States and went back to college at BYU, I found that there were some books written about Guarani grammar, but they were mostly older ones. I didn't know it at the time, but a former Department Chair in the Department of Linguistics and English Language at BYU was Robert W. Blair, who published some Guarani pedagogical material. I found his Guarani Basic Course at the library, as well as his student Charles Graham's Guarani Intermediate Course, and did what I could going through those. There were some other more descriptive grammars of the language written in the mid 20th Century, and I even sat in on a Guarani course for a semester[1]. I was in my last year at BYU. I was working as a programmer, creating eBooks for WordCruncher, and had access to an HTML file of the Guarani Book of Mormon. I had taken a class in Perl already and had gotten pretty proficient through that job. I had also taken Mark Davies' Corpus Linguistics course. So when I took an NLP course as the capstone to my minor in Linguistic Computing, I decided to write a Guarani translator. The program worked pretty well and was exactly what I was dreaming of in Brazil. I had paired the Guarani and English text as a "parallel corpus", meaning each line in one file corresponded to a translated line in the other[2]. What the translator does is take an input string (say, mba'apo) and display all the Guarani sentences with that word, with the English underneath. Made it very handy to see how words (or parts of words) were used in other contexts. What it then does is look at all the words in both the English and Guarani sentences containing the word, keep track of their frequencies, then look at the frequencies for all words in the entire corpus and compare the two.
Words that have nothing to do with the translation will occur with roughly the same frequency in the matched sentences as they do in the full corpus. But words that correspond to the same meaning will occur relatively much more often in the matched sentences compared to the corpus as a whole. So say the word work appears once every 1000 words in the whole corpus. If it suddenly appears once every 25 words in the matched sentences, statistically that's a big difference, and odds are pretty good that work is a translation for mba'apo (and it is). So using this I could find out which English words correlated with which Guarani words. Not a perfect translator, especially since it didn't use any fancy NLP processing, but not bad. My interest in Guarani, which was mostly about its nasal harmony, verbal morphology, and trying to document the grammar as a whole, started to wane as I started grad school and focused more on sociolinguistics and dialectology. But my reading comprehension is still… okay let's face it, not that great, but I'm surprised at how much I was able to learn through self-study and a custom computer program. I think what started this recent resurgence in Guarani was, strangely enough, making this website. I've acquired some more HTML and CSS skills and realized that I could make something useful with a web browser. So I dusted off my old files and started something fun. In just a week I was able to make a pretty useful webpage (locally hosted only for now) with two main pages. The first is the entire corpus. Unlike what I had before, I could take advantage of the formatting to display useful information. All the words I know are in regular black text, but the words I don't know stand out in blue. That makes it easy to figure out which ones I need to learn next. For the words I do know, the roots are underlined, so I can quickly see the base and what morphology is stemming off of it.
The interactive part is that if I mouseover the root, a basic definition shows up. So if I've forgotten a word, I can very quickly remind myself of what it means. Very handy. How am I keeping track of what I know and don't know? The other page on the site is a dictionary. I usually kept all this stuff in a spreadsheet somewhere, but here I can utilize the formatting to make it look like a real dictionary. I've got roots, possible word forms, derivatives, translations, parts of speech, etymology, other notes, and the infrastructure to include example sentences and other metadata. All this is stored in a file on my computer, and when I learn a new word, I just add it to the bottom of the file and a Perl script will take care of alphabetizing it and making sure it looks good for the CSS to take over. The result is a slick system where I can quickly see what words I need to learn and I can easily add them to the dictionary. I then run a lightning fast Perl script and refresh my browser, and I've got an updated corpus and dictionary. The system is set up to handle as big of a corpus or dictionary as I'm willing to feed it. For now, I'm only a couple paragraphs in and I've got over 100 entries in the dictionary. It will take hundreds of hours to go through my entire corpus. But for the first time I'll be creating a decent Guarani dictionary, which is kinda what I had in mind to do the whole time.

[1] Yes, BYU offers a course in Guarani! The class was taught only every once in a while and was intended for Mormon missionaries who had spent time in Paraguay. The class was taught in Spanish (again—not a language I've studied) by a native Guarani speaker, and was intended to add some formal instruction to people already familiar with the language. I was overwhelmed with other courses so I couldn't keep up for more than a few weeks.

[2] This corpus might actually be the largest Guarani-English parallel corpus.
It had 329K Guarani words when I first wrote the translator, but it's now up to 606K after adding some more translated church material. I've got another ≈250K to add to it, whenever I get the time. I could nearly double it even then if I get access to the Guarani Bible, though I don't know if that'll happen anytime soon. Granted, these are all texts translated from English, and religious ones at that, obviously representing a very different style than naturally occurring, spoken Guarani.
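The frequency-comparison trick described earlier (how work was matched to mba'apo) can be sketched in a few lines. The toy four-sentence corpus below is invented for illustration; the idea is simply to score each English word by its relative frequency in the matched sentences versus the corpus as a whole:

```python
from collections import Counter

# Toy parallel corpus (invented for illustration): each Guarani line is
# paired with its English translation, as in the post's setup.
guarani = ["che amba'apo", "ha'e okaru", "che akaru", "ha'e omba'apo"]
english = ["i work", "he eats", "i eat", "he works"]

def translation_candidates(query, src_lines, tgt_lines):
    """Rank target-language words by how much more frequent they are in the
    lines whose source side contains `query` than in the corpus overall."""
    all_words = Counter(w for line in tgt_lines for w in line.split())
    total = sum(all_words.values())
    # English sides of the sentences whose Guarani side contains the query
    matched = [t for s, t in zip(src_lines, tgt_lines) if query in s]
    matched_words = Counter(w for line in matched for w in line.split())
    matched_total = sum(matched_words.values())
    # Score = relative frequency in matched sentences / relative frequency overall
    scores = {w: (c / matched_total) / (all_words[w] / total)
              for w, c in matched_words.items()}
    return sorted(scores, key=scores.get, reverse=True)

candidates = translation_candidates("mba'apo", guarani, english)
print(candidates)
```

On this toy corpus, "work" and "works" come out on top, since they occur twice as often (relatively) in the matched sentences as in the corpus overall, while "i" and "he" score 1.0.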
OPCFW_CODE
#include <iostream>
#include <cstring>   // strlen, strcpy, strtok
#include <cstdlib>   // atoi
#include <fstream>   // header file for file input/output

class StockItem
{
public:
    StockItem()
    {
        item_name = nullptr;
        item_quantity = -1;
        item_cost = -1;
    }

    StockItem(char* name, int quant, int cost)
        : item_quantity(quant), item_cost(cost)
    {
        item_name = new char[strlen(name) + 1];
        strcpy(item_name, name);  // strcpy null-terminates the copy
    }

    virtual ~StockItem()
    {
        delete [] item_name;
    }

public:
    void ReadFromString(char* buff)
    {
        char* token = strtok(buff, ",");
        if (item_name == nullptr)
        {
            item_name = new char[strlen(token) + 1];
            strcpy(item_name, token);
        }
        item_quantity = atoi(strtok(nullptr, ","));
        item_cost = atoi(strtok(nullptr, ","));
    }

    void WriteToFile(std::ofstream& fout)
    {
        fout << item_name << "," << item_quantity << "," << item_cost << std::endl;
    }

public:
    char* get_name() { return item_name; }
    int get_quantity() { return item_quantity; }
    int get_cost() { return item_cost; }
    void set_quantity(int quan) { item_quantity = quan; }
    void set_cost(int cost) { item_cost = cost; }

    void print()
    {
        std::cout << "item name: " << item_name << std::endl;
        std::cout << "item quantity: " << item_quantity << std::endl;
        std::cout << "item cost: " << item_cost << std::endl;
    }

private:
    char* item_name;
    int item_quantity;
    int item_cost;
};

int main()
{
    // Read from stock.txt file
    {
        std::ifstream fin;
        fin.open("stock.txt");
        char buff[1000];
        // Testing getline() in the loop condition avoids processing a stale
        // buffer after EOF (the classic while(!fin.eof()) pitfall)
        while (fin.getline(buff, 1000))
        {
            if (buff[0] != 0)
            {
                StockItem si;
                si.ReadFromString(buff);
                si.print();
                std::cout << std::endl;
            }
        }
        fin.close();
    }

    // Read stock.txt, bump each quantity by 10, and write to stock2.txt
    {
        std::ofstream fout;
        fout.open("stock2.txt");
        std::ifstream fin;
        fin.open("stock.txt");
        char buff[1000];
        while (fin.getline(buff, 1000))
        {
            if (buff[0] != 0)
            {
                StockItem si;
                si.ReadFromString(buff);
                si.set_quantity(si.get_quantity() + 10);
                si.WriteToFile(fout);
            }
        }
        fin.close();
        fout.close();
    }
}
STACK_EDU
import argparse

import numpy as np
import theano
import theano.tensor as T
import lasagne
from tqdm import tqdm

from neuralnet import NeuralNet
from layers import ConvLayer, ReLU
from config import Configuration as Cfg


def compile_make_fully_convolutional(nnet):

    # for naming convenience
    nnet.dense3_layer = nnet.svm_layer

    pad = 'valid'

    nnet.dense1_conv_layer = ConvLayer(nnet.maxpool5_layer,
                                       num_filters=4096,
                                       filter_size=(7, 7),
                                       pad=pad,
                                       flip_filters=False)
    relu_ = ReLU(nnet.dense1_conv_layer)
    nnet.dense2_conv_layer = ConvLayer(relu_,
                                       num_filters=4096,
                                       filter_size=(1, 1),
                                       pad=pad,
                                       flip_filters=False)
    relu_ = ReLU(nnet.dense2_conv_layer)
    nnet.dense3_conv_layer = ConvLayer(relu_,
                                       num_filters=1000,
                                       filter_size=(1, 1),
                                       pad=pad,
                                       flip_filters=False)

    W_dense1_reshaped = \
        nnet.dense1_layer.W.T.reshape(nnet.dense1_conv_layer.W.shape)
    W_dense2_reshaped = \
        nnet.dense2_layer.W.T.reshape(nnet.dense2_conv_layer.W.shape)
    W_dense3_reshaped = \
        nnet.dense3_layer.W.T.reshape(nnet.dense3_conv_layer.W.shape)

    updates = ((nnet.dense1_conv_layer.W, W_dense1_reshaped),
               (nnet.dense2_conv_layer.W, W_dense2_reshaped),
               (nnet.dense3_conv_layer.W, W_dense3_reshaped),
               (nnet.dense1_conv_layer.b, nnet.dense1_layer.b),
               (nnet.dense2_conv_layer.b, nnet.dense2_layer.b),
               (nnet.dense3_conv_layer.b, nnet.dense3_layer.b))

    return theano.function([], updates=updates)


def compile_eval_function(nnet):

    X = T.tensor4()
    y = T.ivector()

    # get prediction by fully convolutional network
    prediction = lasagne.layers.get_output(nnet.dense3_conv_layer,
                                           deterministic=True, inputs=X)

    # get output scores on first dim
    # before flattening on 2dim and then get scores on second dim
    prediction = prediction.transpose((1, 0, 2, 3))\
        .flatten(2).transpose((1, 0))
    prediction = T.nnet.softmax(prediction)

    # spatial averaging
    prediction = T.mean(prediction, axis=0)

    # compute top1 and top5 accuracies
    sorted_pred = T.argsort(prediction)
    top1_acc = T.mean(T.eq(sorted_pred[-1], y), dtype='floatX')
    top5_acc = T.mean(T.any(T.eq(sorted_pred[-5:], T.shape_padright(y)),
                            axis=1), dtype='floatX')

    return theano.function([X, y], [top1_acc, top5_acc])


def evaluate(weights_file):

    Cfg.compile_lwsvm = False
    Cfg.batch_size = 1
    Cfg.C.set_value(1e3)

    nnet = NeuralNet(dataset="imagenet", use_weights=weights_file)
    n_batches = int(50000. / Cfg.batch_size)

    make_fully_convolutional = compile_make_fully_convolutional(nnet)
    print("Weight transformation compiled.")
    make_fully_convolutional()
    print("Network has been made fully convolutional.")

    eval_fun = compile_eval_function(nnet)
    print("Evaluation function compiled")

    # full pass over the validation data:
    top1_acc = 0
    top5_acc = 0
    val_batches = 0
    count_images = 0
    for batch in tqdm(nnet.data.get_epoch_val(), total=n_batches):
        inputs, targets, _ = batch
        # evaluate on both the image and its horizontal flip
        inputs = np.concatenate((inputs, inputs[:, :, :, ::-1]))
        top1, top5 = eval_fun(inputs, targets)
        top1_acc += top1
        top5_acc += top5
        val_batches += 1
        count_images += len(targets)

    print("(Used %i samples in validation)" % count_images)

    top1_acc *= 100. / val_batches
    top5_acc *= 100. / val_batches
    print("Top-1 validation accuracy: %g%%" % top1_acc)
    print("Top-5 validation accuracy: %g%%" % top5_acc)
STACK_EDU
How to create string with invalid unicode characters, in Zsh?

For some testing purposes I need a string with invalid unicode characters. How to create such a string in Zsh?

I assume you mean UTF-8 encoded Unicode characters. That depends what you mean by invalid.

invalid_byte_sequence=$'\x80\x81'

That's a sequence of bytes that, by itself, isn't valid in UTF-8 encoding (a 10xxxxxx byte is a continuation byte and can never be the first byte of a UTF-8 encoded character). That sequence could be seen in the middle of a character though, so it could end up forming a valid sequence once concatenated to another invalid sequence like $'\xe1'. $'\xe1' or $'\xe1\x80' themselves would also be invalid and could be seen as a truncated character.

other_invalid_byte_sequence=$'\xc2\xc2'

The 0xc2 byte would start a 2-byte character, and 0xc2 cannot be in the middle of a UTF-8 character. So that sequence can never be found in valid UTF-8 text. Same for $'\xc0' or $'\xc1', which are bytes that never appear in the UTF-8 encoding. For the \uXXXX and \UXXXXXXXX sequences, I assume the current locale's encoding is UTF-8.

non_character=$'\ufffe'

That's one of the 66 currently specified non-characters.

not_valid_anymore=$'\U110000'

Unicode is now restricted to code points up to 0x10FFFF. And the UTF-8 encoding, which was originally designed to cover up to 0x7FFFFFFF (perl also supports a variant that goes to 0xFFFFFFFFFFFFFFFF), is now conventionally restricted to that as well.

utf16_surrogate=$'\ud800'

Code points 0xD800 to 0xDFFF are reserved for the UTF-16 encoding. So the UTF-8 encoding of those code points is invalid.

Now most of the remaining code points are still not assigned in the latest version of Unicode.

unassigned=$'\u378'

Newer versions of Unicode come with new characters specified. For instance Unicode 8.0 (released in June 2015) has the "hugging face" emoji (U+1F917), which was not assigned in earlier versions.
unicode_8_and_above_only=$'\U1f917'

Some testing with uconv:

$ printf %s $invalid_byte_sequence | uconv -x any-name
Conversion to Unicode from codepage failed at input byte position 0. Bytes: 80 Error: Illegal character found
Conversion to Unicode from codepage failed at input byte position 1. Bytes: 81 Error: Illegal character found
$ printf %s $other_invalid_byte_sequence | uconv -x any-name
Conversion to Unicode from codepage failed at input byte position 0. Bytes: c2 Error: Illegal character found
Conversion to Unicode from codepage failed at input byte position 1. Bytes: c2 Error: Truncated character found
$ printf %s $non_character | uconv -x any-name
\N{<noncharacter-FFFE>}
$ printf %s $not_valid_anymore | uconv -x any-name
Conversion to Unicode from codepage failed at input byte position 0. Bytes: f4 90 80 80 Error: Illegal character found
$ printf %s $utf16_surrogate | uconv -x any-name
Conversion to Unicode from codepage failed at input byte position 0. Bytes: ed a0 80 Error: Illegal character found
$ printf %s $unassigned | uconv -x any-name
\N{<unassigned-0378>}
$ printf %s $unicode_8_and_above_only | uconv -x any-name
\N{<unassigned-1F917>}
$

With GNU grep, you can use grep . to see if it can find a character in the input:

l=(invalid_byte_sequence other_invalid_byte_sequence non_character not_valid_anymore utf16_surrogate unassigned unicode_8_and_above_only)
for c ($l) print -r ${(P)c} | grep -q . && print $c

Which for me gives:

non_character
not_valid_anymore
utf16_surrogate
unassigned
unicode_8_and_above_only

That is, my grep still considers some of those invalid, non-character or not-assigned-yet characters as being (or containing) characters. YMMV for other implementations of grep or other utilities.

What shell is that? for c ($l) print -r ${(P)c} |

@1_CR, well the question is quite explicit which shell we're talking of here, as it's in the subject, body and tags.
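Outside the shell, you can run the same kind of classification with Python's strict UTF-8 decoder (standard library only; note that, like the grep test above, it accepts non-characters and unassigned code points but rejects malformed bytes, surrogates, and out-of-range sequences):

```python
# Each example from the answer above, expressed as raw bytes.
examples = {
    "invalid_byte_sequence":       b"\x80\x81",           # stray continuation bytes
    "other_invalid_byte_sequence": b"\xc2\xc2",           # 0xc2 cannot follow 0xc2
    "non_character":               "\ufffe".encode("utf-8"),
    "not_valid_anymore":           b"\xf4\x90\x80\x80",   # would be U+110000, above 0x10FFFF
    "utf16_surrogate":             b"\xed\xa0\x80",       # UTF-8-encoded U+D800
    "unassigned":                  "\u0378".encode("utf-8"),
    "unicode_8_and_above_only":    "\U0001f917".encode("utf-8"),
}

def is_valid_utf8(data):
    """Strict decode: rejects malformed sequences, surrogates, and code
    points beyond U+10FFFF, but accepts non-characters and unassigned ones."""
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

for name, data in examples.items():
    print(f"{name}: {'valid' if is_valid_utf8(data) else 'invalid'} UTF-8")
```

This mirrors the uconv output: the first two sequences, the surrogate, and the above-U+10FFFF sequence are rejected, while the non-character and the unassigned code points decode fine.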
Can somebody explain how this mode works? What form does the ciphertext take, and how does it help to authenticate the input data? In addition to the information on the wiki, some explanations and definitions contained in RFC 4106, "The Use of Galois/Counter Mode (GCM) in IPsec Encapsulating Security Payload (ESP)" (AES-GCM), can help to better understand how the mode works. The authenticated encryption operation has four inputs: a secret key, an initialization vector (IV), a plaintext, and an input for additional authenticated data (AAD). It has two outputs, a ciphertext whose length is identical to the plaintext, and an authentication tag. In the following, we describe how the IV, plaintext, and AAD are formed from the ESP fields, and how the ESP packet is formed from the ciphertext and authentication tag. ESP also defines an IV. For clarity, we refer to the AES-GCM IV as a nonce in the context of AES-GCM-ESP. The same nonce and key combination MUST NOT be used more than once. Because reusing a nonce/key combination destroys the security guarantees of AES-GCM mode, it can be difficult to use this mode securely when using statically configured keys. For safety's sake, implementations MUST use an automated key management system, such as the Internet Key Exchange (IKE) [RFC2409], to ensure that this requirement is met. ESP Payload Data The ESP Payload Data is comprised of an eight-octet initialization vector (IV), followed by the ciphertext. It carries the payload field, as defined in [RFC2406], along with the ICV associated with the payload. Initialization Vector (IV) The AES-GCM-ESP IV field MUST be eight octets.
For a given key, the IV MUST NOT repeat. The most natural way to implement this is with a counter, but anything that guarantees uniqueness can be used, such as a linear feedback shift register (LFSR). Note that the encrypter can use any IV generation method that meets the uniqueness requirement, without coordinating with the decrypter. The plaintext input to AES-GCM is formed by concatenating the plaintext data described by the Next Header field with the Padding, the Pad Length, and the Next Header field. The Ciphertext field consists of the ciphertext output from the AES-GCM algorithm. The length of the ciphertext is identical to that of the plaintext. Implementations that do not seek to hide the length of the plaintext SHOULD use the minimum amount of padding required, which will be less than four octets. Integrity Check Value (ICV) The ICV consists solely of the AES-GCM Authentication Tag. Implementations MUST support a full-length 16-octet ICV, MAY support 8 or 12 octet ICVs, and MUST NOT support other ICV lengths. Although ESP does not require that an ICV be present, AES-GCM-ESP intentionally does not allow a zero-length ICV. This is because GCM provides no integrity protection whatsoever when used with a zero-length Authentication Tag. The IV adds an additional eight octets to the packet, and the ICV adds an additional 8, 12, or 16 octets. These are the only sources of packet expansion, other than the 10-13 octets taken up by the ESP SPI, Sequence Number, Padding, Pad Length, and Next Header fields (if the minimal amount of padding is used). For more information you can consult another RFC, for example RFC 5647, which describes AES Galois Counter Mode for the Secure Shell Transport Layer Protocol (AES-GCM in SSH). The best explanation is in Dan Bernstein's Poly1305 paper, where he provides references to the original works on Carter-Wegman authentication. AES-GCM works by viewing the message as a polynomial and evaluating it at a random point.
The sole way to forge is to blindly choose a polynomial which is zero at that point, but you have no information to do so, since the tags are encrypted.
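The polynomial-evaluation idea can be sketched with a toy Carter-Wegman-style hash over a prime field (purely illustrative: real GCM uses GHASH in the binary field GF(2^128), not a prime field, and the modulus and values below are made up):

```python
P = 2**127 - 1  # a prime standing in for GCM's 128-bit field

def poly_hash(blocks, r):
    """Treat the message blocks as coefficients of a polynomial and
    evaluate it at the secret point r (Horner's rule)."""
    acc = 0
    for b in blocks:
        acc = (acc + b) * r % P
    return acc

# Changing any block changes the hash, unless the forger can pick a
# difference polynomial with root r -- which requires knowing r.
msg = [3, 1, 4, 1, 5]
tampered = [3, 1, 4, 1, 6]
r = 123456789  # secret evaluation point
print(poly_hash(msg, r) != poly_hash(tampered, r))  # True
```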
##
## Created by C. Cesarotti (ccesarotti@g.harvard.edu) 04/2019
## Last updated: 04/24/20
##
## This file allows users to upload lists of 4-momenta and calculate their EMD from spherical geometry
## Event must be formatted as <event> .... </event>
## Particle information per line is E px py pz
##
## Calculates spherical event isotropy of a single event
######################

import sys

import numpy as np

from eventIsotropy.spherGen import sphericalGen, engFromVec
from eventIsotropy.emdVar import _cdist_cos, emd_Calc

############################################

## Specify input file
if len(sys.argv) < 3:
    print('Error: user did not specify input file and sphere index')
    sys.exit(2)

## Generate spherical samples
sphereSample = np.array([sphericalGen(i) for i in range(5)])
sphereEng = np.array([engFromVec(sphereSample[j]) for j in range(5)])

## Choose sphere with n points
sphInd = int(sys.argv[2])
if sphInd > 4:  # only spheres 0-4 are generated above
    print('Error: sphere index must be between 0 and 4. A sphere with '
          + str(12 * (4 ** sphInd)) + ' particles would take an extremely long time to compute.')
    sys.exit(2)
spherePoints1 = sphereSample[sphInd]
sphereEng1 = sphereEng[sphInd]

fileName = sys.argv[1]
momenta = []
engL = []

with open(fileName, 'r') as infile:
    nextline = infile.readline()
    while nextline[0:7] == "<event>":
        nextline = infile.readline()
        while nextline[0:8] != "</event>":
            particle = [float(n) for n in nextline.split()]
            eng, px, py, pz = particle[0], particle[1], particle[2], particle[3]
            if eng > 1e-05:
                momenta.append(np.array([px, py, pz]))
                engL.append(eng)
            nextline = infile.readline()
        nextline = infile.readline()

## Calculate the EMD value
M = _cdist_cos(spherePoints1, np.array(momenta))  # Calculates distance with the 1 - cos metric
emdval = emd_Calc(sphereEng1, np.array(engL), M)  # Computes EMD
print(emdval)
Restore content db sp 2010 I am new to SharePoint; I recently added a content db to a web app. The db was a backup from another server. On adding the content db, a site was automatically added, although I still had to deploy the solution. I am trying to understand how adding a content db automatically added a site. Any inputs would be appreciated. Thanks Not sure if I understood your question. When you add a content db, the site collections stored in it appear in the web application automatically; that is by design. The content db should be from a farm of a similar patch level. Here is more information on this: https://technet.microsoft.com/en-us/library/ff628965%28v=office.14%29.aspx In SharePoint, everything is stored in the content DB. If you move a content DB from one server or one farm to another, it moves everything in that db (all site collections in that db). The question then is whether, on the old farm, all SharePoint site collections (in the given web app where this db was attached) were in that DB, or only a few of them. Whatever is in that db will move to the new location. You should: take the backup of the DB from the old server; restore the DB to the new server; then attach the restored DB to the web application in SharePoint (Central Administration or PowerShell). Now you will see all those sites in that web application. For solution deployment, you can deploy it before or after the DB attachment. For that you need to understand what a content database is and its role in SharePoint. Content databases store all content for a site collection. This includes site documents or files in document libraries, list data, Web Part properties, audit logs, and sandboxed solutions, in addition to user names and rights. All of the files that are stored for a specific site collection are located in one content database on only one server. A content database can be associated with more than one site collection.
Below are some of the basic tables within a content database and a very high-level view of some of the relationships between them.
Features: holds information about all the activated features for each site collection or site.
Sites: holds information about all the site collections in this content database.
Webs: holds information about all the specific sites (webs) in each site collection.
UserInfo: holds information about all the users for each site collection.
Groups: holds information about all the SharePoint groups in each site collection.
Roles: holds information about all the SharePoint roles (permission levels) for each site.
AllLists: holds information about the lists for each site.
GroupMembership: holds information about all the SharePoint group members.
AllUserData: holds information about all the list items for each list.
AllDocs: holds information about all the documents (and all list items) for each document library and list.
RoleAssignment: holds information about all the users or SharePoint groups that are assigned to roles.
SchedSubscriptions: holds information about all the scheduled subscriptions (alerts) for each user.
ImmedSubscriptions: holds information about all the immediate subscriptions (alerts) for each user.
Hence, when you restore a content DB, SharePoint pulls data from the related tables and creates the site structure accordingly.
Adding attack vectors for finding vulnerabilities related to JWE Is your feature request related to a problem? Please describe. We currently only handle JWS, not JWE, so under this enhancement we are looking to: Analyse vulnerabilities related to JWE by going through various blogs, bug bounties, and other scanner add-ons Implement the attack vectors Add the vulnerable code in https://github.com/SasanLabs/VulnerableApp/blob/master/src/main/java/org/sasanlabs/service/vulnerability/jwt/JWTVulnerability.java so that we can test the attack vectors. Add a design document regarding the same. Code References Attack vectors: https://github.com/SasanLabs/owasp-zap-jwt-addon/tree/master/src/main/java/org/zaproxy/zap/extension/jwt/attacks Adding support for parsing JWE: https://github.com/SasanLabs/owasp-zap-jwt-addon/blob/ec58672c0951a23cf4544fd0e41b72eb9328a78d/src/main/java/org/zaproxy/zap/extension/jwt/utils/JWTUtils.java#L139 Fuzzer code: https://github.com/SasanLabs/owasp-zap-jwt-addon/blob/master/src/main/java/org/zaproxy/zap/extension/jwt/fuzzer/ui/JWTFuzzPanelView.java Scan rule code: https://github.com/SasanLabs/owasp-zap-jwt-addon/blob/master/src/main/java/org/zaproxy/zap/extension/jwt/JWTActiveScanRule.java Testing the changes Build the add-on by running ./gradlew spotlessApply ./gradlew build Then go to ZAP -> File -> Local addon file -> Navigate to project -> build -> bin -> jwt*.zap and done. This seems like a good summary: A signed JWT is known as a JWS (JSON Web Signature). In fact a JWT does not exist by itself — either it has to be a JWS or a JWE (JSON Web Encryption). It's like an abstract class — the JWS and JWE are the concrete implementations.
https://medium.facilelogin.com/jwt-jws-and-jwe-for-not-so-dummies-b63310d201a3 https://auth0.com/blog/critical-vulnerability-in-json-web-encryption/ -> Attack against JWE document: https://owasp.slack.com/archives/C0F7D6DFH/p1692972988225639?thread_ts=1692958820.853539&cid=C0F7D6DFH which can help Content: there were a couple of talks at OWASP events mentioning JWE, but as it is a pure encryption standard I do not see many resources about it in OWASP, apart from the general guidelines on safe use in JWT. In terms of vulnerabilities in JWE I have found this article on the Auth0 blog describing a critical vulnerability in JWE: https://auth0.com/blog/critical-vulnerability-in-json-web-encryption/ and a few more JWE security considerations were listed here: https://www.jbspeakr.cc/jwe-token-json-web-encryption/
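As background for the parsing work, the two serializations are easy to tell apart: a compact JWS has three dot-separated parts, while a compact JWE has five, with an "enc" member in its protected header (per RFC 7515/7516). A minimal stdlib sketch, with hypothetical tokens, not the add-on's actual parsing code:

```python
import base64
import json

def token_kind(token: str) -> str:
    """Classify a compact-serialized token as 'JWS' or 'JWE'."""
    parts = token.split(".")
    header_b64 = parts[0] + "=" * (-len(parts[0]) % 4)  # restore base64url padding
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    if len(parts) == 5 and "enc" in header:
        return "JWE"
    if len(parts) == 3:
        return "JWS"
    raise ValueError("not a compact JWS/JWE serialization")
```

The real add-on linked above of course needs full parsing and validation; this only shows the structural distinction between the two token kinds.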
Possible mistake in LSTM cell Hello, I was reading your paper and noticed that the code in trellisnet.py (https://github.com/locuslab/trellisnet/blob/master/TrellisNet/trellisnet.py ) lines 124-129 does not correspond to the formula in the paper, Section 5.1, formula (12). Could you clarify if this is true, or whether I am wrong and don't understand something. Thank you Hi Jurijs, Thanks for your interest in our work. What lines 124-129 do is basically the following: it, ot, gt, ft = out.chunk(4, dim=1) it, ot, gt, ft = torch.sigmoid(it), torch.sigmoid(ot), torch.tanh(gt), torch.sigmoid(ft) ct = ft * ct_1 + it * gt ht = ot * torch.tanh(ct) This corresponds exactly to formula (12), where z_{t+1,1} corresponds to ct and z_{t+1,2} corresponds to ht above. I did permute the order of \hat{z}_{*,1/2/3/4}, which correspond to ft, it, gt and ot, respectively. But this is a trivial change and doesn't affect the correctness of the code. Hope this helps! Thanks for your prompt reply. I think the issue is still not clear to me. So, let's look at the following line: ht = ot * torch.tanh(ct) According to the paper, ot is supposed to be \sigma(\hat{z}_{*,4}); however, according to the code, ot is \sigma(\hat{z}_{*,2}). Do I understand correctly that you claim this is because of the permutation, which is a trivial change? I am wondering why it would not be an issue, because \hat{z}_{*,2} corresponds to a different part of the truncated RNN than \hat{z}_{*,4} according to formula (9). Oh, sorry, I think you may be confused. \hat{z}_t comes from formula (9) indeed, but I'm not permuting the layers. \hat{z}_t in formula (9) has dimension 4dL (it has L rows), which can be broken into L vectors, each with dimension 4d (i.e., a row in formula (9)). \hat{z}_t^{(i+1)} in formula (11) has dimension 4d. I'm simply permuting within this vector. If you look at Figure 5 in the appendix of the paper, it, ot, gt and ft correspond to the violet-color blocks.
The permutation doesn't matter as long as we make sure we consistently use, for instance, the first d channels for ft, channel d-2d for it, channel 2d-3d for gt and channel 3d-4d for ot. Shaojie, thanks for your explanation, I understand that there is no mistake in the code, so, issue can be closed. However, I have some questions about formulas, is it ok to ask them in this thread or is there is another/ better (for you) way to reply? I am not sure about formula (11). I thought that (11) is supposed to be a result from formula (9) with L=2, then I would expect to see vector of dimension 2, according to (9), and not 4, as we can see in (11). Could you clarify?
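For reference, here is a numpy stand-in for the torch snippet quoted above, with the chunk order (i, o, g, f) assumed as in lines 124-129; as discussed, any fixed order works so long as it is used consistently:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(out, ct_1):
    """Standard LSTM update from pre-activations `out` (length 4d)."""
    it, ot, gt, ft = np.split(out, 4)
    it, ot, gt, ft = sigmoid(it), sigmoid(ot), np.tanh(gt), sigmoid(ft)
    ct = ft * ct_1 + it * gt   # new cell state   (z_{t+1,1})
    ht = ot * np.tanh(ct)      # new hidden state (z_{t+1,2})
    return ct, ht
```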
Billions of users use various social media daily and see a lot of new suggestions there. The content includes text, images, videos, and so on depending on the social platform. Do you know how that content is suggested? We will learn about it in this blog. It is an algorithm that suggests relevant products to users based on a variety of factors. Sometimes, when you search for a certain product on a website you notice that you start receiving several suggestions of similar products, there is a system behind this. It is generally used to target potential users more efficiently and improve the user experience by suggesting new items, saving users’ time, and narrowing down the set of choices. Learn about Data Science here Watch the video to see what a recommendation system is and how it is used in various real-world applications. Now that we know the concept, let’s dive deeper into a real-world application to better comprehend it. YouTube’s recommendation system journey YouTube has over 800 million videos, which is about 17,810 years of continuous video watching. It is hard for a user to repeatedly search for certain sorts of videos from millions of videos. This problem is solved by recommendation systems, which provide relevant videos based on what you are currently watching. The system also works when you open YouTube’s home page and do not watch any videos. In this case, it shows the mixture of the subscribed, most up-to-date, promoted, and most recently watched videos. Let’s discuss the journey of the recommendation system on YouTube. In 2008, YouTube’s recommendation system ranked videos based on popularity. The issue with this approach was sometimes violent or racy videos get popular. To avoid this, YouTube built classifiers to identify this type of content and avoid recommending them. After a couple of years, YouTube started to incorporate video watch time in its recommendation system. 
The reason for this was that users often watched different types of videos and there were different recommendations for them. Later, YouTube took surveys where users rated the watched videos and answered the questions upon giving low or high stars. Soon, YouTube’s management realized that everyone did not fill out the survey. So, YouTube trained a machine learning model on completed surveys and predicted the survey responses. YouTube did not stop there; they started to consider the likes/dislikes and share information to make the recommender system better. Nowadays, they are also using classifiers to identify authoritative and borderline (doesn’t quite violate community) content to make a better recommender system. Read more about social media algorithms in this blog Before diving deep into the technical detail, let’s first discuss common types of recommendation systems. Classification of recommendation system: These types of recommendation systems are widely used in industry to solve different problems. We will go through these briefly. 1. Content-based recommendation system According to the user’s past behavior or explicit feedback, content-based filtering uses item features (such as keywords, categories, etc.) to suggest additional items that are similar to what they already enjoy. 2. Collaborative recommendation system Collaborative filtering gives information based on interactions and data acquired by the system from other users. It is divided into two types: memory-based, and model-based systems. a) Memory-based system This mechanism is further classified as user-based and item-based filtering. In the user-based approach, recommendations are made based on the user’s preferences that are similar to the preferences of other users. In the item-based approach, recommendations are made based on items similar to other items the active user likes. 
Let’s see the below illustration to understand the difference: b) Model-based system This mechanism provides recommendations by developing machine learning models from users’ ratings. A few commonly used machine learning models are clustering-based, matrix-factorization-based, and deep learning models. 3. Demographic-based recommendation system This system uses demographic information, such as a user’s age, gender, and location, to provide personalized recommendations. It uses data about a user’s characteristics to suggest items that may be of particular interest to them. For example, a recommendation system might use a user’s age and location to suggest events or activities in the user’s area that might be of interest to someone in their age group. 4. Knowledge-based recommendation system This system offers recommendations based on queries made by the user rather than a user’s rating history. In short, it is based on explicit knowledge of the item variety, user preferences, and suggestion criteria. This strategy is suited for complex domains where products are not acquired frequently, such as houses and automobiles. 5. Community-based recommendation system This system provides recommendations based on the interactions and preferences of a community of users that shares a common interest, taking into account the collective experiences and opinions of the community to provide personalized recommendations to individual users. 6. Hybrid recommendation system This system is a combination of two or more of the discussed recommendation systems, such as content-based, collaborative-based, and so on.
Sometimes a single recommendation system cannot solve an issue, thus we must combine two or more recommendation systems. We now have a high-level understanding of the various recommendation systems. Recall the YouTube discussion: which recommendation method do you think suits YouTube the most? It is a memory-based collaborative recommendation system. YouTube can use an item-based approach to suggest videos similar to other videos using users’ ratings (clicked-on and watched videos). To determine the most similar match, we can use matrix factorization. This is a class of collaborative recommendation systems that finds the relationship between item and user entities. However, this approach has numerous limitations, such as: - Not being suitable for complex relations between users and items - Always recommending popular items - The cold-start problem (it cannot anticipate items and users that were never encountered in the training data) - Only being able to use limited information (only user IDs and item IDs) To address the shortcomings of the matrix factorization method, deep neural networks were designed and used by YouTube. Deep learning is based on artificial neural networks, which enable computers to comprehend and make decisions in the same way that the human brain does. Let’s watch the video below to gain a better understanding of deep learning. YouTube uses a deep learning model for its video recommendation system. They provide users’ watch history and context to the deep neural network. The network then learns from the provided data and uses a softmax classifier (used for multiclass classification) to differentiate among the videos. This model selects hundreds of videos from a pool of over 800 million. This procedure was named “candidate generation” by YouTube. But we only need to reveal a few of them to a certain user. So, YouTube created a ranking system in which they assign a rank (score) to each of those few hundred videos.
They used the same deep learning model that assigns a score to each video for this. The score may be based on the video that the user watched from any channel and/or the most recently watched video topic. We studied different recommendation systems that can be used to address various real-world challenges. These systems help to connect people with resources and information that may not have been easily discoverable otherwise, making them a useful tool for solving these challenges. We discussed the journey of YouTube’s recommendation system, a collaborative system used by YouTube, and examined how YouTube performed well using deep learning in their systems.
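The matrix-factorization idea discussed above can be sketched in a few lines: learn low-rank user and item factors from the observed ratings, then read predictions for the missing entries off their product. This is a toy example with a made-up ratings matrix, not YouTube's actual system:

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5., 3., 0.],        # rows: users, columns: items
              [4., 0., 4.],
              [0., 1., 5.]])       # 0 marks an unobserved rating
mask = R > 0
k = 2                              # number of latent factors

U = rng.normal(scale=0.1, size=(R.shape[0], k))  # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))  # item factors

for _ in range(5000):              # gradient descent on observed entries only
    err = (R - U @ V.T) * mask
    U += 0.01 * err @ V
    V += 0.01 * err.T @ U

print(np.round(U @ V.T, 1))        # observed entries recovered, zeros filled in
```

The zero (unobserved) entries of `U @ V.T` are the model's predicted ratings, which is exactly what a recommender would rank by.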
For Internet companies, marketing campaigns play an important role in acquiring new customers, retaining and engaging existing customers, and promoting new products. During this Lucene/Solr Revolution session, Hien Luu and Rajasekaran Rangaswamy from LinkedIn will demonstrate how Lucene powers the LinkedIn segmentation and targeting platform, and how this platform helps marketing teams to easily and quickly create member segments based on member attributes using nested predicate expressions ranging from simple to complex. Once segments are created, then those qualified members are targeted with marketing campaigns. Lucene is a key piece of technology in this platform. This session will cover how they leverage Hadoop to efficiently build Lucene indexes for a large and growing member attribute data set of 225 million members, and how Lucene is used to create segments based on complex nested predicate expressions. This presentation will also share some of the lessons they learned and challenges they encountered using Lucene to search over large data sets. This intermediate level session will take place from 11:05-11:50 on Wednesday, November 6. Click here for more details. About the Speakers: Hien Luu is a senior member of the Data Services Platform team at LinkedIn and he is the technical lead of the LinkedIn Member Segmentation platform. He enjoys teaching and is currently an instructor of the Hadoop: Big Data Processing course at UCSC Silicon Valley Extension school. He has given presentations at various conferences and user groups like JavaOne, Silicon Valley CodeCamp, and SVForm Software & Architecture user group. He loves working with big data technologies and recently became a contributor of the Apache Pig project. Rajasekaran Rangaswamy is a Staff Engineer at LinkedIn. He is involved in a number of projects involving big data technologies like Hadoop, Hive, and Pig. 
His areas of expertise include Ad Operations, Data Warehousing, Search Optimization, and Search as a Service. He first got involved with Lucene when he built a prototype for the Member Segmentation project back in 2010. - For more information about Lucene/Solr Revolution EU, visit lucenerevolution.org. - For more Road to Revolution posts, click here. - To view the full session agenda, click here. - To register for the conference, click here. - To get the latest conference news and updates, follow @LuceneSolrRev on Twitter. - Do you have a question about the conference? Do you want to be added to the conference mailing list? Are you interested in sponsoring Revolution? If so, please email us at: email@example.com. Lucene/Solr Revolution is presented by Lucidworks, the commercial entity for Apache Lucene/Solr open source search — the future of search technology.
<?php

namespace contact\Resource\ContactInfo;

/**
 * @license http://opensource.org/licenses/lgpl-3.0.html
 * @author Matthew McNaney <mcnaney at gmail dot com>
 */
class PhysicalAddress extends \Canopy\Data
{
    /**
     * Room number in building
     * @var \phpws2\Variable\IntegerVar
     */
    private $room_number;

    /**
     * Name of building
     * @var \phpws2\Variable\TextOnly
     */
    private $building;

    /**
     * @var \phpws2\Variable\TextOnly
     */
    private $street;

    /**
     * Post Office box
     * @var \phpws2\Variable\IntegerVar
     */
    private $post_box;

    /**
     * @var \phpws2\Variable\TextOnly
     */
    private $city;

    /**
     * @var \phpws2\Variable\TextOnly
     */
    private $state;

    /**
     * Zip code; stored as a string so leading zeros survive
     * @var \phpws2\Variable\StringVar
     */
    private $zip;

    public function __construct()
    {
        $this->room_number = new \phpws2\Variable\IntegerVar(null, 'room_number');
        $this->room_number->allowNull(true);
        $this->building = new \phpws2\Variable\TextOnly(null, 'building');
        $this->building->allowNull(true);
        $this->street = new \phpws2\Variable\TextOnly(null, 'street');
        $this->street->allowNull(true);
        $this->post_box = new \phpws2\Variable\IntegerVar(null, 'post_box');
        $this->post_box->allowNull(true);
        $this->city = new \phpws2\Variable\TextOnly(null, 'city');
        $this->city->allowNull(true);
        $this->state = new \phpws2\Variable\TextOnly(null, 'state');
        $this->state->allowNull(true);
        $this->zip = new \phpws2\Variable\StringVar(null, 'zip');
        $this->zip->allowNull(true);
    }

    public function getRoomNumber()
    {
        return $this->room_number->get();
    }

    public function getBuilding()
    {
        return $this->building->get();
    }

    public function getStreet()
    {
        return $this->street->get();
    }

    public function getPostBox()
    {
        return $this->post_box->get();
    }

    public function getCity()
    {
        return $this->city->get();
    }

    public function getState()
    {
        return $this->state->get();
    }

    public function getZip()
    {
        return $this->zip->get();
    }

    public function setBuilding($building)
    {
        $this->building->set($building);
    }

    public function setRoomNumber($room_number)
    {
        if (empty($room_number)) {
            $this->room_number->set(null);
        } else {
            $this->room_number->set($room_number);
        }
    }

    public function setPostBox($post_box)
    {
        if (empty($post_box)) {
            $post_box = null;
        }
        $this->post_box->set($post_box);
    }

    public function setStreet($street)
    {
        $this->street->set($street);
    }

    public function setCity($city)
    {
        $this->city->set($city);
    }

    public function setState($state)
    {
        $this->state->set($state);
    }

    public function setZip($zip)
    {
        $this->zip->set($zip);
    }
}
Where do I find this Advait (non dual) philosophical quote by Nammalvar? Alvars were devotees of Lord Vishnu who propagated Vishnu devotion in ancient India. One of the Alvar named Nammalvar said the following (found from Teachings' of Ramana Maharshi book) - In ignorance, I took the ego to be the Self, but with right knowledge the ego is not and only you remain as the Self. So, the saint is establishing his and Lord Vishnu's self/Atma as one based on his Advaitin experience. I want to know where do I find this quote? Why this question has been edited with identification request tag ? Its meant only for images right? I think mods should do something about it. @RakeshJoshi Dude, please read the tag wiki: https://hinduism.stackexchange.com/tags/identification-request/info @LakshmiNarayanan let the mods clarify. @RakeshJoshi What is not clear? @RakeshJoshi It is for images, verse and stories. Only few think it is only for images and not for others saying that it is created for image identification only but it is false. They said this is a useless tag and love scripture tag. See related discussions on meta and chat. It is perfectly fine for verse identification and quote identification. Then what is the use of scripture tag? @Sarbabhouma Can you also post the passages surrounding the quote from the book? @LakshmiNarayana Sure. @Rohit. Scripture is for the questions which are about scriptures not for anything else. . This is from Thiruvaimozhi 2.9.9 Nammalwar is not saying his soul and Vishnu are one and the same due to Advaitan experience. By "true knowledge", he means understanding the relationship between him and Lord. Nammalwar is an attribute of a Lord and he says everything which belongs to Nammalwar is Lord's belongings. He is feeling sorry that he cannot serve Lord like the Nitya Suris do in the Paramapadam (the eternal abode of Vishnu). 
The Pasuram is yAnE ennai aRiyagilAdhE yAnE enRanadhE enRirundhEn yAnE nI en udaimaiyum nIyE vAnE Eththum em vAnavar ERE Simple translation based on vAdhi kEsari azhagiya maNavALa jIyar's 12000 padi. Due to I myself not having true knowledge in the matters relating to me, considering myself as independent and considering everything but myself as my belongings, engaged in ahankAram (considering myself as independent) and mamakAram (considering the belongings of bhagavAn as mine) and merely existed (without manifesting AthmA‘s eternal nature); Oh one who resides in paramapadham manifesting the pride of lordship, being praised in all of paramapadham (where true knowledge is practiced without this ignorance) by those residents of the paramapadham praising (your relationship with them)! Once this true knowledge is understood, I realise that “I” can be said as “You”, due to me being prakAra (attribute) of you which is inseparable and in the same manner being your attributes, my belongings can also be called as “You”. The webpage also cites Nampillai's and Manavala Jeeyar's commentary on the pasuram to understand what Nammalwar meant by saying he is the same as Lord Vishnu. It's highly unlikely that Ramana Maharshi would have understood it wrongly, as he too belonged to Tamil Nadu. Can you provide some other translation as well? Possibly a plain translation. Their translation is more like an interpretation. @Rohit. It is even more unlikely that a Sri Vaishnava Acharya, who was also from Tamil Nadu and who came from the Guru Parampara of Nammalwar himself, would understand it wrongly. But you're right that the above is less of a translation and more of a commentary. Here is an actual translation: "I and my possessions are Your property and we are at Your service; I did not have this knowledge and had a wrong knowledge - I had "I" and "mine" attitudes, so long. Now I am Yours and my possessions are also Yours; You are the Chief of NityasUris and our Chief!"
https://tinyurl.com/yccpubtx @Rohit. Here's another translation: "Oh, my Lord, by this entire Heaven adored! Chief of Celestials, Fancied I, in ignorance bred, I my master was and all things mine own; but now do I realize, all are yours, I and mine." https://archive.org/stream/Thiruvaimozhi_english_commentry/ThiruvaimozhiVol1#page/n107/mode/2up And yet another one: "Not knowing my true self, I thought I was my own. O, Radiant Lord worshipped by the celestials. Me and what is mine are yours!" http://dravidaveda.org/index.php?option=com_content&view=article&id=3690
STACK_EXCHANGE
import itertools

results = [
    {'universities': 1, 'name': 'Course Name', 'categories': 1, 'id': 2},
    {'universities': 2, 'name': 'Course Name', 'categories': 1, 'id': 2},
    {'universities': 1, 'name': 'Course Name', 'categories': 5, 'id': 2},
    {'universities': 2, 'name': 'Course Name', 'categories': 5, 'id': 2},
    {'universities': 1, 'name': 'Course Name', 'categories': 6, 'id': 2},
    {'universities': 2, 'name': 'Course Name', 'categories': 6, 'id': 2},
]

def merge_values(values):
    """
    When you call values() on a queryset where the Model has a
    ManyToManyField and there are multiple related items, it returns a
    separate dictionary for each related item. This function merges the
    dictionaries so that there is only one dictionary per id at the end,
    with lists of related items for each.
    """
    # groupby only groups *consecutive* items, so `values` must already
    # be ordered by 'id' (values() on an id-ordered queryset satisfies this).
    grouped_results = itertools.groupby(values, key=lambda value: value['id'])
    merged_values = []
    for _id, group in grouped_results:
        merged_value = {}
        for item in group:
            for key, val in item.items():
                # Membership test instead of a truthiness check, so that
                # falsy values (0, '', None) are not silently overwritten.
                if key not in merged_value:
                    merged_value[key] = val
                elif val != merged_value[key]:
                    if isinstance(merged_value[key], list):
                        if val not in merged_value[key]:
                            merged_value[key].append(val)
                    else:
                        merged_value[key] = [merged_value[key], val]
        merged_values.append(merged_value)
    return merged_values

print(merge_values(results))
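One caveat worth making explicit about the snippet above: itertools.groupby only merges runs of consecutive items with equal keys, so the input must be pre-sorted by the grouping key or groups will be fragmented. A minimal standalone illustration (with made-up data):

```python
import itertools

# groupby only merges *consecutive* items with equal keys,
# so sort by the key first or groups will be fragmented.
values = [{'id': 2, 'tag': 'a'}, {'id': 1, 'tag': 'b'}, {'id': 2, 'tag': 'c'}]
values.sort(key=lambda v: v['id'])
groups = {k: [v['tag'] for v in g]
          for k, g in itertools.groupby(values, key=lambda v: v['id'])}
print(groups)  # {1: ['b'], 2: ['a', 'c']}
```

Without the sort, id 2 would appear as two separate groups, which is exactly the failure mode to watch for when feeding an unordered queryset into merge_values.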
STACK_EDU
No jobs found. Sorry, we could not find the job you were looking for. Find the latest jobs here: Make a shayari app and monetise it with non-skipping ads. We are looking for native speakers to write and translate a short text (~500 words) from English to Dutch, Norwegian, Romanian, Serbian, Bulgarian, Slovak, Czech, Polish and Greek. Hi there, I am looking for some information on developing an online strategy game. I want to create a game that is similar to a Tower Defence game. Please contact me for more details. I cannot provide a lot of information in this section, so please message me! We would like a native Russian speaker who can translate copy from English to Russian for a web application that we've built. The ideal candidate would have an excellent command of Russian, be familiar with technology and the technical terms that are used on mobile and web apps, and would also have knowledge of skin care. We would pay a flat rate to translate the copy for the web app as well ... My business is a start-up tuition agency that looks to improve the grades and learning of students. It involves training tutors based around my ideology to improve the self-awareness of true learning and how to apply it in order to succeed. Here is what I require for my website. A lot has been done but work is required. Main page: 1. A pop-up that appears straight away saying "Attention! By... Hi, I need to scrape Foursquare "lists" data for a recommender system I am developing. I have built a basic scraper, which has successfully collected the contents of roughly 30,000 Foursquare lists but need assistance in using Selenium to collect more. 1) Currency symbols change. 2) From the location to the destination, the total amount calculation must be visible to the client before booking.
3) Adjusting the logo and icon. A new version of this app has been released; we need it updated, and the customizations we made before need to be kept. I'm looking for a native German speaker with experience in writing reviews about casinos and online gambling. The project is for 10,000 words of unique content, SEO optimized on provided keywords, no duplicates, and should pass CopyScape. Dreamlight is a hardware startup with a global headquarters in Los Angeles and an R&D and supply chain management team in Shenzhen, China. Currently, $3 million in seed round financing is completed, and an A round of $15 million in financing is planned. Dreamlight is a boring-sleep-industry challenger, a competitive new player in the global sleep industry, and we hope to combine the current IoT t... I want a simple logo for my company; my company deals in machines.
OPCFW_CODE
Microsoft Dynamics 365 Business Central is a business management solution for small and mid-sized organizations. The software automates and streamlines business processes and helps SMB customers to manage their business. Business Central fully integrates with Microsoft's Office product family. Moreover, it is highly adaptable and extensible with a plethora of apps provided by Microsoft's business application app store, called AppSource. As such, Business Central enables companies to manage their business, including finance, manufacturing, sales, shipping, project management, services, and more. Manufacturing companies that use Business Central gain the same advantages that any user of the solution gets: It supports them in achieving a streamlined, integrated, all-in-one management solution that handles every aspect of their business process, from finance to sales, inventory & purchasing, warehousing, and stock management. In addition to this, Business Central comes with out-of-the-box manufacturing capabilities that seamlessly integrate with the modules mentioned before. This blog post gives an introductory overview of the essential out-of-the-box manufacturing functionalities of Business Central.
Overview: Business Central manufacturing essentials
The core of the out-of-the-box manufacturing functionalities is the routing and the BOM (bill of materials). Both are used to create production orders, which are needed to close the gap between supply and demand. The BOM defines which material is needed when producing a certain item. Likewise, the routing defines which machine or work center is needed, and for how long, when the item is produced. Production orders are always based on the BOM and the routing. They can get created manually, or from a sales order (to fill the specific demand from one particular customer), or by accepting the suggestions of the planning worksheet.
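To make the BOM/routing relationship concrete, here is a toy sketch in plain Python (the item names, quantities, and run times are made up for illustration; Business Central itself stores these as table records): a single-level BOM lists component quantities per unit of the parent item, the routing lists timed operations, and a production order for a given quantity scales both.

```python
# Hypothetical single-level BOM: component -> quantity per unit of parent item.
bom = {
    'BICYCLE': {'FRAME': 1, 'WHEEL': 2, 'CHAIN': 1},
}

# Hypothetical routing: ordered operations as (work center, minutes per unit).
routing = {
    'BICYCLE': [('ASSEMBLY', 20), ('PAINTING', 10), ('PACKING', 5)],
}

def explode_order(item, qty):
    """Derive component demand and work-center load for a production order."""
    components = {comp: per_unit * qty for comp, per_unit in bom[item].items()}
    load = {wc: minutes * qty for wc, minutes in routing[item]}
    return components, load

components, load = explode_order('BICYCLE', 10)
print(components)  # {'FRAME': 10, 'WHEEL': 20, 'CHAIN': 10}
print(load)        # {'ASSEMBLY': 200, 'PAINTING': 100, 'PACKING': 50}
```

The real system additionally handles multi-level BOMs, scrap percentages, and setup times, but the scaling logic above is the essence of how an order "consumes" a BOM and a routing.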
The planning worksheet is the frontend to Business Central's material requirements planning (MRP) engine, which seeks to balance mid-term item demand and item supply. Once production orders are created, they consume components and can have labor reported towards them. In addition to this, users can report the time during which each production order used each machine. This enables holistic production order costing, and supports the entire "item flow" from a purchased component to WIP (work in process), to inventory, and finally to the cost of goods sold. This is achieved with the following building blocks of the out-of-the-box Business Central manufacturing capabilities.
Out-of-the-box Business Central manufacturing building blocks
Production orders are the cornerstone of Microsoft Dynamics 365 Business Central manufacturing. They are defined and made up by the routing and the BOM. They are used to manage the conversion of purchased materials into manufactured items. Business Central allows users to apply five different statuses to a production order. These are: simulated, planned, firm planned, released, and finished. These statuses support different use cases of production order handling - from costing and quoting, through planning, to execution and reporting. As a consequence, production orders contain the following information: - Products planned for manufacturing - Materials required for the planned production orders - Products that have just been manufactured - Materials that have already been selected - Products that have been manufactured in the past - Materials that were used in previous manufacturing operations
Production and inventory planning
Business Central provides users with an engine to calculate the master production schedule and the material requirements based on both actual and forecasted demand. While the actual demand e.g. originates from existing sales orders, the forecasted demand results from a separate demand forecasting functionality.
The manufacturing user can decide to calculate either the master production schedule (MPS) or the material requirements planning (MRP), or to do both at the same time. MPS and MRP are built on the same planning engine. MPS is the calculation of a master production schedule based on actual demand and the demand forecast. MRP is the calculation of material requirements based on actual demand for components and the demand forecast on the component level. MRP is calculated only for items that are not MPS items. The purpose of MRP is to provide time-phased formal plans, by item, to supply the appropriate item, at the appropriate time, in the appropriate location, in the appropriate quantity. The output of an MRP run is typically the creation of new production orders to balance item and component demand with the respective supply. The MPS/MRP engine is based on the assumption of infinite capacity. To convert the material into produced end items, production resources must be set up in the system. Both operators and machines are represented as machine centers in Business Central. They can be organized into work centers and work center groups. The user can define the production capacity of each resource. Capacity is defined by the work time available in the machine and work centers, and is governed by calendars for each level. A work center calendar specifies the working days or hours, shifts, holidays, and absences that determine the work center's gross available capacity (typically measured in minutes). The machine center calendar is inherited from the respective work center. All of this is determined by defined efficiency and capacity values.
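As a rough illustration of the time-phased netting an MRP run performs (a toy sketch under simplifying assumptions - lot-for-lot ordering, zero lead time, infinite capacity - not Business Central's actual algorithm): demand per period is netted against projected inventory, and a planned order is suggested in every period where the projection would go negative.

```python
def mrp_net(on_hand, demand_per_period):
    """Toy infinite-capacity MRP netting: suggest a lot-for-lot planned
    order whenever projected inventory would go negative in a period."""
    planned_orders = {}
    projected = on_hand
    for period, demand in enumerate(demand_per_period, start=1):
        projected -= demand
        if projected < 0:
            planned_orders[period] = -projected  # cover exactly the shortfall
            projected = 0
    return planned_orders

# 40 units on hand, demand of 15 units in each of four periods:
print(mrp_net(40, [15, 15, 15, 15]))  # {3: 5, 4: 15}
```

On-hand stock covers the first two periods, so the engine suggests supply only from period 3 onward; a real MRP run would additionally offset these orders by lead time and apply lot-sizing rules.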
When these resources are established like this, they can be loaded with operations according to the item's defined routing.
Visualizing the manufacturing module - visual production scheduling extensions
Our two visual scheduling solutions, namely the Visual Production Scheduler and the Visual Advanced Production Scheduler, are both fully integrated into the manufacturing module of Business Central. In relation to the functionalities stated above, both solutions give you a visualization of the following: - Production orders & routings - Capacity management on machine centers & work centers - Demand planning with integration to the MRP (VAPS only) - Inventory planning with the introduction of EMAD (earliest material availability date) and the availability view (VAPS only) Although the two products have their differences, in general they give you the following advantages: - Better shop floor transparency - Increased on-time deliveries - Better bottleneck recognition Want to learn more about visual scheduling for Microsoft Dynamics 365 Business Central? Download our free ebook "Introduction to Visual Scheduling for Microsoft Dynamics 365 Business Central".
OPCFW_CODE