This is a common question among beginner programmers, who tend to confuse these two Drupal authentication providers. It's not a big deal, and after this article you will no longer have the same doubts every time a project requires a user login... or at least that's what we're hoping.

Similarities between OpenID and OAuth

Yes, you are trying to see the differences, not the resemblances, between the two. If you are a true beginner, they probably already look the same to you, and you don't really need to see more similarities. Still, these pieces of information may help you form the full picture. First of all, both are open web standards and deal with authentication, authorization, and security. Neither OpenID nor OAuth relies on a single centralized server; that would be quite dangerous, given that thousands of websites use these two modules. Instead, both work through browser redirects between the client website and the identity provider (and back again), using SSL and single sign-on (SSO) technology. The best part about these two modules (and the one thing you should remember from this paragraph) is that OpenID and OAuth give you full control over which unfamiliar websites, and possibly not-so-benevolent users, "talk" to your website... isn't that what a truly protective parent would do?

The module's name says almost everything about it

OAuth is a module used mainly for data sharing and communication between services. Say you have an app you can log into with your Twitter account. When you log in that way, the app knows exactly what data you have shared with Twitter and can act on your behalf. It's much like logging into a Facebook app, or into Instagram. With OAuth, the user gives site X, on which they are logging in, permission to access the API of site Y, on which they already have an account.
OAuth arose mainly from the need to keep third-party apps from seeing or sharing passwords, so some say OAuth is a reply to, or an improvement on, OpenID. OpenID is generally used on Drupal multisites, as this module lets the website's users log in on all multisite instances. OpenID lets a third party authenticate your users for you, using accounts they already have. The module uses a single set of credentials to let a user log into one or more websites or applications. It is commonly used by beginners, as it is easier to implement and does not require as much time spent on coding and implementation. Yet do not underestimate this authentication procedure. In itself, there is little difference between the two authentication systems, yet OAuth best suits a wide range of projects and is widely recommended. A major improvement of OAuth is that it can use HTTP Basic credentials (username and password) to provide an API, a feature that is not available in OpenID.

How to use the modules

First, set up a secure server with SSL (and SSO if needed); most people pay for these. As said before, these modules do not use a centralized server, so they are implemented on the client's server. So, no matter which module you are using, a secure server will be required. As mentioned before, OpenID is relatively simpler to implement than OAuth. You will find all the specs and libraries on the OpenID website. In the case of OAuth, some research is needed before starting the installation of the module. Again, if you are a beginner, the whole procedure may prove a bit tricky, as you will have to learn more about the PECL repository and the PEAR packaging system. Then you proceed to the actual installation of the module as presented on the drupal.org page. The detailed procedure may seem a bit difficult at the beginning, yet with a lot of patience you will most surely be able to carry it out. If you have problems, do not hesitate to contact us.
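To make the "site X acts on your behalf at site Y" idea concrete, here is a toy sketch in Python. It is not a real OAuth implementation — class names, scopes, and data are all invented for illustration — but it shows the key property: the consuming site only ever holds a scoped token, never the password.

```python
import secrets

# Illustrative only: "SiteY" stands in for the provider (e.g. Twitter),
# issuing a scoped access token after the user approves a request.
class SiteY:
    def __init__(self):
        self._tokens = {}  # token -> (user, granted scopes)
        self._profiles = {"alice": {"name": "Alice", "email": "a@example.com"}}

    def authorize(self, user, scopes):
        # The user consents on site Y; Y hands back an opaque access token.
        token = secrets.token_hex(16)
        self._tokens[token] = (user, set(scopes))
        return token

    def api_get_profile(self, token):
        # An API call authorized by the token, limited to the granted scopes.
        user, scopes = self._tokens[token]
        if "profile" not in scopes:
            raise PermissionError("scope not granted")
        return self._profiles[user]

y = SiteY()
token = y.authorize("alice", ["profile"])  # user consents on site Y
print(y.api_get_profile(token)["name"])    # site X acts with the token only
```

The real protocol adds redirects, client registration, and token expiry, but the trust boundary is the same: revoke the token and site X loses access, with the password never exposed.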
The contact form below is waiting for your questions.
import collections
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from astropy.io import ascii
from astropy.stats import biweight_midvariance

# The location with the file for all of our data
fluxdatapath = '/Users/blorenz/COSMOS/COSMOSData/lineflux.txt'
# The location to store the MAD of each line
madout = '/Users/blorenz/COSMOS/COSMOSData/linemad.txt'
# The location to store the scale and its stddev of each line
scaleout = '/Users/blorenz/COSMOS/COSMOSData/scales.txt'
# Folder to save the figures
figout = '/Users/blorenz/COSMOS/Reports/2018/Images/'

# Read the datafile
fluxdata = ascii.read(fluxdatapath).to_pandas()

# Get the strings of each line
lines = ['4861', '4959', '5007', '6563', '4340', '4102', '6548', '6583',
         '3727', '6563_fix', '6548_fix', '6583_fix']
lines = np.sort(lines)

# Set up dataframes to store the results
mad_df = pd.DataFrame()
scale_df = pd.DataFrame()

# Fontsizes for plotting
axisfont = 18
ticksize = 16
titlefont = 24
legendfont = 16
textfont = 16

fig, axarr5 = plt.subplots(3, 4, figsize=(30, 20))
axarr5 = np.reshape(axarr5, 12)
counter = 0
for line in lines:
    # Compute the scale dataframe and make a plot to display it
    ax = axarr5[counter]
    goodscale = np.log10(fluxdata[fluxdata[line + '_scale'] > 0][line + '_scale'])
    medscale = np.median(goodscale)
    sigscale = np.sqrt(biweight_midvariance(goodscale))
    scale_df.at[0, line + '_medscale'] = medscale
    scale_df.at[0, line + '_sigscale'] = sigscale
    # Make the plot
    bins = np.log10(np.arange(0.05, 4, 0.05))
    ax.hist(goodscale, bins=bins, color='grey', label=None)
    # Plot the median, 1 sigma, and 3 sigma stddevs
    ax.plot((medscale, medscale), (-100, 1000), color='red', ls='-', label='Median')
    ax.plot((medscale - sigscale, medscale - sigscale), (-100, 1000), color='pink', ls='-', label='1 sigma')
    ax.plot((medscale + sigscale, medscale + sigscale), (-100, 1000), color='pink', ls='-')
    ax.plot((medscale - 3 * sigscale, medscale - 3 * sigscale), (-100, 1000), color='thistle', ls='-', label='3 sigma')
    ax.plot((medscale + 3 * sigscale, medscale + 3 * sigscale), (-100, 1000), color='thistle', ls='-')
    # Titles, axes, legends
    ax.set_title(line + ' Scale Histogram', fontsize=titlefont)
    ax.set_xlabel(line + ' Scale', fontsize=axisfont)
    ax.set_ylabel('Counts', fontsize=axisfont)
    ax.set_xlim(np.min(bins), np.max(bins))
    ax.set_ylim(0, 500)
    ax.tick_params(labelsize=ticksize)
    ax.legend(fontsize=axisfont)
    counter = counter + 1
fig.tight_layout()
fig.savefig(figout + 'scale_hist.pdf')
plt.close(fig)

# Division function: element-wise X/Y that returns 0 wherever Y == 0
def divz(X, Y):
    return X / np.where(Y, Y, Y + 1) * np.not_equal(Y, 0)

# Find the OBJIDs of all duplicates
dupobjids = [item for item, count in collections.Counter(fluxdata.OBJID).items() if count > 1]
# Pull out the rows of the duplicates
duprows = [fluxdata[(fluxdata.OBJID == i)] for i in dupobjids]

# Loop over every line
for line in lines:
    # Compute the difference for all duplicates in the line, keeping only pairs
    # with good flags and scales within 3 sigma of the median scale
    diff = [np.abs(divz(i.iloc[0][line + '_flux'], i.iloc[0][line + '_scale'])
                   - divz(i.iloc[1][line + '_flux'], i.iloc[1][line + '_scale']))
            for i in duprows
            if ((i.iloc[0][line + '_flag'] in [0, 4])
                and (i.iloc[1][line + '_flag'] in [0, 4])
                and (np.abs(i.iloc[0][line + '_scale'] - scale_df[line + '_medscale'][0]) < (3 * scale_df[line + '_sigscale'][0]))
                and (np.abs(i.iloc[1][line + '_scale'] - scale_df[line + '_medscale'][0]) < (3 * scale_df[line + '_sigscale'][0])))]
    # Compute 1.49*MAD, divided by sqrt(2) since each diff combines two measurements
    mad = 1.49 * np.median(diff)
    mad = mad / np.sqrt(2)
    # Store the result to the df
    mad_df.at[0, line + '_mad'] = mad
    # Use this to set a new mad, then take it out
    # mad_df.at[0, '6548_mad'] = 0.5

# Sort the dfs by line name
mad_df = mad_df.reindex(sorted(mad_df.columns), axis=1)
scale_df = scale_df.reindex(sorted(scale_df.columns), axis=1)
# Save the dfs
mad_df.to_csv(madout, index=False)
scale_df.to_csv(scaleout, index=False)
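The divz helper above deserves a standalone illustration, since its trick is easy to miss: np.where(Y, Y, Y + 1) substitutes a harmless nonzero denominator wherever Y is zero, and the np.not_equal(Y, 0) mask then zeroes out exactly those entries. The arrays below are made-up inputs, not project data.

```python
import numpy as np

def divz(X, Y):
    # Element-wise division that yields 0 wherever the denominator is 0,
    # without triggering a divide-by-zero warning.
    return X / np.where(Y, Y, Y + 1) * np.not_equal(Y, 0)

X = np.array([1.0, 2.0, 3.0])
Y = np.array([2.0, 0.0, 4.0])
result = divz(X, Y)  # [0.5, 0.0, 0.75]: the zero denominator maps to 0
print(result)
```

This is equivalent to np.divide(X, Y, out=np.zeros_like(X), where=Y != 0) in modern NumPy, but works the same on older versions.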
PCILeech uses PCIe hardware devices to read and write the target system memory. This is achieved via DMA over PCIe; no drivers are needed on the target system. PCILeech supports multiple memory acquisition devices: primarily hardware based, but dump files and software based techniques (relying on select security issues) are also supported. USB3380 based hardware can natively read only 4GB of memory, but can read all memory once a kernel module (KMD) is inserted into the target system kernel. FPGA based hardware can read all memory. PCILeech is capable of inserting a wide range of kernel implants into the targeted kernels, allowing easy access to live RAM and the file system via a "mounted drive". It is also possible to remove the logon password requirement, load unsigned drivers, execute code and spawn system shells. PCILeech runs on Windows/Linux/Android. Supported target systems are currently the x64 versions of UEFI, Linux, FreeBSD, macOS and Windows. PCILeech also supports the Memory Process File System, which can be used with PCILeech FPGA hardware devices in read-write mode or with memory dump files in read-only mode. To get going, clone the repository and find the required binaries, modules and configuration files in the pcileech_files folder.
- Retrieve memory from the target system at >150MB/s.
- Write data to the target system memory.
- 4GB memory can be accessed in native DMA mode (USB3380 hardware).
- ALL memory can be accessed in native DMA mode (FPGA hardware).
- ALL memory can be accessed if a kernel module (KMD) is loaded.
- Raw PCIe TLP access (FPGA hardware).
- Mount live RAM as file [Linux, Windows, macOS*].
- Mount file system as drive [Linux, Windows, macOS*].
- Mount memory process file system as drive [Windows].
- Execute kernel code on the target system.
- Spawn system shell [Windows].
- Spawn any executable [Windows].
- Pull files [Linux, FreeBSD, Windows, macOS*].
- Push files [Linux, Windows, macOS*].
- Patch / Unlock (remove password requirement) [Windows, macOS*].
- Easy to create your own kernel shellcode and/or custom signatures.
- Even more features not listed here…

Note: macOS High Sierra is not supported.

PCILeech supports multiple hardware devices. Please check out the PCILeech FPGA project for information about supported FPGA based hardware, and PCILeech USB3380 for information about USB3380 based hardware. PCILeech also supports memory dump files for limited functionality. Please find a device comparison table below.

|Device|Type|Interface|Speed|64-bit memory access|PCIe TLP access|
|---|---|---|---|---|---|
|USB3380-EVB|USB3380|USB3|150MB/s|No (via KMD only)|No|
|PP3380|USB3380|USB3|150MB/s|No (via KMD only)|No|

- PE3B – ExpressCard to mini-PCIe.
- PE3A – ExpressCard to PCIe.
- ADP – PCIe to mini-PCIe.
- P15S-P15F – M.2 Key A+E to mini-PCIe.
- Sonnet Echo ExpressCard Pro – Thunderbolt to ExpressCard.
- Apple Thunderbolt3 (USB-C) – Thunderbolt2 dongle.

Please note that other adapters may also work. Please ensure you have the most recent version of PCILeech by visiting the PCILeech GitHub repository. Clone the PCILeech GitHub repository; the binaries are found in pcileech_files and should work on 64-bit Windows and Linux. Please copy all files from pcileech_files since some files contain additional modules and signatures. Please see the PCILeech-on-Windows guide for information about running PCILeech on Windows. The Google Android USB driver has to be installed if USB3380 hardware is used: download the Google Android USB driver and unzip it. FTDI drivers have to be installed if the FPGA is used with the FT601 USB3 add-on card: download the 64-bit FTD3XX.dll from FTDI and place it alongside. To mount live RAM and the target file system as a drive in Windows, the Dokany file system library must be installed; please download and install the latest version of Dokany.
Linux and Android

Please see the project wiki pages for more examples. The wiki is in a buildup phase and information may still be missing.

Mount target system live RAM and file system; requires that a KMD is loaded. In this example 0x11abc000 is used.
pcileech.exe mount -kmd 0x11abc000
Show help for a specific kernel implant, in this case the lx64_filepull kernel implant.
pcileech.exe lx64_filepull -help
Show help for the dump command.
pcileech.exe dump -help
Dump all memory from the target system, given that a kernel module is loaded at address 0x7fffe000.
pcileech.exe dump -kmd 0x7fffe000
Force dump memory below 4GB, including accessible memory mapped devices, using the more stable USB2 approach.
pcileech.exe dump -force -usb2
Receive PCIe TLPs (Transaction Layer Packets) and print them on screen (correctly configured FPGA dev board required).
pcileech.exe tlp -vv -wait 1000
Probe/Enumerate the memory of the target system for readable memory pages and maximum memory (FPGA hardware only).
Dump all memory between addresses min and max; don't stop on failed pages. Native access to 64-bit memory is only supported on FPGA hardware.
pcileech.exe dump -min 0x0 -max 0x21e5fffff -force
Force the usage of a specific device (instead of auto detecting it by default). The sp605_tcp device is not auto detected.
pcileech.exe pagedisplay -min 0x1000 -device sp605_tcp -device-addr 192.168.1.2
Mount the PCILeech Memory Process File System from a Windows 10 64-bit memory image.
pcileech.exe mount -device c:\temp\memdump_win10.raw
Dump memory using the reported "TotalMeltdown" Windows 7/2008R2 x64 PML4 page table permission vulnerability.
pcileech.exe dump -out memdump_win7.raw -device totalmeltdown -v -force
PCILeech comes with built-in signatures for Windows, Linux, FreeBSD and macOS. For Windows 10 it is also possible to use the pcileech_gensig.exe program to generate alternative signatures.
- Read and write errors on some hardware with the USB3380.
Try pcileech.exe testmemreadwrite -min 0x1000 to test memory reads and writes against the physical address 0x1000 (or any other address) in order to confirm. If issues exist, downgrading to USB2 may help.
- The PCIeScreamer device may currently experience instability depending on target configuration and any adapters used.
- Does not work if the OS uses the IOMMU/VT-d. This is the default on macOS (unless disabled in recovery mode). Windows 10 with Virtualization based security features enabled does not work fully; this is, however, not the default setting in Windows 10 or Linux.
- Some Linux kernels do not work. Sometimes a required symbol is not exported in the kernel and PCILeech fails.
- Linux based on the 4.8 kernel and later might not work with the USB3380 hardware. As an alternative, if target root access exists, compile and insert the .ko (pcileech_kmd/linux). If the system is EFI booted, an alternative signature exists.
- Windows 7: signatures are not published.
- File system mount, including the Memory Process File System, is supported only on Windows.
All GitHub Copilot for Individuals users now have access to the GitHub Copilot Chat beta, bringing natural language-powered coding to every developer in all languages.

Last year, we launched a technical preview of GitHub Copilot, a new AI pair programmer that plugs into your editor and offers coding suggestions in real time. Although we offered only a limited number of seats, people who started using GitHub Copilot told us it became an indispensable part of their daily workflows. Now, GitHub Copilot is generally available to all developers. And the feedback we have heard and continue to hear substantiates our core thesis: AI can help make developers more productive and happier while coding.

Even still, we wanted to test our theory and see if GitHub Copilot itself actually leads to higher productivity among developers. To find out, our research and engineering teams partnered to combine qualitative survey data from more than 2,000 U.S.-based developers with anonymized usage data, to determine whether developers feel like GitHub Copilot is making them more productive—and whether the data proves they actually are, in fact, more productive when using GitHub Copilot. This is the first of several studies we're doing around GitHub Copilot, and the early results are promising. Let's dive in.

If you pair-program with a friend or colleague, does that make you more productive? Most people agree that even if a friend's suggestions aren't perfect, working with someone else typically helps you reach your coding goals faster, produce better end products, and learn something new while doing it. Academic researchers have also found evidence that pair programming improves productivity [1, 2]. In contrast, if you tried to solve a math problem with a calculator that often gives wrong answers, would you find that useful? Probably not. The difference is that what we value most in calculators is precision; not many people turn to a calculator for inspiration.
In a sense, GitHub Copilot is a bit like a pair programmer with a calculator attached. It's really good at the fiddly stuff, and I can trust it to close all my brackets in the right order, which comes in handy. But recently, I was on a flight without internet, and consequently I was left without GitHub Copilot. What I missed about it wasn't its precision at closing brackets, but its larger flashes of insight: suggestions of whole patterns or pre-populated boilerplate I only had to adapt slightly, or valiant attempts at expressions that weren't yet exactly what I wanted, but helped get me started.

We built GitHub Copilot to help make developers happier and more productive by keeping them focused on what matters most: building great software. But the word "productivity" in development covers a wide range of possible practical meanings. Do developers ideally want to save keystrokes or avoid searches on Google and StackOverflow? Should GitHub Copilot help them stay in the flow by giving them highly accurate solutions to mechanical, calculator-like tasks? Or should it inspire them with speculative stubs that might help unblock them when they're stuck?

We're in pretty uncharted territory with GitHub Copilot, so the first thing to do was to ask people through a survey. Then, we checked their answers against anonymized user data to determine whether the productivity boost people felt from GitHub Copilot was reflected in how they were actually using it. In total, we surveyed more than 2,000 U.S.-based developers and compared their answers with user data from the same time period. We focused on answering three questions:
- Do people feel like GitHub Copilot makes them more productive?
- Is that feeling reflected in any objective usage measurements?
- Which usage measurements best reflect that feeling?
As someone who is part of the team that developed GitHub Copilot, it was incredibly gratifying to hear survey respondents describe how GitHub Copilot empowers them in a multitude of ways. We also discovered a strong connection to our objective usage data. For example, we counted the number of characters contributed by GitHub Copilot, the number of retained suggestions, and how often GitHub Copilot made suggestions in the first place. All of these correlated with reported usefulness and improved productivity. Yet we got the strongest connection by simply dividing the number of accepted suggestions by the number of shown suggestions. This acceptance rate captures how many of the code suggestions GitHub Copilot produces are deemed promising enough to accept.

Developers who report the highest productivity gains with GitHub Copilot also accept the largest share of shown code suggestions

When sorting the users into quartiles depending on how useful they reported GitHub Copilot to be, there was a stark difference between those groups: the acceptance rate of completions was much higher for those who had reported the biggest productivity gains. We found developers didn't care that much if they needed to rework a suggestion, as long as GitHub Copilot gave them a suitable starting point. And this makes sense: GitHub Copilot isn't designed to build software by itself. It's designed to offer helpful suggestions that make it easier to stay in the flow. In other words, GitHub Copilot offers developers the parts but leaves it up to them to assemble and design the finished product.

We've written an academic research paper with these findings, and some general background about the code suggestion acceptance rates we're seeing among people who use GitHub Copilot. Have a look for a deeper and more systematic dive into topics like retention, language differences, and weekend coding. We presented this paper at PLDI's MAPS '22.
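The headline metric above is simple enough to sketch in a few lines. The function name and the numbers below are illustrative, not taken from the study:

```python
def acceptance_rate(accepted_suggestions, shown_suggestions):
    """Share of shown completions that the developer accepted."""
    if shown_suggestions == 0:
        return 0.0  # no suggestions shown yet, nothing to rate
    return accepted_suggestions / shown_suggestions

# e.g. a session where 27 of 100 shown suggestions were accepted
rate = acceptance_rate(27, 100)
print(rate)  # 0.27
```

Note that the rate counts a suggestion as a success even if the developer reworks it afterwards, which matches the finding that a suitable starting point matters more than a perfect completion.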
But everyone writes code differently, so how will our findings apply to you? Try out GitHub Copilot today, and let us know what benefits you discover.
I was helping my brother with a script called incith-google, used by one of his IRC bots. It acts as a bridge between IRC and Google, allowing IRC users to perform queries against Google from the comfort of their IRC client. The script performs the search on the IRC user's behalf and returns the result back into the IRC channel.

The problem is that it had broken. In investigating, it became readily apparent that it was bound to break, and surely had broken before. This is because the script was making the request to Google as if it were a web browser, and receiving HTML output in return. The IRC client only wants to see a short bit of text, so the script attempts to parse out the juicy bits from the HTML output. This process of screen scraping is wobbly at best; any subtle formatting or presentation change of the Google search results page can break the script entirely. This happens frequently due to the web's inherent tendency to mix content and presentation semantics; the good news is that the W3C is finally catching on to this.

A more robust way to do these types of interactions would be to use a more rigorous and standardized process for asking questions and getting answers; in a word: an API. A large part of the value of an API is that the semantics of exchanging information and interacting with other pieces of software are 'locked down', in the sense that the API vendor wants you to trust that those semantics will continue to work as designed for the life of the API. It just so happens that Google has several APIs for accessing their various services. Because Eggdrop scripting is done solely in Tcl, I started there. It didn't take long to find Web Services for Tcl, which is precisely what I needed. One downside to the Web Services for Tcl library is that it depends on a significant number of other (mostly non-standard) Tcl libraries.
Tcl has no package/module/library management system that might ease the process of installing these other libraries, so it took me a bit of time to get it all going (and I mostly know what I'm doing). The average Eggdrop user learned Unix in order to use Eggdrop itself, so they are typically not of the sysadmin variety (as it happens, Eggdrop is what initially got me into Unix, though I've come a ways since 1995 or so). A bigger downside is that Google no longer allocates new SOAP API keys, so if you didn't get one prior to Dec 5, 2006, you are s.o.l. I got one for some reason, even though I'm only really getting around to using it now. Anyway, after going through all the trouble of getting this library operational, I figured I'd go ahead and bang out a quick Eggdrop interface to Google based on Web Services for Tcl, so there you have it.
Add hooking support

[Work in progress, will add more functionality in following patches.]

This PR should add basic hooking support to KubeVirt. With this code in, it should be possible to enable the hooking feature gate and request a hook sidecar using a VM annotation. Such a hook sidecar could subscribe to the OnDomainDefine hook point and change the default Domain XML. This PR does not cover security: if the feature gate is enabled, anyone can request any hook sidecar. It also does not cover a hooking framework (the reason for that is to keep even the bare gRPC communication well documented).

I'm having trouble with vendoring. I created a basic hook sidecar, https://github.com/phoracek/kubevirt-hook-smbios. Compilation fails with:

➜ kubevirt-hook-smbios git:(master) go build -o kubevirt-hook-smbios cmd/smbios.go
# github.com/phoracek/kubevirt-hook-smbios/pkg
pkg/smbios.go:41:29: cannot use server (type *"google.golang.org/grpc".Server) as type *"kubevirt.io/kubevirt/vendor/google.golang.org/grpc".Server in argument to v1alpha.RegisterHookServer

Any idea how to fix that? Do I need to make some changes in KubeVirt? I did not use vendoring in the hook since it's problematic to do that without this PR merged. cluster-sync of KubeVirt fails too:

➜ kubevirt git:(hooks) ✗ make cluster-sync
./cluster/build.sh
Building ...
sha256:fa41ebaab36bba6abdf6a0ffd06a3187eced2528a5642dd8e0faeb4694a2d6d4
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory '/home/phoracek/Code/gopath/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:fa41ebaab36bba6abdf6a0ffd06a3187eced2528a5642dd8e0faeb4694a2d6d4
go version go1.10 linux/amd64
go version go1.10 linux/amd64
vendor/google.golang.org/grpc/status/status.go:49:2: cannot find package "google.golang.org/genproto/googleapis/rpc/status" in any of:
	/root/go/src/kubevirt.io/kubevirt/vendor/google.golang.org/genproto/googleapis/rpc/status (vendor tree)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/google.golang.org/genproto/googleapis/rpc/status (from $GOROOT)
	/root/go/src/google.golang.org/genproto/googleapis/rpc/status (from $GOPATH)
make[1]: *** [Makefile:16: build] Error 1
make[1]: Leaving directory '/home/phoracek/Code/gopath/src/kubevirt.io/kubevirt'
make: *** [Makefile:66: cluster-build] Error 2

This happens only with the last commit. Maybe it is caused by go get -u -d github.com/golang/protobuf/protoc-gen-go added to docker-builder/Dockerfile. Any ideas how to fix those problems? Thanks a lot.

I tried to refresh vendoring, but make deps-update fails with:

[ERROR] Error scanning google.golang.org/grpc/balancer: cannot find package "." in:
	/root/.glide/cache/src/https-google.golang.org-grpc/balancer
[ERROR] Error scanning google.golang.org/grpc/balancer/roundrobin: cannot find package "." in:
	/root/.glide/cache/src/https-google.golang.org-grpc/balancer/roundrobin
[ERROR] Error scanning google.golang.org/grpc/connectivity: cannot find package "." in:
	/root/.glide/cache/src/https-google.golang.org-grpc/connectivity
[ERROR] Error scanning google.golang.org/grpc/encoding: cannot find package "."
in: /root/.glide/cache/src/https-google.golang.org-grpc/encoding
[ERROR] Error scanning google.golang.org/grpc/encoding/proto: cannot find package "." in:
	/root/.glide/cache/src/https-google.golang.org-grpc/encoding/proto
[ERROR] Error scanning google.golang.org/grpc/internal/backoff: cannot find package "." in:
	/root/.glide/cache/src/https-google.golang.org-grpc/internal/backoff
[ERROR] Error scanning google.golang.org/grpc/internal/channelz: cannot find package "." in:
	/root/.glide/cache/src/https-google.golang.org-grpc/internal/channelz
[ERROR] Error scanning google.golang.org/grpc/resolver: cannot find package "." in:
	/root/.glide/cache/src/https-google.golang.org-grpc/resolver
[ERROR] Error scanning google.golang.org/grpc/resolver/dns: cannot find package "." in:
	/root/.glide/cache/src/https-google.golang.org-grpc/resolver/dns
[ERROR] Error scanning google.golang.org/grpc/resolver/passthrough: cannot find package "." in:
	/root/.glide/cache/src/https-google.golang.org-grpc/resolver/passthrough

The kubevirt building problem was fixed with updated vendoring. Going to add example hook under kubevirt now.

> The kubevirt building problem was fixed with updated vendoring. Going to add example hook under kubevirt now.

excellent! Eventually I think we'll want a sample-hook-sidecar repo in the kubevirt project that people can clone as a skeleton for their own sidecar. We don't have to tackle that right away though.

Thanks for the reviews. Just posted the hooks manager with sidecar collecting implemented. It's not perfect by any means, but it works. Will implement the actual hook point tomorrow, do some refactoring and fix according to your comments. In the current state, the mechanism should work; check https://github.com/kubevirt/kubevirt/pull/1171/files#diff-db3328ed196d949f1648bca22fe9c478 if you want to try it out. In the next patch I will push fixes for your comments, then logging, licenses, comments.
I'm not sure about the state of hooks/manager.go, the logic there is a bit too crazy.

My idea would be to use a combination of log inspection (verify the hook is called successfully) and 'virsh dumpxml ' in the VM's pod to verify that the domain xml change was picked up. You might have a better idea though.

@davidvossel do you have any recommendation which existing module I should follow?

@davidvossel not all commits here are perfectly atomic. Do you prefer to squash the PR, keep it as it is, or remake all commits (please don't)?

> not all commits here are perfectly atomic. Do you prefer to squash the PR, keep it as it is or remake all commits (please don't).

@phoracek lol, yeah don't waste your time trying to go back and make each commit perfectly concise. Just keep moving forward with new commits as feedback comes in. If that means we have 30 commits at the end of this, then so be it :)

> do you have any recommendation which existing module I should follow?

@phoracek Yep, this shouldn't be too difficult. We already have code that handles remotely executing a command on a pod and getting output in our functional test suite. For an example of how to execute a command on a pod in the test suite, take a look at tests/vmi_networking_test.go's usage of the `tests.ExecuteCommandOnPod` function. The tests/vmi_lifecycle_test.go file has a lot of good examples of how to start a vmi in the functional test suite.

> lol, yeah don't waste your time trying to go back and make each commit perfectly concise. Just keep moving forward with new commits as feedback comes in. If that means we have 30 commits at the end of this, then so be it :)

you will need to rebase with master though

Ok, I added functional tests coverage and rebased it on master. Could you please review? @davidvossel @fabiand @senior7515 @SchSeba

ok to test

retest this please

@phoracek LGTM.
Though architecturally, now that we see the shortcomings of the command line API (having hard-coded args in templates), and since we now have a client-server setup at least for the hook interface, it is probably worth considering making virt-launcher a service itself. Things that interact with it could be gRPC clients too. But just a thought. Thanks!

Awesome - looking forward! :tada:
Scenario: migrate servers from data centre 1 to data centre 2 while maintaining their existing IP addresses, with minimal downtime. Diagram attached.

I'm planning on using a Cisco ASR1001-X with an AES license at DC1 and DC2, and configuring the routers with OTV to extend 10 VLANs between the data centres. The join interface would connect directly to the WAN circuit NTU, and the internal interface would connect to the switch and be configured as a service instance with 10 VLANs tagged using dot1q. The problem is that the DC1 switch infrastructure is using Cisco Nexus 56xx configured with FabricPath. I can't find any information suggesting that I can patch the Cisco ASR router's internal interface directly into a FabricPath switchport, or what the configuration would be. Older OTV documentation refers to the Nexus 7000 and OTV, stating the following: "Because OTV encapsulation is done on M-series modules, OTV cannot read FabricPath packets. Because of this restriction, terminating FabricPath and reverting to Classical Ethernet where the OTV VDC resides is necessary." Is this true for the Cisco ASR also? The only workaround I can think of is to install a cheap Catalyst switch connected to the FabricPath domain and re-introduce spanning-tree at the edge, but this seems backwards to me. Any help or suggestions appreciated. Thanks

Based on your diagram, it should work because the Nexus switch is the demarcation point between classical Ethernet and FabricPath. FabricPath frames should not hit the ASR router. As long as the ASR is configured properly to receive the dot1q tag from the Nexus and bridge it into the overlay interface, the layer 2 domain will be extended.

So is it possible to have switchports configured as routed, FabricPath and trunk/access in a FabricPath configuration? Do I need to add any spanning-tree pseudo or priority configuration?
On the ASR internal interface:

    no ip address
    service instance 1 ethernet
     encapsulation dot1q 1
    service instance 2 ethernet
     encapsulation dot1q 2
    service instance 3 ethernet
     encapsulation dot1q 3

On the Nexus port facing the ASR:

    switchport mode trunk
    switchport trunk allowed vlan 1,2,3

All the Nexus switches in the FabricPath domain are considered a single giant switch, and the STP root must belong to the FabricPath domain. On the ASR routers (OTV AED), you would need to specify the VLANs to be extended over the WAN and the site VLAN for the communication between local OTV routers. The configuration on IOS XE has a different flavor when compared with NX-OS: it requires bridging between the internal interface and the overlay interface. Refer here for the configuration guide on IOS XE:
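For context, a minimal OTV sketch on IOS XE might look like the following. This is a hedged illustration, not a verified configuration: the interface names, multicast group addresses, site identifier, and bridge-domain numbers are all placeholders I have chosen, not values from the thread, so check them against the IOS XE OTV configuration guide.

```
! Site-level OTV settings (placeholder values)
otv site bridge-domain 99
otv site-identifier 0000.0000.0001
!
! Overlay interface bridging the extended VLAN over the WAN
interface Overlay1
 otv control-group 239.1.1.1
 otv data-group 232.1.1.0/28
 otv join-interface GigabitEthernet0/0/0
 service instance 1 ethernet
  encapsulation dot1q 1
  bridge-domain 1
!
! Internal interface towards the Nexus trunk
interface GigabitEthernet0/0/1
 service instance 1 ethernet
  encapsulation dot1q 1
  bridge-domain 1
```

The key point matching the thread's advice: the same bridge-domain ties the dot1q service instance on the internal interface to the service instance on the overlay interface.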
And what about when you decide that highlighted text should be red instead of blue? With the first method, you would have to manually edit the tags everywhere you used them - with CSS, you just change the single ".highlight" rule! Usually text appears in the middle of a CSS navigation bar button, correct? Well, all you need to achieve this effect is the text-align: center property. You assign it to the appropriate elements. If you would like a more obvious separation between your navbar buttons, you can add CSS borders. You can make boxes for each item using the border property, or just simple lines. To get lines separating the items, we add the border-right property to the items to create link dividers. When linking to an external CSS document, the "in head" declaration would win because it is closer to the element being styled. This is only true of equally weighted selectors. Look up CSS specificity for a good description of the weight of a given selector. There, you'll see a long list of elements. Those are your many stylesheets, which are enqueued from a wide variety of sources. When done properly, each should have an identifying ID. Most likely, you'll notice that the plugin style comes later in the head than your theme style. I'm sure that after finishing this tutorial, you will be able to create slick vertical and horizontal navigation bars. For good measure, let's sum up the information: the lower pane shows all the properties that are defined by the CSS rule that is selected in the middle pane. In this case you can see that the rule for img defines the border, float and margin-right properties. The browser will parse the HTML and create a DOM from it, then parse the CSS.
Since the only rule available in the CSS contains a span selector, it will apply that rule to each one of the three spans. The updated output is as follows: CSS files are saved in a plain text format, which means you can open and edit them with any text editor. However, you may want to use web development applications, such as Dreamweaver and ColdFusion Builder, which offer more advanced features for editing CSS files. I do of course make sure to tell them that it isn't a problem with the style sheets fighting each other; that is the way the language was designed. The manner in which the rules are written in a style sheet is the cascading manner: back-to-back rules, in layers, for each HTML element of the page, make up the cascading style sheet. W3.CSS can also speed up and simplify web development, since it is easier to learn and easier to use than other CSS frameworks. Any CSS rules that you add between the brackets of this rule will only be applied if the browser window is 480 pixels wide or less. CSS is short for Cascading Style Sheets and is the primary language used to describe the look and formatting of web pages across the web, and of markup documents (e.g. HTML and XML) in general.
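The two navbar techniques discussed above (centered button text and border dividers between links) can be sketched like this; the `.navbar` class name is made up for illustration:

```css
/* Center the label inside each navbar item */
.navbar li {
  text-align: center;
  /* A right border acts as a divider line between links */
  border-right: 1px solid #ccc;
}

/* No divider after the last item */
.navbar li:last-child {
  border-right: none;
}
```

The `:last-child` rule is optional; it just avoids a stray divider at the end of the bar.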
I have a client that wants to regularly (and somewhat randomly) change their hours of operation. I’ve managed the basics with the Day/Night control. However, they want to record a message that lets them announce the new hours to customers that call in. My original thought was to have the IVR send requests for their hours to a voicemail extension. That way, they’d just go to that extension, record a new outgoing message and then be all set. However, the voicemail wants to offer the caller the opportunity to leave a message. The client doesn’t want them to leave messages. Any thoughts on how to pull this off? I’m not married to any one solution. The only requirement is that the end-user must be able to change the message easily. Giving them access to the FreePBX interface is not preferred. If your FreePBX version is new enough, you can tie a feature code to the recording in System Recordings. Then the user dials that feature code and re-records it. I am not sure why you don’t want to use an announcement? That is what it is for. I do this, but the setup requires a bit of linux manipulation. I’ve posted the procedure here on the forum. I’ll see if I can find it. As cosmicwombat points out, all supported versions of FreePBX allow you to tie a feature code to a system recording, which can also be password protected. That is one of the more common use cases of this: to easily change the message of an IVR by simply changing the recording it is tied to. From within the phone system you can just call the feature code. From outside, you would have to create a Misc Destination associated with the recording’s feature code and then either point a DID at it, or add a ‘hidden’ option to your IVR that would let you choose to record the recording from outside the system. Can’t find my original post but here goes “from memory”… - You’ll actually be using the announcements, but you’ll be updating them from a voicemail-only extension. - Create a VM-only extension.
I usually create this with a number that is outside the extension plan of my box. - After creating the VM extension, record an unavailable announcement on it. - Create a new system recording (doesn’t matter what you say in it). Give it a descriptive file name like “ChangeHours” or something. - Create a new announcement using the file in #4. Now for the linux stuff - Locate the voicemail directory for the extension you have created. (probably: /var/spool/asterisk/voicemail/default/xxxx/). - You’ll notice within that directory there are a couple of unavail files, “.wav” and “.WAV”. You’ll use the “.wav” (lower case). - Make note of the full path to that file. (Probably: /var/spool/asterisk/voicemail/default/xxxx/unavail.wav). xxxx is the extension #. - Now locate the announcement file you created in step 4. (Probably in /var/lib/asterisk/sounds/custom). Go to that directory. - I know this sounds strange, but remove the file. (rm -f ChangeHours.wav) - Now link the ChangeHours.wav file to the unavail.wav from step 7… (ln -s /var/spool/asterisk/voicemail/default/xxxx/unavail.wav ChangeHours.wav) Now any time the announcement is played, you’ll actually be playing the unavail message from extension xxxx. Let me know if any of this doesn’t make sense. Make a new system recording (or view an old one) and look for the “Link to Feature Code” button. If you do not see that button, then my guess is that your FreePBX version is out of date. Philippe - I’m a little unclear as to how to tie a feature code to changing a pre-existing announcement. Can you point me to some documentation that might discuss how to implement this? Bill - this makes great sense. I’m interested in seeing how this might be done directly from the FreePBX interface, but if that proves unworkable, then I’ll do it behind the scenes as you describe above.
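The symlink trick in the steps above can be demonstrated end-to-end in a scratch directory. This is only a sketch: the temp directories stand in for the real Asterisk paths, and the echoed text stands in for the audio the user records via the voicemail extension.

```shell
# Scratch directories standing in for the real Asterisk paths
VMDIR=$(mktemp -d)    # stands in for /var/spool/asterisk/voicemail/default/xxxx
SOUNDS=$(mktemp -d)   # stands in for /var/lib/asterisk/sounds/custom

# The greeting the user records via the voicemail extension
echo "new hours" > "$VMDIR/unavail.wav"

# Remove the placeholder system recording, then link it to the greeting.
# Note the order: ln -s TARGET LINKNAME, so the link lives where the
# announcement expects its file and points at the voicemail greeting.
cd "$SOUNDS"
rm -f ChangeHours.wav
ln -s "$VMDIR/unavail.wav" ChangeHours.wav

# Playing the announcement now plays whatever the user last recorded
cat ChangeHours.wav   # prints "new hours"
```

Because the link points at the greeting file (not the other way around), re-recording the greeting instantly changes what the announcement plays.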
Google announced their new Gears browser extension API for enabling offline capabilities in Web apps, with these services:
- LocalServer to cache and serve application resources locally.
- Database (SQLite) to serve as a searchable local data store with online synchronization capability.
- WorkerPool to enable clients to run resource-intensive operations on a new thread.
Google's Choosing an Offline Application Architecture page shows how the client application's UI can switch between the Web server and local database cache as its data source. Running SQLite or any other database in the browser requires write access to the client's file system. Synchronization can be manual, as in the Gears-enabled Google Reader (a button toggles between online and offline mode), or background, which can take advantage of the WorkerPool. Update 6/2/2007: Tim Anderson's Why Google Gears? Thoughts from Google Developer Day provides an independent overview of Google Gears and notes the importance of synchronization. Tim also mentions full-text search in his SQLite will be everywhere post. According to Michael Cleverly's October 11, 2006 blog entry for D. Richard Hipp's keynote at the 13th Annual Tcl/Tk Conference: As of SQLite 3.3.8 (released Monday!), there is full-text search support in SQLite. Richard was authorized to announce the help from engineers at Google. A later question elicited that roughly half of the FTS code was written by Richard and Dan, the other half by four engineers from Google. He isn't able to comment on their motivations/plans/internal usage. [Minor edits.] [The blog also notes that Microsoft uses SQLite in the Xbox.] Google's Database API page states, "Google Gears includes SQLite's full-text search extension fts2." Scott Hanselman says in his Google Gears - Maybe all Rich Internet Applications needed was Local Storage and an Offline Mode post: Stunning move by Google today in the Rich Internet Application space.
While most of us (myself included) are off debating Flash vs. Silverlight vs. Apollo vs. Whatever, Google introduces Google Gears ... a technology all of the above (or none of the above) can utilize ... This is a huge move and is quite brilliant. In one seemingly innocuous move (and one tiny 700k (yes, 700K) download) Google is well positioned to get Google Docs, including Writely, Spreadsheet and Presentation, along with who knows what else, enabled for offline use. And the whole thing is Open Sourced via the New BSD License. Dare Obasanjo quotes Robert Scoble's Google brings developers offline with “Gears”; new offline Reader post, and asks "[W]hat took them so long?", and closes with "Welcome to the future." Gears supports using Adobe’s Apollo and Flash and should support other technologies including Microsoft’s Silverlight. [Emphasis added.] But eWeek's Darryl K. Taft clarifies Gears' support for Silverlight in today's "Google Gears Aims to Bolster Adobe Apollo, Others" article: Meanwhile, Google Gears will work with Microsoft's Silverlight cross-platform, cross-browser tool for building RIAs. Gears hopes to complement Microsoft's Silverlight and other technologies by offering an offline infrastructure that developers can incorporate into any Web application—even those that use plug-ins from other companies, a Google spokesperson said. [Emphasis added.] However, Microsoft has its own local database cache: SQL Server Compact Edition (SSCE) and Synchronization Services (Sync Services). Orcas Beta 1 includes Sync Services Beta 1 and an early version of the graphical Sync Designer for SSCE 3.5. Sync Services is limited to synchronization between SSCE 3.1 or 3.5 on the client side and SQL Server 200x behind the Web server. Unfortunately, SSCE v3.x doesn't have full-text search capability. 
Update 6/6/2007: Dare analyzes Google Gears in his June 5, 2007 Google Gears: Replacing One Problem with Another post and concludes that the missing synchronization component might be a showstopper: It seems that without providing data synchronization out of the box, Google Gears leaves the most difficult and cumbersome aspect of building a disconnected Web app up to application developers. This may be OK for Google developers using Google Gears since the average Google coder is a Ph.D but the platform isn't terribly useful to Web application developers who want to use it for anything besides a super-sized HTTP cookie. Collectively, the comments to Dare's post point out that general-purpose synchronization systems aren't easy to get right and that users of the Gears toolkit will need to write their own sync services. One commenter (Sam Sethi) points out that the Remember the Milk online to-do list has added Google Gears synchronization. Update 6/7/2007: eWeek's Microsoft Watch blogger, Joe Wilcox, posted his Can Microsoft Be the Wrench in Google Gears? article on June 6, 2007. Joe quotes the same paragraph from Dare's entry (above), and observes: Quite possibly, synchronization is the killer app that will determine whether desktop software maintains its relevance or the Web becomes the more popular platform. ... The natural place for synchronization services is as part of the operating system. Unfortunately, I think Joe's blog is missing something here, and I'm speaking as a developer. Synchronizing data is never a trivial nor generic task, because how it works and what it does depends on the particular application, often requires substantial user input, and is heavily tied to file formats and how the data is organized inside the file. ZDNet blogger David Berlind posted an audio interview about Google Gears that he conducted on May 30, 2007 with Google director of engineering Linus Upson.
Berlind quotes Adobe’s vice president of product management Michele Turner: [W]e were developing identical technology to facilitate the offline component of the Apollo runtime. ... For example, they’re [Google] using SQLite and we were already incorporating SQLite into Apollo. So, now we’re aligning our efforts with Google on things like the synchronous and asynchronous calls that must be made to the SQLite database in order to enable the offline capability. Google Gears piqued more interest in data synchronization than I expected. Recent Microsoft Synchronization Services Updates My "Update Local Data Caches with Sync Services" article in the May 2007 issue of Visual Studio Magazine carries this deck: "The Microsoft Synchronization Services 1.0 API for SQL Server 2005 Compact Edition and the new Sync Designer in the Orcas March 2007 CTP team up to generate a local data cache that synchronizes incrementally and bidirectionally with SQL Server [Express]." Sync Services and the Sync Designer are the technologies behind Occasionally Connected Systems, which--together with the Entity Framework--comprise the Dynamic Development "Pillar of Katmai." My earlier "Lighten Up Your Local Databases" in the March 2007 Visual Studio Magazine issue covers SQL Server 2005 Compact Edition v3.1, which is the first version licensed for use in conventional PCs, and v3.5, which is included in Visual Studio codename "Orcas." Microsoft initially called v3.5 "SQL Server Everywhere Edition." V3.5 adds important features, such as timestamp fields to aid synchronization and simplify concurrency conflict detection. SQL Server Mobile Edition v3.0 and earlier were licensed for use by devices and Tablet PCs only. Microsoft's Rafik Robeal released Demo VII: Tombstone Cleanup and Detecting Stale Offline Clients for the Sync Services runtime. Tombstone cleanup detects and prevents stale clients from synchronizing and tells them to reinitialize their local data cache.
A stale offline client is a computer that hasn't synchronized in the last n days, so the client's sync request is rejected. Rafik has updated the preceding six demos to VS Orcas Beta 1 in his Sync Services Demos Refresh post: - Demo I: Offline Application - SyncAdapterBuilder shows you how to use SyncAdapterBuilder to get started quickly - Demo II: Offline Application - TSQL+SProcs shows you how to author SyncAdapter manually using TSQL statements and stored procedures - Demo III: Offline Application - WebService shows you how to setup N-Tier using web services - Demo IV: Offline Application - Conflict Handling shows you how to go about conflict detection and resolution - Demo V: Offline Application - Oracle Backend shows you how to use sync services against Oracle database - Demo VI: Offline Application - Decoupled Change Tracking shows you how to track changes without changing the schema of the base table by using a separate tracking table Update 6/6/2007: Rafik just added a Sync Events for Conflict Handling, Progress Reporting, Business Logic … post that describes Sync Services new set of events. Some of the events are implemented in Beta 1, but you'll have to wait for Beta 2 for the others. See my Synchronization Services Runtime Beta 1 for ADO.NET 2.0 Is Available and A Sync Services Bidirectional Test Harness posts for issues with the Beta 1 release.
I want to ask you how I can get my Fusion Drive back together (merge HDD and SSD together). I was installing Windows 10 and during installation it crashed, and I was not able to boot into Windows nor OS X. I also lost the recovery partition, so I am going to install it with Internet Recovery or with my installed OS X on my external HDD. Thanks for any advice. - Detach any external drive (especially your external Time Machine backup drive). Restart to Internet Recovery Mode by pressing alt cmd R at startup. The prerequisites are the latest firmware update installed, either ethernet or WLAN (WPA/WPA2), and a router with DHCP activated. On a 50 Mbps line it takes about 4 min (presenting a small animated globe) to boot into a recovery netboot image, which usually is loaded from an apple/akamai server. I recommend ethernet because it's more reliable. If you are restricted to WIFI and the boot process fails, just restart your Mac until you succeed in booting. Alternatively you may start from a bootable installer thumb drive (preferably Yosemite or El Capitan) or a thumb drive containing a full system (preferably Yosemite or El Capitan). Rebuild Fusion Drive All data on the disks will be deleted. - Booted to Internet Recovery Mode, open Utilities → Terminal in the menubar and enter: diskutil cs list to get a CoreStorage listing. - Copy the Logical Volume UUIDs one by one, if any are listed. - Now delete all Logical Volumes with diskutil cs deleteVolume LVUUID. - Copy the Logical Volume Group UUID, if any is listed. It's the first listed in the listing of diskutil cs list. - Then delete the Logical Volume Group with diskutil cs delete LVGUUID. - Enter exit and quit Terminal. - Open Disk Utility. Enter 'Ignore' if you are asked to fix the drives. Choose your SSD and erase it: 1 Partition, Mac OS X Extended (Journaled), GUID Partition Table, and hit Erase. Please check that the size is ~121 GB. Choose your HDD and erase it: 1 Partition, Mac OS X Extended (Journaled), GUID Partition Table, and hit Erase.
Please check that the size is ~2 TB. - Quit Disk Utility and open Terminal. Example (your disk identifiers and sizes are different of course: your volume SSD probably has the identifier disk0s2 and the size 121 GB, and your volume HDD probably has the identifier disk1s2 and the size 2.0 TB): diskutil cs create "Name" IdentifierSSD IdentifierHDD. In your case probably diskutil cs create "Macintosh HD" disk0s2 disk1s2. - Copy the resulting LVGUUID, then: diskutil cs createVolume LVGUUID jhfs+ "Macintosh HD" 100%. - Run diskutil cs list and check the size of your Logical Volume. It should have the size ~1.121 TB. - Open Disk Utility and check your newly created volume for errors. - Quit Disk Utility. - Open 'Reinstall OS X'. Install and configure OS X. The original OS X your Mac came with will be installed. - After configuring OS X, download the newest available system installer with the App Store and upgrade your system.
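Assuming the example identifiers above (disk0s2 for the SSD, disk1s2 for the HDD; yours will differ), the whole destructive Terminal sequence condenses to roughly the following sketch. The `<LVUUID>`/`<LVGUUID>` placeholders must be replaced with the UUIDs from your own `diskutil cs list` output.

```
diskutil cs list                          # note LV UUIDs and the LVG UUID
diskutil cs deleteVolume <LVUUID>         # once per listed Logical Volume
diskutil cs delete <LVGUUID>              # delete the Logical Volume Group
diskutil cs create "Macintosh HD" disk0s2 disk1s2
diskutil cs createVolume <LVGUUID> jhfs+ "Macintosh HD" 100%
diskutil cs list                          # verify the ~1.121 TB Logical Volume
```

Again: this erases both disks, so it belongs only in the recovery workflow described above.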
connmanctl − Connman CLI

connmanctl [state | technologies | enable technology|offline | disable technology|offline | tether technology on|off | tether wifi on|off ssid passphrase | services [service] | peers peer | scan technology | connect service|peer | disconnect service|peer | config service option arg... | vpnconnections [connection] | help]

Connmanctl is a Connman command line interface which can be run in two modes: a plain synchronous command input, and an asynchronous interactive shell. To run a specific command the user may enter connmanctl command [options] or just connmanctl; in the latter case an interactive shell will start.

Connmanctl can handle most network connections. It can be used to enable/disable any technology that exists on the system, display a list of services available, connect/disconnect networks, show properties of the system, the technologies, and any individual service, and configure all of the properties. It is also able to monitor changes in the properties of the services, technologies, and the system.

In the interactive shell, all of the same commands can be used. It provides quicker usage when needing to use connmanctl more extensively. In addition, connecting to protected wireless access points for the first time requires the interactive shell.

help
    Shows the abbreviated help menu in the terminal.

state
    Shows the system properties. Includes the online state of the system, offline mode, and session mode.

technologies
    Shows a list of all technology types existing on the system and their properties. See the properties section of the Technology API for explanations of each property.

enable technology
    Enables the given technology type (e.g. ethernet, wifi, 3g, etc.). Turns power on to the technology, but doesn’t connect unless there is a service with autoconnect set to True.

disable technology
    Disables the given technology type. Turns power off to the technology and disconnects if it is already connected.

enable offline
    Enables offline mode.
    Disconnects and powers down all technologies system-wide; however, each technology can be powered back on individually.

disable offline
    Disables offline mode. Technologies are powered back on according to their individual policies.

tether technology on | off
    Enable or disable tethering on technology. Ethernet cannot be tethered by default since tethering it usually breaks local networks. See connman.conf(5) for enabling.

tether wifi on | off ssid passphrase
    Enable or disable wireless tethering, as well as set the SSID and passphrase.

services
    Shows a list of all available services. This includes the nearby wifi networks, the wired ethernet connections, bluetooth devices, etc. An asterisk in front of the service indicates that the service has been connected before.

services service
    Shows a list of all properties for that service. Only the service path (e.g. wifi_6834534139723_managed_none) is accepted as a parameter.

scan technology
    Scans for new services on the given technology.

connect service
    Connects to the given service. Some services need a so-called provisioning file in order to connect to them, see connman-service.config(5).

disconnect service
    Disconnects from the given service.

move-before service target-service
    Prefer connecting to service over target-service.

move-after service target-service
    Prefer connecting to target-service over service.

config service option arg...
    Configures a writable property of the given service to the value(s) entered after option. See the Config Options subsection for details.

monitor
    Listens for and displays DBus signals sent by Connman. If a currently monitored property changes, the changes will be shown. If no target is specified, all changes will be shown. See the Monitor Options subsection for a summary of parameters.

vpnconnections
    Shows a list of all available vpn connections.

vpnconnections connection
    Shows the current properties of connection.

agent on | off
    Enable or disable the wireless agent, used for entering wireless network passphrases. See the EXAMPLE section of this man page for an example of connecting to a wireless access point.
vpnagent on | off
    Enable or disable the vpn agent, used for entering vpn credentials.

autoconnect on | off
    Sets the autoconnect property of the service.

ipv4 off | dhcp | manual address netmask gateway
    Configures the IPv4 settings for the service. The argument off means that IPv4 won’t be used, dhcp means that dhcp will be used to get the settings, and manual means that the given arguments will be used as IPv4 settings. address, netmask and gateway must be valid IPv4 addresses. See the EXAMPLE section of this man page for details.

ipv6 off | auto | manual address prefixlength gateway
    Configures the IPv6 settings for the service. The argument off means that IPv6 won’t be used, auto means that settings will be asked from the network, and manual means that the given arguments will be used as IPv6 settings. address and gateway must be valid IPv6 addresses. prefixlength is the length of the prefix in bits. See the EXAMPLE section of this man page for details.

nameservers dns [...]
    Set the list of nameservers, separated by spaces.

timeservers server [...]
    Set the list of timeservers, separated by spaces.

domains domain [...]
    Set the list of search domains, separated by spaces.

proxy direct | auto URL | manual server[...] [--excludes server[...]]
    Configures the proxy settings for the service. direct means that no proxy will be used. If using auto without a parameter, the network will be asked for the proxy settings. Otherwise, use URL as a proxy autoconfiguration URL. When set to manual, the first list of servers is used as proxy servers, and the traffic to the second list of servers is excluded from the proxy. The list of excluded servers is optional. See the EXAMPLE section of this man page for details.

services
    Listens for changes to services, for example a service getting an IP address.

tech
    Listens for changes to technologies, for example a technology getting enabled.

manager
    Listens for the changes to global properties, available technologies, services, and peers.
vpnmanager
    Listens for added or removed vpn connections.

vpnconnection
    Listens for the changes to vpn connections, for example connecting to a VPN.

Listing available technologies:
    $ connmanctl technologies

Listing available services:
    $ connmanctl services

Scanning for wireless networks:
    $ connmanctl scan wifi

Using the interactive mode to access a wireless access point:
    connmanctl> agent on
    connmanctl> connect wifi_100ba9d170fc_666f6f626172_managed_psk
    Agent RequestInput wifi_100ba9d170fc_666f6f626172_managed_psk
      Passphrase = [ Type=psk, Requirement=mandatory ]

Configuring a static IP from the command line:
    $ connmanctl config wifi_100ba9d170fc_666f6f626172_managed_psk ipv4 manual 192.168.1.101 255.255.255.0 192.168.1.1

Changing the IP back to dhcp:
    $ connmanctl config wifi_100ba9d170fc_666f6f626172_managed_psk ipv4 dhcp

Setting a proxy server:
    $ connmanctl config wifi_100ba9d170fc_666f6f626172_managed_psk proxy manual proxy.example.com

Setting multiple proxy servers:
    $ connmanctl config wifi_100ba9d170fc_666f6f626172_managed_psk proxy manual proxy.example.com http://httpproxy.example.com --excludes internal.example.com

Tethering a wireless connection (ssid "SSID", passphrase "password"):
    $ connmanctl tether wifi on SSID password

connman.conf(5), connman-service.config(5), connman-vpn-provider.config(5), connman(8), connman-vpn(8)
Three cafes, R, S, and M: it takes one step to go from one of these cafes to another. How many unique ways are there to start at R and end at M in seven steps? There is an equilateral triangle with one cafe at each vertex. They are called R, S, and M. I tried to solve this problem but got stuck. Here is my work: After knowing that R and M are the endpoints, we can alternate between the 3 cafes for 5 steps or 6 times. I have two choices for each cafe. Hence, the answer is 2^6 or 64. After some testing and writing down some possibilities, I saw that this theory was wrong. I then tried to use casework to solve this problem. Here are my cases: Case 1: Visit Cafe M once. For this, it is easy to see that there is 1 way to visit Cafe M once, making it the last cafe to visit on our list. Case 2: Visit Cafe M twice. For this case, we can do 7 choose 2 (using the formula for distributing n items into r groups) to find the number of ways to visit Cafe M twice, one being at the end. For this, we get 21. Case 3: Visit Cafe M thrice. For this, I did 6 choose 2 (using the formula), and found it to be 15. Case 4: Visit Cafe M four times. For this, I found 10 ways according to the formula by doing 5 choose 2. Altogether, I got 53 as my answer. However, after checking my answer, it said I was wrong and 43 was the answer. Where did I go wrong? Thanks! The title says three cafes while the body says six. Please make a clear statement of the problem. How many are there? Which are one step apart? I am guessing that there are three at the corners of an equilateral triangle, and that at any point you can go to either of the other two cafes, but you never say that. If you visit M only once, don't you have to alternate between the other two so there is only one way? There is an equilateral triangle that has one cafe at each vertex. They are named R, S, and M. You should edit the question to make it clear. Comments can disappear and people shouldn't be left in the dark as they read the question.
You also didn't respond to my comment on Case 1. If you only visit M once and end there, you must alternate between R and S until then, so there is only one way. Using something choose something is completely the wrong approach. Please explain why you are using it. You want a recurrence relation linking the number of ways with $k$ steps ending at each place to the number of ways with $k+1$ steps. I did this because I was counting how many ways to visit the cafes after setting a fixed place for Cafe M. I edited my body text to be more clear. I also changed my case 1 because I understood what you meant by it. How is 43 the answer though? Let $S(n)$ be the number of ways to start at $R$, take $n$ steps, and end at $S$. Define $R(n), M(n)$ similarly. At each step you can go anywhere except where you are so the recurrences are $$S(n)=R(n-1)+M(n-1)\\R(n)=S(n-1)+M(n-1)\\M(n)=R(n-1)+S(n-1)$$ By symmetry $S(n)=M(n)$ and the starting condition is $R(0)=1,S(0)=0,M(0)=0$. A quick spreadsheet (copy the equations down) will give the answer. All the values are $2^n/3$ rounded one way or the other so the total is $2^n$ as you have two choices at each step. Are you saying that 64 is the answer? I also suggested a method that involved me saying that each step has 2 choices, making 64 the answer. I also said that this was wrong and that 43 was the answer. Please clarify if possible. 128/3 equals 42 2/3. Rounding this gives me 43. However, how can you get a fractional number of paths in the first place? The $R$ recurrence gives $2^n/3+2/3$ when $n$ is even and $2^n/3-2/3$ when $n$ is odd. The others are $2^n/3 \pm 1/3$ with the sign chosen to make the answer whole and depending on the parity of $n$. The sign for $R$ is opposite the signs for the other two
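For completeness, the recurrence in the answer above has a closed form (a standard parity argument, not spelled out in the thread): the number of $n$-step walks from $R$ to a *different* fixed vertex of the triangle is

$$M(n)=\frac{2^n-(-1)^n}{3},\qquad\text{so}\qquad M(7)=\frac{128+1}{3}=43.$$

This matches the accepted answer of 43 and explains the observation that all the values are $2^n/3$ rounded one way or the other: the correction term is only $\pm\tfrac{1}{3}$ (or $\pm\tfrac{2}{3}$ for the $R$ column).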
I am trying to test apps for IPv6 according to Apple's documentation at Supporting IPv6 DNS64/NAT64 Networks. I have difficulties connecting from the iPad to my app server. My Network Layout: The Fritzbox has IPv6 support turned on with 6to4 enabled, because my provider does not have IPv6. I ran a set of tests with IPv6 turned off. The result was the same (see list of test items below). macmedia runs El Capitan and iTunes, serving as a media server. macsrv runs El Capitan and OS X Server 5 with DNS, DHCP, File Sharing, Web, Wiki. My Test Process: I can set up the NAT64 network as described in the Apple document (in "Settings" hold the option key, press "Sharing", then "Internet Sharing", and release the option key). I ran some tests, one of them looked like this: - turn on NAT64 on macsrv - in iPad Air WLAN settings select the NAT64 network - the blue checkmark appears in front of the network name - the blue i-icon delivers this: - IP 169.254.55.58 / mask 255.255.0.0 - DNS 2001:2:0:aab1::1 - all other fields empty - in Safari on iPad Air: https://www.google.com/ -> takes a long time, eventually a message like: "Could not open the page, because the server does not respond any more." http://macsrv:8989/ -> could not find server http://macsrv.local:8989/ -> "It works!" (Just looked at the logs: the access in apache2 was logged with the IPv6 address of the en0 interface of macsrv) http://192.168.1.11:8989/ -> error like "Could not open page, because the iPad is not connected to the internet." (192.168.1.11 is macsrv) http://[2001:2::aab1:129a:ddff:fe4f:38f8]:80 -> takes a long time, eventually a message like: "Could not open the page, because the server does not respond any more." (like google.com; the address is the IPv6 address of the en0 interface on macsrv) http://192.168.1.115:8088/ -> error like "Could not open page, because the iPad is not connected to the internet."
  - http://app.intra.admadic.com:8088/ -> could not find server (this is the same as 192.168.1.115)

Once I connected the iPad Air to the macsrv NAT64 network and it received the IP address 192.168.2.2. This never happened again. When I start the NAT64 network on macmedia, the iPad cannot connect and the activity indicator keeps spinning forever.

I looked for processes running when NAT64 is enabled:

/usr/libexec/InternetSharing
rtadvd -c /etc/com.apple.mis.rtadvd.conf -f -s bridge100
unbound -c /etc/com.apple.mis.unbound.conf -d

There are two config files related to the server; the unbound settings are:

chroot: ""
pidfile: "/etc/unbound.pid"
chroot: ""
directory: "/etc"
username: ""
do-daemonize: no
access-control: ::0/0 allow
module-config: "dns64 iterator"
dns64-synthall: yes
dns64-prefix: 64:ff9b::/96
interface: ::0
forward-zone:
    name: "."
    forward-addr: 192.168.1.11

I checked these conf files on macsrv and macmedia and they are identical in every detail.

Q1. What could be the reason for NAT64 not working on macmedia?
Q2. Can I convince the NAT64 network to use my intranet DNS?
Q3. What could be the reason for NAT64 on macsrv to be sometimes working and sometimes not?
Q4. Why does the iPad get a 169.254.55.58 IPv4 address? (Shouldn't it only have IPv6 when connecting to NAT64?)
Q5. Do the com.apple.mis... conf files look OK? (Having 192.168.1.11 as forward looks fine to me...)
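For reference (my addition, not from the original post): the dns64-prefix in the unbound config determines how DNS64 embeds an IPv4 address into the low 32 bits of a synthesized AAAA record. A quick sketch of that synthesis, applied to the macsrv address:

```python
import ipaddress

def synthesize_dns64(prefix: str, ipv4: str) -> str:
    """Embed an IPv4 address into the low 32 bits of a DNS64 /96 prefix."""
    net = ipaddress.IPv6Network(prefix)
    v4 = int(ipaddress.IPv4Address(ipv4))
    # OR the 32-bit IPv4 value into the prefix's network address
    return str(ipaddress.IPv6Address(int(net.network_address) | v4))

# the well-known prefix from com.apple.mis.unbound.conf, applied to macsrv
print(synthesize_dns64("64:ff9b::/96", "192.168.1.11"))  # 64:ff9b::c0a8:10b
```

This is why a DNS64 client reaches IPv4-only servers through addresses like 64:ff9b::c0a8:10b even though the iPad itself has no IPv4 route to them.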
Have you ever worked with HTML before? This was my first time! If you’ve never edited code, it’s actually easier than you think. As long as you learn the rules and stick to them, anyone can learn how to read and edit basic code. Want to learn some basics? Keep reading!

What is code? Code is used for communicating with computers, to put it simply. Humans use code to give computers instructions on what actions they need to perform. There are a few different types of code, depending on what you want to use it for, but I’ll just describe a couple of them here.

HTML (Hypertext Markup Language) is the standard markup language for web pages. This is the language we use to create the structure and content of web pages. These elements tell the browser how to define headings, paragraphs, links, etc.

CSS (Cascading Style Sheets) is the language we use to design and style web pages. CSS describes how HTML elements are to be displayed on a screen and specifies the layout of web pages.

What can you do with it? Coding is used to create websites, webpages, programs, and the like. For my first coding exercise, I took my bio from this website and tried to figure out how to replicate it. My bio on this website was created with a drag-and-drop template on WordPress. Templates on WordPress are cool, but they aren’t fully customizable. Here’s a screenshot of what my bio looks like currently. Looks good, right?

How can you create content using code? There are plenty of sources of open, free code for anyone to use, along with plenty of websites that allow you to play around with code without actually messing anything up. Nice, right? I’ve been using this super-useful website: W3Schools.com.

Let’s look at the anatomy of HTML: If you were writing code and you wanted to create a paragraph that had the words “Hello World!” in it, this is a line of code you would write. Each “line” of code is made up of elements, and these elements contain a few things. First, each element needs an opening tag.
The opening tag tells the computer the specific action you want to start. In this instance, the <p> stands for: begin paragraph. Then, you fill your paragraph with whatever content you want to write. In this instance, they wanted to write “Hello World!”. Now when you’re finished with your paragraph, you need to tell the computer to close it. It’s similar to opening, but this time you need to put a forward slash in the tag to close it. The closing tag here is </p>. You’re done with this line of code!

HTML Example and Rules: Here is a simple example of HTML. Some rules of HTML are:

- Every page needs to start with <!DOCTYPE html> <html> and end with </html>. The <!DOCTYPE html> declaration defines that the document is an HTML5 document.
- HTML tags are enclosed by angle brackets < >
- Most tags require a closing tag: <HTML> </HTML>
- Tags must be nested correctly, if stacking: <B><I> Basic Rules</I></B>
- HTML treats all white space as a single blank space

Coding my first page: There are a few programs you can write code in, like Notepad, TextEdit, W3Schools, and Dreamweaver. I used Dreamweaver since it is a part of the Adobe Suite. Dreamweaver is also nice to use because it displays the simulated web page as you type it in. Here’s what my code looks like in Dreamweaver with the simulated page. And here’s a closer look at the code and what it turns into.

Where does it go? So you’ve written some code. Now what? Next you need to use an FTP (File Transfer Protocol) program to copy your web page(s) from your computer to your web host, remote site, or remote server. I used FileZilla; it’s free to download. There is a great tutorial here on how to upload your content from your computer to your website. I’ve transferred my HTML file and corresponding image to my preferred host, and here is the resulting page. I know it’s not beautiful, but hey, you gotta start somewhere! The structure is there, the image is there. The links work! My next step is to start designing and styling the page, using CSS.
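Putting those rules together, a minimal page (my own recap, not the exact screenshot from the post) might look like this:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>My First Page</title>
  </head>
  <body>
    <h1>Basic Rules</h1>
    <p>Hello World!</p>
  </body>
</html>
```

Notice the opening/closing pairs and the correct nesting: every tag opened inside <body> is closed before </body>.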
Overall, I had a fun time learning to code, so far. Have you tried coding before? Are you an expert? Let me know in the comments!
Laugh track removal software

I am looking for a program that can remove the laugh track from a video file. If possible: free, Windows 7, and able to batch process.

@Geoff: Interesting! I converted your answer to a comment, as it is our policy not to have answers about not-yet-released stuff. Cheers! :-)

Movavi can help you. It is a media-editing program that can remove laugh tracks.

Just to add to the Movavi case, the key to the setup is to effectively use a bandwidth filter to remove the laughter from the video via analysis of the existing material. I'm not sure it's necessarily automated. I was tempted to recommend Serif VideoPlus, but it's even more expensive.

Yes! Use FFMPEG, the open-source Swiss army knife of media. https://stackoverflow.com/questions/50759770/remove-audio-from-mp4-file-ffmpeg

The iZotope RX plugin is used to remove instrument bleed from vocal tracks, but I believe you need to provide a separate track which would contain laughter, in order for the plugin to extract the signature. RX is considered one of the best plugins for these things, so I don't believe there are any ways of doing it automatically. Similar plugins include MAGIX SpectraLayers Pro and Adobe Audition CC, but I believe iZotope RX is considered the most specialized for de-bleeding.

Laugh Track Audio Removal

Personally, counting laugh tracks as noise is something that I agree with. Audacity has a Noise Reduction feature (under Effects), which relies on you selecting a sample with just the "noise" and no other background; it will then subtract that "noise" from the rest of the track, even where it is overlaid on other sounds. It is very effective. Since there are a quite limited number of "laugh tracks" in use, and in many cases the same ones were used throughout entire series of productions (see https://en.wikipedia.org/wiki/Laugh_track for more on this), it should be reasonably simple to build up a collection of the offending samples to then apply to multiple files.
Audacity is: free, gratis & open source; cross-platform, including Windows 7/10; and can be scripted so as to apply the processing to multiple files via Macros, Nyquist or Python.

But the question is about videos with laugh tracks! The "but" is that Audacity does not process video files. This is not too much of a problem, as you can use FFMPEG to split the video and audio tracks into separate files, and to combine the original video & modified audio files into a single file. You can find the commands for these operations online reasonably simply, and since the operations do not modify the length of the audio track, synchronisation should not be an issue.

Of course, if you are a Python user you can use MoviePy, which uses FFMPEG behind the scenes, to do the audio/video split; you should be able to invoke the Audacity operations on the audio file and then merge the modified audio with the video. Unfortunately the Noise Reduction effect is not currently (Aug 2020) available from the scripting interface.

Hang on, Noise Reduction is not currently available! An alternative, if scripting is really needed, could be to use the noisereduce library from within Python. It uses an algorithm based on (but not completely reproducing) the one outlined by Audacity for the noise reduction effect.

Audacity might be worth a try. It can remove background noise and does batch processing.
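As a rough illustration of the spectral-subtraction idea behind both Audacity's Noise Reduction and the noisereduce library (a simplified sketch of the general technique, not either tool's actual algorithm):

```python
import numpy as np

def spectral_subtract(signal, noise_sample, frame=256, reduction=1.0):
    """Subtract an average noise magnitude profile from a signal, frame by frame."""
    # estimate the noise magnitude spectrum from a noise-only clip
    usable = len(noise_sample) // frame * frame
    noise_frames = noise_sample[:usable].reshape(-1, frame)
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)

    out = np.zeros_like(signal, dtype=float)
    for i in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[i:i + frame])
        # shrink each bin's magnitude toward zero, keep its phase
        mag = np.maximum(np.abs(spec) - reduction * noise_mag, 0.0)
        out[i:i + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame)
    return out  # any trailing partial frame is left as silence
```

Real tools add overlapping windows, smoothing, and spectral gating on top of this, which is why their results sound much cleaner, but the core "learn a noise profile, subtract it everywhere" step is the same.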
#MJPEG Stream Viewer
#DCS930L
#Works very well at 640x480 and low quality images
from PIL import Image, ImageTk
from io import BytesIO
import requests
import tkinter
import threading
from datetime import datetime, date
import binascii

root = tkinter.Tk()
root.geometry("1280x1080")
image_label = tkinter.Label(root)
image_label.pack()

def streamLoop():
    r = requests.get('http://trackfield.webcam.oregonstate.edu/axis-cgi/mjpg/video.cgi', stream=True)
    headersize = 66
    firstRead = True
    count = 0
    while True:
        # need to account for different image sizes
        content_length = 0
        deets = r.raw.read(headersize)
        headstr = "".join(map(chr, deets))
        print(headstr)
        headers = headstr.split('\r\n')
        #print(headers[1])
        print((headers[2])[16:])
        content_length = int((headers[2])[16:])
        #print(content_length)
        jpg = r.raw.read(content_length)
        r.raw.read(2)
        # skip every other frame to lower latency
        # should probably come up with some better way later
        if (count % 2 == 0):
            rawbytes = BytesIO(jpg)
            rawbytes.seek(0)
            tki = ImageTk.PhotoImage(Image.open(rawbytes))
            image_label.configure(image=tki)
            image_label._backbuffer_ = tki
        count += 1
        print(count)

def updateImage(i):
    image_label.configure(image=i)
    image_label._backbuffer_ = i

thread = threading.Thread(target=streamLoop)
thread.start()
root.mainloop()
Could not install RLS component (rls-preview)

Version of VSCode: 1.17.0
Version of the extension: 0.3.1
OS: Ubuntu 16.04

Description: I cannot start the RLS. I get the following error when I start vscode (or when I reload the window):

RLS not installed. Install?

When I clicked Yes, I got:

Couldn't start client Rust Language Server
Could not install RLS component (rls-preview)

I reinstalled the extension, but the problem persists. I see in the Changelog that there was a change in how RLS detection is performed. Could this be related?

Output of the "Rust logging" channel (the selection box shows "Rust Language Server" and not "Rust logging", but I guess that's what you mean?):

[Error - 2:25:52 PM] Starting client failed

Just a quick note to say that the same thing happens with Windows.

Never mind; all I had to do was update rust nightly and install the (new?) rls-preview component. I'm closing the issue. Thanks.

Facing the same issue, cannot install rls-preview:

error: could not find rls-preview in registry https://github.com/rust-lang/crates.io-index

@Yanpas see: https://github.com/rust-lang-nursery/rls-vscode/issues/181#issue-269383659

What a mess. Why does the extension depend on the buggy nightly toolchain? What's the problem with using stable by default? I would be okay with using an outdated but at least working copy of rls. Now it prints 'rls is not found'. Total voodoo magic. According to the readme we should switch to https://github.com/rust-lang-nursery/rls-vscode, which, surprise, doesn't work either.

PS Nothing personal, I'm just raging after losing 2 hours on toolchain setup.

@Yanpas I feel your pain :smile:. I'm also currently working on getting the toolchain to work, and I thought I would drop in and guide you to what I've learned so far 😃.
I've run rustup component add rls-preview --toolchain nightly and it seems to work.

Getting:

$ rustup component add rls-preview --toolchain nightly
error: toolchain 'nightly-x86_64-apple-darwin' does not contain component 'rls-preview' for target 'x86_64-apple-darwin'

Also getting:

$ rustup component add rls-preview --toolchain nightly
error: toolchain 'nightly-x86_64-unknown-linux-gnu' does not contain component 'rls-preview' for target 'x86_64-unknown-linux-gnu'

Doesn't work on Windows either:

$ rustup component add rls-preview --toolchain nightly
error: toolchain 'nightly-x86_64-pc-windows-msvc' does not contain component 'rls-preview' for target 'x86_64-pc-windows-msvc'

Same problem here. Found my way here after uninstalling from brew and reinstalling via rustup to see if I could get this working. No dice. How do you re-open an issue?

See that https://github.com/editor-rs/vscode-rust/issues/369 and https://github.com/editor-rs/vscode-rust/issues/370 are related to this. Also about three weeks old at this point.

Okay, this plug-in is pretty cool about configuration. Add this to your user settings to pull from beta rather than nightly:

// Rust channel to install RLS from.
"rust-client.channel": "beta"

so your entire settings might look something like:

// Place your settings in this file to overwrite the default settings
{
    "workbench.colorTheme": "Visual Studio Dark",
    "window.zoomLevel": 1,
    // Rust channel to install RLS from.
    "rust-client.channel": "beta"
}

... if you've not configured code much yet.

All, as of today (2017.12.27), this is how I resolved the issue:

# Ensure the rustup component for rls-preview is added.
# I also added the `rust-analysis` & `rustfmt-preview` components; it is possible that
# if those are missing, the vscode plugin may still fail; not 100% sure on that.
rustup update
rustup component add rls-preview

Lastly, ensure your Rust channel config points to stable in your vscode user settings:

... "rust-client.channel": "stable"

Thanks!
that worked for me :D

Alright, so beta and stable are working, but rls-preview / rls don't exist as components on nightly anymore, and the beta / stable channels of rls immediately error out on code containing #![feature] gates. Is there a way to get a working rls component for nightly? Can you build it yourself so the vscode-rust plugin picks it up and uses it?

This is what worked for me today: rustup component add rls-preview --toolchain nightly.

Rust newbie here. One of my project dependencies requires nightly. Now I cannot manage to use RLS with it. If rls-preview were available in the nightly toolchain, I think it would solve the problem. Can this be solved with some configuration, so that stable rls is used with other nightly components?
6.7. Sample Size and Power Estimation

Before testing a hypothesis, it is desirable to know how large a sample should be selected to achieve a desired precision. This depends on a number of factors specific to the nature of the test, such as sample variance, confidence level (α probability or Type I error) or minimum detectable difference. It is also important to know how likely it is not to reject a null hypothesis when in fact it is false (β probability or Type II error), or in other words, to know what the power of the test is (1 – β), i.e. what the probability is of rejecting the null hypothesis when it is in fact false.

This section brings together seven broad classes of commonly used hypothesis tests and provides methods of estimating the sample size, power of the test and other parameters. The types of tests supported here are:

1) One Sample
2) Two Samples

An eighth option is also provided to compute the power of the test from the phi statistic and vice versa, which are used in estimating the sample size and power of the test in ANOVA and two sample tests. Therefore, UNISTAT does not require use of the OC curves published by Pearson and Hartley (1951), pp. 112-130.

Although some topics seem to have been excluded from this list, these are often special cases of the methods already provided. For instance, sample size and power of the test in Regression Analysis can be estimated using the Correlation option above. Many different types of ANOVA can also be accommodated simply by entering the relevant statistics in place of the existing parameters. In such cases, you are recommended to consult a statistics book to establish which of the existing procedures can be used as a substitute (see Zar, J. H. 2010).

In procedures where a selection of one- or two-tailed estimation is available, the default is always set to two-tailed, corresponding to the null hypothesis that “the entities tested are equal” against the alternative hypothesis that “they are not equal”.
Where the alternative hypothesis states a relationship of one entity being greater or less than the other, the one-tailed option should be selected.

In some procedures, the parameter to be estimated may occur on both sides of an equation and therefore cannot be calculated directly. In such cases, an iterative algorithm is employed to determine the correct level of the parameter, and convergence is usually achieved within a few iterations. In these procedures you are provided with two further input fields to control the two convergence parameters: the tolerance and the maximum number of iterations. The default values of these two parameters are 0.001 and 100 respectively, and they produce satisfactory results in most cases. If convergence cannot be achieved within these values, the program will report this in the output. You may then edit the default values to obtain convergence.
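As a rough illustration of the kind of calculation involved (a textbook normal-approximation formula, not UNISTAT's own algorithm), the required sample size for a one-sample, two-tailed test of a mean can be sketched as:

```python
from math import ceil
from statistics import NormalDist

def sample_size_one_mean(delta, sigma, alpha=0.05, power=0.8):
    """Normal-approximation n for a two-tailed one-sample test of a mean.

    delta: minimum detectable difference; sigma: assumed standard deviation.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed Type I quantile
    z_beta = NormalDist().inv_cdf(power)           # power = 1 - beta
    return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# detect a half-standard-deviation shift with 80% power at alpha = 0.05
print(sample_size_one_mean(delta=0.5, sigma=1.0))  # 32
```

The exact t-based procedure requires iterating on n (since the t quantiles depend on n), which is the kind of iterative algorithm described above.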
As a Rails developer, you'll be faced with plenty of schema.rb conflicts when attempting to rebase on your base branch. A quick bit of research tells us that simply running rake db:migrate regenerates a merged schema.rb file. That's fine if your database is free of schema changes from other branches. When working on multiple branches, each with migrations that you've run on your development database, this approach undesirably includes those unrelated changes.

Firstly, I highly recommend using your test database to work through correcting schema.rb, because it's super fast (no data in it) and you won't wipe out your development data when rebuilding your schema. That is to say, your development DB can have any conglomeration of migrations (that you may or may not even keep) while your test DB remains pristine, or close to it. Add RAILS_ENV=test to the rake commands below.

The solution depends on one particular best practice: write your damn down methods! I can point to maybe two or three exceptions in my experience, because of complexity or difficulty of reproducing the original data.

With that out of the way, a note about what does not work. You may be inclined to rebuild the database via rake db:setup with a clean schema.rb from your master branch. The problem is, rake db:setup does not have any way of knowing that your topic branch migration version ID should be removed from schema_migrations; schema.rb only knows the newest version ID. It could remove all version IDs from schema_migrations that are older than this ID, but I think it almost always would be newer than your topic branch migration ID if you are having conflicts, because the other migrations are probably from your colleagues making schema changes after you generated your migration. The rake db:setup approach would recreate your database without the schema change introduced by your topic branch, except with its version number in schema_migrations. The result is a migration that looks like it has run, but hasn't.
With Migrations In-Sync

If you're not in the situation described above, do the following.

- Roll back your new migration: rake db:migrate:down VERSION=(stamp).
- Check out an unmodified schema.rb file from your branch's base (probably master): git checkout master -- db/schema.rb.
- Rebuild your database from the restored schema: rake db:schema:load.
- Re-run your new migration: rake db:migrate.

At this point your schema.rb should look correct.

With Migrations Out-Of-Sync

If you're really in a mess, another possibility is rake db:drop && rake db:setup with your base branch checked out, before rebasing at all. That will nuke schema_migrations and re-populate it with all version IDs found in db/migrate, which excludes your topic branch migration ID because you're not on your topic branch. Then, you can switch to it, rebase, and rake db:migrate normally.

Another way of thinking about it is that you need to re-apply your schema changes in the same way that you'd re-apply your code changes when running into merge conflicts. You're taking the new database blueprint and re-running your change to it. Starting with that new blueprint, however, can be a little squirrely.
Recruitment Consultant at Now and Beyond Consulting

Analyst/Associate/Senior Associate - Data Scientist - Big Data & Analytics (1-7 yrs)

Data Scientist (Analyst/Associate/Sr. Associate level) for a Big Data & Analytics organisation

One of our esteemed clients (an Indian multinational conglomerate company, incubated as a division of this large Group in April 2015, offering multi-sectoral advanced analytics and data engineering solutions using sophisticated predictive analytics and machine learning algorithms) is looking for Data Scientists (Analyst/Associate/Sr. Associate level) to be based in Mumbai or Bangalore.

- The incumbent will be part of the Predictive Analytics, Digital Analytics, Data Sciences, Advanced Visualization, Insights & Experimentation team and will report to the Manager/Senior Manager.
- He/she will be an individual contributor working on multiple data sciences, advanced visualization and data management initiatives across multiple companies and industries, leveraging traditional and big data.
- The incumbent will have the unique opportunity to witness the application of analytics across multiple industry verticals.
- Close partnership with business and the senior leadership of multiple Tata Companies will enable a clear understanding of the business perspectives and the application of analytics for solving real business problems.
- Apply Data Mining/Data Analysis methods using a variety of data tools, building and implementing models using algorithms and creating/running simulations to drive optimisation and improvement across business functions
- Assess accuracy of new data sources and data-gathering techniques
- Perform Exploratory Data Analysis and detailed analysis of business problems and technical environments when designing the solution
- Apply Supervised, Unsupervised, Reinforcement Learning and Deep Learning algorithms
- Apply advanced Machine Learning algorithms and statistics:
  - Regression, Simulation, Scenario Analysis
  - Time Series Modelling
  - Classification - Logistic Regression, Decision Trees, SVM, KNN, Naive Bayes
  - Clustering - K-Means, Apriori
  - Ensemble Models - Random Forest, Boosting, Bagging
  - Neural Networks
- Lead and manage Proof of Concepts and demonstrate the outcomes quickly
- Document use cases, solutions and recommendations
- Work analytically in a problem-solving environment
- Work in a fast-paced agile development environment
- Coordinate with different functional teams to implement models and monitor outcomes
- Work with stakeholders throughout the organization to identify opportunities for leveraging organisation data and apply Predictive Modelling techniques to gain insights across business functions - Operations, Products, Sales, Marketing, HR and Finance teams
- Help program and project managers in the design, planning and governance of implementing Data Science solutions

Experience and Skills:

- 1+ to 7 years of professional working experience in Analytics
- Experience in Retail, Financial Services and Manufacturing
- Experience using statistical packages of R, Python and Spark ML to work with data and draw insights from large data sets
- Experience with distributed data/computing tools: Hadoop, Hive, Spark, Python
- Experience with SQL
- Experience visualizing/presenting data for stakeholders using matplotlib, ggplot, Excel or Tableau
- Excellent written and verbal communication skills for coordinating across teams
- Bachelors/Masters in a quantitative discipline (Statistics, Econometrics, Mathematics, Engineering and Science)
const expectThrow = require('./helpers/expectThrow')
const YTKNToken = artifacts.require("./YTKNToken.sol")
const BetaFaucetArtifact = artifacts.require('./BetaFaucet.sol')

contract('BetaFaucet', function (accounts) {
  let recipient = accounts[1]
  let recipient2 = accounts[2]
  let recipient3 = accounts[3]
  let ytknTokenInstance, betaFaucetInstance

  before(async () => {
    ytknTokenInstance = await YTKNToken.new()
    betaFaucetInstance = await BetaFaucetArtifact.new()
    await betaFaucetInstance.initialize(ytknTokenInstance.address)
  })

  describe('initialize()', () => {
    it('should not be called again', async () => {
      await expectThrow(async () => {
        await betaFaucetInstance.initialize(ytknTokenInstance.address)
      })
    })
  })

  describe('withdrawEther()', () => {
    it('should work', async () => {
      await betaFaucetInstance.send(web3.toWei(20, "ether"))
      const ownerAddress = await betaFaucetInstance.owner.call()
      assert(await web3.eth.getBalance(betaFaucetInstance.address), web3.toWei(20, "ether"))
      assert(await web3.eth.getBalance(ownerAddress), 0)
      await betaFaucetInstance.withdrawEther()
      const ownerBalance = await web3.eth.getBalance(ownerAddress)
      // 1000000 is gas amount in wei
      assert(ownerBalance, web3.toWei(20, "ether") - 1000000)
    })
  })

  describe('sendEther()', () => {
    it('should work', async () => {
      await betaFaucetInstance.send(web3.toWei(20, "ether"))
      const recipientBalance = await web3.eth.getBalance(recipient)
      await betaFaucetInstance.sendEther(recipient, web3.toWei(0.2, "ether"))
      const newRecipientBalance = await web3.eth.getBalance(recipient)
      assert.equal(
        newRecipientBalance.toString(),
        recipientBalance.add(web3.toWei(0.2, "ether")).toString()
      )
    })

    it('should not allow double sends', async () => {
      await betaFaucetInstance.send(web3.toWei(200, "ether"))
      await betaFaucetInstance.sendEther(recipient2, web3.toWei(1, "ether"))
      await expectThrow(async () => {
        await betaFaucetInstance.sendEther(recipient2, web3.toWei(1, "ether"))
      })
    })

    it('should prevent an amount above the limit', async () => {
      await betaFaucetInstance.send(web3.toWei(200, "ether"))
      await expectThrow(async () => {
        await betaFaucetInstance.sendEther(recipient3, web3.toWei(30, "ether"))
      })
    })
  })

  describe('sendYTKN()', () => {
    it('should work', async () => {
      await ytknTokenInstance.mint(betaFaucetInstance.address, 3000000)
      const betaFaucetDelegateYTKNBalance = await ytknTokenInstance.balanceOf(betaFaucetInstance.address)
      assert.equal(betaFaucetDelegateYTKNBalance, 3000000)
      const recipientsYtknBalance = await ytknTokenInstance.balanceOf(recipient)
      assert.equal(recipientsYtknBalance, 0)
      await betaFaucetInstance.sendYTKN(recipient, 15)
      const recipientsNewYtknBalance = await ytknTokenInstance.balanceOf(recipient)
      assert.equal(recipientsNewYtknBalance, 15)
    })

    it('should not allow double sends', async () => {
      await betaFaucetInstance.sendYTKN(recipient2, 15)
      await expectThrow(async () => {
        await betaFaucetInstance.sendYTKN(recipient2, 15)
      })
    })
  })
})
Today, we speak with Abdullah Kurkcu, a Lead Traffic Modeler. Abdullah did his Master’s and Ph.D. programs in Transportation Engineering and has since been involved in research in the field. For the most part, transportation engineering involves working with transportation data to make predictions. In this episode, Abdullah discusses his work around bicycle usage in the US and how it has been affected by COVID-19. By way of introduction, Abdullah gave an overview of what transportation engineers do and explained why they are critical, especially in big cities. To build and optimize transportation models, however, data are an important ingredient. Abdullah explained how the evolution of smartphones has helped in capturing the necessary data for his analysis. Specifically, he used the Colorado dataset of bicycle and pedestrian counts. He also gave an intuition of how he approached the bicycle dataset and his initial reservations about COVID-19 on bicycle usage for transportation. The Traffic Modeler discussed his discoveries in the peak hours before and after COVID. Abdullah also talked about the multiple stations in the system and the measures he took to differentiate the stations. Abdullah then discussed the bias in the data. Bias was primarily from insufficient data. Of course, the data of bicycle riders are not as abundant as vehicle transportation data, and the few riders available are not willing to answer survey questions. Abdullah also talked about the observation between the features, the trends he observed before his analysis, and whether they agreed with his results. He also explained why he accounted for collinearity in the feature engineering process. He then talked about how he modeled the problem. It appears that having two timestamps - before and after COVID - is not the best way to model the problem.
Abdullah detailed the intuition behind the problem modeling process and why it was necessary to get a holistic understanding of COVID’s effect on bicycle usage. Partial Least Squares Regression (PLSR) was the regression technique used for this problem. The Transportation Engineer explained why this was his preferred algorithm over counterparts such as Random Forest or XGBoost. Needless to say, he talked about how PLSR works. Abdullah is a Transportation Planning and Engineering expert and he has 8+ years of experience in modeling transportation systems, leading transportation research projects, and developing web-based data management solutions. He completed his Ph.D. in Transportation Engineering at NYU and previously received his master's degree in Transportation Engineering from Florida State University. Abdullah’s experience is in traffic modeling and calibration, developing alternative traffic data collection systems, big data and analytics, GPS-based transportation data analysis, incident and emergency management, wireless sensor networks, social media analytics, and transportation geography. Abdullah has worked on several transportation design projects including New York City Connected Vehicle Pilot Project, Mobile Accessible Pedestrian Signal System, and Regional Traffic Impact Study of Newark Bay-Hudson County Extension (NBHCE) Bridge Deck Reconstruction in Jersey City, NJ.
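The episode does not spell PLSR out, but the core of the classic NIPALS PLS1 algorithm (my own sketch for a single response variable, not Abdullah's actual model) looks roughly like this; it builds latent components one at a time, which is what lets it cope with collinear features:

```python
import numpy as np

def pls1(X, y, n_components):
    """NIPALS PLS1 (single response): returns coefficients for centered data."""
    X = X - X.mean(axis=0)          # work on centered copies
    y = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = X.T @ y                 # weight: direction of max covariance with y
        w = w / np.linalg.norm(w)
        t = X @ w                   # score vector for this component
        tt = t @ t
        p = X.T @ t / tt            # X loading
        q = (y @ t) / tt            # y loading
        X = X - np.outer(t, p)      # deflate X and y before the next component
        y = y - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W), np.array(P), np.array(Q)
    # regression coefficients for centered X: B = W (P^T W)^{-1} q
    return W.T @ np.linalg.solve(P @ W.T, Q)
```

Using fewer components than features is where the regularizing effect comes from; with all components retained, PLS1 reduces to ordinary least squares.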
Clip - Question re: use of /usb Command-line option

I have installed a copy of NoteTab Pro on a 256MB memory stick plugged into my 4-port USB hub and have been trying to create a shortcut to run NoteTab from there. How can I do that, or is it not possible? I am running XP under Parallels Desktop on my iMac with no problems, but can't create a shortcut for the above.

Charles M. Raine,
Winnipeg, MB R3P 0W3

--- In email@example.com, "Charles M. Raine" <rainec@...> wrote:
> I have installed a copy of NoteTab Pro on a 256MB memory stick plugged into
> my 4-port USB hub and have been trying to create a shortcut to run NoteTab
> from there. How can I do that or is it not possible?
> I am running XP under Parallels Desktop on my iMac with no problems, but
> can't create a shortcut for the above.
> Charles M. Raine,
> Winnipeg. MB R3P 0W3

Hi Charles,

iMac?! I'm surprised that you would change to Apple this late in the game. For the same price you couldda gotten a multi-core PC, "eh?" Just kidding. I still own my first computer, an Apple II.

As to running NoteTab off a USB stick, try using a batch file on the hard disk (always there, and Windows doesn't complain), which searches through possible drive letters for an identifiable file on the USB stick. The search code (one line, correct the path to NotePro.exe as needed - no spaces is good):

FOR %%I IN (C D E F G H I J K) DO if exist %%I:\NotePro.exe set SneakerNet=%%I:

this line sets the "SneakerNet" environment variable to the correct drive. Then I create an alias to that drive, that ALWAYS uses the same drive letter (something way up there, like X, Y, Z):

IF EXIST Y:\nul subst Y: /d
subst Y: %SneakerNet%\
start /WAIT Y:\NTP\NotePro.exe /USER=Y:\NTP

Note that the FOUND label comes AFTER any error handler, if NOT found. The code after FOUND will create an alias to the unknown USB path, and start NoteTab with "start" and the /WAIT switch, so it WAITs for NTP to exit.
This is important because the alias is no longer reliable once the shell exits. The code also allows backing up files changed during the session (see my post in the general NoteTab group about backing up files). A shortcut to the batch file will always work, even if the USB stick is missing. The constant drive letter alias assures that even your "favorites" always point to the correct drive, as do the paths in INI files, etc. The "Y:" drive solves the problem of changing drive letters. And the INI files, favorites, and internal document links do not get messed up!
We have had so many requests for our cabling guide, we decided to replicate it on our website. Please feel free to review and copy as needed. Parallel Cabling Guide When IBM introduced the PC in 1981, the parallel printer port was included as an alternative to the slower serial port as a means for driving the latest high-performance dot matrix printers. The parallel port had the capability to transfer 8 bits of data at a time, whereas the serial port transmitted one bit at a time. IEEE 1284 Cabling Guide The "IEEE Std. 1284-1994 Standard Signalling Method for a Bi-directional Parallel Peripheral Interface for Personal Computers" is for the parallel port what the Pentium processor was to the 286. The standard provides for high-speed bi-directional communication between the PC and an external peripheral that can communicate 50 to 100 times faster than the original parallel port. Serial Cabling Guide Serial means one event at a time. It is usually contrasted with parallel, meaning more than one event happening at a time. In data transmission, the techniques of time division and space division are used, where time separates the transmission of individual bits of information sent serially, and space (on multiple lines or paths) can be used to have multiple bits sent in parallel. SCSI Cabling Guide SCSI was created to satisfy the need for a more flexible, faster, command-controlled interface for hard disk drives and other computer peripherals. Despite the term "small" in its name, SCSI is large. It is large in use, in market impact, in influence, and, unfortunately, in complexity. Monitor Cabling Guide / Video Display Since there are many different ways to specify a video card's capabilities, and so many potential resolutions, color modes, etc., video standards were established in the early years of the PC, primarily by IBM.
The intention of these video standards is to define agreed-upon resolutions, colors, refresh modes, etc., to make it easier for the manufacturers of PCs, monitors, and software to ensure that their products work together. USB Cabling Guide USB (Universal Serial Bus) is designed to be a "plug & play" interface between a computer and add-on devices such as audio players, joysticks, keyboards, telephones, scanners, and printers. USB allows new devices to be added to computers without having to turn the computer off. USB supports a data speed of 12 megabits per second. Network Cabling Guide Local Area Networks (LANs) have become the prevalent way of sharing information. As this is probably the fastest-moving of cabling media, by the time you have finished reading this, another new product will be on the market!
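The serial-versus-parallel difference described above is easy to put rough numbers on. The figures below (a 115200 bps serial link, 10 wire bits per byte for start/data/stop, and roughly 150 kB/s effective for the original parallel port) are common ballpark values, not ones taken from this guide:

```python
# Rough transfer-time comparison for a 100 KB print job (illustrative assumptions).
payload_bytes = 100 * 1024

serial_bps = 115200            # classic fast serial line rate
bits_per_byte_on_wire = 10     # start bit + 8 data bits + stop bit
serial_seconds = payload_bytes * bits_per_byte_on_wire / serial_bps

parallel_bytes_per_s = 150 * 1024   # 8 bits per handshake cycle, ~150 kB/s effective
parallel_seconds = payload_bytes / parallel_bytes_per_s

print(f"serial:   {serial_seconds:.1f} s")    # about 8.9 s
print(f"parallel: {parallel_seconds:.2f} s")  # about 0.67 s
```

The order-of-magnitude gap is why the parallel port was the printer interface of choice until USB displaced both.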
If you've spent any time looking at my sidebar, I'm sure you've seen this: Today I'm going to tell you how I made it. 1. First I needed the pictures. I used Photobucket (of course). If you don't have an account there yet, click here to get one. If you do have one, log into your account. 2. At the top of the page there is a place to search. It looks like this: 3. Put in what you want to find an image of... for example, when I found the picture I used for my Twitter link, I just put in "Twitter". It will give you the resulting pictures. When you find the one you want, click on it and then choose "copy to my album". You have to place your mouse over the picture to bring up the copy option. It looks like this when you do: 4. After you save it to your album, click on the picture to get the codes. This is what that looks like: 5. See where it says "HTML Code"? That's what you need. Copy it. 6. Go to your Blogger dashboard/Layout/Page Elements. 7. Add a gadget and choose the HTML/JavaScript option. 8. Paste the code into the widget. It will look something like this (remember, the parentheses around the < symbols won't be in the real code; I have to show them so you see the code instead of the picture): (<)a href="http://photobucket.com" target="_blank">(<)img src="http://i269.photobucket.com/albums/jj57/mommacow39/twitter-1.png" border="0" alt="blog twitter">(<)/a> 9. Notice the http://photobucket.com part. That's the part you're going to change (along with taking out those parentheses). You are going to change it to the site you want to link to. So if I wanted to link that picture to my twitter account I would change the http://photobucket.com to http://twitter.com/katbrak and my code would now look like this: (<)a href="http://twitter.com/katbrak" target="_blank">(<)img src="http://i269.photobucket.com/albums/jj57/mommacow39/twitter-1.png" border="0" alt="blog twitter">(<)/a> 10. When you have all the pictures/links you want on the widget, save it. Place it where you want it on your sidebar.
Preview your blog and make sure it looks the way you want it to and then save it too. That's it! You now have learned how to link Photobucket pictures to your sites. (By the way, you can click on any of the pictures in this post to visit me on my other sites...) Don't forget about the center code to make it look just the way you want it to...click here to learn how to do that if you missed it.
Astrophys. J., Suppl. Ser., 211, 25 (2014/April-0) The Spitzer Infrared Spectrograph debris disk catalog. I. Continuum analysis of unresolved targets. CHEN C.H., MITTAL T., KUCHNER M., FORREST W.J., LISSE C.M., MANOJ P., SARGENT B.A. and WATSON D.M. Abstract (from CDS): During the Spitzer Space Telescope cryogenic mission, Guaranteed Time Observers, Legacy Teams, and General Observers obtained Infrared Spectrograph (IRS) observations of hundreds of debris disk candidates. We calibrated the spectra of 571 candidates, including 64 new IRAS and Multiband Imaging Photometer for Spitzer (MIPS) debris disks candidates, modeled their stellar photospheres, and produced a catalog of excess spectra for unresolved debris disks. For 499 targets with IRS excess but without strong spectral features (and a subset of 420 targets with additional MIPS 70 µm observations), we modeled the IRS (and MIPS data) assuming that the dust thermal emission was well-described using either a one- or two-temperature blackbody model. We calculated the probability for each model and computed the average probability to select among models. We found that the spectral energy distributions for the majority of objects (∼66%) were better described using a two-temperature model with warm (Tgr∼ 100-500 K) and cold (Tgr∼ 50-150 K) dust populations analogous to zodiacal and Kuiper Belt dust, suggesting that planetary systems are common in debris disks and zodiacal dust is common around host stars with ages up to ∼1 Gyr. We found that younger stars generally have disks with larger fractional infrared luminosities and higher grain temperatures and that higher-mass stars have disks with higher grain temperatures. We show that the increasing distance of dust around debris disks is inconsistent with self-stirred disk models, expected if these systems possess planets at 30-150 AU. 
Finally, we illustrate how observations of debris disks may be used to constrain the radial dependence of material in the minimum mass solar nebula. catalogs - circumstellar matter - infrared: stars - zodiacal dust VizieR on-line data: <Available at CDS (J/ApJS/211/25): table1.dat table2.dat table3.dat refs.dat>
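The one- and two-temperature excess models the authors fit can be sketched directly from the Planck function. The temperatures and relative scale factor below are illustrative round numbers inside the quoted T_gr ranges, not fitted values from the catalog:

```python
# Sketch of a two-temperature blackbody excess model (illustrative parameters).
import numpy as np

def planck_nu(nu, T):
    """Planck function B_nu(T) in SI units (W m^-2 Hz^-1 sr^-1)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

wavelengths_um = np.logspace(0.7, 2.5, 500)   # ~5-316 um, spanning IRS and MIPS 70 um
nu = 2.998e8 / (wavelengths_um * 1e-6)        # frequency grid in Hz

warm = planck_nu(nu, 300.0)    # zodiacal-like dust, within T_gr ~ 100-500 K
cold = planck_nu(nu, 70.0)     # Kuiper-Belt-like dust, within T_gr ~ 50-150 K
excess = warm + 50.0 * cold    # relative scale is a free parameter in a real fit

# Each component peaks roughly where Wien's law predicts (~17 um and ~73 um here)
peak_warm_um = wavelengths_um[np.argmax(warm)]
peak_cold_um = wavelengths_um[np.argmax(cold)]
print(f"{peak_warm_um:.0f} um, {peak_cold_um:.0f} um")
```

In the paper's procedure, the scale factors and temperatures of one- and two-component versions of this model are fit to the IRS (and MIPS) photometry, and model probabilities decide between them.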
Hey Guys, I am taking the C++ course part two. I failed one of the sort arrays exercises, and I feel really embarrassed. If I fail this, am I bad at programming? If no, then why? How can I improve at it so I don’t fail it? Of course not! Programming is all about solving problems. Mosh creates challenges at every stage. Sometimes you get it right away, sometimes you pause and have to spend some time working it out. I’ll give you a little clue. If you post your code to ChatGPT, it will tell you what’s wrong. Or post your code here and we can advise you. Show us enough that we can identify what’s up. I’ve been coding for 35 years and have written production applications in more than a dozen languages. I still find it challenging and I enjoy that aspect of programming. Yes, some things are harder than they should be. You can spend two days building the app of your dreams with everything going great – but then one stupid button or one stupid function kicks your butt for a week. It takes longer to debug that one thing than it took to write the whole program. Are you a true programmer? It depends. Do you like to solve problems and WIN? Or do you want to quit whenever things get tough? There’s no wrong answer here. People are different. But you can see programming is not for everyone. For those of us that are programmers, it’s always challenging and always rewarding. Thank you so much Jerry for your support! I just have a few questions: So are you saying that it’s okay if programmers can’t solve logic and it’s okay for them to use other resources like friends, Stack Overflow, this, and ChatGPT? Is learning this logic important for interviews? You said you had 35 years xp, right? What are the questions like? P.S. I wanna be a C++ game dev one day and program games for companies like Epic Games! Most companies ask a few of the basic questions you find listed online. Some googling will help you find those. I never actually required technical testing when I hired.
I can tell everything about you in a thirty minute conversation. But most employers rely on technical tests and algorithms. Each company is different. Some look for a greater understanding of their business. Some are stuck on you knowing algorithms that you’ll never use in real life. They will drill you on the difference between sort algorithms and you’ll probably never use any of them. One area that no one talks about but is the real decider on job interviews is your experience with specific platforms and libraries. Okay, you know C++. But do you know Unreal Engine? Or whatever other engine they might be using. Do you know the same utility classes and libraries they use? How fast will you get up to speed on their entire toolset? It’s much more than just C++. Since you mentioned game development: that’s the hard one. Everyone wants to be in game development. Therefore, it is highly competitive, extremely demanding, and only the best survive. Honestly, all you can do … is all you can do. Spend a few hundred hours on mastering the language. Spend a few hundred hours mastering a couple of gaming frameworks. Spend a few hundred hours mastering algorithms and working coding puzzles on LeetCode. You see? It’s too many hundreds of hours. So you just do what you do and be as good as you can be. My early career was developing device drivers, performance testing tools, operating system stuff. I was really strong and no one could touch me at my company. Later in life I simplified and did some web development with HTML, PHP, and stuff like that. Now I’m refreshing with next.js and all that. I’m also starting to learn Rust. I can’t decide if that’s going to take off or not. I know C, C++, C#, and some Java and Dart as well. But since I’m basically retired, I don’t really have an objective other than to stay sharp. I ran my own software company for 20 years, but I spent so much time running things that many of my skills have gotten soft. That’s why I came here.
My understanding of your trade in this video is that you waited for a rejection of the downside price movement by the Pivot level @ around 1.4153, then you went Long .... Right? What role did the two T&S charts play in the trade? What numbers did these two T&S give you for you to have gone Long? I have to say "How can I ever repay you?" to 2 persons here. First, I met pbylina in a different forum where people who don't know anything keep on posting and those who actually know keep their mouths shut. It was due to pbylina that I am here today. Next is obviously bloom. I don't have to mention anything about bloom; people reading this thread already know what kind of person he is. Those who make fun of bloom because of poor English, please keep in mind that here we are more interested in learning the language of the tape than the English language... and bloom is surely far, far better than you all. IMHO, if you can't understand bloom's English, then you may not be intelligent enough to understand the tape. I no longer have the data connection, so I can't make any videos like pbylina. Maybe that is why people are no longer responding to my questions. Anyway, you people are doing a great job. God bless you all. Last edited by rocky9281; August 7th, 2011 at 10:17 AM. Thank you man))) You are asking good questions but I do not have answers to them)))) If you can, then do screenshots of the tape and chart and your explanations. That will be enough)) As for my "bad" English )) I already teach some tricks with this BAD ENGL to some guys and find friends, and I have BIG сомнения ("doubts" in Russian) that you guys could do so in English, let alone in Russian )))) And Rocky is right)) thanks man. We are here to learn something new.
I want to thank all of you guys who are really interested in what I am doing))) and helping as you can. I am local @rainman@)))) he he he he 767865 Lucy lu )))) Only way to get data now is to open an account with a broker. Like I said before, I opened an account 3 months ago and I still didn't make a single real trade because I'm not ready. The data feed is free. My broker hasn't said anything about me not having made a single trade yet. (I hope I didn't open my mouth too soon.) Till now I was trading Indian stocks only.... Are both of you talking about Euro and ES? Secondly, I hope someday I will be able to turn this trading skill into a business, when I become better at tape reading. Anybody having similar dreams? Want to work together? Bloom? Pbylina?
Support for allOf when describing polymorphic types I believe this could be done right now, though a fairly simple change to SchemaRegistry could make the implementation a bit cleaner. I built a test implementation using a SchemaFilter and an attribute on child classes to specify which type Swashbuckle should use as the "parent". The filter sets up the "allOf" array on the child schema to contain a reference to the parent, then removes any properties defined by the parent from the child schema. The only issue I've run into is that there doesn't seem to be an easy way to get the fully inflated Schema instance for a given type in a filter, as GetOrRegister only returns reference schemas. I'd need the full schema of the parent to figure out which properties of the child schema were actually defined in the parent. SchemaRegistry.Definitions contains the full schemas and is public, but the SchemaIdManager used to generate the indexes into it is private. I was thinking of adding a method to SchemaRegistry to fetch the fully inflated Schema from SchemaRegistry.Definitions, given a Type. Does that seem reasonable? Could also include ability to set the "discriminator" property of the parent, but I didn't bother with that now as the tool support for discriminator seems very limited. 
Attribute (place on child class, specifying parent):

[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class SwaggerSubtypeOfAttribute : Attribute
{
    public Type Parent { get; set; }

    public SwaggerSubtypeOfAttribute(Type parent)
    {
        this.Parent = parent;
    }
}

Filter:

public class SwaggerSubtypeOfAttributeFilter : ISchemaFilter
{
    public void Apply(Schema model, SchemaFilterContext context)
    {
        var subTypeAttrib = context
            .SystemType
            .GetTypeInfo()
            .GetCustomAttribute<SwaggerSubtypeOfAttribute>();

        if (subTypeAttrib != null)
        {
            if (model.AllOf == null)
            {
                model.AllOf = new List<Schema>();
            }

            var schemaRef = context.SchemaRegistry.GetOrRegister(subTypeAttrib.Parent);
            model.AllOf.Add(schemaRef);

            // Replace with a better method of getting the full schema given a type
            var actualSchema = context
                .SchemaRegistry
                .Definitions[schemaRef.Ref.Replace("#/definitions/", "")];

            // Remove any properties defined by the parent from the child
            foreach (string propName in actualSchema.Properties.Keys)
            {
                model.Properties.Remove(propName);
            }

            // Move the remaining properties to the "allOf" set
            model.AllOf.Add(new Schema() { Properties = model.Properties });
            model.Properties = new Dictionary<string, Schema>();
        }
    }
}

@kevbry wouldn't it be better using an existing attribute for that kind of stuff, such as [KnownType(typeof(ChildType))]? This enhancement would be really useful; my vote for it. With OpenAPI v3 support now added (see #952), we're in a better position to improve the support for polymorphic types. So, I'll be looking at this as a priority. Stay tuned ... Polymorphic schema support was added with #1041 (preview packages available on myget.org). It's opt-in via the following config method: services.AddSwaggerGen(c => { ... c.GeneratePolymorphicSchemas(); }); When generating the schema for an abstract/base model, this will look for any subtypes in the same assembly and add a corresponding schema for each one to the oneOf collection.
This allows you to use abstract/base classes in your contract but also list the known subtypes in your documentation. If necessary, you can override the default subtype detection (i.e. assembly-scoped) by passing a custom strategy to the above config method. If you're using the Annotations package, you'll need to decorate the abstract/base class with one or more SwaggerSubType attributes, one for each subtype you'd like to include in the docs.
Does saltwater flooding cause permanent damage to electrical wiring? A category 5 hurricane will strike the west coast of Florida. I am on the east coast near saltwater. I'd like to leave the electric on when I evacuate tomorrow. However, if I turn it off, will it reduce the likelihood of damage to the electrical wiring if saltwater manages to infiltrate? Another answer already reasonably answers the question of this particular storm. However, in general I would consider the following: Water can damage nearly everything in a house. There are reasons why houses that get flooded sometimes get torn down, as the renovation costs can be higher than the rebuild costs. There are reasons why cars are often totaled due to flooding even if they dry out and seem to be mostly OK. So an actual high-water mark above any electrical equipment (and drywall and carpet and furniture and...) puts anything below that mark at serious risk of being unsalvageable. That being said, if the damage is rain, waves, storms passing through but not actual flooding of a building, the concerns are much lower. If electric devices are "on" then they can be damaged quite severely by even a little bit of water that gets in. Note that many appliances have electronic components that are "on" even when the primary functions are off. The classic example is the clock on a VCR - you aren't playing or recording a show but the VCR is still on. If electric devices are "off", whether deliberately or due to a power outage, they are often perfectly fine once they dry out, as long as they weren't actually flooded for an extended period of time. There are some devices where being "on" with some risk is better than being "off" and guaranteed useless. Classic examples are alarms (fire, smoke, burglar) and refrigeration equipment.
If you have no "really need to leave on" things such as refrigeration equipment (if you leave your refrigerator off and empty, open the door so it will be less likely to grow mold while you're gone) then the simple solution is to flip the main breaker on your way out. If that is not practical (e.g., refrigerator or freezer that, provided there isn't an extended outage, will keep a lot of expensive food from spoiling), turn off all the circuits you don't need at the breaker panel. Turning them off at the breaker panel works best for hardwired devices (e.g., oven, water heater, lights/fans) and is often easier than unplugging everything else. For refrigeration equipment, the temperature change within depends on both how long the outage lasts and how "full" the piece of equipment is: the fuller, the more thermal inertia it's got. Packing a freezer with bottles of (cold) water ahead of time, and letting them freeze completely before pulling the plug, then keeping the lid shut, will allow for a 1 or 2 days outage without much risk. And it also makes checking easier: if the water is no longer frozen when you open the lid, then you can throw everything out, but as long as it is, you're good. In the interest of shared information, one of the most knowledgeable and experienced meteorologists, Denis Philips, shares his expertise. Cat 5 doesn't get to the west coast of Fla. maybe a 3. East coast of Fla is expected to have some tropical force winds, and the southern part of the storm is expected to be sheared off. Further, the storm is very small. Nothing like the size of Helene. The strongest winds only go out about 5 miles. So if you are on the East coast you will not be getting much, if any surge. Back to the question: Water and electricity do not mix. Saltwater is very bad, if your electric is on or off. However, again, if you are on the east coast you should be fine. Where I am (West Coast, Sarasota) People are leaving and going to the east coast. 
Thank you and I hope you / family have fared well Thanks for asking. We are alive and well, but tired with a lot to clean up.
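The frozen-water-bottle advice earlier in the thread is easy to sanity-check with rough arithmetic. The figures below (10 kg of ice in a closed freezer, roughly 25 W of heat leaking in through the insulation once power is off) are illustrative assumptions, not measurements:

```python
# Back-of-envelope: how long does a freezer full of ice hold at 0 °C?
LATENT_HEAT_FUSION_J_PER_KG = 334_000   # energy needed to melt 1 kg of ice at 0 °C
ice_kg = 10                              # frozen water bottles packed in ahead of time
leak_watts = 25                          # heat creeping in (depends on insulation)

hours_of_protection = ice_kg * LATENT_HEAT_FUSION_J_PER_KG / leak_watts / 3600
print(round(hours_of_protection))        # ~37 hours before all the ice is melted
```

That lands squarely in the "1 or 2 days" range the answer quotes, and shows why more ice and better insulation extend the window linearly.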
Fixed #1799 by creating a separate intellijPlatformComposedJarApi (extends api & compileOnlyApi) configuration with the JAVA_API usage attribute value. Also replaced the java plugin with the java-library plugin, because it is a more appropriate plugin for IntelliJ plugin projects (they are libraries). Pull Request Details This PR includes #1792, which is not merged yet and which I need in order to fix tests, since master is broken right now. The actual fix for this issue can be seen in https://github.com/JetBrains/intellij-platform-gradle-plugin/pull/1800/commits/d1c8ef8719b60b7119cad4bbf122c58330d5d378 Description :point_up: + see #1799 These changes create a new outgoing variant:
--------------------------------------------------
Variant intellijPlatformComposedJar
--------------------------------------------------
IntelliJ Platform final composed Jar archive
Capabilities
- 123:subpr:unspecified (default capability)
Attributes
- org.gradle.category = library
- org.gradle.dependency.bundling = external
- org.gradle.jvm.environment = standard-jvm
- org.gradle.jvm.version = 22
- org.gradle.libraryelements = composed-jar
- org.gradle.usage = java-runtime
- org.jetbrains.kotlin.platform.type = jvm
Artifacts
- build/libs/subpr.jar (artifactType = jar)
--------------------------------------------------
Variant intellijPlatformComposedJarApi
--------------------------------------------------
IntelliJ Platform final composed Jar archive Api
Capabilities
- 123:subpr:unspecified (default capability)
Attributes
- org.gradle.category = library
- org.gradle.dependency.bundling = external
- org.gradle.jvm.environment = standard-jvm
- org.gradle.jvm.version = 22
- org.gradle.libraryelements = composed-jar
- org.gradle.usage = java-api
- org.jetbrains.kotlin.platform.type = jvm
Artifacts
- build/libs/subpr.jar (artifactType = jar)
Here is proof that it works (notice the fake build number, it is from my local Maven; also the Kotlin plugin is commented out): Now org.jetbrains:annotations:26.0.1 is available and org.apache.commons:commons-lang3:3.5 is not, like it should be. Related Issue #1799 Motivation and Context It was broken. How Has This Been Tested So far only manually. Types of changes
- [ ] Docs change / refactoring / dependency upgrade
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
Checklist
- [x] I have read the CONTRIBUTING document.
- [x] My code follows the code style of this project.
- [ ] My change requires a change to the documentation.
- [ ] I have updated the documentation accordingly.
- [x] I have included my change in the CHANGELOG.
- [ ] I have added tests to cover my changes.
- [ ] All new and existing tests passed. Tests failed because master is broken. Tests failed due to Error: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
|Title||Sia - Bird Set Free (Lyrics)| |MP3 Size||4.2 MB (estimated)| Music: Sia - Bird Set Free (Official Lyrics Video) | (Audio) | Keep Subtitles/CC on! #ThisIsActing - ^ NO PITCH VERSION ^ Get Sia's new single "ALIVE". Show your friends and family that you can't live without music! Currently, a discount of 8$ on all purchases! Always remember to support the official artist! ^_^ This video is for entertainment purposes only and all rights are reserved by the prospective owners. The videos never will be monetized without permissions of the owners & if you'd like me to remove the video, please contact me immediately. #Sia #BirdSetFree #Lyrics |Copyright||All materials' copyright are belong to their respective owners and/or uploader. This video was uploaded by Jinx Promotion| |License||Licensed under YouTube Standard License| |Download as MP3| |Download as MP4| |Tags||Sia - Bird Set Free (Lyrics), Download Sia - Bird Set Free (Lyrics), Download Video Sia - Bird Set Free (Lyrics), Download MP3 Sia - Bird Set Free (Lyrics), Download Sia - Bird Set Free (Lyrics) episode terbaru, Sia - Bird Set Free (Lyrics) terbaru| Click image to play video, right click to save image. (Image via youtube.com) Post related to Sia - Bird Set Free (Lyrics): - [03:37] Sia - Unstoppable (Lyrics)By Jinx PromotionMusic: Sia - Unstoppable (Lyrics Video) (OFFICIAL AUDIO - NO PITCH!) ... - [05:20] Sia - Angel By The Wings (Lyrics)By TS1Sia - Angel By The Wings (from the movie "The Eagle Huntress" ... - [04:08] Amy Winehouse - Back To BlackBy AmyWinehouseVEVOGet AMY OST now: http://po.st/AMYOST3 Listen back to ‘Frank’, ‘ ... - [04:20] Sia - Elastic Heart (lyrics)By Matt JonesLyric video for Elastic Heart, by Sia! :) I find it a little hard to u ... - [04:49] Christina Aguilera - Fall In Line (Official Video) ft. Demi LovatoBy CAguileraVEVOGet Christina Aguilera’s new album ‘Liberation’ available now, i ... 
- [04:51] Skylar Grey - Love The Way You Lie (Live on the Honda Stage at The Peppermint Club)By SkylarGreyVEVOSkylar Grey performs “Love The Way You Lie” Live on the Honda Stag ...
I'm leaving for an appointment now. I'll investigate this and respond when I return. Thank you, Jeff I clicked on Start and typed Outlook Express in the search box. A four panel screen was produced, so I must have it, but it doesn't appear under 'All Programs' or under 'Control Panel' by clicking on 'Programs and Features'. There are many entries under: Microsoft Visual C+++2005 Redistributable I don't know how to use it or set up an email account to send material drafted on Office 2007. Help would be appreciated. Yes, I typed Outlook in the search box and 5 'headings' came up, all with entries on the line just below and indented, each with its own logo shape. Starting at the top: logo (looks like a yellow clock face @ 10:10) followed by: - Microsoft Office Outlook 2007 Control Panel (1) logo (looks like a square behind a rectangle) - Go online to get Windows Live Essentials logo (looks like a square in front of a larger square with 'W' inside) - Work with Erik (The title of one of my documents). Microsoft Office Outlook (11) Four different entries, each with logos looking like envelopes open or logo - Microsoft Office Outlook logo - Outlook.en-us logo - OutlookMUI Logo (tennis racket) - 'See more results'. - There after there are 21 more entries with logos too numerous to detail. Many are envelopes open or closed. The last 8 are logos that look like a page with a globe and lines (like text). Each logo is followed by the word 'setup'. I hope this helps. Yes that is the icon displayed. I clicked on MS Office 2007 and it displayed the screen with a note asking if I wish RSS feeds to be synchronized. I don't know what RSS feeds are and whether or not I want then synchronized. Please explain. When I clicked on Microsoft Office 2007 again the same screen appeared absent the RSS feed note (I clicked Remind Me Later), but no start up wizard appeared. What should I do to obtain the start up wizard? I clicked Start and pasted outlook.exe /firstrun in the search box. 
When I hit enter, the same Outlook email page appeared, but no setup wizard. Does that mean it's already set up? If not, how can I obtain the setup Wizard? I'm back after an appointment. I copied and pasted your instructions to MS Office Word 2007, printed it out to be sure I would follow it exactly. To open Outlook, I clicked on Start, typed Outlook in the search box and then double clicked on MS Office Outlook. Then I clicked on Next and chose Microsoft Exchange, POP3, IMAP, HTTP. From there, I clicked 'New' and got another screen containing Name and type. The name was my usual [email protected] email address. The type was POP/SMTP (send from this account by default). At some point, it asked for the password XXXXX ISP had given me. I don't know what that is. Every time I open juno.com, I type a 4-letter password XXXXX it opens. Regardless, it attempted to locate my [email protected] encrypted settings and failed. Then I allowed it to search for non-encrypted settings, and that also failed. Then I filled in [email protected] and POP/SMTP my 4-letter password XXXXX something and it worked. ????? I'll test it after the evening news and get back to you with a report. I got the usual screen, clicked on Tools, then Account Settings. In the Account Settings window, I clicked on New. I'm back at work setting up MS Office Outlook with my juno.com email address. I thought it was done yesterday, but it did not work. So I started over, and I got to the point: Outlook tested the encrypted address and it failed. Then it tested the unencrypted address. That failed also, so I clicked Back and checked Manually, etc. and clicked on Next. Then I entered the Server Information: Account Type: POP3, Incoming mail server: pop.juno.com, Outgoing mail server (SMTP): smtp.juno.com. Then I typed Username: Customer and my password (I used the password XXXXX I've always used. If Juno gave me a password XXXXX ago, I don't know it. I left the Remember password XXXXX checked.
Finally I clicked Test Account Settings. It ran for a while and then gave me this error message: Send test e-mail message: Outlook cannot connect to your outgoing (SMTP) e-mail server. If you continue to receive this message, contact your server administrator or Internet service provider (ISP). I ran the whole procedure a second time to ensure accuracy of the entries. (I'm unable to cancel italics and large print.) So, now I'm stumped and need more assistance. I successfully completed all of the instructions you gave me and got a success message. I tried sending a long message to a website thinking if I had completed MS Office Outlook, that other websites allowing responses would also work. I sent a copy to my juno address, but it never arrived. I'm going to open Outlook now and compose a test message and send it to my email addresses to see if it works. I'll report on this, hopefully before I send this message. Report: I opened Outlook, got the screen with four panels, but I don't know how to begin composing text. I feel so dumb! Better Test: I drafted two emails this evening, both in the MS Office 2007 system with a copy to [email protected]. I clicked on the Send button and they show up in the Outlook outbox, but when I close Outlook, the message appears saying I have 2 messages in my outbox that have not been sent. I guess I don't know how to operate the Outlook system, or the success message was bogus. What next? How do I open a screen in Outlook, so I can begin composing a test email? I drafted a detailed reply about trying to use Outlook, but I must have failed to click on something to send it. I'll try to reconstruct it. I have two emails in my outbox waiting to be sent. When I open Outlook and get the screen with 4 panels (Inbox, to-do, etc.), I click on Outbox (2) and the two emails show as short descriptions.
If I left click on either, on one occasion a small box appeared with choices like: Jeff Fawsett, [email protected], help, etc., yet just now a left click produced nothing. If I double click, the email is displayed. At this point, I think I can send it. If I click on Send, it disappears. But I check

I was editing my reply when it disappeared. Apparently it had already been sent. You replied, and I was unable to send the edited reply. I'll try your suggestion about F9. So, here is what I was adding: I drafted a detailed reply about trying to use Outlook, but I must have failed to click on something to send it. I'll try to reconstruct it now. I have two emails in my outbox waiting to be sent. When I open Outlook and get the screen with 4 panels (Inbox, to-do, etc.), I click on Outbox (2) and the two emails show as short descriptions. When I left clicked on either, on one occasion a small box appeared with choices like: Jeff Fawsett, [email protected], help, etc., yet just now a left click produced nothing. If I double click, the email is displayed. At this point, I think I can send it. If I click on Send, it disappears. But I check Sent Items under Mail and Mail Folders, and it hasn't been sent.

Next, if I double click either email to display it, then click the 'Office button' in the upper left hand corner, a box opens with choices. One is Send and it displays a lot of options. When I click Send, the screen disappears, but checking Sent Items again, I find it hasn't been sent. Next I just click on an email and click on the 'Send Now' button; a box opens with Jeff Fawsett, my juno address, check for updates, my account, help and sign out. I clicked on my juno address and the box disappears, but checking again, the email hasn't been sent. I have an idea what may be wrong, other than my ignorance about Outlook.
At some point when I was struggling to set up Outlook with my juno address, I recall a prompt saying, "Enter your password XXXXX juno gave you" or something to that effect. Earlier, I wrote to you saying I don't remember that juno ever gave me a password XXXXX the email account was set up. I just entered the 4-letter password XXXXX I have used to open it for several years. I also remember a screen saying, "Outlook does not [email protected]

Last thought for the day. I am so impressed with your knowledge of computers, software and programs, and how clear your directions are, stating directions without abbreviations, etc.

I just tried Outbox (2) > double click on an email to display it > click Send > depress F9, and the box appeared with a green band running left to right. Four things are displayed and soon the box disappears. I checked again and nothing was sent. Maybe I had to enter a password XXXXX by juno (which I don't recall) when setting up. I'll wait for your reply

I followed your instructions 1. thru 4. and the email did not send. There were two account names; I'll leave the @ out of the two names, so you can see what they are: I tried each. On the first I got two error messages: 1) Log onto incoming mail server (POP3): Cannot find the e-mail server. Verify the server information in your account properties. 2) Send test e-mail message: Cannot find the e-mail server. Verify the server information in your account properties. On the second, I got one error message: 1) Send test e-mail message: The operation timed out waiting for a response from the sending (SMTP) server. If you continue to receive this message, contact your server administrator or Internet service provider (ISP). Another interesting note: on the dialogue box named 'Change Email Account', if I click on Next, it gives me the Congratulations! You have successfully, etc. But it is not correct. The email still didn't transmit. I'll wait for more instructions. I've been away from the computer on other business.
I'm sorry for the delay in responding. I'm all set up. Thank you so much for your patience and diligence in helping me with a difficult task. I'd like to send you a $10 bonus. I have $5 deposited in my account. Another $5 should be applied to my credit card on file. You are simply an incredible resource. I will send other questions marked, 'For Your Eyes Only.' Thank you, XXXXX XXXXX Thank you.
OPCFW_CODE
Kafka Consumer does not receive data when one of the brokers is down

Using Kafka v2.1.0 on RHEL v6.9 (following the Kafka Quickstart). The consumer fails to receive data when one of the Kafka brokers is down. Steps performed:

1. Start zookeeper
2. Start Kafka-Server0 (localhost:9092, kafkalogs1)
3. Start Kafka-Server1 (localhost:9094, kafkalog2)
4. Create topic "test1", number of partitions = 1, replication factor = 2
5. Run producer for topic "test1"
6. Run consumer
7. Send messages from the producer
8. Receive messages on the consumer side

All the above steps worked without any issues. When I shut down Kafka-Server0, the consumer stops getting data from the producer. When I bring Kafka-Server0 back up, the consumer starts to get messages from where it left off. These are the commands used:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test1
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test1
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 1 --topic test1

The behavior is the same (no message received on the consumer side) when I run the consumer with two servers specified in the --bootstrap-server option:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092,localhost:9094 --topic test1

Any idea why the consumer stops getting messages when Server0 is down, even though the replication factor for the topic test1 was set to 2? There is a similar question already, but it was not answered completely: Kafka 0.10 quickstart: consumer fails when "primary" broker is brought down

What is the replication factor of the consumer offsets topic? If the offsets topic is unavailable, you cannot consume.
Look at the server.properties file for these settings, see the comment above them, and increase accordingly (this only applies if the topic doesn't already exist):

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1

According to your previous question, it looks like it only has one replica. See how you can increase the replication factor for an existing topic.

Manually increasing the replication factor fixed the issue, thank you! The manual process is a little bit cumbersome, especially when you want to create JSON entries for the 50 partitions. I wonder if there is an easy way to do that.

You are correct @OneCricketeer. Even though I changed the property offsets.topic.replication.factor and restarted zookeeper/kafka several times, it didn't work... Finally I described the topic "__consumer_offsets" and the replication factor was only 1. Night saver, buddy — else I would have looked the complete night for where I was going wrong.

@OneCricketeer - Could you please guide me here - https://stackoverflow.com/questions/67763076/connection-to-node-1-127-0-0-19092-could-not-be-established-broker-may-not

In initial versions of Kafka, offsets were managed in ZooKeeper, but Kafka has continuously evolved over time, introducing lots of new features. Now Kafka manages the offsets in a topic, __consumer_offsets. Think of a scenario where you created a topic with a replication factor of 1: if the broker goes down, the data exists only on that Kafka node, which is down, so you can't get the data. The same analogy applies to the __consumer_offsets topic. You need to revisit server.properties in order to get the behavior you are expecting.
But in case you still want to consume the messages from the replica partition, you may need to restart the console consumer with --from-beginning

Thank you for the logical explanation. Changing server.properties did not help because the __consumer_offsets topic had already been created by Kafka automatically. The only way to fix this is to manually increase the replication factor for __consumer_offsets, as @cricket mentioned below.

Agreed, you have to increase it manually as the topic has already been created. Keep up the good work!!
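The "easy way" the commenter asks about — generating the reassignment JSON for all 50 partitions of __consumer_offsets — can be scripted. A minimal sketch, assuming a three-broker cluster with IDs 0, 1 and 2 (adjust the broker list and partition count to your own cluster):

```python
import json

def build_reassignment(topic, partitions, brokers, rf):
    """Build a kafka-reassign-partitions JSON plan, assigning `rf` replicas
    per partition by rotating through the given broker IDs."""
    entries = []
    for p in range(partitions):
        # Rotate the starting broker so leadership is spread evenly.
        replicas = [brokers[(p + i) % len(brokers)] for i in range(rf)]
        entries.append({"topic": topic, "partition": p, "replicas": replicas})
    return {"version": 1, "partitions": entries}

# __consumer_offsets is created with 50 partitions by default.
plan = build_reassignment("__consumer_offsets", partitions=50, brokers=[0, 1, 2], rf=3)
with open("increase-replication.json", "w") as fh:
    json.dump(plan, fh)
```

The resulting file can then be fed to bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication.json --execute.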
STACK_EXCHANGE
// MIT license © 2018-2019, Michiel Sikma <michiel@sikma.org>

const chalk = require('chalk').default
const { sortGen, hardPad, isArray, findLargest, weightStatus, weightPriority, keyMap } = require('./helpers')
const { projectAssets } = require('./assets')

// To account for the distance issues are from the edge (due to the task type icon).
const projectIndent = 2
// Minimum width for issue priorities seen in the project issue lines.
const issuePrioMinWidth = 2

/**
 * Returns a colorized line indicating the relative number of issues of each priority.
 *
 * @param {array} issues Array of objects of shape { width: {number}, priority: {number} }
 * @param {number} width Width in characters of the issue line to generate
 * @param {object} assets Object containing special characters for drawing purposes
 * @returns {string} String containing colorized boxes
 */
const makeIssueLine = (issues, width, assets) => {
  if (!issues || !issues.length) {
    return null
  }
  // Buffer for the issue line.
  const issueLine = []
  // Select the last issue in case we need to fill out remaining characters.
  const issueLast = issues[issues.length - 1]
  // Number of characters we've added to the issue line so far.
  // Due to rounding, we can end up with fewer characters than requested;
  // in that case we use the remainder with the last issue to fill the rest.
  let widthCurrent = 0
  let widthSegment = 0
  // First, we need to see if there are any segments that are smaller than the minimum width.
  // We'll set these to the minimum and then let the rest take a percentage of the remainder.
  let issueSmall = 0
  const issueSize = []
  for (const issue of issues) {
    widthSegment = Math.max(Math.floor(width * (issue.width / 100)), issuePrioMinWidth)
    issueSmall += widthSegment === issuePrioMinWidth ? issuePrioMinWidth : 0
    issueSize.push(widthSegment === issuePrioMinWidth ? issuePrioMinWidth : null)
  }
  // This is the leftover width, minus the smallest segments.
  const issueDynWidth = width - issueSmall
  // For each issue, add block characters to the buffer.
  for (let a = 0; a < issues.length; ++a) {
    const issue = issues[a]
    widthSegment = issueSize[a] ? issueSize[a] : Math.floor(issueDynWidth * (issue.width / 100))
    widthCurrent += widthSegment
    issueLine.push(assets.colors[issue.priority - 1](assets.blocks.full.repeat(widthSegment)))
  }
  // If there's some leftover space, fill it out with the last issue.
  width - widthCurrent && issueLine.push(assets.colors[issueLast.priority - 1](assets.blocks.full.repeat(width - widthCurrent)))

  return issueLine.join('')
}

/**
 * Creates a table displaying project information.
 *
 * @param {array} projectGroups Array of project data, i.e. { projects: {array} }
 * @param {number} screenWidth Width in characters of the table to generate
 * @param {object} assets Object containing special characters for drawing purposes
 * @returns {array} Array of lines comprising the table
 */
const makeProjectInfo = (projectGroups, screenWidth, assets = projectAssets) => {
  // Buffer for the table lines and for the project items.
  const lines = []
  const projectsRendered = []
  // Width of each project component. Full screen width minus 1 for the spacing,
  // and minus the indent.
  const projectWidth = Math.floor((screenWidth - 1 - projectIndent - projectIndent) / 2)
  // Width of the issue line displaying the project's issue priorities.
  const issueLineWidth = Math.max(Math.floor(projectWidth / 1.75), 15)
  // A list of all projects inside all project groups.
  const projectsAllRaw = projectGroups.reduce((all, group) => [...all, ...group.projects], [])
  // Take out the projects with zero issues.
  const projectsAll = projectsAllRaw.filter(p => p.issueAmount > 0)
  // Determine the maximum string length of each key so we can align the project names next to them.
  const keyWidth = Math.max(findLargest('key', projectsAll) + 1, 7)
  const nameWidth = projectWidth - keyWidth

  // For every project inside every project group, we'll print the key, name, issue line
  // and amount, URL, and a description if it's defined.
  for (const project of projectsAll) {
    const projectLines = []
    const { key, name, description, issueAmount, issuePriorities, issueLink } = project
    const issueLine = makeIssueLine(issuePriorities, issueLineWidth, assets)
    projectLines.push(`${chalk.yellow(hardPad(key, keyWidth))}${chalk.white(hardPad(name, nameWidth))}`)
    projectLines.push([
      chalk.black(`Tasks: `),
      issueLine ? issueLine : hardPad('None', issueLineWidth),
      issueLine ? ' ' + hardPad(String(issueAmount), 3) : hardPad('', 4),
      hardPad('', projectWidth - issueLineWidth - 4 - 7)
    ].join(''))
    projectLines.push(`${chalk.black('Descr:')} ${chalk.gray(hardPad(description ? description : '–', projectWidth - 7))}`)
    projectLines.push(`${chalk.black('URL: ')} ${chalk.blue(hardPad(chalk.underline(issueLink), projectWidth + 2))}`)
    projectsRendered.push(projectLines)
  }

  // Finally, print each project in a table of two columns.
  for (let a = 0; a < projectsRendered.length; a += 2) {
    const projectOne = projectsRendered[a]
    const projectTwo = projectsRendered[a + 1]
    // Determine how many lines to print (based on the longer of the two projects).
    const projectLines = Math.max(projectOne.length, projectTwo ? projectTwo.length : 0)
    const emptyLine = ' '.repeat(projectWidth)
    const indentEmpty = ' '.repeat(projectIndent - 1)
    const indentBullet = hardPad(assets.projectBullet, projectIndent - 1)
    for (let b = 0; b < projectLines; ++b) {
      lines.push([
        ...(projectOne && projectOne[b] ? [b === 0 ? chalk.yellow(indentBullet) : indentEmpty, projectOne[b]] : [indentEmpty, emptyLine]),
        ...(projectTwo && projectTwo[b] ? [b === 0 ? chalk.yellow(indentBullet) : indentEmpty, projectTwo[b]] : [indentEmpty, emptyLine])
      ].join(' '))
    }
    lines.push('')
  }

  // Remove that last extra linebreak.
  return lines.slice(0, -1)
}

module.exports = { makeProjectInfo }
STACK_EDU
With these motivations in mind, the purpose of this chapter is simple: to focus on some critical components of an efficient R workflow. It builds on the concept of an R/RStudio project. Generating project packages can offer a foundation for generalising your code for use by other people, e.g. through publication on GitHub or CRAN. And R package development has been made easier in recent years by the development of the devtools package, which is highly recommended for anybody seeking to publish an R package.

The mathematical formulation of MPT is that for a given risk tolerance, we can find the efficient frontier by solving a minimization problem.

Just adding @TypeChecked will trigger compile-time method resolution. The type checker will try to find a method printLine accepting a String on the MyService class, but cannot find one. It will fail compilation with the following message:

The Java I/O libraries do not transparently translate these into platform-dependent newline sequences on input or output. Instead, they provide functions for writing a complete line that automatically add the native newline sequence, and functions for reading lines that accept any of CR, LF, or CR+LF.

Jordan uses his passion for statistics, expertise in programming, and teaching experience to assemble impactful courses. He thoroughly enjoys helping other people learn about statistics.

It is important to be aware of the logic behind the type checker: it is a compile-time check, so by definition the type checker is not aware of any kind of runtime metaprogramming that you do.

When displaying (or printing) a text file, this control character causes the text editor to show the following characters on a new line.

Is memory-mapped I/O only used internally by the OS, not exposed to and used by programmers, e.g. on Linux?
In a very hand-wavy way, you would decompose your weights into "positive weights" and "negative weights" and then do math on that. It goes without saying, you will have to modify the objective function to capture this.

Braces are required around each block's body:

try {
    'moo'.toLong()   // this will generate an exception
    assert false     // asserting that this point should never be reached
} catch ( e ) {
    assert e in NumberFormatException
}

One such intolerant program is the qmail mail transfer agent, which actively refuses to accept messages from systems that send bare LF instead of the required CR+LF.

It is quite common in dynamic languages for code such as the above example not to throw any error. How can this be? In Java, this would typically fail at compile time. However, in Groovy, it will not fail at compile time, and if coded properly, will also not fail at runtime.
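The difference between compile-time and runtime checking described in the last paragraph can be illustrated with a quick sketch — here in Python as a stand-in for a dynamic language, with the MyService class and the misspelled method invented for the example:

```python
class MyService:
    def do_something(self):
        return "ok"

svc = MyService()

# In a statically checked language this call would be rejected at compile
# time; in a dynamic language the name is only resolved when the line runs.
try:
    svc.print_line("hello")  # no such method exists on MyService
    reached = True
except AttributeError:
    reached = False
```

The program loads without complaint; the error only surfaces when the bad call actually executes, which is exactly the gap a static check such as Groovy's @TypeChecked closes.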
OPCFW_CODE
Artificial Intelligence and Machine Learning are trending topics in the tech industry. Perhaps even more than our daily lives, Artificial Intelligence (AI) is impacting the business world. There was about $300 million in venture capital invested in AI start-ups in 2020, a 300% increase over the year before (Bloomberg). AI is everywhere, from gaming stations to maintaining complex information at work. Computer engineers and scientists are working hard to impart intelligent behaviour to machines, making them think and respond to real-time situations. The importance of Artificial Intelligence and Machine Learning has been increasing as a growing number of companies use these technologies to improve their products and services, evaluate their business models, and enhance their decision-making processes. AI is transitioning from just a research topic to the early stages of enterprise adoption. Tech giants like Google and Facebook have placed huge bets on Artificial Intelligence and Machine Learning and are already using them in their products. But this is just the beginning; over the next few years, we may see AI and ML steadily glide into one product after another. Keeping the present scenario in view, our institution, Netaji Subhash Engineering College, introduces the B.TECH program in Computer Science and Engineering (Artificial Intelligence and Machine Learning) [i.e., B.TECH in CSE (AIML)] from the 2021 academic year with an intake of 60. The B.TECH in Computer Science and Engineering (Artificial Intelligence and Machine Learning) program is also ideal for working professionals with programming knowledge. It covers key concepts like Statistics, Machine Learning, Deep Learning, Natural Language Processing (NLP), and Reinforcement Learning. The program is delivered through our interactive learning model with live sessions by global practitioners, labs, and industry-driven real-life projects.
- Sustainable teaching-learning environment that makes the students innovative, professional, research-minded and industry-ready.
- All students and learners will have access to dedicated career support and job assistance through our teaching-learning platform, along with 200+ hiring partners offering dream jobs.
- Well-qualified faculty members guiding various research works in the cutting-edge domains of AI and ML.
- The faculty members strive hard to provide all-round knowledge in theory and practical work, and guide students in all other academic activities like seminars, projects and training.
- Thrust areas of research: Deep neural networks, Quantum Machine Learning, and AI and ML applications in the domains of Image Processing, IoT, Cloud, Blockchain, Cyber security, Robotics, Automation and Computational Biology.
- Students can work on hands-on projects and internships under the guidance of industry experts, alumni and entrepreneurs from the 2nd year onwards.
- Learn the professional skills that will help students become AI- and ML-enabled business leaders.
- Deploy ML models, supervised and unsupervised ML models, and predictive analytics and statistics for industry readiness.
- More than 100 publications in reputed journals (especially SCI-indexed journal papers), conferences and books in the emerging domains of AI and ML.
- Students will have access to dedicated mentor support throughout the program.
- The department is well equipped to conduct live virtual classes online in case of any emergency situation (like the COVID pandemic).
OPCFW_CODE
UX Designers should be 3d-printing! TL;DR — with a 3d-printer you are in total control of an entire production cycle. This puts a product into your hands in a short time, but more importantly, it quickly lets you see whether your design choices were good or bad. This speed and control is a great way to cultivate a "fail fast" mindset and a way of working where you work smart and test assumptions quickly. All product people should have a 3d printer I'm a geek at heart, and love fidgeting around with stuff, taking stuff apart and repairing it. So I got a 3d printer earlier this year — I know, I'm late to the party, but I'm glad I showed up! :-) I've enjoyed using my printer to fix stuff, build tools and toys. I've also enjoyed designing my own 3d-printable things, first using Tinkercad and now using the both powerful and free CAD software Fusion 360. And I've learned something while designing, printing and tinkering: all designers should own a 3d-printer. In fact, I would argue that everybody who is involved with product development ought to own one. Here's why product people should be 3d printing A 3d-printer puts you in total control of an entire product development cycle, from initial idea to concrete design, to production, and finally to a product being put to use. So you get to make all the decisions — and more importantly: you also get to see very quickly whether those decisions were good or bad! (news flash: not all design decisions are good!) :) It's inevitable that you will make bad decisions when creating something new. Why? Because "the new" is inherently a road you haven't traveled before. So you will make assumptions that are wrong and you will fail to take important factors into consideration. The inevitability of making bad decisions is of course why people say you have to "fail fast". It's inevitable that some decisions will be wrong, causing you to "fail".
So your job as a designer / as a product person is to identify the wrong decisions as soon as possible. Your job is to fail fast, so you can learn quickly. Cultivating a "fail fast" mindset Let's take a practical example: I recently designed an arcade controller with 4 mm holes. The screws didn't fit, because they needed 0.4 mm extra clearance. So I spent 5 hours and 40 meters of PLA printing an object, and the damn screws didn't fit! Now we can agree that five hours isn't actually that much. But I could have tested the hole sizes with a 20-minute print and saved myself 4+ hours. I could've zoomed in on my assumption that a 4 mm hole would work, and I could've tested it. I could also have created a tool for testing tolerances more generally :) (which is what I've done) The great thing is, that after a handful of stupid failures like this, it becomes very obvious that there is a better and smarter way to work. That's why I think 3d-printers are such a great technology for cultivating a "fail fast" mindset. So what IS a fail fast mindset? You could describe it as a way of thinking where: - You are aware that your design decisions aren't always great - You identify your assumptions - You are aware that assumptions and your designs need to be tested - You identify and break down problems to test them with the least amount of effort — i.e. you don't do a 5-hour print if 20 minutes can resolve a question or validate an assumption - You work towards an MVP — i.e. you don't spend more resources than you have to … you don't print a 6cm object if a 3cm object is enough. In short, you develop a way of thinking that fits snugly into an agile, modern way of working. Obviously, things in the real world are complex and messy — and in many ways the exact opposite of the simple process when you're 3d-printing: - Estimates in the real world are difficult and often not very accurate. - Despite being agile, a lot of time passes between making a decision and seeing the consequences.
- Usually there isn't a linear relationship between the complexity of a development task, and the amount of time / resources needed to execute it. Despite this, I think 3d-printing is a good way to help yourself and your team improve the quality of your thinking, when it comes to scope, decision-making, iteration, assumption testing, and MVP-first development. Thanks for reading! So what's next? Well, why not use 3d-printing as a team exercise? Take a few hours to come up with a practical need you want to serve. Then design a thing, print it, test it, and reflect on the process using some of these questions: - Did you solve the right problem? — i.e. "could the need we defined be served in another way?" - What assumptions did you make? — i.e. "what did you "know" to be true?" - Did your assumptions hold up? — i.e. "what did you learn, how did your views change between then and now?" - Did your design work? — i.e. "how did your product perform when put to the test?" - How would you change the design if you could go back in time? — i.e. "given what you've learned now, how would you iterate your design now?" - Could your design have been more efficient in terms of time spent and material use? — i.e. "did you produce an MVP or not?"
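The "tool for testing tolerances" mentioned earlier can be as simple as a script that emits a ladder of hole diameters around the nominal size, so one short test print answers the clearance question before the big print. The function name and step size here are my own illustration, not the author's actual tool:

```python
def hole_ladder(nominal_mm, step_mm=0.1, steps_each_way=3):
    """Return candidate hole diameters around a nominal size, so a single
    quick test print reveals which clearance actually fits the screw."""
    offsets = range(-steps_each_way, steps_each_way + 1)
    return [round(nominal_mm + i * step_mm, 2) for i in offsets]

# For the 4 mm screw holes that turned out to need extra clearance:
sizes = hole_ladder(4.0)  # 3.7 mm up to 4.3 mm in 0.1 mm steps
```

Print a thin plate with these holes labelled, test the screw in each, and carry the winning diameter into the real 5-hour part.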
OPCFW_CODE
As a by-product of the XtsSharp project, I implemented a SectorStream, which is a stream that reads and writes whole sectors (the size of which is configurable) to enable the XTS algorithm to operate on standard .NET streams. I also implemented another stream-based class that provides random access to an underlying SectorStream. This simulates random access by reading and writing entire sectors at once. This stream is called (yep, you guessed it) RandomAccessSectorStream. The result is that you can use the random access stream to wrap a SectorStream and read and write to it as if it was just a normal .NET stream. It supports seeking to arbitrary points and reading / writing. The class handles the sector-by-sector reading and writing, which, when using an XTSSectorStream, encrypts (when writing) and decrypts (when reading). I'm not sure if this is usable outside of the XTS project, but it is completely separate code-wise, so maybe someone will find a use for it. I've been meaning to write about this for some time, but laziness, procrastination and work have managed to get in the way. DISCLAIMER: I am not a cryptographer, cryptography is something that interests me and the library is a proof of concept; you use it at your own risk. I recently (i.e. a year ago!) wrote an implementation of the XTS algorithm in .NET (C#). I had asked a question on stackoverflow, but didn't get much of a response. The only other implementations I found were in C or C++. Continue reading → This site has now moved to its own domain (garethlennox.com), rather than sitting in a subdirectory under another domain. I managed to move it in the space of a couple of hours, without too many hassles. The main problem was that WordPress doesn't like to have its main URL changed from underneath it, so I had to get into the database and tweak some values. There is a good guide in the WordPress Codex on how to do this.
Otherwise, barring some obscure file permission problems, it was very straightforward. One other benefit of having my own domain is that I get to set the favicon for it, which is a smaller version of my logo. I used this handy tool to convert the favicon into the .ico format. Finally, I've updated the design of this site to brand-spanking-new HTML5, including a fluid layout that adjusts for larger and smaller screens (it looks great on a phone too!). I created the previous design in 2005, so it was time for a refresh. Continue reading → Just spent hours working out how to do this. If you're moving pictures to another PC or your Picasa db gets broken, you may lose all your albums. The album information is stored alongside the Picasa database, in xml files with a .pal extension, which is irritating – you have to back up that folder too. Continue reading →
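To make the sector-wrapping idea from the XTS post concrete, here is a toy Python sketch (not the actual C# API — the class names and the XOR "cipher" standing in for XTS are mine): random-access reads are simulated by fetching whole sectors, transforming each with its sector index, then slicing out just the requested bytes.

```python
SECTOR = 16  # bytes; stand-in for the configurable sector size

class SectorXor:
    """Toy per-sector transform standing in for XTS encrypt/decrypt.
    XOR is self-inverse, so applying it twice recovers the plaintext."""
    def apply(self, sector_index, data):
        return bytes(b ^ (sector_index & 0xFF) for b in data)

class RandomAccessSectorFile:
    """Random access over sector-transformed bytes: every read fetches
    whole sectors, decodes them, then slices out the requested range."""
    def __init__(self, raw, codec):
        self.raw = raw
        self.codec = codec

    def read(self, offset, length):
        first = offset // SECTOR
        last = (offset + length - 1) // SECTOR
        out = bytearray()
        for s in range(first, last + 1):
            chunk = self.raw[s * SECTOR:(s + 1) * SECTOR]
            out += self.codec.apply(s, chunk)
        start = offset - first * SECTOR
        return bytes(out[start:start + length])

codec = SectorXor()
plain = bytes(range(48))  # three sectors of sample data
raw = b"".join(codec.apply(s, plain[s * SECTOR:(s + 1) * SECTOR]) for s in range(3))
f = RandomAccessSectorFile(raw, codec)
```

A read such as f.read(10, 20) touches sectors 0 and 1, decodes both in full, and returns only the twenty requested bytes — the same pattern RandomAccessSectorStream uses to hide the sector granularity from callers.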
OPCFW_CODE
Key-ins are what drive the Batch Process utility. Adding key-ins to your command file allows you to direct the set of actions that will be applied and the order in which to apply them. It is possible to use MicroStation key-ins directly, run a MicroStation VBA Macro or an MDL application through a key-in, and potentially integrate key-ins from a specific vertical application if that application is also running (such as those commands associated with a specific PowerPlatform). Having read Part 1 of this series you may ask yourself, “How do I find the key-ins to use?” Locating the desired key-ins can range from quite easy to difficult, or not possible to locate at all if a key-in doesn’t exist for what you are trying to do. Let’s look at a couple of possibilities. The Key-in Browser The Key-in browser is probably the first place most will go to start looking for key-ins…it is logical to look for key-ins in a “Key-in” dialog. In the MicroStation CONNECT Edition the Key-in browser can be opened several different ways: Let’s say that you want to remove any unused levels in a set of DGN files. In the Key-in dialog it is reasonable to begin searching for key-ins that start with the word delete. Begin by typing in a few characters from the word delete. The display of key-ins is alphabetical…dropping to the first word (key-in) that begins with the characters supplied. You can see “delete” in the list within the first column. Next, in the second column, scroll down in the list. Here you see displayed the key-ins that can be used to delete various things, including unused items. Pick “unused” in this column. In the third column you will see displayed the individual types of unused items that can be deleted, including levels. The key-in for deleting unused levels from within the active DGN has just been discovered! But not all key-ins are quite that logical to locate, with many key-ins having existed for many years and software generations. 
Note: Not every command has an equivalent key-in. You may occasionally run into a situation where a key-in for what you are trying to do is not available. Review an Existing Tool Another way to discover a key-in is to look at existing tools and interface items through the customize dialogs (Customize and Customize Ribbon). From Search Ribbon, type in the first few characters of the word “customize”, picking Customize from the search results shown. This opens the Customize dialog. In the left pane of the Tools tab, expand Application Tools > MicroStation. Here you will find many of the tool boxes and tools found within MicroStation. Pick a tool within a tool box. Once selected, you will see the Properties for that tool, including the Command Data. The Command Data is where the key-in for the command can be found. In the following illustration, Fit View has been selected. As can be seen in the Key-in field, the command to fit the view is “fit view extended”. Finding a Key-in Using the Bentley Macro Recorder Another way of attempting to locate a key-in is to record a command (or sequence of commands) using the Bentley Macro Recorder. The Bentley Macro Recorder provides a set of features to allow non-programmers to record and play back macros. Once recorded, the macro is saved as a .bmr file and stored at the location identified through the MS_MACRONEWFILEDIRECTORY configuration variable. From the Utilities tab in the Drawing workflow, locate the Macros ribbon group. As was done in the previous example, an attempt will now be made to locate the “fit” command. The macro editor is then opened with the recorded command displayed. What is seen here differs from what was seen earlier when viewing the Fit View tool through the Customize dialog. When the macro was recorded, the number “1” was appended to the key-in: FIT VIEW EXTENDED 1 The addition of the “1” is used to designate which view window the command should be applied to. 
The key-in could be modified to specify any of the view windows 1-8. Note: The sequence that was recorded by the Bentley Macro Recorder could have been much more extensive than the example seen here. The macro could then be used to identify potential MicroStation key-ins, or the macro could be run as a macro file (.bmr file) using a key-in within a command file. Additional useful key-ins The following is a list of some additional key-ins that you may find useful when creating a command file, some of which are illustrated in the examples that may be downloaded from this blog post. Remember, when it makes sense to do so, more than a single command can be issued from a line within the command file. A string of commands is composed of individual key-ins (on a single line) that are separated by a semicolon. Additional useful key-ins include: These are certainly not the only key-ins that can be used in command files for the Batch Process utility. There are multitudes of additional useful key-ins that can be applied to meet your needs. The Batch Process utility is a powerful tool in your Bentley toolbox. The next time you need to run a sequence of commands, give it a try!
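Putting the pieces together, a command file for the Batch Process utility is simply a text file of key-ins, one action (or semicolon-separated string of actions) per line. The fragment below is a hypothetical sketch assembled from the key-ins discussed above; confirm each key-in in the Key-in browser before relying on it in production:

```text
delete unused levels
fit view extended 1;save settings
```

Here the first line removes unused levels from each file processed, and the second line chains two key-ins with a semicolon, fitting view 1 and then saving the settings, exactly as described above.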
Welcome to the RPM repository. If you want to use Pidgin on Windows, use our Windows installer; otherwise you should either download the source or look for pre-built packages from your operating system distribution. A 64-bit version of Fedora Core 3 is currently available to download. Notable new features: Fedora Modularity across all variants (AppStream), GNOME 3, and a requirement of roughly 8 GiB of free disk space; Fedora Workstation is made available via the Fedora media installer. This document describes how to install Nagios Core from source. If you are looking to modify Pidgin, you may want to look at our instructions for checking out the code from our repository. Fedora: an innovative GNU/Linux distribution. The kernel is loaded from the "RPM" package format. The distribution is sponsored by Red Hat and supported by the community. Fedora Core 3 RPM download. openSUSE 11.2: a Linux system perfect for new users and pros alike. This guide is aimed at Fedora 11 x86_64 but will also work on the i386 version (adjust as necessary). Note: the packages on this page are maintained and supported by their respective packagers, not the Node.js team. The current Webmin distribution is available in various package formats for download. The Rpmfind tool lets you automate the search for packages from the RPM database, or keep your system up to date in a more automated way. GStreamer 0.10 for Fedora Core 4 is available in a separate repository. 
Fedora (known as Fedora Core before version 7) is a Linux operating system. openSUSE 11.2: a Linux system perfect for new users and pros alike. This guide will help you through all the steps necessary for installing Fedora 11 on a MacBook Santa Rosa. Notable new features: a modular software stack. See the question about getting Fedora. Fedora Linux has existed since… The Fedora Linux operating system is free and open-source software, and the programs available within its software repository are also free programs that adhere to a free license. The kernel reported was …0-26-generic #45~precise1-Ubuntu SMP Tue Jul 15 04:02 x86_64 GNU/Linux; all of the above commands ran except doexec, manweb, unset gid, traceroute3, slocate, rvi, httpd, gtar, uid, switchdesk and untar. Please report any issues you encounter to the package maintainer. Beyond Compare 3. This tutorial covers downloading and installing a new kernel for the Red Hat distribution of Linux. Licensed under a Creative Commons Attribution 4.0 International license (or an earlier CC-BY-SA license if you need that for compatibility): share all you like, give credit, and let others share. As of release 2 of the RPM (the one currently available for download), the udev rule detects devices based on the vendor rather than the model, as does the DisplayLinkManager service, so the driver should work (as it only depends on the kernel), since the systemd service is started by the udev rule. Fedora Core and Red Hat Linux CD installation, configuration, version upgrade and basic administration. Downloading and Installing. Beyond Compare 4. Our repository contains packages of GStreamer 0.10, which is new and not included in Fedora yet. Notable Fedora features: GNOME 3.30, ZRAM for ARM images, and Fedora Scientific Vagrant images. The Perfect Setup - Fedora Core 3: a detailed description of the steps to be taken to set up a Fedora Core 3 based server. This book is a guide for using the RPM Package Manager. 
If people don’t think about Fedora when they think of an RPM distribution, then they’ll more than likely think about… Installing Nagios Core From Source. Fedora is the upstream source of the commercial Red Hat Enterprise Linux distribution. Since the release of Fedora 21, three different editions are available. A: see the Fedora download page. There are many versions: the "Live Media" is a LiveCD: you can just put the CD into your computer and start the OS without installing it. Physical Installation. It is highly recommended you read the Fedora Core 6 Release Notes and official Installation Guide before installing Fedora. Obtain the Fedora Core 6 CD images or DVD image from a Fedora mirror (or use the torrent) and burn to CDs or a DVD. (For more information, see how to download the Fedora Core CDs or DVD.) Boot from the first disc. Installing Oracle Database 10g Release 1 and 2 (32-bit/64-bit) on Red Hat Enterprise Linux AS 4, 3, 2.1, Red Hat Fedora Core 4, 3, 1, and RH 9 on x86 and x86-64 (AMD64/EM64T) architectures. Fedora 29 was released on October 30, 2018.
Hunting In Memory Threat Hunters are charged with the difficult task of sifting through vast sources of diverse data to pinpoint adversarial activity at any stage in the attack lifecycle. To be successful, hunters must continually hone their subject matter expertise on the latest attacker techniques and detection methods. Memory resident malware, which presents itself in many forms, is an attacker technique that has existed for over a decade. The popularity of memory resident malware has steadily increased over time, possibly resulting from the proliferation of code and knowledge of in-memory techniques. More likely, its popularity reflects the success of memory-based techniques in evading detection by security products and practitioners. Once limited to advanced adversaries, memory resident techniques are now commonplace for all levels of adversary sophistication. I will examine the most common of these memory-based attacker techniques, and walk through our team’s research to craft a scalable, low noise approach to hunting for adversaries that are hiding in memory. Before I address memory hunting methods to detect adversaries in your network, it is helpful to understand the common forms of memory resident malware. These techniques include shellcode injection, reflective DLL injection, memory module, process and module hollowing, and Gargoyle (ROP/APC). Shellcode injection is the most basic in-memory technique and has also been around the longest. The basic ‘recipe’ for shellcode injection is a four step process. These steps are: 1) open a target process (OpenProcess); 2) allocate a chunk of memory in the process (VirtualAllocEx); 3) write the shellcode payload to the newly allocated section (WriteProcessMemory); and 4) create a new thread in the remote process to execute the shellcode (CreateRemoteThread). 
The venerable Poison Ivy malware uses this technique, which is a big reason why so many APT groups were drawn to it over the years. If you pull up a Poison Ivy sample with x64dbg and set a breakpoint on VirtualAllocEx, you will soon locate the chunk of code responsible for the injection. In the first image, the push 40 instruction preceding the call to VirtualAllocEx corresponds to a page access protection value of PAGE_EXECUTE_READWRITE. In the following screenshot from ProcessHacker of the memory layout of a Poison Ivy implant, you can see it allocates a number of these RWX sections. Typical code sections are of type ‘Image’ and map to a file on disk. However, these are type ‘Private’ and do not map to a file on disk. They are therefore referred to as unbacked executable sections or floating code. Threads starting from these types of memory regions are anomalous and a good indicator of malicious activity. ProcessHacker can also show you the call stack of the malware threads. There are multiple functions in the call stack which do not map to memory associated with loaded modules. REFLECTIVE DLL INJECTION Reflective DLL injection, originally developed by Stephen Fewer, is another type of in-memory attacker technique. Metasploit’s Meterpreter payload was one of the first attempts to fully weaponize the technique, but many malware families use it today. Reflective DLL injection works by creating a DLL that maps itself into memory when executed, instead of relying on the Windows loader. The injection process is identical to shellcode injection, except the shellcode is replaced with a self-mapping DLL. The self-mapping component added to the DLL is responsible for resolving import addresses, fixing relocations, and calling the DllMain function. Attackers benefit from the ability to code in higher level languages like C/C++ instead of assembly. Classic reflective DLL injection, such as that used by Meterpreter, is easy for hunters to find. 
It leaves large RWX memory sections in the process, even when the meterpreter session is closed. The start of these unbacked executable memory sections contain the full MZ/PE header, as shown in the images below. However, keep in mind that other reflective DLL implementations could wipe the headers and fix the memory leak. The DLLs loaded in memory also conveniently export a self-describing function called ReflectiveLoader(). Memory module is another memory resident attacker technique. It is similar to Reflective DLL injection except the injector or loader is responsible for mapping the target DLL into memory instead of the DLL mapping itself. Essentially, the memory module loader re-implements the LoadLibrary function, but it works on a buffer in memory instead of a file on disk. The original implementation was designed for mapping in the current process, but updated techniques can map the module into remote processes. Most implementations respect the section permissions of the target DLL and avoid the noisy RWX approach. NetTraveler is one malware family that uses a memory module style technique. When NetTraveler starts, it unpacks the core functionality and maps it into memory. The page permissions more closely resemble a legitimate DLL, however the memory regions are still private as opposed to image. The active threads have start addresses at these private regions. The callstack also reveals these malicious sections. Winnti is yet another malware sample that uses the Memory Module technique. They had a minor slip on the section permissions of the first page, as you can see below. However, the Winnti sample was notable because the MZ/PE headers in the DLL were erased, making it more difficult to detect. Process hollowing is another technique attackers use to prevent their malware from being detected by security products and hunters. 
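That header artifact can be reduced to a simple check: if an unbacked executable region starts with a DOS "MZ" stub whose e_lfanew field (offset 0x3C) points at a valid "PE\0\0" signature, a mapped PE image is hiding there. Below is a minimal, hedged sketch of that check in Python; the fake region bytes are fabricated for the demo, and a real hunt would of course read the bytes out of process memory rather than build them by hand:

```python
import struct

def looks_like_pe(buf: bytes) -> bool:
    """Return True if the buffer begins with a full MZ/PE header,
    as left behind by classic reflective DLL injection."""
    if len(buf) < 0x40 or buf[:2] != b"MZ":
        return False
    # e_lfanew at offset 0x3C points to the PE signature
    (e_lfanew,) = struct.unpack_from("<I", buf, 0x3C)
    if e_lfanew + 4 > len(buf):
        return False
    return buf[e_lfanew:e_lfanew + 4] == b"PE\x00\x00"

# A minimal fake region: MZ stub padded out, PE signature placed at 0x80
region = b"MZ" + b"\x00" * 0x3A + struct.pack("<I", 0x80)
region += b"\x00" * (0x80 - len(region)) + b"PE\x00\x00"
print(looks_like_pe(region))        # True
print(looks_like_pe(b"\x90" * 64))  # False
```

As the article notes, implementations that wipe the headers will slip past this check, so it complements rather than replaces the unbacked-executable-region heuristic.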
It involves creating a suspended process, unmapping (hollowing) the original executable from the process, allocating and writing a new payload to the process, redirecting the execution of the original thread to the new payload with SetThreadContext, and finally calling ResumeThread to complete. More stealthy variants use Create/Map section APIs to avoid WriteProcessMemory. Others modify the entry point with a jump instead of using SetThreadContext. DarkComet is one of many malware families that use process hollowing techniques. Several artifacts can be used to detect process hollowing. One dead giveaway for this activity is a process being spawned with the CREATE_SUSPENDED flag, as shown in the following screenshot from a DarkComet sample. So far, all techniques discussed have led to the execution of non-image backed code, and were therefore fairly straightforward to detect. Module overwriting, on the other hand, avoids this requirement, making it much more difficult to detect. This technique consists of mapping an unused module into a target process and then overwriting the module with its own payload. Flame was the first widely publicized malware family to use this technique. More recently, Careto and Odinaff malware families have used module overwriting techniques. Various techniques can be used to reliably detect module overwriting, which involves comparing memory to associated data on disk. Gargoyle is a proof of concept technique for memory resident malware that can evade detection from many security products. It accomplishes this feat by laying dormant with read-only page protections. It then periodically wakes up, using an asynchronous procedure call, and executes a ROP chain to mark its payload as executable before jumping to it. After the payload finishes executing, Gargoyle again masks its page permissions and goes back to sleep. One way to detect this attacker technique is to examine threads and user APCs for evidence of ROP chains. 
Detecting In-Memory Attacks Given the proliferation and accessibility of these techniques, security personnel must be vigilant for memory-based attacker techniques and proactively hunt for them on their networks. However, most products cannot generically detect in-memory attacks at scale, leaving defenders with an enormous gap in their ability to protect against these attacks. Endgame has done significant research to bring low-noise detection capabilities into our product for each method mentioned above. Given the immense size and impact of this detection gap, it is important to raise all boats, not just those of our customers. For this reason, we collaborated with Jared Atkinson on his powershell tool called Get-InjectedThreads, which implements a relatively low-noise method of detecting in memory threats. It scans active threads on the system for suspicious start addresses. Hunters leverage it to scan hosts in their networks and quickly identify many memory resident malware techniques. The script works by querying each active thread with the NtQueryInformationThread function to retrieve its start address. The start address is then queried with the VirtualQueryEx function to determine the associated section properties. If the memory region where the thread started is unbacked and executable (i.e. not image type and has execute bit set), then the thread is considered injected. The following screenshot shows a sample detection when run on a system infected with a 9002 RAT sample. The script will catch a variety of malware families leveraging the shellcode injection, reflective DLL, memory module, and some process hollowing techniques. However, it is no replacement for security products that comprehensively prevent in-memory attacks, such as Endgame. Enterprise In-Memory Detection at Scale Endgame has built detections for each of these techniques (and many more) into our enterprise security platform, offering best in market capabilities to locate in-memory threats. 
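The decision rule the script applies can be sketched in a few lines. This is a simplified, platform-independent illustration of the logic only: the `MemoryRegion` class is a stand-in for the data VirtualQueryEx returns, while the numeric constants are the standard winnt.h values.

```python
from dataclasses import dataclass

# Windows memory constants (winnt.h)
MEM_IMAGE = 0x1000000                       # region backed by an image file
EXECUTE_MASK = 0x10 | 0x20 | 0x40 | 0x80    # the PAGE_EXECUTE* protections

@dataclass
class MemoryRegion:
    base: int
    size: int
    region_type: int   # MEM_IMAGE / MEM_PRIVATE / MEM_MAPPED
    protect: int       # PAGE_* protection flags

def is_injected_thread(start_address: int, region: MemoryRegion) -> bool:
    """Apply the Get-InjectedThreads heuristic: a thread whose start
    address lies in an executable, non-image-backed region is suspect."""
    in_region = region.base <= start_address < region.base + region.size
    unbacked = region.region_type != MEM_IMAGE
    executable = bool(region.protect & EXECUTE_MASK)
    return in_region and unbacked and executable

# A private RWX region, as left behind by basic shellcode injection
rwx = MemoryRegion(base=0x1F0000, size=0x1000, region_type=0x20000, protect=0x40)
# An image-backed, execute-read region, like a normally loaded DLL
img = MemoryRegion(base=0x400000, size=0x10000, region_type=MEM_IMAGE, protect=0x20)
print(is_injected_thread(0x1F0400, rwx))  # True
print(is_injected_thread(0x400100, img))  # False
```

The real script performs the NtQueryInformationThread and VirtualQueryEx calls to populate these fields; the predicate itself is all the detection logic there is, which is why the technique scales so well across hosts.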
We do not simply rely on naïve approaches like monitoring well-known system call sequences for process injection, but efficiently analyze memory to find all known evasion capabilities. This provides our users with thread-level visibility on injected code, as well as sophisticated follow-on actions like examining the injected code and suspending only a malicious injected thread to remediate the threat. Our platform is effective both in stopping injection as it is happening in real time as well as locating already resident adversaries hiding in memory, locating threats across tens of thousands of hosts in seconds. Like any signatureless detection technique, false positives (FPs) are an important consideration. As we researched and implemented our technique-based preventions for each adversary technique described above, we initially encountered FPs at every step of the way. Handling these correctly in our product is of paramount importance. Most FPs are related to security software, Just-In-Time (JIT) compiled code, or DRM protected/packed applications. Security products sometimes inject code to some or all processes on the system to enhance their behavioral detection capabilities. The downside is if the product is sloppy in its methods, it can actually harm the security of the system and make hunting for real in memory threats more difficult. JIT code, another potential area for false positives, generates assembly code at runtime which lives in unbacked or floating memory regions. .NET or Java applications are a couple of examples which use JIT techniques. Fortunately, this type of code is easier to identify and filter than rogue security products. Lastly, applications packed or protected with Digital Rights Management (DRM) schemes should be kept in mind. These applications may decrypt or deobfuscate their core functionality in memory to deter debugging and reverse engineering. 
However, the same techniques are used by malware to evade detection and deter analysis by security practitioners. Through careful design decisions and extensive testing, we have managed to achieve very low false positive rates, allowing Endgame users to root out in-memory threats rapidly. Adversaries will continue to innovate new techniques to avoid detection and accomplish their objectives. Memory resident techniques are no exception, and have been a thorn in the side of endpoint security defenders for over a decade. Fortunately, by understanding the latest techniques, we can turn the tables and use this knowledge to develop new high fidelity detection methods. At Endgame, our comprehensive approach to these attacks has led us to a market leading position for fileless attack detection (adding to our other key technologies). For more on hunting for in-memory attacks, check out our slides from our SANS Threat Hunting and IR Summit presentation.
Each year when the Society for Neuroscience Meeting rolls around, all of the major journals devote extra space to neuroscience, publishing hot articles to attract the attention of the 30,000 plus attendees at the conference. This year is no exception, and one of the most important articles came out this past week in Nature with the heady title “Intracellular dynamics of hippocampal place cells during virtual navigation“. The paper, by Chris Harvey, Forrest Collman, Daniel Dombeck & Dave Tank is a tour de force investigation which combines new technology with insightful experimental manipulations and shows, according to an accompanying commentary by Doug Nitz, that “it is not impossible to examine brain correlates of higher cognitive processes and at the same time identify their underlying causes at the cellular level”. The detailed results are probably too technically specific for most people in the field of neuroethics, but this study highlights some of the reasons that hard-core neuroscientists view fMRI with disdain. Given the prominence that imaging the human brain has come to play in neuroethical discourse, I encourage readers to take a few moments to at least try to appreciate what the issues might be. First, let’s take a look at what Dave Tank’s group at Princeton have done. For over 35 years, neuroscientists have known that the firing rate of a subset of hippocampal pyramidal cells (the so-called place cells) change in predictable fashion as the animals navigate through a spatial environment. In particular, the firing rate of a place cell reflects both the animal’s present spatial position and the path the animal has taken to reach that position. Think about that for a second: the output of a single neuron reflects a highly nuanced and information rich algorithm. But it does not stop there. When multiple place cells are recorded at the same time, they exhibit a phenomenon called phase precession. 
Nitz’ commentary sums it up nicely: The firing order for a set of hippocampal place cells with partially overlapping place fields is found to match the animal’s physical trajectory corresponding to those fields. Phase precession stands as perhaps the most robust example of temporal coding of information in the mammalian brain. So we have a nuanced algorithm which operates at both the single cell level, and in even more remarkable fashion, across groups of neurones in the hippocampus. What has eluded neuroscientists until now is how the synaptic information is integrated to produce this phenomenon. Enter Dave Tank’s group who developed a technique whereby they could carry out intracellular recordings of place cells while the animals approximated natural motion by running on a large ball whose movement was immediately translated into the visual projection on the screen by the open source video game Quake 2. You can see a very cool demonstration in the video at the bottom of the post, but I want to return to Nitz’ effusive commentary: The broader promise of the technique lies in learning exactly how the myriad incoming synaptic potentials to any given neuron are integrated to yield spike-firing patterns that closely track specific thoughts, perceptions or actions. Let us now return to the issue of fMRI. I don’t want to rehash old arguments here about the problems with spatial and temporal resolution of fMRI, as they are probably known to most readers. What I do want to draw your attention to is the overall objective of fMRI: to be able to visualize the brain in action, and to derive from that information some insight into how the living brain does what it does. In the mouse experiments, the key observation was that phase precession was encoded by small changes in the membrane potential of place cells, and that these changes arose secondarily to synaptic inputs. 
In other words, the experiments provide an initial glimpse (and I really mean glimpse – these data are fantastic but they only hint at the kinds of remarkable insights that will come in the future) at what appear to be the real workings of a complex cognitive construct – encoding not just the location of the animal in space but the trajectory by which it arrived there – and this phenomenon is manifest at the subcellular level. I must say that I empathize with the despair that the smart and well-intentioned people who have put a great deal of honest work into developing thoughtful fMRI protocols might feel upon reading about these new data. For I think it is inevitable that this experimental result raises a new round of substantive questions about whether the BOLD signal can provide the type of insight that fMRI practitioners seek. My conclusion is that, barring some major technological advance, it does not.
The Ethereum Virtual Machine, also known as the EVM, is quite a nifty project a lot of people tend to overlook. The EVM is the computer that all full nodes in the Ethereum network agree to run. When there is code and data on the blockchain, consensus is needed to agree on what that code does. Everyone agrees on how the EVM should behave, and everyone has the same data on the blockchain, so everyone will compute the same answers. At the heart of the Ethereum protocol and its operation is the Ethereum Virtual Machine, or EVM for short. As you might guess from the name, it is a computation engine, not hugely dissimilar to the virtual machines of Microsoft's .NET framework or LLVM, or to interpreters of other bytecode-compiled programming languages, such as Java. Ethereum is made of a blockchain (an immutable database) and a decentralised world computer (the EVM), the virtual machine in which all of Ethereum's smart contracts run. The EVM is a simple yet powerful Turing-complete 256-bit virtual machine. Turing-complete means that, given the resources and memory, any program can be run. The EVM is a sandboxed virtual environment embedded within each complete Ethereum node that can execute contract bytecode, and every node in the Ethereum network runs it, which allows the nodes to agree on the result of each contract. Contracts are usually written in higher-level languages such as Solidity before being converted to EVM bytecode. This ensures that the contract code is isolated from the hosting computer. 
The Ethereum Virtual Machine (EVM) is a computation engine which acts like a decentralized computer with millions of executable projects. It is the bedrock of Ethereum's entire operating structure, and is considered to be the part of Ethereum that runs execution and smart contract deployment. The EVM might have a limited future: Ethereum is planning to migrate from its current incarnation through a process of sharding and side-chain use, until it moves over to Eth 2.0. As that process goes on, the Ethereum Virtual Machine will be replaced by Ewasm, the Ethereum WebAssembly. The machine itself operates on a few core structures. Data: a bytearray defining the input of the VM. Code: a bytearray defining the code being executed. Program Counter: an integer pointing to the position of the next instruction being executed. Jump Validity Map: a boolean list the same size as the code bytearray; it is generated at the beginning of program execution, and sets all valid JUMPDEST positions to true. If you've tried developing a smart contract on the Ethereum blockchain, or have been in the space for a while, you might have come across the term EVM, short for Ethereum Virtual Machine. The process of execution occurs in the Ethereum Virtual Machine (EVM). The EVM focuses on providing security while executing untrusted code on computers all over the world; to be more specific, it focuses on preventing denial-of-service attacks, which have become somewhat common in the cryptocurrency world. 
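Those structures are enough to sketch a toy interpreter. The following is a simplified, illustrative Python model, not the real EVM: it implements only PUSH, ADD, JUMP, JUMPDEST and STOP (using the genuine opcode numbers), but it shows how the jump validity map is built once before execution, skipping PUSH immediates, and is then consulted on every JUMP:

```python
def jumpdest_map(code: bytes) -> list:
    """Precompute valid JUMPDEST positions, skipping PUSH immediates,
    as the EVM does before execution begins."""
    valid = [False] * len(code)
    pc = 0
    while pc < len(code):
        op = code[pc]
        if op == 0x5B:                 # JUMPDEST
            valid[pc] = True
        if 0x60 <= op <= 0x7F:         # PUSH1..PUSH32 carry immediate bytes
            pc += op - 0x60 + 1
        pc += 1
    return valid

def run(code: bytes) -> list:
    """Execute a tiny EVM-like opcode subset and return the final stack."""
    valid, stack, pc = jumpdest_map(code), [], 0
    while pc < len(code):
        op = code[pc]
        if op == 0x00:                               # STOP
            break
        elif op == 0x01:                             # ADD, modulo 2**256
            stack.append((stack.pop() + stack.pop()) % 2**256)
        elif op == 0x56:                             # JUMP
            dest = stack.pop()
            if not valid[dest]:
                raise RuntimeError("bad jump destination")
            pc = dest
            continue
        elif op == 0x5B:                             # JUMPDEST is a no-op
            pass
        elif 0x60 <= op <= 0x7F:                     # PUSHn
            n = op - 0x60 + 1
            stack.append(int.from_bytes(code[pc + 1:pc + 1 + n], "big"))
            pc += n
        pc += 1
    return stack

# PUSH1 2, PUSH1 3, ADD, PUSH1 9, JUMP, INVALID (skipped), JUMPDEST, STOP
code = bytes([0x60, 0x02, 0x60, 0x03, 0x01, 0x60, 0x09, 0x56, 0xFE, 0x5B, 0x00])
print(run(code))  # [5]
```

Note how the byte at offset 8 (0xFE, the INVALID opcode) is never executed because the JUMP transfers control straight to the JUMPDEST at offset 9, and how a jump into the middle of a PUSH immediate would be rejected by the validity map.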
The Ethereum Virtual Machine can be thought of as quasi-Turing complete machine. Turing completeness refers to a system of data manipulation rules, and is named after Alan Turing, creator of the Turing machine.Programming languages and central processing units CPUs are good examples of Jul 01, 2021 Ethereums state is a large data structure which holds not only all accounts and balances, but a machine state, which can change from block to block according to a pre-defined set of rules, and which can execute arbitrary machine code. The specific rules of changing state from block to block are defined by the EVM. The Ethereum Virtual Machine At the heart of the Ethereum protocol and operation is the Ethereum Virtual Machine, or EVM for short. As you might guess from the name, it is a computation engine, not hugely dissimilar to the virtual machines of Microsofts .NET Framework, or interpreters of other bytecode-compiled programming languages such as ... Ethereum Virtual Machine and Ethereum Blockchain If you will use Ethereum, Apps doesnt need one entity for storing and controlling its data. To complete, Ethereum can be borrowed heavily from the protocol of Bitcoins and the design of the Blockchain, but pinch it for support apps beyond money. Can you mine Bitcoin on the Ethereum Virtual Machine 0. Code can be written on a smart contract to be deployed to the network, where nodes do the computational work for a fee and a given gas work. I was wondering if it would be theorectically possible, to load a bitcoin mining script on to a smart contract What would be a reason why this ... Jun 24, 2021 It features the Ethereum virtual machine, or EVM, capable of running smart contracts as a representation of financial agreements such as swaps, options contracts and coupon paying bonds. One can use Ethereum to fulfil employment contracts, as a These smart contracts are executed by the Turing-complete Ethereum Virtual Machine EVM, run by an international public network of nodes. 
The cryptocurrency of the Ethereum network is called ether. Ether serves two different functions Compensate the mining full nodes that power its Feb 09, 2021 Although there are many ways to optimize Ethereum virtual machine execution via just-in-time compilation, a basic implementation of Ethereum can be done in a few hundred lines of code. Blockchain and Mining. The Ethereum blockchain is in many ways similar to the Bitcoin blockchain, although it does have some differences. Mining Max is a professional mining company that provides consignment management service of mining machines owned by individual members. ... It provides a decentralized virtual machine, the Ethereum Virtual Machine EVM. This virtual machine can execute Turing-complete scripts using an international network of public nodes and a token called ... As of November 2020, around half of the Bitcoin miners are mining RSK tokens. ... Despite TRON sharing most of its source code from Ethereum, it features its own virtual machine called the Tron Virtual Machine TVM that is fully compatible with high-level languages like Solidity. TRON features a unique value proposition with elements ... Feb 07, 2020 The Ethereum Virtual Machine EVM is a Turing comprehensive software. It delivers the manner of producing blockchain applications much more comfortable and capable than has been made earlier. Operating on the Ethereum network, this is acknowledged as a core innovation of the Ethereum.
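The execution-context pieces described above (code, program counter, jump validity map, and gas) can be illustrated with a toy stack machine. This is a minimal sketch only: the opcodes and their numeric values are invented for the example and do not correspond to real EVM opcodes, and the flat gas cost of 1 per instruction stands in for the real EVM's per-opcode pricing.

```python
# Toy stack machine in the spirit of the EVM execution context described
# above. Opcodes here are INVENTED for the sketch, not real EVM opcodes.

PUSH, ADD, JUMP, JUMPDEST, STOP = range(5)

def jump_validity_map(code):
    """Boolean list the size of the code, True at every valid JUMPDEST.

    Built once before execution, as described above; the scan skips
    PUSH immediates so data bytes can't masquerade as jump targets."""
    valid = [False] * len(code)
    pc = 0
    while pc < len(code):
        if code[pc] == JUMPDEST:
            valid[pc] = True
        pc += 2 if code[pc] == PUSH else 1
    return valid

def run(code, gas=1000):
    """Execute `code`, returning the final stack."""
    valid = jump_validity_map(code)
    stack, pc = [], 0
    while pc < len(code):
        gas -= 1                        # flat cost of 1 per instruction
        if gas < 0:                     # (the real EVM prices each opcode)
            raise RuntimeError("out of gas")
        op = code[pc]
        if op == PUSH:                  # push the immediate operand
            stack.append(code[pc + 1])
            pc += 2
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        elif op == JUMP:                # target comes from the stack
            target = stack.pop()
            if not (0 <= target < len(code) and valid[target]):
                raise RuntimeError("invalid jump target")
            pc = target
        elif op == JUMPDEST:            # no-op marker for jump targets
            pc += 1
        else:                           # STOP
            break
    return stack

# 2 + 3, then jump to the JUMPDEST at offset 8 and stop
print(run([PUSH, 2, PUSH, 3, ADD, PUSH, 8, JUMP, JUMPDEST, STOP]))  # -> [5]
```

Note how the validity map is what stops a contract from jumping into the middle of a PUSH operand, which is exactly the role JUMPDEST plays in the real EVM.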
OPCFW_CODE
One month ago I passed the TS: Web Applications Development with Microsoft .NET Framework 4 exam. In this blog, I want to help all those who are preparing, or want to prepare, for this certification exam by giving some useful learning tips. Here is my list of tips:
- Plan and prepare in advance, don't hurry
The first and most important tip is to start planning and preparing for the exam well before you get into the "hot" time, when you are surrounded by book(s), examples, resources and everything else you may use as study material. I started planning to go for Microsoft Certification exams much earlier than I started preparing intensively - somewhere in April 2011, actually. The very first things I did were reading about others' experiences and working with ASP.NET 4.0, using every moment to gain new experience with the new features in ASP.NET 4.
- Experience is very important
Experience with the latest versions of the .NET Framework (3.5 and 4.0), ASP.NET 4 and Visual Studio .NET 2010 is very important for the exam. There are questions designed explicitly to examine your knowledge of .NET Framework 4 and VS.NET 2010, and the best way to learn these things is through experience. Even if you have good experience with previous .NET Framework versions, you will still need at least basic experience with .NET Framework 4 for the exam.
- Start reading the book/e-book "MCTS Self-Paced Training Kit (Exam 70-515): Web Applications Development with Microsoft .NET Framework 4" at least 1 month in advance; 3 months is recommended
I must say that I found the book to be the best resource for preparing for this exam. It covers all the chapters and exam objectives you need to know to pass the exam. Moreover, the book is nicely structured and is easy to read and follow.
[Book review]
- Follow the tips in the MCTS Self-Paced Training Kit book
Another good thing I noticed is the TIPS included in the Self-Paced Training Kit book. I found these tips very useful; sometimes they help you focus on what is really important (or unimportant) for the exam.
- Check the Overview, Skills Measured and Preparation Materials on the Exam 70-515 Microsoft Learning page
Read all the sections carefully.
- If you are not experienced with ASP.NET MVC, learn the basics (how it works), routing and other important concepts
After reading the book, I found that it is pretty much enough to know the basics of ASP.NET MVC, Routing, Areas and the other main parts of the ASP.NET MVC Framework. Unlike the ASP.NET WebForms part, where you may get very detailed questions on some very specific features, if you know the basics of ASP.NET MVC and have really grasped how it works, I felt that was enough to answer the MVC-based questions correctly. Still, this can't be 100% valid, because you never know what type of questions you may get.
- Practice a lot
- Visit an exam-related course in a training centre or classroom training
If you think all the previous advice can't get you prepared for this exam, then you will probably need to attend training for it, which may take from several days to a few months (if going to a training centre) to complete. An MCT (Microsoft Certified Trainer) will teach you all the topics related to this exam, and you will have enough time to get prepared. I don't know the prices of such courses in training centres in your country, but I think the courses are pretty good value for their standard prices.
I hope these tips will help you get better prepared for the exam.
Some Useful Links and Resources:
- Microsoft Learning Site
- Objectives List with Available Resources for each, by Niall Merrigan
- Book Review: MCTS 70-515 Self-Paced Training Kit
If you have any questions, use the comments below. I must say that for anyone interested in Web Development, preparing for this exam will be very interesting and fun.
OPCFW_CODE
Top 10 Hardware Hacking Tools to Identify Vulnerabilities in IoT Devices [Updated 2023]
According to one study, IoT devices are the most vulnerable of all IT-enabled devices. As more and more IoT devices are deployed in homes and workspaces, security threats are increasing exponentially. IoT devices are made up of three components: the embedded system, the application/firmware installed on the device, and radio communication. All three are vulnerable to different attack vectors, and the security features employed in these devices are still not well-proven. This article is a brief overview of the top hardware hacking tools for identifying vulnerabilities in IoT devices.
(1) JTAGulator
JTAGulator is an open-source hardware hacking tool used to identify JTAG/IEEE 1149.1, ARM SWD, and UART/asynchronous serial interfaces. Using JTAGulator you can:
- extract program code or data,
- modify memory contents,
- affect device operation.
(2) Zigbee Sniffer
Zigbee is an IoT protocol specified for low power consumption and low cost, designed for communication in wireless IoT networks. A Zigbee sniffer is computer hardware that helps in intercepting and logging data in wireless networks.
(3) ChipWhisperer
ChipWhisperer is an open-source and low-cost solution for identifying vulnerabilities in embedded systems. This hardware helps in conducting side-channel power analysis and fault injection attacks against embedded systems.
(4) Bus Pirate
The Bus Pirate (from Dangerous Prototypes) is a troubleshooting tool that connects a PC to any IoT or embedded device (including chipsets) over different protocols such as 1-wire, 2-wire, 3-wire, I2C, JTAG, UART, SPI, and the HD44780 LCD protocol, all at voltages from 0-5.5 VDC. If you purchase a Bus Pirate, it will come with jumper wires. To work with the Bus Pirate, you need a terminal that supports serial connections; on Windows machines, a serial terminal such as PuTTY or Tera Term can be used.
Below are the main components of the Bus Pirate:
- PIC24FJ64 processor
- an FT232RL USB-to-serial chip
The Bus Pirate can be used for the following purposes:
- sniffing traffic on the bus
- measuring frequencies from 1 Hz to 40 MHz
- baud detection
- transparent passthrough mode
(5) Ubertooth One
Ubertooth One is an open-source 2.4 GHz wireless development platform that comes as a USB plug with an antenna. This tool may be used with various wireless monitoring tools like Kismet.
(6) HackRF
HackRF is open-source hardware developed for researchers experimenting with software-defined radio. This device is able to transmit or receive radio signals from 1 MHz to 6 GHz.
(7) Raspberry Pi 3
The Raspberry Pi is a small-sized computer that lets you write your own programs and create circuits and physical devices. It can perform almost all the functions you do with a desktop, laptop, or smartphone.
(8) Wifi Pineapple
The Wifi Pineapple can be used for automating WiFi auditing and producing nice vulnerability assessment reports. Unfortunately, this device is also used by cybercriminals to steal sensitive data on public WiFi networks.
(9) Arduino
Arduino is an open-source electronics platform based on easy-to-use hardware/software; it runs one program at a time.
(10) Proxmark3 Kit
The Proxmark3 Kit is an RFID tool used for sniffing, reading, and cloning RF tags. It can be used as a multi-purpose hardware tool for radio-frequency identification (RFID) security analysis, research, and development.
(1) Shark Jack
Shark Jack is a hardware hacking tool for network auditing. It comes with Nmap, which makes reconnaissance easy, and it is able to execute payloads in seconds. The tool supports the development of bash payloads that help automate attacks.
(2) USB to TTL
Generally known as FTDI devices; here, TTL stands for Transistor-Transistor Logic.
The USB side of the device may be connected to a computer or laptop, and the TTL side to a microcontroller or any other device that supports TTL logic. The device simply converts USB signals into TTL signals and vice versa. To use it, a driver needs to be installed on the computer or laptop.
(3) Bus Blaster
Bus Blaster is a hardware hacking tool used for JTAG debugging. It supports ARM processors, FPGAs, CPLDs, flash, etc.
(4) nRF Sniffer
A hardware hacking tool to debug Bluetooth devices; it allows you to sniff the Bluetooth data exchanged between two devices. To set up the environment, install the sniffer software from the official website on your desktop. After installation, you can see all Bluetooth interfaces on the screen, and you can then use Wireshark to inspect the sniffed Bluetooth LE packets of the selected interface.
This hardware hacking tool may be a replacement for the Bus Pirate. It supports low-level interfaces such as JTAG, I2C, SPI, UART, and GPIO.
I hope you like this blog. Do comment if I missed any tool. Subscribe to receive more such article updates in your email. If you have any questions, feel free to ask in the comments section below. Nothing gives me greater joy than helping my readers!
Disclaimer: This tutorial is for educational purposes only. Individuals are solely responsible for any illegal act.
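The baud-detection feature listed for the Bus Pirate above rests on a simple idea: on a UART line, the shortest interval between signal edges approximates one bit time, so its reciprocal approximates the baud rate. A minimal sketch of that idea, using made-up edge timestamps rather than anything captured from real hardware:

```python
# Toy illustration of the baud-detection idea: the narrowest pulse on a
# UART line is roughly one bit wide, so 1 / shortest_gap ~ baud rate.
# The edge timestamps below are invented sample data for the example.

COMMON_BAUDS = [9600, 19200, 38400, 57600, 115200]

def estimate_baud(edge_times):
    """Estimate the baud rate from a sorted list of edge timestamps (s)."""
    shortest = min(b - a for a, b in zip(edge_times, edge_times[1:]))
    raw = 1.0 / shortest
    # snap to the nearest standard rate to absorb measurement jitter
    return min(COMMON_BAUDS, key=lambda b: abs(b - raw))

# Edges of a waveform whose narrowest pulse is one 9600-baud bit wide.
bit = 1 / 9600
edges = [0.0, bit, 3 * bit, 4 * bit, 7 * bit]
print(estimate_baud(edges))  # -> 9600
```

Real tools do more (filtering glitches, sampling many pulses), but the snap-to-nearest-standard-rate step is why autodetection works even with imprecise timing.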
OPCFW_CODE
Just in case somebody asks, these are the three ways to install Drupal and the reasons they don't work for me. I'm writing this in case I need to brief someone to help.
- A one-click install from the control panel of a server.
(a) Doesn't install the Kickstart version of the Drupal Commerce shopping cart. This is a bit like a comfort blanket - I think I've learned to do without it - but as the program doesn't work without a lot of tweaking unless you have this installation profile, I'd like to be able to start from the Kickstart version every time I need to.
(b) Puts too much strain on the server if I need to install extra modules. The program is a bit too big to do this on a middling-sized server. So I would have to download it onto my hard disc somehow, do the updates and fixes, and then upload it to the server again. This is in fact what I want to do, but starting on the hard disc. It comes to the same thing; I still need to back up everything to my hard disc, tinker there, and upload again.
- Unpack Drupal on my hard disc, follow rather intricate instructions for file transfer to the server, and get the database to recognise all these files. With luck, when I upload I get a welcome screen asking questions like "is your database called localhost?" and the thing installs itself. It's a bit of a black art the first few times but I think I've got the hang of it. Unfortunately, Drupal 6 could almost work like this, but Drupal 7 is just too big, and if I stick to older releases I will be missing out on a lot of shopping cart modules. When I try to install, the server just says it's out of memory and the helpdesk says this can't be changed.
- Unpack a server onto my hard disc - a set of all the programs a server needs - and press the "import" button to import a version of Drupal, which could be Drupal Commerce Kickstart for the first stage of a shopping cart. This is the only system that could work, and sometimes it does.
I have managed once to install Drupal Commerce Kickstart onto a server program called Acquia Drupal Desktop, and from there managed a slightly laborious way of importing the data from its database to the one on my proper server that the world can see. Unfortunately my installation doesn't work; it produces error messages in every other thing it does. I guess this is because I should have updated or uploaded a load of files that go with Drupal's database onto the server, and I just deleted them thinking that the ones already there might do. Now I can't repeat the actions which worked in the past for this step three.
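For what it's worth, the "out of memory" error in step two is usually PHP's `memory_limit` setting rather than a hard server limit. On hosts that allow per-directory overrides (a `php.ini` or `.user.ini` in the web root, or `ini_set()` in Drupal's `settings.php`), it can often be raised; the value below is only an example, not a requirement:

```ini
; Example only: raising PHP's memory limit for a Drupal 7 install,
; on hosts that honour per-directory overrides. 256M is a typical
; value for Drupal 7 with extra modules.
memory_limit = 256M
```

Whether this works depends entirely on the hosting plan; some shared hosts cap the limit regardless of what the site's configuration asks for.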
OPCFW_CODE
Zerynth Device Manager Web App
Go to https://zdm.zerynth.com and log in. From here you can get an overview of your workspaces, search them and add new ones. Each user has a "Default Workspace" configured with a "Default Fleet" that can be used as a playground for starting with the ZDM. You can easily add more workspaces and fleets if needed. Keep your projects well organized! We suggest using a new workspace for each new project.
Add a new Workspace
If you want to create a new workspace, click on the "New Workspace" button, choose the name and optionally add a description and fleets. If you don't create any fleets, a "Default" one will be added automatically so that you are ready to add new devices.
Inspect a Workspace
To open a workspace, just click on it from the Home Page. The workspace page allows you to manage Devices, Fleets, Data, Gates, Conditions, Alerts and some other features.
In the "Configuration" tab of a workspace it's possible to generate and manage API keys for the workspace. You can use API keys to call the ZDM REST API. To create a new API key, click on the "New API Key" button, then copy and save the generated key. You can enable and disable your keys using the switch in the API key table.
The "Devices" tab allows you to create and manage your ZDM devices. Devices can be filtered by fleet and ordered by creation date. The devices table gives you information such as name, id, fleet, and device activity and state. You can also use this table to delete your unused devices. To add a new device you need to:
- Create a device
You can now connect your device to the ZDM using the zdevice.json file: download it and move it inside your project's directory, then uplink the firmware to your device. You can open the device page by clicking on a device name. At the top of the page you will see the device's information, its status, and the tags it publishes data to.
Use this page to perform operations such as FOTA updates, renaming the device, sending jobs to the device, and generating new device credentials.
Last hour activity and data console
At the bottom of the device page, the "Last hour activity" chart and the Data Console allow you to monitor the device activity. By clicking the checkboxes on the left you can navigate the published data; it will be presented at the bottom in the "Data Inspector".
The FOTA (Firmware Over The Air) update procedure can be used to update a device's firmware remotely. In order to perform the FOTA of a device you need to install a "FOTA Enabled" Zerynth OS.
The first step to start a FOTA update on your devices is to prepare and upload a firmware to the ZDM. To upload your firmware, open Zerynth Studio (note: Zerynth Studio r2.6.0 or higher is required). Open the project you want to send to the device and click the Zerynth Device Manager icon on the left bar. Now click the ZDM FOTA Prepare button. Select the ZDM device you want to update, choose a firmware version identifier (you can't reuse a previously used version identifier), and click Prepare. When the firmware preparation is completed, if you want to go ahead with the FOTA procedure, click Open ZDM GUI. You will be redirected to the page of the selected ZDM device. Just click FOTA, select the firmware you just uploaded and click Start. The FOTA procedure will be scheduled on your device. Refresh the device Status form and you will see the status of the FOTA procedure.
To send jobs you can use the devices table or the single device page. In the devices page, select a device and click the "Jobs" button. You will see the list of jobs the selected device supports. Jobs can have arguments, specified as a JSON object.
On the fleets page you can view a list of your fleets, with the ability to add new ones.
The "Data" tab allows you to see real-time data published by all the devices of the workspace, ordered by date.
The data page refreshes every 10 seconds, but you can also pause it or change the refresh interval using the input at the bottom. As on the device page, the data inspector allows you to view data in a JSON-like format. You can download your devices' data by clicking on the "Download data" button, which allows you to configure your download preferences, including format and time range.
Gates are the interfaces the ZDM uses to forward your devices' data to external services. There are 3 different kinds of gates:
- Webhook gates
- Export gates
- Ubidots gates
Clicking on the "Gate" tab, you will see a list of your gates with the possibility to filter them by type. To create a new gate, click on the "New Gate" button and choose the gate type.
A Webhook Gate periodically forwards your devices' data via HTTP POST requests. It requires a name, the URL for the request and the time interval (in seconds). You can also add optional fields such as "Fleets" and "Tags" to filter the data, or an authorization token if needed.
Ubidots Gates can be used to send your devices' data to your Ubidots dashboard in real time. To create a Ubidots Gate you need to specify a name, your Ubidots account type, your Ubidots device label, the time interval (in seconds) between requests and your Ubidots API token. You can also add optional tags and fleets to filter data.
Export Gates can be used to periodically export retained data and receive a download link via email. To add a new Export Gate, you have to type a name, your e-mail address, the export format and the frequency. It's possible to add optional data filters for tags and fleets.
Clicking on the "Alerts" tab you will see a two-section page:
- a table containing all the alerts you have created
- the condition console
On this page you can see real-time conditions sent by your devices and handle alerts. If you create an alert, you will be notified when devices open and close conditions on the specific tags declared at the moment of the alert's creation.
To create a new alert, click on the "New Alert" button. You have to specify a name, the condition tags you want to be notified about, the threshold a condition has to reach before you are notified, and the notification configuration.
You can see details about ZDM usage and traffic consumption simply by clicking on the user avatar at the top right, then on "Usage". You will see the percentage of traffic consumed and your data storage details (data retention, the date of the oldest data). In the table below, there's a list of consumption details grouped by workspace.
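The API keys generated in the workspace "Configuration" tab are meant for calling the ZDM REST API. As a sketch of how such a key might be used from Python: the base URL, the endpoint path, and the "Bearer" authorization scheme below are assumptions made for the example; consult the official ZDM REST API reference for the real values.

```python
# Sketch of calling the ZDM REST API with a workspace API key, as
# described in the "Configuration" tab section. The base URL, endpoint
# path, and "Bearer" auth scheme are ASSUMPTIONS for illustration only.
import urllib.request

ZDM_BASE = "https://api.zdm.zerynth.com/v1"      # assumed base URL

def list_devices_request(workspace_id, api_key):
    """Build (without sending) an authenticated device-listing request."""
    url = f"{ZDM_BASE}/workspace/{workspace_id}/devices"   # assumed path
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {api_key}")   # assumed scheme
    return req

req = list_devices_request("wks-123", "my-api-key")
# urllib.request.urlopen(req) would then perform the actual call
```

The point of the sketch is only the shape of the call: a per-workspace endpoint plus the API key in a request header, matching how the keys are scoped to a workspace in the web app.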
OPCFW_CODE
Most helpful positive review
281 of 292 people found the following review helpful
Works fine, Bluetooth connectivity problem solved! on November 22, 2013
I researched this on Amazon but finally purchased it in a retail store (it was about $5.00 less there!). I have a new Lenovo Ultrabook running Windows 8.1, and it came with built-in Bluetooth. I got tired of plugging a wireless adaptor in and out, so I thought I'd give a real Bluetooth mouse a try. It connects just fine and works on several different surfaces without difficulty (it uses Microsoft's BlueTrack technology for better tracking). Others have noted that with Windows 8 and 8.1, the Bluetooth connectivity would go in and out. I've had that same experience, but the problem is not with the mouse but with how Windows Bluetooth is configured by default. There is a setting in Device Manager, under Bluetooth, under the Intel hardware adapter, on the Power Management tab, that allows for a correction of the problem. Simply uncheck the box labeled "Allow the computer to turn off this device to save power" and your intermittent connection problems will go away. It might cost you a bit of battery time, but really not very much, and it's worth it. If you don't make this change the mouse is almost useless, because if it sits idle for a few minutes, Windows will turn off Bluetooth and then: no more mouse. The mouse has a power switch on the bottom so you can save battery power when it's not in use. The side touch pad works OK, but I don't think it's really an improvement on a back button. I reprogrammed it so a thumb swipe down acts as a back button in Windows and, more importantly, in a web browser. Clicking the touch pad brings up the Start screen, which is not really all that useful (at least it hasn't been yet), and that does not appear to be programmable. I'm still accustomed to clicking my thumb to move back, so maybe I'll get used to it.
The left and right mouse buttons on top work as expected and are perhaps slightly quieter than on most mice. The scroll wheel works well, with both horizontal and vertical scrolling. It is also clickable, which is very convenient in a web browser (wheel-click a link and it opens in a new, separate tab). Some have asked, so: the wheel is smooth, but its motion has a tactile click. I'd give this mouse 5 stars if it were slightly smaller. I have medium-sized hands, and a smaller mouse feels a little more comfortable to me, but this, too, is simply what I'm used to. I prefer Microsoft mice to other brands, including Logitech. This one works just fine, and it's saving my sanity given the quirky nature of the touchpad on my Lenovo when trying to touch type on the keyboard. Anyway, for those of you reluctant to use this mouse due to the connection problems: there is a simple fix, and once that's done the mouse works great with no disconnect problems.
OPCFW_CODE
There's an article on GarlicSim that caught my attention:
http://blog.garlicsim.org/post/2840398276/the-miserable-programmer-paradox
And it just doesn't make much sense. Perhaps it's just an oversimplification of how things actually work, or it's a sign of a programmer with limited experience, or maybe I've been lucky. More likely, my metrics are different. So... what makes a programmer happy? For me, and for most of the good programmers I've worked with, what makes a programmer happy is accomplishments. The code compiles, then passes unit tests, then integrates with everyone else's code, then passes integration, functional, and acceptance tests, demonstrates well, and makes the customer happy. Good programmers also take pride in their work, so there are internal metrics that bring them happiness as well -- the code is well-documented, well-structured, organized, readable, maintainable, garners admiration from fellow coworkers, and/or is adopted by coworkers and associates; junior programmers become respected senior programmers under one's mentoring, bad programmers depart in shame, etc. Contrariwise, miserable programmers have few events that make them happy. It's really a flux thing -- if you measure happiness-events per day, you're going to be a happy programmer; if you measure days, weeks, or months between happiness-events, you're going to be a miserable programmer. So what's the crux of the argument on GarlicSim? Interrupting the train of thought is bad. Not interrupting the train of thought is good. This is not a measure of happiness or miserableness. This is a measure of OCD, or possibly Vingean focus. So the GarlicSim article isn't *entirely* wrong... it's just missed the basic mechanism. Interruptions slow down work, which slows the rate of happiness-events, which increases misery.
Using bad tools means that more of the programmer's time is spent fighting the tools instead of getting those incremental rewards, which means when you find programmers that put up with crappy tools, they tend to be unhappy programmers. (A programmer who has a tool that's too clever will be happy, but might make a lot of other programmers unhappy. Who cares if you've figured out how to make emacs use eliza to 'help' you write documentation?) Somewhere in this division of labor between "good technologies" and "bad technologies", "work" has been lost. Solving the problem, expressing the solution, and demonstrating the result is the work that the technologies facilitate or hinder. To put things like "go to a specific line number" in the "good technologies" and everything else in "bad technology" seems to miss the point -- work still needs to be done, and since there's no way to reduce that part to a shortcut, 'work' ends up in the 'bad technologies' slice of the pie. And yet there are people who think that programming would be a lot more fun if it wasn't for those pesky requirements. Computers should eliminate the tedious, the boring, and the error-prone tasks. Finding the right tradeoff is a matter of personal preference, the problem at hand, the environment you're working in, and the constraints you're working under. And, of course, being the kind of person who enjoys telling computers what to do.
OPCFW_CODE
...and open to communication. Your primary responsibility is to cleanly and efficiently build a web application, with technologies such as React, Bootstrap, Node.js, REST API, jQuery, HTML5, etc., based on requirements and design specifications from the company. If you have done any project like this, please send me your project URLs. Looking to hear your...

...activities will include: *Running personal errands, supervision and monitoring. *Scheduling programmers, flights and keeping me up to date with them. *Making regular contacts and drop-offs on my behalf. *Handling and monitoring some of my financial activities. This post is USA-based; have your resume ready. I look forward to hearing from each applicant.

We are looking for dedicated, experienced, skilled and out-of-the-box-thinking programmers! REQUIRED: Coding + database from scratch. Minimum experience of 8 years. Have a proper portfolio ready. Must be fluent in English. Must be open minded; copying keywords into your proposal leads to drop-out.

...the edit function for this code profile you can see the QR Code. If you scan the QR code you can see that it does not work. 2nd example... click on 'Code Balance' in the drop-down menu under 'My Account'. You will see there is one QR Code available to be allocated for a profile. Select the 'URL Link' function to link the QR Code to a new URL. If you th...

I am looking for a fabricjs developer/expert who can solve the following issue. Crop picture doesn't seem to work. The crop should allow you to drop an image onto the canvas and then select a section of that image and crop the rest out. Also, there seems to be an issue adding multiple images to the canvas: if you add a second image the whole canvas becomes...

I want somebody to help optimize my website and make it fast-loading:
- fix jQuery issues or any JavaScript issues that make the site load slowly
- recommend plugins to use and to get rid of
- properly implement https
- make site speed OK when checked with GTmetrix and PageSpeed Insights
- losslessly resize images
- optimize the database

I need someone to build an Admin panel for an...Settings. The CSS should be written in Bootstrap 4.x and must follow the standard and best practices. Because this design will be converted to ReactJS components, jQuery must be avoided. The provided images are for reference only; the candidate must come up with his/her own creative ideas.

...team has 2 developers, though we are planning to expand the team with 2-3 more developers. We are developing the website with Symfony, so you must be an expert in Symfony and jQuery. Github branching and JIRA are essential. Of course, I expect that you are an expert in project architecture as you bid on this job. Planning to finish the project...

...they are creating an order on behalf of a customer (like a phone order), the manager can visit a page called "Phone Orders" which allows them to select the customer from a drop-down menu, add products to their order and place the order on the customer's behalf (the phone orders plugin is an IgniteWoo product designed specifically for this purpose). The main...

...by signing up via social media or a personal email address. Allow the flyers to display as a small thumbnail only. * Add minor categories (i.e. buy&sell, directory, etc.) to the drop-down menu on the mobile platform. * Place social media links at the bottom of every article page and to the sides of non-article pages. * The social media links above the articles don't show...

...to execute app tests and scripts on a connected Android device. The tool/solution interface must be friendly, easily deployed on a web server (Apache) and have drag-and-drop capability for scripting and scenarios. The automation kernel must be developed on top of an open-source framework such as monkeyrunner, Appium or an advanced framework. The preferred...

I want to build and market a Shopify store. Need someone super experienced in setting up a drop-shipping business. Should be able to research and tell me the first five products that I can sell. Should be able to do everything needed, from identifying products to sell and sellers of those products, to setting up the store and initial marketing, etc. Should have...

Scope: I would like to modify my eCom Angular backend application to use a multiple-select drop-down option instead of single select. It has to be reflected in view product, edit product and add product as well. In the front end this change has to be reflected as shown in the below video [login to view URL]

...(row / col-* / p-* / m-*) - table classes striped / hover / responsive / bordered. 3 - when you click start mock draft -> opens the modal which is in the screenshot. 4 - modal drop-down - field values [login to view URL] 5 - modal needs to have everything, side form and submit button with...

I need to develop a Facebook Messenger bot with an admin dashboard to configure it. 1) A simple admin dashboard needs to be developed.

...application. If this can be done, I would like to go for a more complex application. But I need to see a demo version first with these features. If anyone is interested, please feel free to drop me a message and we can talk.

I'm a content writer cum blogger looking for an SEO expert for my blog website [login to view URL]. I need to improve my website's Google ranking and traffic. The previous SEO contractor built lots of low-quality links, resulting in a drop in keyword ranking. Kindly mention your deliverables, as per the timeframe, in your cover letter.

...a drag-and-drop admin panel is a piece of cake by now for developers; it was tough work 15-20 years back. Now we need one as mentioned in the doc: we need forms which will generate JSON and control the behaviour of the UI, and it must create the same inner page that exists there, plus custom inner pages based on a design, by drag and drop with restructuring...

...Verify users' "phone number" and "email" during registration. Provide a "contact us" link and form that would send SMS and email to the administrator. The form will have a static drop-down list of problems and a text area to describe the issue. Please let me know if you have any questions. Thanks. Technology expected: AWS "admin api" + Node Express service. I...

Hello, I am looking to build a website like this site: https://www...WAnN__vcWwNnZFMrBqBxa1tIKOl8A_uUkV_g0UBY7E5FH_QXOcaAvOFEALw_wcB but with 5 images of the pet, and it would be for various pets, not just cats and dogs, so we would have a drop-down menu. Do you know of an existing plugin that can be modified? Can you give me a cost please. Regards, Eric

...see worst conflict. The builder in them removed the WordPress editor that lets you add all things needed under a new listing. Plus [login to view URL] some general appearance issues since I tried to fix it. Also need the pages' appearance standardised, maybe with a template; same for posts. Have a look and tell [login to view URL] you think wrong and how you'd fix the conflict. That way...

...captured and in the correct format - Save and retrieve the information to/from MySQL using Ajax and PHP - Update information from the form to the MySQL DB - Utilize jQuery Validator or similar to ensure all the required form fields are validated. This is part of a larger project that, upon successful and satisfactory completion, will have more...

...template to add a small feature to our MySQL/PHP/Twig/jQuery website. We are collecting photographs from home care providers. Basically we need an overview screen for the main admin, summarizing how many photographs each care provider unit sent in, with a grand total. This will require setting up another menu item in the [login to view URL], adding the SQL query at...

IT support ticketing system integrated with a toll-free number with a call menu. Users can call the toll-free number, speak to the automated call menu, generate a ticket and accept payments.

...of male and female sexuality, who has written some content before and is exceptional in English. The role is available to start immediately, so if you are interested then please drop a message answering the following questions: - how many years of experience do you have - what was your bachelor degree and (if you have one) masters degree in - where are you...

...Upload files to AWS S3 using AWS's chunk function directly from the browser (not storing first on a server and then transferring the file to AWS S3) - Display a progress bar with jQuery during the upload process - Hide AWS credentials from the source code for security reasons (for example, store them in PHP). I'm using PHP on my website so I need this server-side.

Hi, this is my first time on this site and hence you can see that I do not have any ratings/testimonials to show you. I can assure you that the complete project will be completed to your 100% satisfaction. I am interested in your project and I would like to know more details. Kindly drop a message so we can discuss. Thank you.

...need an eCommerce store that sells: Canvas Prints | Acrylic Prints | Metal Prints | Photo Albums | Photographs - I need the site to be built for WordPress using a drag-and-drop site builder - SEO optimized - Responsive web design. Don't ask me what I want if your bid already has a few templates for selection, unless you don't mind adding in a few extras.

I use the WebARX plugin for security and it has an option to hide wp admin/login, and when I choose that option the Nextend Social Login does not work, even though I have changed the redirect URL in Facebook Developer and Google.
I use listingpro theme [login to view URL], you can click join now and see the 2 social icons on popup. I am web designer myself so I have tried all the small tweaks already, on... We are looking for a software developer / programmer to create payment gateway interfa...your software has to be up to date with all MT verbiage/Codes and BIC code list it's has to be turn on key to start Please do let us know your expertise If possible please drop demo version to understand your understanding with what we requested. Regards Daniel ...BEFORE YOU AND CHECK THE EXCEL. 1. please see the excel before u bid & DON'T ASK FOR MY BUDGET , QUOTE YOUR BEST. 2. admin can uploads files / delete files and folders via drag drop from back end. 3. users can signup and access one folder by choosing while registration. (No activation needed) 4. for users to access more folders they can request admin via I need a navigational app that enables a user to select locations on a map, and add data to the predesignated pins, or add a new pin with a variety of drop down info. There also needs to be a rewards system setup, for people who use the app often. I need front end + back end. ... - Name, physical address, email address, phone number, requested pick up and delivery schedule: (options are: 1. pick up Monday/Drop off Thursday, 2. Pick up Tuesday/Drop off Friday, 3. Pick up Wednesday/Drop off Saturday) - Credit Card information - the website needs to be configured for e-commerce to allow customer to pay online for change a java text box to a drop down button add a table to a database create a page to add or view the table contents (can be outside the original script) populate the drop down button with data from a table in a database
OPCFW_CODE
Convert Macros to Constexpr

Visual Studio 2017 version 15.8 is currently available in preview. Today, Preview 3 has been released, and it comes with several features that improve the developer productivity experience. One key theme in 15.8 is code modernization, and macros are a key target for that. In 15.8 Preview 1, we announced the ability to expand macros in Quick Info tooltips, and now, for Preview 3, we are happy to announce a way to convert them to modern C++ constexpr expressions.

The new preview includes a quick fix, accessible from the editor window, that identifies macros that can be converted to constexpr and offers to perform the conversion as a way to clean up and modernize your code. This feature (like editor features in general) is configurable and can be turned on/off as needed.

The macro -> constexpr Quick Fix

Right away, when viewing your code in the editor, you may notice some "…" on #define directives, under certain macros. These "…" are called Suggestions, a separate category from errors (red squiggles, for the most severe issues) and warnings (green squiggles, for moderately severe issues). A Suggestion covers low-severity code issues.

Opening the Quick Actions & Refactorings menu (with Alt + Enter or via the right-click menu) will show a new "Convert macro to constexpr" option. When the option is selected, a preview window appears, summarizing the intended change. Once the change is applied, the expression is converted to constexpr in the code editor.

The feature works for constants, and it also works for basic expressions using function-like macros. You may notice that the macro MAX above does not have the "…" under it. For function-like macros, we do not run a full preprocess to guarantee that the attempted conversion will be successful, in order to maintain stable IDE performance. Since we only want to show the Suggestion when we can guarantee that the conversion makes sense, we elect not to show the "…" indicator.
However, you can still find the option to convert in the lightbulb menu, and we then fully process the macro when you click Apply in the preview window. In this case, the macro is converted to a template. Basically, you can always try to convert a macro to constexpr yourself; just don't expect it to always work if you do not see a "…". Not all macros are actually constexpr-able, since there is a wide range of macros that exhibit all sorts of behaviors unrelated to constants and expressions.

Tools > Options Configuration

You can configure the Macro -> constexpr feature in Tools > Options > Text Editor > C/C++ > View > Macros Convertible to constexpr. There, you can choose whether to display instances of it as Suggestions (default behavior), Warnings (green squiggles), Errors (build-breaking red squiggles), or None (to hide the editor indicator altogether), depending on your preference.

Give us your feedback!

This is our first release of this feature. We appreciate any feedback on how we can make it better in the comments below. If you run into any bugs, please let us know via Help > Send Feedback > Report A Problem in the IDE.
Chain ID: injective-888 | Current Node Version: v1.11.2

We take node snapshots periodically on an as-needed basis for testnet. The snapshot is designed for node operators to run an efficient validator service on the Injective chain. To make the snapshot as small as possible while still viable for a validator, we use the following settings to save disk space. We suggest you make the same adjustments on your node too. Please note that, with this efficient disk-space utilization, your node will have very limited functionality beyond signing blocks. For example, your node will not be able to serve as an RPC endpoint (which is not suggested to run on a validator node anyway). Since we periodically state-sync our snapshot nodes, you might notice that sometimes the size of our snapshot is surprisingly small.

# Prune Type
pruning = "custom"

# Prune Strategy
pruning-keep-recent = "100"
pruning-keep-every = "0"
pruning-interval = "10"

indexer = "null"

Install lz4 if needed

sudo apt update
sudo apt install snapd -y
sudo snap install lz4

Download the snapshot

wget -O injective_16753915.tar.lz4 https://snapshots.polkachu.com/testnet-snapshots/injective/injective_16753915.tar.lz4 --inet4-only

Stop your node

sudo service injective stop

Reset your node. This will erase your node database. If you are already running a validator, be sure you have backed up your priv_validator_state.json prior to running the command. The command does not wipe the file; however, you should have a backup of it already in a safe location.

WARNING: If you use this snapshot on a validator node during a chain halt, make sure you back up priv_validator_state.json and then replace it after the snapshot is extracted but before you start the node process. This is very important in order to avoid double-signing. When in doubt, reach out to the project team.
# Back up priv_validator_state.json if needed
cp ~/.injectived/data/priv_validator_state.json ~/.injectived/priv_validator_state.json

# On some tendermint chains
injectived unsafe-reset-all

# On other tendermint chains
injectived tendermint unsafe-reset-all --home $HOME/.injectived --keep-addr-book

Since Injective has wasm enabled and its wasm folder sits outside the data folder, our snapshot also includes a wasm folder. Notice that we have taken out the cache sub-folder from the snapshot to ensure the wasm folder is compatible with all CPUs. To make sure that you have a clean start, please delete your wasm folder manually, since unsafe-reset-all does not reset it:

rm -r ~/.injectived/wasm

Decompress the snapshot to your database location. Your database location will vary depending on your node implementation.

lz4 -c -d injective_16753915.tar.lz4 | tar -x -C $HOME/.injectived

IMPORTANT: If you run a validator node and the chain is in halt, it is time to replace the priv_validator_state.json file that you have backed up.

# Replace with the backed-up priv_validator_state.json
cp ~/.injectived/priv_validator_state.json ~/.injectived/data/priv_validator_state.json

Now double-check the wasm folder to ensure that it is not empty (it is okay for the Osmosis wasm folder to be empty). If it is empty, it means that our snapshot script has a bug; please contact us via Discord Server. If everything is good, now restart your node:

sudo service injective start

Remove the downloaded snapshot to free up space:

rm -v injective_16753915.tar.lz4

Make sure that your node is running:

sudo service injective status
sudo journalctl -u injective -f

ADVANCED ROUTE: The solution above requires you to download the compressed file, uncompress it, and then delete the original file, which requires extra storage space on your server. Instead, you can run the following combo command to stream the snapshot directly into your database location.
For advanced users only:

curl -o - -L https://snapshots.polkachu.com/testnet-snapshots/injective/injective_16753915.tar.lz4 | lz4 -c -d - | tar -x -C $HOME/.injectived

ALTERNATIVE ROUTE: We also have an Injective state-sync service to help you bootstrap a node.
Create a canvas app with data from an Excel file

In this topic, you'll create your first canvas app in Power Apps using data from an Excel table. You'll select an Excel file, create an app, and then run the app that you create. Every created app includes screens to browse records, show record details, and create or update records. By generating an app, you can quickly get a working app from Excel data, and then you can customize the app to better suit your needs. If you don't have a license for Power Apps, you can sign up for free.

When you upload an Excel file, it generates a Dataverse table. With Dataverse's standard and custom tables, you can securely store your data in the cloud. These tables enable you to define your organization's data in a way that is tailored to your business needs, making it easier to use within your apps. More information: Why use Dataverse?

If your environment is in the US region and AI is enabled in your organization, the AI Copilot feature can assist in table creation by suggesting table names, descriptions, column data types, and headers, even if this information is missing from the uploaded file. When Copilot AI is used for table creation, the Copilot card is displayed to indicate that the table was generated by Copilot AI.

To follow this topic exactly, download the Flooring Estimates file in Excel and save it on your device.

Upload an Excel file to create an app

- Sign in to Power Apps.
- From the home screen, select Start with data > Upload an Excel file.
- Select Select from device, navigate to the location where your Excel file is saved, and upload it. The maximum file size limit is 5 GB.
- When the table is created, select a column name or the table name to edit the properties to suit your needs. If any cell values are incompatible with the data type you select when changing column data types, those values will be removed when the table is generated.
- Select Row ownership and choose how you want to manage row ownership.
- When you're done, select Create app. The system uploads the first 20 rows of sample data so you can start reviewing the data in your app; the remaining data is uploaded in the background.

To create an app by connecting to Excel instead, see Connect to Excel from Power Apps. Note that the current data upload process doesn't take the environment data format setting into account.

Run the app

Select the play icon near the upper-right corner to preview the app. Filter the list by typing one or more characters in the search box. For example, type or paste Honey to show the only record for which that string appears in the product's name, category, or overview.

- Add a record: Select New record, add whatever data you want, and then select the checkmark button to save your changes.
- Edit a record: Select the record that you want to edit, select the pencil icon, update one or more fields, and then select the checkmark icon to save your changes. As an alternative, select the cancel icon to discard your changes.
- Delete a record: Select the record that you want to delete, then select the trash icon.

Customize the default browse screen to better suit your needs. For example, you can sort and filter the list by product name only, not category or overview.
Bitcoin and Cryptocurrency Exchange Software Development - Technology Stack

Bitcoin & Cryptocurrency Exchange software in PHP to start a cryptocurrency trading platform instantly. Get 100% bug-free source code, easy installation, premium features, white-labeling, an inbuilt wallet, ready to use!

Want to start a Bitcoin and Cryptocurrency Exchange business? Here's the complete technology stack guide to starting a Bitcoin and Cryptocurrency Exchange business with Bitcoin and Cryptocurrency Exchange Software. Nowadays, many business people are interested in launching their own Bitcoin & Cryptocurrency exchange business. The easiest and best way to launch a Cryptocurrency exchange business platform is to get the best Bitcoin & Cryptocurrency Exchange Software. This is the simplest method to start and launch your own Bitcoin & Cryptocurrency exchange business platform.

Bitcoin & Cryptocurrency Exchange Software

Bitcoin & Cryptocurrency Exchange Software is a complete set of procedures, programs, and source code that can build a Cryptocurrency exchange business website within a week. It is a SegWit-enabled, multi-currency, multi-signature, high-frequency exchange and trading platform for digital and Cryptocurrency assets. Many Cryptocurrencies and digital currencies, such as Bitcoin, Ethereum, Bitcoin Cash, Litecoin, XRP, HCX, and ERC20 tokens, are supported.

What is SegWit?

SegWit increases the block size limit on a blockchain by removing signature data from Bitcoin & Cryptocurrency transactions. When particular parts of a crypto transaction are removed, this frees up space to add more transactions to the chain.

We, BlockchainAppsDeveloper, offer advanced Cryptocurrency Exchange software to start a Bitcoin & Cryptocurrency Exchange like Huobi, Bittrex, OKEx, UPbit, or Bitfinex in a week!
To pick the best Cryptocurrency Exchange Software, you must have some basic developer knowledge and, of course, be aware of the programming and technical elements of the Cryptocurrency Exchange Software.

What is Software?

Software is a set of instructions that tell a system or computer what to do. It contains the complete set of procedures, programs, and routines associated with the operation of a computer system; a set of programs and instructions that directs a computer's hardware to perform a task is called a software program. If you have a clear idea of what software is, you can apply the same understanding to Cryptocurrency Exchange software. In this digital era, launching a Bitcoin & Cryptocurrency Exchange business platform is a profitable business, and you can easily and instantly develop a Cryptocurrency exchange website using Bitcoin & Cryptocurrency exchange software.

Technology Stack For Cryptocurrency Exchange Software Development

What is a Technology Stack?

A Technology Stack is also called a technology infrastructure, solutions stack, or data ecosystem. It is the list of all the technology services used to develop and run a single application. Usually, for Cryptocurrency exchange development, the back-end code is written in Java or Node.js. For easier and quicker Cryptocurrency exchange development, you can use the available libraries containing most of the required digital and cryptocurrency wallet functionality. In the near future, this kind of modern and advanced technology will be the preferred framework for developers and coders, and advanced implementations can bring enhanced web applications.

As for front-end programming, you have a variety of development platforms. You can use Angular/React. If you want to save development time and cost, opt for React Native.
With React Native, you can build one application that runs on both iOS and Android without the need to build two different ones.

LAMP - Linux Server, Apache, MySQL DB, PHP
Website Development: PHP - Laravel / Codeigniter
MEAN Stack: Mongo DB, Express JS, Angular JS, Node JS

Technology Stack: Applications

Tool: Android Studio 3.1.3 (Latest Version) & Language: Kotlin
Web Service: REST APIs
Web Service Format: JSON
Database: Back-end database (MySQL), SQLite local database, Room local database (Updated Database)
Design: Material Design

Tool - Xcode 9.3
Language - Swift (4.2)
Web Service - AFNetworking, SwiftyJSON, Alamofire
Web Service Format - JSON
Database - MySQL
Design - MVC, MVVM

General Functions: APNS & Firebase Notifications, Cloudinary, payment gateways, Yoti, SDWebImage
High Performance: Speed up API calls, fastest image loader, avoid memory leaks, UI performance, reduction of APK size.

Cryptocurrency Exchange Software Development - MEAN Stack Framework

MEAN Stack is a full client-side and server-side development framework model. MEAN stands for Mongo DB, Express JS, Angular JS, and Node JS. Before the arrival of the MEAN Stack framework, the only open-source website development technology stack was LAMP, that is, Linux, Apache, MySQL, and PHP.

Advantages of Cryptocurrency Exchange Software - MEAN Stack Framework

1. No need to study PHP or Python
2. Mongo DB can save documents in JSON format
3. JSON queries can be written on Express JS & Node JS
4. It can retrieve data easily and quickly
5. Easy to find and debug each object
6. Free and completely open-source website development
7. Easy to design applications & websites
8. Both client and server use a single language
9. Client and server code can be exchanged easily
10. JavaScript runs in both the browser and on the server
11.
Express JS on the server side and Angular JS on the client side enhance cooperative functions.

In the near future, this modern technology will be the preferred framework for developers and can bring enhanced web applications. Buy Cryptocurrency Exchange Software & Script with MEAN Stack development. Understanding these facts, we adopted this development technology early and started to provide our Bitcoin & Cryptocurrency exchange development solutions in MEAN Stack. We already serve our clients with our latest development technology. You can check the demo of the Cryptocurrency exchange software and script at any time.

Bitcoin & Cryptocurrency Exchange Software PHP, MEAN Stack & Laravel Framework

Bitcoin & Cryptocurrency Exchange Software is a complete set of procedures, programs, and source code, also called a Cryptocurrency exchange website script, which has been coded with all the necessary functionality to develop a Cryptocurrency exchange business website. It is full-featured PHP cryptocurrency exchange website software and script to launch a Cryptocurrency exchange and trading business in Bitcoin, Ethereum, and Altcoins. The Cryptocurrency Exchange Software comes inbuilt with the basic features needed to meet the entire requirements of a Cryptocurrency exchange business website.

Features of Bitcoin & Cryptocurrency Exchange Software

Bitcoin & Cryptocurrency Exchange Software, also called Cryptocurrency trading software, is an inbuilt set of programs and source code with the basic features needed to meet all trading and exchange requirements of a Cryptocurrency exchange business website. Here is the complete list of Bitcoin & Cryptocurrency Exchange Software features.
- Inbuilt set of programs & source code
- Enhanced buy/sell trading system
- Know Your Customer business module
- Inbuilt 2-factor authentication
- Trading chart generator
- High-speed matching engine
- Inbuilt Cryptocurrency or digital wallet
- Attractive UI/UX design
- Client and server features
- Mobile-friendly & user-friendly features
- Automated trading bot functionality

Bitcoin & Cryptocurrency Exchange Software allows business people to customize the features of their own Cryptocurrency exchange business platform based on their requirements and needs.

- Quick development process (get your exchange within a week)
- Convenient lab technical experience
- High return on investment
- No rebuilding process necessary
- Minimum development cost

With Bitcoin & Cryptocurrency Exchange software you can get beneficial APIs & plugins like:

- Market Making API
- Liquidity API
- REST API
- Price Ticker API
- CoinMarketCap API (real-time bitcoin price tracker)

Why Choose Cryptocurrency Exchange Software?

For business entrepreneurs looking to start a Cryptocurrency exchange business website, Cryptocurrency exchange software alone is the easiest and best option. You need not hire Cryptocurrency exchange software developers or undergo a software development life cycle process to launch a Cryptocurrency trading website. It is the smartest way to start a Cryptocurrency exchange platform without much effort, in a cost-effective and time-saving manner.

The main components to keep in mind while choosing Cryptocurrency Exchange Software:

- Trade engine
- Highly secured admin panel
- User interface
- Digital wallet implementation
- Advanced & latest trading features

Keeping these things in mind, you can now start the process of developing your own Bitcoin & Cryptocurrency exchange platform.
White Label Cryptocurrency Exchange Software Provider - BlockchainAppsDeveloper

Instead of building a new Cryptocurrency exchange business website by implementing new trading methodologies with trending features, you can easily avail yourself of the existing, popular trading features of those trading business websites with Bitcoin and Cryptocurrency Exchange clone scripts & software.

Languages Used - PHP, Java, MEAN Stack, Laravel
Database - MySQL, Mongo DB

For entrepreneurs, cryptopreneurs, business investors, and trading people who are planning to start a Bitcoin trading and exchange, it is necessary to understand the technical terms related to Bitcoin & Cryptocurrency Exchange Software. But we know business people do not have that much time to learn or understand development from the developers' point of view! If you are planning to build your own Bitcoin exchange business website, BlockchainAppsDeveloper can help. We provide a complete range of Bitcoin exchange business solutions, from highly secure Bitcoin & Cryptocurrency exchange development to efficient business marketing, to help you launch your Cryptocurrency exchange profitably and successfully. We hope all of the above software and technology-related information is helpful to you!

Crypto Exchange Software

Readymade Bitcoin and Cryptocurrency Exchange Software is a versatile, high-frequency Cryptocurrency exchange & trading business platform to exchange Cryptocurrency based on your needs and requirements right away. If you wish to develop a Cryptocurrency exchange like Binance, you can get complete ready-to-launch, customizable, white-label Cryptocurrency exchange software at a down-to-earth price.

Similar On-Demand Cryptocurrency Exchange Clone Software

Cryptocurrency Exchange clone software is highly scalable and customizable.
The functionality and performance of this Bitcoin & Cryptocurrency Exchange Software are highly efficient and reliable.

- Binance Clone Software
- LocalBitcoins Clone Software
- Coinbase Clone Software
- Remitano Clone Software
- Bitstamp Clone Software
- Paxful Clone Software
- Poloniex Clone Software
- Bithumb Clone Software
- WazirX Clone Software
- KuCoin Clone Software

With these Cryptocurrency Exchange clone software packages, you can launch popular on-demand platforms like Binance, Coinbase, LocalBitcoins, Poloniex, WazirX, Paxful, Remitano, and Bitstamp in a week! You can get a live demo of the Cryptocurrency Exchange clone software now!

Why Choose Us For Cryptocurrency Exchange Software Development?

BlockchainAppsDeveloper is a leading Cryptocurrency Exchange Software Development Company with certified, skilled developers who excel in Blockchain development & Cryptocurrency exchange development, offering stunning features and services all over the globe. Get 100% bug-free source code, easy installation, premium features, white-labeling, an inbuilt wallet, ready to use!

We offer Bitcoin & Cryptocurrency Exchange Software with advanced trading solutions on a global scale, to all countries without restrictions. We also integrate unique processing features to manage all Bitcoin and Cryptocurrency transactions in fiat currency in an extremely safe and secure manner.

Advanced Bitcoin Exchange Software to launch Bitcoin Exchange business platforms like Huobi, Bittrex, OKEx, UPbit, Bitfinex, etc.
The error code 0xc00d36c4 occurs when you have an unsupported file format. It means either the audio or the video codec from that video cannot be loaded by the stock video system in Windows, but the errors Windows produces are so generic that it is not clear whether the audio or the video codec is at fault. Black screens and other issues with wallpapers of the type Video are almost always caused by either faulty graphics card drivers or faulty video codecs.

If you are having a technical problem or just a question about Wallpaper Engine, you will likely find an answer in the Wallpaper Engine Troubleshooting Guide FAQ. 95% of the questions and problems we get asked are the same, so it is likely that you will find your answer there. Try the following steps in the exact order; if you follow the guide from top to bottom, the problem will be solved.

You can check whether the file itself is fine by trying to play the problematic video with another player. A popular alternative to the default Windows player is VLC, a versatile tool designed to play almost all video files.
Wallpaper Engine assigns wrong wallpapers at system start-up: if you have multiple monitors and the wallpapers are being assigned to the wrong monitor at start-up, try changing the Monitor identification option in the General tab of the Wallpaper Engine settings to Layout or GDI.

If your video wallpapers have wrong colors, are too bright or too dark, appear zoomed-in or pixelated, or if there is a border around the wallpaper, you can fix this problem by resetting the video options in your graphics control panel for your Nvidia, AMD, or Intel graphics card.

- Update audio and graphics drivers and make sure Windows has all available updates installed.
package com.sri.ai.util.graph2d.application;

import static com.sri.ai.util.Util.map;
import static com.sri.ai.util.Util.println;
import static com.sri.ai.util.graph2d.api.functions.Function.function;
import static com.sri.ai.util.graph2d.api.functions.Functions.functions;
import static com.sri.ai.util.graph2d.api.graph.GraphSetMaker.graphSetMaker;
import static com.sri.ai.util.graph2d.api.variables.SetOfVariables.setOfVariables;
import static com.sri.ai.util.graph2d.api.variables.Value.value;
import static com.sri.ai.util.graph2d.api.variables.Variable.enumVariable;
import static com.sri.ai.util.graph2d.api.variables.Variable.integerVariable;
import static com.sri.ai.util.graph2d.api.variables.Variable.realVariable;
import static com.sri.ai.util.graph2d.core.values.SetOfEnumValues.setOfEnumValues;
import static com.sri.ai.util.graph2d.core.values.SetOfIntegerValues.setOfIntegerValues;

import com.sri.ai.util.graph2d.api.functions.Function;
import com.sri.ai.util.graph2d.api.functions.Functions;
import com.sri.ai.util.graph2d.api.graph.GraphPlot;
import com.sri.ai.util.graph2d.api.graph.GraphSet;
import com.sri.ai.util.graph2d.api.graph.GraphSetMaker;
import com.sri.ai.util.graph2d.api.variables.Unit;
import com.sri.ai.util.graph2d.api.variables.Variable;

public class Example {

    public static void main(String[] args) {
        Variable continent = enumVariable("Continent");
        Variable age = integerVariable("Age", Unit.YEAR);
        Variable occupation = enumVariable("Occupation");
        Variable income = realVariable("Income", Unit.DOLLAR);
        Variable expense = realVariable("Expense", Unit.DOLLAR);

        Function incomeFunction =
            function("Income", income, setOfVariables(continent, age, occupation),
                (assignment) -> {
                    String continentValue = assignment.get(continent).stringValue();
                    String occupationValue = assignment.get(occupation).stringValue();
                    if (continentValue.equals("North America") || continentValue.equals("Europe")) {
                        if (occupationValue.equals("Doctor")) {
                            int ageValue = assignment.get(age).intValue();
                            return ageValue > 40 ? value(200000) : value(150000);
                        } else {
                            return value(100000);
                        }
                    } else {
                        return value(50000);
                    }
                });

        Function expenseFunction =
            function("Expense", expense, setOfVariables(continent, age, occupation),
                (assignment) -> value(incomeFunction.evaluate(assignment).doubleValue() * 0.75));

        Functions functions = functions(incomeFunction, expenseFunction);

        GraphSetMaker graphSetMaker = graphSetMaker();
        graphSetMaker.setFunctions(functions);
        graphSetMaker.setFromVariableToSetOfValues(
            map(
                continent, setOfEnumValues("North America", "Africa", "Europe"),
                age, setOfIntegerValues(18, 99),
                occupation, setOfEnumValues("Driver", "CEO", "Doctor")));

        GraphSet graphSet = graphSetMaker.make(age);

        println(graphSet);

        // cleanup by removing graph files
        for (GraphPlot graphPlot : graphSet.getGraphPlots()) {
            if (graphPlot.getImageFile().delete()) {
                println("Deleted: " + graphPlot.getImageFile().getName());
            }
        }
    }
}
Initialize suback_reason_code from qos

I'm working on migrating my broker from v6.0.0 to master to check #444. I noticed that the conversion between subscribe_options, qos, and suback_reason_code is inconvenient. When the broker receives a subscribe, one of the parameters is subscribe_options. That's good. I can get the qos from subscribe_options using get_qos(). That's also good. But when the broker sends a suback to the client, I need to create the suback_reason_code argument from the qos if the subscribe was successfully accepted. Right now, I need to do

auto rc = static_cast<mqtt::suback_reason_code>(qos);

or

mqtt::suback_reason_code rc;
switch (qos) {
case mqtt::qos::at_most_once:
    rc = mqtt::suback_reason_code::granted_qos_0;
    break;
case mqtt::qos::at_least_once:
    rc = mqtt::suback_reason_code::granted_qos_1;
    break;
case mqtt::qos::exactly_once:
    rc = mqtt::suback_reason_code::granted_qos_2;
    break;
}

I think that if suback_reason_code had a constructor taking a qos parameter, I could implement the broker more easily. What do you think?

In addition, when the broker receives a publish message, it needs to adjust the qos level between the publish qos and the subscribed qos; the lower qos is chosen. If qos had operator<, the broker would be easier to implement. What do you think?

BTW, I will migrate my broker in two days. After I do some performance tests in an actual communication environment, I will merge #444.

> I think that if suback_reason_code had a constructor taking a qos parameter, I could implement the broker more easily. What do you think?

I think it's a good idea to auto-convert from mqtt::qos -> mqtt::suback_reason_code and mqtt::qos -> mqtt::v5::suback_reason_code. I do not think it is a good idea to auto-convert mqtt::suback_reason_code -> mqtt::qos, or mqtt::v5::suback_reason_code -> mqtt::qos. To do this, mqtt::suback_reason_code cannot be an enum (at least, I don't know of a way to provide implicit conversion between enums); it must be a class, like mqtt::subscribe_options.
Also notice this: https://stackoverflow.com/a/57286964

Maybe have the following:
- a v3.1.1 enum suback_reason_code
- a v5 enum suback_reason_code
- a v3.1.1 class providing implicit conversions for suback_reason_code
- a v5 class providing implicit conversions for suback_reason_code
- use std::variant<v3.1.1::class, v5::class>?

> In addition, when the broker receives a publish message, it needs to adjust the qos level between the publish qos and the subscribed qos; the lower qos is chosen. If qos had operator<, the broker would be easier to implement. What do you think?

I use std::min<mqtt::qos, mqtt::qos>(A, B). Does this not work for you?

I looked over your comment and posted #453. I will check std::min<mqtt::qos>(A, B) soon.

It works. I did something wrong, so I closed #453.

> I think it's a good idea to auto-convert from mqtt::qos -> mqtt::suback_reason_code and mqtt::qos -> mqtt::v5::suback_reason_code. I do not think it is a good idea to auto-convert mqtt::suback_reason_code -> mqtt::qos, or mqtt::v5::suback_reason_code -> mqtt::qos.

I think that a manual conversion for both is also good. I think the following is good enough: https://wandbox.org/permlink/RD9c7EBrj53L8c2V What do you think?

I guess that's good enough. It would be nice to provide a type-safe implicit conversion, but I don't think C++ supports that yet.
[Mailman-Developers] Requirements for a new archiver brad.knowles at skynet.be Wed Oct 29 16:14:52 EST 2003 At 12:37 PM -0800 2003/10/29, Peter C. Norton wrote: > It seems like you're only partially agreeing/disagreeing with me > (optimist/pessamist). Disagreeing: you're saying that using datatypes > in the database which are appropriate to the kind of data being stored > (mail messages) is an excercise in futility. Not quite. I believe that there are no databases in existence which have data types that are actually appropriate for the storage of message bodies. > But, agreeing: that > storing these in a database in another way is OK. Not quite. Store meta-data, yes. The entire message, no. Store things like who the message is from, who the message is addressed to, the date, the message-id as it was found in the headers, etc.... Basically, store just about everything in the message headers that a client would be likely to ask about. That's all well and good. But when it comes to storing the message body itself, it should be stored in wire format (i.e., precisely as it came in), in the filesystem. Then pointers to the location in the filesystem should be put into the database. One key factor here is that all of the information in the database should be able to be re-created from the message bodies alone, if there should happen to be a catastrophic system crash. The sole purpose of the database is to speed up access to the messages and the message content -- indeed, to speed it up enough so that randomly accessing most any piece of information about any message from any sender to any recipient in any mailbox should become something feasible to contemplate. The sole purpose of the database is to make the difficult and slow (on the large scale) quick and easy, and to make the things that would be totally impossible (on any reasonable scale) at least something that can now be considered. 
> I don't get why you'd just want to store these as text when you have
> databases that can be made more suitable to the problem.

I don't believe that there are any databases in existence that "... can be made more suitable to the problem."

> So all the parsing happens in the database client side. Which is slow.

Yup. I don't see any way around that.

Brad Knowles, <brad.knowles at skynet.be>

"They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety."
    -Benjamin Franklin, Historical Review of Pennsylvania.
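The storage split Brad describes (header metadata indexed in a database, message bodies kept verbatim in the filesystem, with everything in the database re-creatable from the files alone) can be sketched roughly like this. The schema and function names are invented for illustration, assuming Python's sqlite3; this is not Mailman's actual archiver code:

```python
import email
import hashlib
import sqlite3
from pathlib import Path

# Hypothetical schema: index only header metadata a client would ask
# about, plus a pointer to the wire-format body in the filesystem.
SCHEMA = """
CREATE TABLE IF NOT EXISTS messages (
    message_id TEXT PRIMARY KEY,
    sender     TEXT,
    recipient  TEXT,
    date       TEXT,
    subject    TEXT,
    body_path  TEXT NOT NULL
)
"""

def archive_message(raw_bytes, spool_dir, conn):
    """Store the message verbatim on disk and index its headers."""
    msg = email.message_from_bytes(raw_bytes)
    # File name derived from a content hash; the whole database can be
    # rebuilt from these files after a catastrophic crash.
    digest = hashlib.sha256(raw_bytes).hexdigest()
    path = Path(spool_dir) / digest
    path.write_bytes(raw_bytes)  # wire format, exactly as it came in
    conn.execute(
        "INSERT OR REPLACE INTO messages VALUES (?, ?, ?, ?, ?, ?)",
        (msg["Message-ID"], msg["From"], msg["To"],
         msg["Date"], msg["Subject"], str(path)),
    )
    return path
```

The database here is purely an accelerator, as the post argues: losing it costs only the index, never the archive.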
RemoteCluster(all_endpoints_reachable=None, auto_register_target=None, auto_registration=None, bandwidth_limit=None, cluster_id=None, cluster_incarnation_id=None, compression_enabled=None, encryption_key=None, local_ips=None, name=None, network_interface=None, purpose_remote_access=None, purpose_replication=None, remote_access_credentials=None, remote_ips=None, tenant_id=None, user_name=None, view_box_pair_info=None)

Implementation of the 'RemoteCluster' model. Specifies information about a remote Cluster that has been registered for replication.

- all_endpoints_reachable (bool): Specifies whether any endpoint (such as a Node) on the remote Cluster is reachable from this local Cluster. If true, a service running on the local Cluster can communicate directly with any of its peers running on the remote Cluster, without using a proxy.
- auto_register_target (bool): Specifies whether the remote cluster needs to be kept in sync. This will be set to true by default.
- auto_registration (bool): Specifies whether the remote registration happened automatically (due to registration on the other site). For a manual registration, this field will not be set.
- bandwidth_limit (BandwidthLimit): Specifies settings for limiting the data transfer rate between the local and remote Clusters.
- cluster_id (long|int): Specifies the unique id of the remote Cluster.
- cluster_incarnation_id (long|int): Specifies the unique incarnation id of the remote Cluster. This id is determined dynamically by contacting the remote Cluster.
- compression_enabled (bool): Specifies whether to compress the outbound data when transferring the replication data over the network to the remote Cluster.
- encryption_key (string): Specifies the encryption key used for encrypting the replication data from a local Cluster to a remote Cluster. If a key is not specified, replication traffic encryption is disabled. When Snapshots are replicated from a local Cluster to a remote Cluster, the encryption key specified on the local Cluster must be the same as the key specified on the remote Cluster.
- local_ips (list of string): Array of Local IP Addresses. Specifies the IP addresses of the interfaces in the local Cluster which will be used for communicating with the remote Cluster.
- name (string): Specifies the name of the remote cluster. This field is determined dynamically by contacting the remote cluster.
- network_interface (string): Specifies the group name of the network interfaces to use for communicating with the remote Cluster.
- purpose_remote_access (bool): Whether the remote cluster will be used for remote access for SPOG.
- purpose_replication (bool): Whether the remote cluster will be used for replication.
- remote_access_credentials (AccessTokenCredential): Specifies the Cohesity credentials required for generating an access token.
- remote_ips (list of string): Array of Remote Node IP Addresses. Specifies the IP addresses of the Nodes on the remote Cluster to connect with. These IP addresses can also be VIPs. Specifying hostnames is not supported.
- tenant_id (string): Specifies the tenant Id of the organization that created this remote cluster configuration.
- user_name (string): Specifies the Cohesity user name used to connect to the remote Cluster.
- view_box_pair_info (list of ViewBoxPairInfo): Array of Storage Domain (View Box) Pairs. Specifies pairings between Storage Domains (View Boxes) on the local Cluster and Storage Domains (View Boxes) on a remote Cluster that are used in replication.

Creates an instance of this model from a dictionary.

- dictionary (dictionary): A dictionary representation of the object as obtained from the deserialization of the server's response. The keys MUST match property names in the API description.

Returns:
- object: An instance of this structure class.
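The dictionary-deserialization method described at the end follows a common SDK pattern: map the API's JSON key names onto constructor parameters, falling back to None for missing keys. A simplified, hypothetical stand-in (a few invented fields, not the real Cohesity class) could look like:

```python
class RemoteClusterSketch:
    """Simplified stand-in for the 'RemoteCluster' model (subset of fields)."""

    # Maps constructor attribute names to the API's JSON key names.
    # These key names are assumptions for illustration only.
    _names = {
        "cluster_id": "clusterId",
        "name": "name",
        "compression_enabled": "compressionEnabled",
        "remote_ips": "remoteIps",
    }

    def __init__(self, cluster_id=None, name=None,
                 compression_enabled=None, remote_ips=None):
        self.cluster_id = cluster_id
        self.name = name
        self.compression_enabled = compression_enabled
        self.remote_ips = remote_ips

    @classmethod
    def from_dictionary(cls, dictionary):
        """Create an instance from a deserialized server response.

        The keys must match property names in the API description;
        missing keys fall back to None, matching the all-None defaults.
        """
        if dictionary is None:
            return None
        return cls(**{attr: dictionary.get(key)
                      for attr, key in cls._names.items()})
```

The all-None constructor defaults in the signature above exist precisely so that any subset of fields in the server's response produces a valid instance.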
You are again demonstrating what I am guessing is a lack of real experience with Haskell, and therefore you don't know the difference between theory and practice. I have written a lot of code in all the languages you mention (and many you haven't -- I've been doing this for a very long time) and none of the theory you spout above was remotely a consideration. It's for academicians to debate and write papers about, but it doesn't come up in the real world.

Please. I wrote a respectful post. No need to introduce dubious (and frankly condescending and a bit arrogant) ad hominem assumptions. I've coded in many languages also, over a roughly 30-year career. Haskell does invert the type system from inductive to coinductive, and this introduces advantages and disadvantages. I won't go into all the details further here (a link was already provided to Robert Harper's blog post). I was merely trying to point out that reasoning about Haskell down to the "nuts and bolts" requires understanding many other layers such as monads (category theory), memoization, lazy evaluation, coinductive types, etc. It is high-level in the sense that it is a very powerful semantics built with these layers.

I made the "guess" about your Haskell experience (not an assertion) because you considerably overstated, in my opinion, what a programmer needs to know in order to use Haskell to build real-world applications. And you continue to do so above. Programmers don't "reason" about languages and programs. They don't generate correctness proofs. I would venture another guess that if you used the word "coinductive" to the vast majority of them, the response would be "huh?". They write code, test it, release it, etc. They do it by learning the subset of the language they need to get their job done. This is precisely what I have done with Haskell, and it has been enormously useful to me.
> Because of the power of today's hardware, we can do a lot of useful things even with interpreted languages, and with compiled languages that deliver less-than-maximal performance, like Haskell and Scheme.

This formerly absolutely valid point is weakened or mitigated as mobile eats the desktop and battery life also depends on performance. Nevertheless, the cost of the programming effort remains a variable in the equation, so there is a balance which varies for different use cases.

I agree. That doesn't change the main point at all, which is to use the right tool for the job, and by "right", I mean minimize development and maintenance cost. Driving a Ferrari to the supermarket doesn't make a lot of sense.

> But you pay a coding-time price for their use and I don't think that will ever go away. This is why I argue the low-level languages should be used after profiling the code and knowing where the performance bottlenecks are.

I couldn't agree more. A good deal of the early part of my career was based on not fixing things until you knew for sure how they were broken.

Because, for example, modeling memory allocation (lifetimes and scopes) in the type system infects the type system everywhere (e.g. even 'a in generics, apparently), the code base presumably gets more unmanageable over time because there is an exponential explosion of invariants. And unsafe can propagate from any untyped code (the entropy is unbounded due to being a Turing-complete machine, i.e. not 100% dependently typed). Thus the notion of a completely typed program is a foolish goal. It is about tradeoffs and fencing off areas of maximum concern. The human mind still needs to be involved. So please don't try to hold Haskell up as the high-level solution with the only tradeoff being performance. The reality isn't that simple.

Read carefully, please. I never said that coding ease vs. performance was THE tradeoff. It is A tradeoff.
There are certainly other considerations that good programmers must make when choosing their tools for a particular project.

Edit: for example (just so you don't think I am BSing), Haskell can't offer first-class disjunctions without forsaking the global inference that provides some of Haskell's elegance. Without first-class disjunctions (which Rust also doesn't have), composition via ad hoc polymorphism is somewhat crippled. I suspect some of the need for higher-kinded types could be avoided if Rust had first-class unions. Which Rust could, I presume, implement, because it doesn't have global inference.

I don't doubt at all that what you say above is true. What I doubt is its immediate relevance to the real world. We have examples galore of people building useful software in theoretically unimpressive languages. Haskell and, I'm sure, Rust would be ranked far higher by programming language connoisseurs than any of the examples I gave. And yet those example languages have been used to create an enormous amount of useful software. Should we use the kind of theoretical considerations that you seem to revel in, in the design of future languages? Of course. But I don't think it terribly relevant to making a programming-environment choice in the world as it is today.
Anyone taken the NY notary public exam? Just checking to see if you could give me a heads up... like is it as easy as the driving exam, or do I need to study?
Michael H. Pryor

You wanna be a notary public? I WANNA be a notary public.

No but seriously... every time I've needed a notary public I've had to walk a block to the bank and stand on line. We need someone in the office who can notarize documents. Most executive secretaries are notaries for this reason, but Fog Creek is still too small to justify executive secretaries.

In all seriousness, from what I've heard, you do have to study. I think you have to attend specialized classes.

I'm a notary public; I took the exam about a year and a half ago. Pretty much everything you need to know is here: http://www.dos.state.ny.us/lcns/notary1.htm There are classes for it, but I didn't take one. You can download "Notary Public License Law" from http://www.dos.state.ny.us/lcns/pdfs/notarylaw.pdf and read it over a few times the night before the exam.

I downloaded the law book and have read it through a couple of times... just wanted to know if you had any pointers.
Michael H. Pryor

Your first assumption was correct; the test is pretty much all excerpts from the booklet. In fact, I think I got the sheriff question.

I'm trying to decide if I can just go in and pass without studying. With work, etc., and my genuine lack of interest in law, it's pretty tough. (I want to become a notary to help out at work.)

You should read the booklet you can download from that site, 3 times. That should be enough studying.
Michael H. Pryor (fogcreek)

Do you have to sign up to take this exam? Or can you just show up at the time of the exam?

Nicole, you can register for the exam at the test center, on the day of the exam.
Jill

I just found it on the website myself after I asked the question. Thank you for the help though.

I am going to pass this exam and you are going to fail.. Good Luck Sucka!
Does anyone know what subway is closest to 123 William Street in NYC, and if there is anywhere close by that is cute for lunch?

I think the closest subway is the C to the Broadway-Nassau/Fulton Street stop.

thank you Nicole, you sure are helpful and i bet real pretty. you better hope those looks of yours help you pass this test. good luck lady!!

Thanks for the support Jill. Perhaps we should grab lunch after the exam?

Anytime Coley, do you mind if i call you Coley instead of Nicole?? lunch sounds great!!
Jill

I was joking about lunch. And please don't call me Coley. I have decided against taking this exam any time in the near future for fear that you will be there waiting for me.

Puppet shows are fun

DOES ANYONE KNOW IF THE NY NOTARY PUBLIC LICENSE LAW DOWNLOADED INFORMATION IS AVAILABLE IN SIMPLE, EASY-TO-UNDERSTAND ENGLISH???? SO THAT I DON'T HAVE TO TAKE A CLASS? I HEARD ITS AVAILABLE ON THE INTERNET SOMEWHERE i.e. CRIB OR CHEAT SHEETS??

Yah, um.. that link in like the 3rd post in this thread is the book you need to read.
Michael H. Pryor

The plain, simple, non-legalese manual is the NY Notary Law Primer at www.notarytrainer.com. It's the best manual I have found for preparing for the exam. The site also has sample test questions, which helped me study for the exam, which I passed on the first try with no help from the state's info.

Hi everyone, just thought I would share this website: http://www.notarypublicinstitute.com/quiz/sampletest.html

I'm planning on taking the exam ASAP. I was just wondering what your ages are and if you had law-related or college education. Also wondering if any of the recent posters had taken the exam and how it was...

The first link in the 3rd posting does not work. Can someone please post the correct link?

Does anyone know if you have to be a U.S. citizen to be a notary public? Shouldn't permanent residence suffice? Thanks a lot.
// Mouse Vars
var mx;
var my;

// Game Vars
var canClick = true;
var grid = [];
var clock = 0;
var clear = 0;
var dimension = 4;
var maxDimension = 14;
var score = bigInt();
var scoreThreshold = bigInt();
var fallingOffset = 0;
var fallingPegs = [];
var level = 1;
var piecesLeft = 0;
var pegTypes = 3;

// Pieces
var PIECE_SETS = [
    [0x000000, 0xFF0000, 0x00FF00, 0x0000FF, 0xFFFF33],
    [0x000000, 0xe18b45, 0x89e46a, 0x649ce2, 0x7d5b7a],
];
var PIECES = PIECE_SETS[0];

// SFX
var SOUND_CLICK = new Howl({ src: ['/assets/peg-pilferer/sfx/click.wav'] });
var SOUND_SUCCESS = new Howl({ src: ['/assets/peg-pilferer/sfx/success.wav'] });
var SOUND_FAILURE = new Howl({ src: ['/assets/peg-pilferer/sfx/fail.wav'] });
var SOUND_BONUS = new Howl({ src: ['/assets/peg-pilferer/sfx/bonus.wav'] });

function getComboScore(quantity) {
    var ns;
    if (quantity < 4) {
        ns = bigInt[2].pow(quantity);
    } else if (quantity < 8) {
        ns = bigInt[100].multiply(quantity);
    } else if (quantity < 16) {
        // Math.floor keeps the exponent integral for odd quantities
        ns = bigInt(800).multiply(bigInt[2].pow(Math.floor(quantity / 2)));
    } else {
        ns = bigInt(250000).multiply(quantity - 16);
    }
    return ns;
}

function generateMap(level) {
    levelText.text = 'Lv: ' + level;
    if (level != 1) {
        dimension += 1;
        if (dimension > maxDimension) {
            dimension = maxDimension;
        }
        if (level == 16) {
            PIECES = PIECE_SETS[1];
        }
    }

    // Threshold punishment (increase number of peg colors for next level)
    if (level > 2 && pegTypes <= 3 && score.lt(scoreThreshold)) {
        pegTypes = 4;
        SOUND_FAILURE.play();
    } else {
        if (level < 2 || Math.random() > 0.3) {
            pegTypes = 3;
            SOUND_SUCCESS.play();
        } else {
            pegTypes = 2;
            SOUND_BONUS.play();
        }
    }

    // Generate the level
    for (var i = 0; i < dimension; i++) {
        if (!grid[i]) {
            grid[i] = [];
        }
        for (var j = 0; j < dimension; j++) {
            if (!grid[i][j]) {
                grid[i][j] = {};
            }
            if (Math.random() > 0.2) {
                grid[i][j].type = Math.floor(Math.random() * pegTypes) + 1;
            } else {
                grid[i][j].type = 0;
            }
            if (grid[i][j].type != 0) {
                grid[i][j].falling = false;
                piecesLeft += 1;
            }
        }
    }

    // Update the threshold based on the level generated
    if (level > 2) {
        var baseStrength = Math.floor(piecesLeft / 6);
        var levelOverload = Math.ceil((level - 3) / 2);
        if (levelOverload < 1) {
            levelOverload = 1;
        }
        scoreThreshold = score.add(bigInt(getComboScore(baseStrength)).multiply(levelOverload));
        if (pegTypes <= 3) {
            goalText.text = 'Thr: ' + scoreThreshold;
        } else {
            goalText.text = 'Thr: Safe';
        }
    }
}

// Autodetect, create and append the renderer to the body element
var renderer = PIXI.autoDetectRenderer(800, 600, {antialias: true}, false, true);
document.body.appendChild(renderer.view);

var container = new PIXI.Container();
var graphics = new PIXI.Graphics();
graphics.interactive = true;
graphics.hitArea = new PIXI.Rectangle(0, 0, 800, 600);
container.addChild(graphics);

function redrawGrid() {
    graphics.clear();
    for (var y = 0; y < dimension; y++) {
        for (var x = 0; x < dimension; x++) {
            if (grid[y][x].type != 0) {
                // Draw a peg: drawCircle(x, y, radius)
                graphics.beginFill(PIECES[grid[y][x].type]);
                if (grid[y][x].falling) {
                    graphics.drawCircle(40 + x * 40, 40 + y * 40 - fallingOffset, 16);
                } else {
                    graphics.drawCircle(40 + x * 40, 40 + y * 40, 16);
                }
                graphics.endFill();
            }
        }
    }
}

// HUD
var title = new PIXI.Text('Peg Pilferer [v 0]', { fontFamily: 'Times New Roman', fontSize: 24, fill: 'lime', align: 'center' });
title.position.x = renderer.width - 205;
title.position.y = 20;
container.addChild(title);

var levelText = new PIXI.Text('Lv: ' + level, { fontFamily: 'Times New Roman', fontSize: 20, fill: 'lime', align: 'center' });
levelText.position.x = renderer.width - 200;
levelText.position.y = 60;
container.addChild(levelText);

var scoreText = new PIXI.Text('S: ' + score, { fontFamily: 'Times New Roman', fontSize: 20, fill: 'lime', align: 'center' });
scoreText.position.x = renderer.width - 200;
scoreText.position.y = 80;
container.addChild(scoreText);

var goalText = new PIXI.Text('Thr: SAFE', { fontFamily: 'Times New Roman', fontSize: 20, fill: 'lime', align: 'center' });
goalText.position.x = renderer.width - 200;
goalText.position.y = 100;
container.addChild(goalText);

var diffText = new PIXI.Text('Diff: SAFE', { fontFamily: 'Times New Roman', fontSize: 20, fill: 'lime', align: 'center' });
diffText.position.x = renderer.width - 200;
diffText.position.y = 120;
container.addChild(diffText);

var creditText = new PIXI.Text('E64', { fontFamily: 'Times New Roman', fontSize: 20, fill: 'lime', align: 'center' });
creditText.position.x = renderer.width - 200;
creditText.position.y = renderer.height - 50;
container.addChild(creditText);

function floodFill(y, x, c) {
    var n = 1;
    grid[y][x].type = 0;
    if (x - 1 >= 0 && grid[y][x - 1].type == c) {
        n += floodFill(y, x - 1, c);
    }
    if (x + 1 < dimension && grid[y][x + 1].type == c) {
        n += floodFill(y, x + 1, c);
    }
    if (y - 1 >= 0 && grid[y - 1][x].type == c) {
        n += floodFill(y - 1, x, c);
    }
    if (y + 1 < dimension && grid[y + 1][x].type == c) {
        n += floodFill(y + 1, x, c);
    }
    return n;
}

function executeCombo() {
    var cx = Math.floor((mx - 20) / 40);
    var cy = Math.floor((my - 20) / 40);
    if (cx < 0 || cy < 0 || cx >= dimension || cy >= dimension) {
        return;
    }
    // Player clicks piece
    if (canClick && grid[cy][cx].type != 0) {
        canClick = false;
        var piecesRemoved = floodFill(cy, cx, grid[cy][cx].type);
        piecesLeft -= piecesRemoved;
        score = score.add(getComboScore(piecesRemoved));
        // Level clear
        if (piecesLeft == 0) {
            generateMap(level += 1);
        } else {
            SOUND_CLICK.play();
        }
        scoreText.text = 'S: ' + score;
        redrawGrid();
        // score and scoreThreshold are bigInts, so use their methods
        // instead of arithmetic operators (which would produce NaN)
        var diff = score.subtract(scoreThreshold);
        if (diff.geq(0)) {
            diffText.text = 'Diff: SAFE';
        } else {
            diffText.text = 'Diff: ' + diff;
        }
        canClick = true;
    }
}

// Handle User Mouse Click
graphics.mousedown = function(data) {
    mx = data.data.global.x;
    my = data.data.global.y;
    executeCombo();
};
graphics.tap = function(data) {
    mx = data.data.global.x;
    my = data.data.global.y;
    executeCombo();
};

// Initialize game
generateMap(level);
redrawGrid();
animate();

function applyGravity() {
    fallingOffset = 40;
    for (var x = 0; x < dimension; x++) {
        for (var y = dimension - 1; y > 0; y--) {
            if (grid[y - 1][x].type != 0 && grid[y][x].type == 0) {
                grid[y][x].type = grid[y - 1][x].type;
                grid[y][x].falling = true;
                grid[y - 1][x].type = 0;
                fallingPegs.push(x);
                fallingPegs.push(y);
            }
        }
    }
}

function animate() {
    if (clock % 10 == 0) {
        applyGravity();
    }
    if (fallingOffset > 0) {
        fallingOffset -= 4;
    }
    if (fallingOffset == 0) {
        // Pegs were pushed as (x, y) pairs, so they pop off as y then x;
        // falling y values are always >= 1, so the truthiness test is safe
        var sy;
        var sx;
        while (sy = fallingPegs.pop()) {
            sx = fallingPegs.pop();
            grid[sy][sx].falling = false;
        }
    }
    redrawGrid();
    renderer.render(container);
    requestAnimationFrame(animate);
    clock += 1;
}
Hi everyone,

> three possible solutions for splitting distfiles were listed:
>
> a. using initial portion of filename,
> b. using initial portion of file hash,
> c. using initial portion of filename hash.
>
> The significant advantage of the filename option was simplicity. With
> that solution, the users could easily determine the correct subdirectory
> themselves. However, its significant disadvantage was very uneven
> shuffling of data. In particular, the TeX Live packages alone count
> almost 23500 distfiles and all use a common prefix, making it impossible
> to split them further.
>
> The alternate option of using file hash has the advantage of having
> a more balanced split.

There's another option: use character ranges for each directory, computed in a way that distributes the files evenly. One way to do that is to use a filename prefix of dynamic length, so that each range holds the same number of files. E.g. we would have Ab/, Ap/, Ar/ but texlive-module-te/, texlive-module-th/, texlive-module-ti/.

A similar but simpler option is to use file names as range bounds (the same way dictionaries use words to demarcate page bounds): each directory is named after the first file located inside it. This way files are distributed evenly, and it's still easy to pick the correct directory for a file manually.

I have implemented a sketch of distfile splitting that uses file names as bounds in Python to demonstrate the idea (excuse possibly non-idiomatic code, I'm not very versed in Python):

$ cat distfile-dirs.py
#!/usr/bin/env python3
"""
Builds a list of dictionary directories to split the list of input files
into evenly. Each directory has the name of the first file that is
located in the directory. Takes the number of directories as an argument
and reads the list of files from stdin. The resulting list of
directories is printed to stdout.
"""
import sys

dir_num = int(sys.argv[1])
distfiles = sys.stdin.read().splitlines()
distfile_num = len(distfiles)
dir_size = distfile_num / dir_num

# allows adding files in the beginning without repartitioning
dirs = ["0"]
next_dir = dir_size
while next_dir < distfile_num:
    dirs.append(distfiles[round(next_dir)])
    next_dir += dir_size

print("/\n".join(dirs) + "/")

$ cat pick-distfiles-dir.py
#!/usr/bin/env python3
"""
Picks the directory for a given file name. Takes a distfile name as an
argument. Reads a sorted list of directories from stdin; the name of
each directory is assumed to be the name of the first file located
inside it.
"""
import sys

distfile = sys.argv[1]
dirs = sys.stdin.read().splitlines()

left = 0
right = len(dirs) - 1
while left < right:
    pivot = round((left + right) / 2)
    if dirs[pivot] <= distfile:
        left = pivot + 1
    else:
        right = pivot - 1

if distfile < dirs[right]:
    print(dirs[right - 1])
else:
    print(dirs[right])

$ # distfiles.txt contains all the distfile names
$ head -n5 distfiles.txt
0CD9CDDE3F56BB5250D87C54592F04CBC24F03BF-wagon-provider-api-2.10.jar
0CE1EDB914C94EBC388F086C6827E8BDEEC71AC2-commons-lang-2.6.jar
0DCC973606CBD9737541AA5F3E76DED6E3F4D0D0-iri.jar
0ad-0.0.22-alpha-unix-build.tar.xz
0ad-0.0.22-alpha-unix-data.tar.xz
$ # calculate 500 directories to split distfiles into evenly
$ cat distfiles.txt | ./distfile-dirs.py 500 > dirs.txt
$ tail -n5 dirs.txt
xrmap-2.29.tar.bz2/
xview-3.2p1.4-18c.tar.gz/
yasat-700.tar.gz/
yubikey-manager-qt-0.4.0.tar.gz/
zimg-2.5.1.tar.gz/
$ # pick a directory for xvinfo-1.0.1.tar.bz2
$ cat dirs.txt | ./pick-distfiles-dir.py xvinfo-1.0.1.tar.bz2
xview-3.2p1.4-18c.tar.gz/

Using the approach above, the files will be distributed evenly among the directories, while keeping the possibility of determining the directory for a specific file by hand. If necessary, the directory structure can be kept unchanged for a very long time and it will likely stay well-balanced. Picking a directory for a file is very cheap.

The only obvious downside I see is that it's necessary to know the list of directories to pick the correct one (this can be mitigated by caching the list of directories, if important). If it's desirable to make directory names shorter, or to make them look less like file names, that's fairly easy to achieve by keeping only the unique prefixes of the directories. For example:

xrmap-2.29.tar.bz2/
xview-3.2p1.4-18c.tar.gz/
yasat-700.tar.gz/
yubikey-manager-qt-0.4.0.tar.gz/
zimg-2.5.1.tar.gz/

will become

xr/
xv/
ya/
yu/
z/

Thanks for taking time to consider the suggestion.

--
Andrew
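For what it's worth, the directory-picking step is a textbook predecessor search over the sorted directory names, so it can also be written with Python's standard bisect module. This is a sketch of the same idea as pick-distfiles-dir.py, not a drop-in replacement for it:

```python
import bisect

def pick_dir(dirs, distfile):
    """Pick the directory whose name range contains distfile.

    dirs must be sorted; each entry is the name of the first file
    stored in that directory (trailing slashes are stripped for the
    comparison, matching the dirs.txt format above).
    """
    names = [d.rstrip("/") for d in dirs]
    # Rightmost directory whose first file sorts <= distfile;
    # clamp to 0 so files before the first bound land in dirs[0].
    i = bisect.bisect_right(names, distfile) - 1
    return dirs[max(i, 0)]
```

For example, with a small dirs list, pick_dir(["0ad-.../", "gcc-.../", "xview-.../"], "xvinfo-1.0.1.tar.bz2") lands in the xview directory, matching the transcript above, since "xvinfo" sorts after "xview".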
Flow is great for building interactive workflows right into the Salesforce user interface. This post discusses how to run a Screen Flow at the click of a button on a record detail page.

First, a reminder. There are two main types of Flow: Screen Flows and what Salesforce calls "Autolaunched" Flows. Autolaunched Flows aren't interactive. They run in the background based on some trigger: a record change, a platform event or a schedule. Screen Flows are meant to be used by users in user interfaces; they're 100% interactive. To support this, the Flow Builder provides all kinds of UI and screen-building capabilities for Screen Flows. So let's see how you can connect a Screen Flow to a button on a record detail page to automate a process.

Let's say we have a sales process at Acme Co where a business development rep (BDR) performs an initial discovery with a prospect. Acme is very ABM-focused, so BDRs use opportunities instead of leads. If their prospect is qualified, the BDR needs to assign the opportunity to an Account Executive (AE) who will then take over the sales cycle. To make this easy, we'd like our BDR to be able to press a button and assign the opp to an AE via round-robin.

Configure your action

Let's assume for now that you've got a Screen Flow that handles the round-robin assignment and it's all set up and activated. In our case, we've got a Screen Flow called "Assign Opportunity" that's ready to go. We'll cover a bit more about the Flow itself later in this post.

First, go to Setup > Object Manager and locate the object you want to add the button to. That's the Opportunity object in our case. On the left side of the screen, select "Buttons, Links and Actions" and then click "New Action". (Even though this will end up looking like a button in the UI, we're not going to use "New Button or Link". That's a different, older thing. Salesforce gonna Salesforce ¯\_(ツ)_/¯.)
The New Action screen looks like this: Change the Action Type picklist to Flow. The next field will change to a picklist of available Screen Flows. If you don't see your Flow in this list, go back and check that a) it's a Screen Flow, not an Autolaunched Flow and b) that you set it to active in the Flow Builder. Select your Flow. Enter a Label and Name, then click Save. Note that the label is the text the button will show in the UI, so keep it fairly short. Go to Page Layouts in the left nav and click into the page layout you're using for this record type: In the layout controls, select "Mobile & Lightning Actions". You should see your new action listed there. Drag it into the "Salesforce Mobile and Lightning Experience Actions" section. (If your "Salesforce Mobile and Lightning Experience Actions" section doesn't look like the above, you may need to click the little wrench icon to allow you to modify it.) Click Save. You're all set. Now it's time to see your action in... action. Navigate to a record detail page and you should see your action translate into a button in the Lightning UI like so: If you don't see the button, try the little dropdown arrow to the right of the buttons. Your action may be hiding in there. Click the button and your Flow will execute, showing you whatever screens you've designed in a dialog. Our Gradient Works Flow shows who the newly assigned AE is and looks like this: Once you click next, the dialog goes away and that's it. If that's all you needed, you can stop now. If you want to know a little more about how to build a Screen Flow that works like this, read on. Building the Screen Flow We skipped right to the button part and just assumed you have a working Screen Flow. If that's not the case, let's talk a little about how to set up your Flow for the use case above. Fair warning, the rest of this section uses Gradient Works Routing to do opportunity assignment in Flow.
If you don't have Gradient Works, some of the Flow assignment actions described won't be available to you. Our goal here is to make a Flow that assigns a new owner to the Opportunity record the user is viewing, so we need to make the Flow aware of the record it's operating on. You do this by creating an input variable in the Flow called recordId. Here's what that looks like: Make sure you name the variable recordId, set the Data Type to Text, and make the variable "Available for Input". When you set things up this way, Salesforce will automatically set this variable to the Id of the record the user is viewing when they click the button. Our Flow is actually pretty simple. It just needs to perform the following steps in order:

- Get the opportunity
- Assign the opportunity to an AE
- Show a screen with the assignment results
- Refresh the record detail page to show the new data

Here's what the completed Flow looks like: Let's look at each of the 4 elements in turn. Get Opportunity is a Get Records element and it uses the recordId variable we created to fetch the Opportunity that the user is viewing. Here's what the Get Records config looks like: Assign Opportunity uses a Gradient Works custom Flow Action that allows you to do adaptive round-robin assignment to a group of users. This action takes an "item" (the Opportunity record retrieved by our previous Get_Opportunity action in this case) and the name of a Gradient Works Queue ("AEs") where the pool of potential users has been defined for assignment. Here's the configuration for the action: One cool thing about this action is that it outputs an Assignment that contains info about who was assigned to the record. In this case (not shown), we store that output in a variable called Assignment so we can use it in the next step. The next step is all about showing what was assigned to whom in a dialog.
Here's what that Screen Element setup looks like: We use the Assignment output from the Assign Opportunity step to display a message to the user using merge fields. Finally, we have one more step which is a little odd. We need to explicitly tell the record detail page to refresh. If we don't, it won't be obvious to the user that the owner has changed. The last action is a Gradient Works action that forces a refresh to occur and looks like this: And that's it. Those 4 elements create a Screen Flow that does a round-robin assignment of an Opportunity to a group of AEs and then displays the result to the user. With some tweaks, the assignment logic could become much more powerful with specific checks for Opportunity stage and assigning users from multiple round-robin queues. With a few clicks, you can quickly add an action button on a record detail page that your users can use to kick off context-aware, interactive Screen Flows and get more done. While our Screen Flow used Gradient Works, your Screen Flow can be any workflow you want to automate. You may also want to check out our Flow cheatsheet - a quick reference guide to help you work through almost any Flow issue.
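For readers curious what "round-robin" means mechanically: the pool of AEs is cycled through so that each new opportunity goes to the next user in line. A toy sketch in Python follows; the class and field names are hypothetical illustrations, not the Gradient Works or Salesforce API:

```python
from itertools import cycle

class RoundRobinQueue:
    """Toy round-robin assigner: hands out users from a pool in rotation."""
    def __init__(self, users):
        self._users = cycle(users)  # endless iterator over the pool

    def assign(self, record):
        # Pick the next user in rotation and make them the record owner.
        owner = next(self._users)
        record["OwnerId"] = owner
        return owner

aes = RoundRobinQueue(["alice", "bob", "carol"])
opps = [{"Name": f"Opp {i}"} for i in range(5)]
owners = [aes.assign(opp) for opp in opps]
print(owners)  # ['alice', 'bob', 'carol', 'alice', 'bob']
```

"Adaptive" round-robin, as offered by Gradient Works, additionally weighs things like current workload; the rotation above is the simplest possible version of the idea.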
Graduation gifts for software engineers. Analytical minds do think alike, and it’s the same type of intellectuals that thought up this fantastic set of gifts for engineers. An examination of the software development process from the initial requirement analysis to the operation and maintenance of the final system. Gifts for software engineers 2021. Software engineer resume guide for 2021. Work closely with mechanical, electrical and signal integrity (SI) engineers to build a desktop software solution to test and verify the high-speed electrical performance of 56 Gbps interconnect solutions. Software development engineer in test: This desk tidy will instantly make heads or tails out of their desk, and is perfect for engineers that tend to get a bit messy during the middle of a big project. Software development in 2021 and beyond. When you apply for an engineering job at a company the first person looking at your resume is, well, not a person. Perfect for anyone currently studying engineering, this book introduces the reader to all the fundamental elements of. This requires extensive knowledge of software design, computer programming languages, such as Python and Java, as well as the operating systems—like Unix and Linux—they work with. Gifts for engineers don’t have to be electronic items made of plastic or metal. In fact, an interesting gift like this nautical crush trading air. But this is an especially fun gift if the programmer in your life has some little ones they want to share their passions with. Computer software engineer, data analyst, computer network architect We’re just most inclined to find the beauty in creation and science. Software engineering project ideas must be constantly updated every year as per evolving technology. Cube magnet that is fun to build things with.
Best online software engineering degrees 2021: We are here to help you at this point, we update this page with innovative software based project ideas to be used by engineering students as their final year projects. Software engineering, also known as software architecture, involves analyzing specific needs and creating the tools required to build the software to meet those needs. Soy candle / software developer gift / programmer gift / graduation gift / gifts for software engineer huffdesigns 5 out of 5 stars (2,904) $ 19.97. According to the indeed report, the average software development rate in the united states is $110,539 per year. Companies use something called an applicant tracking system (ats) to filter out a majority of applicants for a role based on keyword matching. It’s basically the gift that keeps on giving. The diy mini robot arm kit is normally priced at $1,690. 12 best gifts for engineers: But ahead of black friday, you can save $381 using the code save15nov. 101 things i learned in. The space engineers ultimate edition 2021 includes all of the dlcs for space engineers, all the decorative blocks and cosmetic items released in 2019, 2020 and 2021. Ultimate edition 2021 keen software house 12+ moderate violence. Create and execute test plans, conduct software reviews, perform system verification and validation, analyze and resolve failure modes and document results. 
[Image gallery: assorted gift ideas for software engineers and programmers, including t-shirts, mugs, posters, prints, and socks, such as a "99 Bugs In The Code" engineer mug, a "Turn Coffee Into Code" metal poster, a "6 Stages Of Debugging" print, and an "I Am Legacy Code Ninja" shirt.]
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using Helios.Common.Extensions;
using Helios.Common.Synchronization;

namespace Caasiope.P2P
{
    public class PeerManager
    {
        public Action<PeerSession> OnPeerConnected;
        public Action<PeerSession> OnPeerDisconnected;
        public Action<PeerSession, byte, byte[]> OnReceived;

        private readonly MonitorLocker locker = new MonitorLocker();
        private readonly Dictionary<PersonaThumbprint, PeerSession> sessionsById = new Dictionary<PersonaThumbprint, PeerSession>();
        private int channels;

        public void Initialize(int channels)
        {
            this.channels = channels;
        }

        public PeerSession GetOrCreatePeerByPersona(Persona persona, IPEndPoint server)
        {
            using (locker.CreateLock())
            {
                return sessionsById.GetOrCreate(persona.Thumbprint, () => CreatePeer(server, persona));
            }
        }

        private PeerSession CreatePeer(IPEndPoint server, Persona persona)
        {
            var peerSession = new PeerSession(new Peer(new Node(server), persona), channels);
            peerSession.OnConnected += () => OnPeerConnected(peerSession);
            peerSession.OnClosed += () => OnPeerDisconnected(peerSession);
            peerSession.OnReceived += (channel, data) => OnReceived(peerSession, channel, data);
            return peerSession;
        }

        public void Broadcast(IChannel channel, byte[] data)
        {
            // TODO too slow
            using (locker.CreateLock())
            {
                foreach (var peer in sessionsById.Values)
                {
                    peer.Send(channel, data);
                }
            }
        }

        public List<PeerSession> GetAllConnected()
        {
            // TODO too slow
            var result = new List<PeerSession>();
            using (locker.CreateLock())
            {
                result.AddRange(sessionsById.Values.Where(_ => _.PeerState == PeerState.Connected));
            }
            return result;
        }
    }
}
Java Concurrency Tutorial HowToDoInJava The Java platform is designed from the ground up to support concurrent programming, with basic concurrency support in the Java programming language and the Java class libraries. Since version 5.0, the Java platform has also included high-level concurrency APIs. This lesson introduces the platform's basic concurrency support and summarizes some of the high-level APIs in the... Multithreading in Java is running multiple threads that share the same address space concurrently. A multithreaded program has two or more parts running simultaneously. Each part is called a thread and has a separate path of execution. Multithreading allows a process to run its tasks in parallel mode and execute these different tasks simultaneously. Introduction to Java Multithreading Core Java Tutorial Essential Java Multithreading Examples 3/06/2018 · Java MultiThreading Tutorial 01. Java is one of the most popular object oriented programming languages. We take a hands-on approach using a combination of JShell (an awesome new feature in Java 9). Concurrent Programming with Java Threads A thread, in the context of Java, is the path followed when executing a program. All Java programs have at least one thread, known as the main thread, which is created by the Java Virtual Machine (JVM) at the program’s start, when the main() method is invoked with the main thread.
Java Multithreading - Learn Java in simple and easy steps starting from basic to advanced concepts with examples including Java Syntax, Object Oriented Language, Methods, Overriding, Inheritance, Polymorphism, Interfaces, Packages, Collections, Networking, Multithreading, Generics, Multimedia, Serialization, GUI. Thread join() Method Example Java 9 Java Threads (Concurrent) vs. Fork/Join Framework (Parallel) • Using threads – when the task is relatively large and self-contained – usually when you are waiting for something, so it would benefit even if … 31/05/2015 · Java Multithreading and Concurrency Best Practices The sole purpose of using concurrency is to produce a scalable and faster program. But always remember, speed comes after correctness. Advanced Java Interview Questions For Freshers And Experienced - Multithreading in Java Roseindia - Multithreading Image Processing in Single-core and Multi - Java Concurrency Tutorial HowToDoInJava - Multithreading In Java Examples With Explanation Pdf - Multithreading in Java is a process of executing two or more threads simultaneously. In this tutorial, learn Concurrency, Thread Life Cycle and Synchronization in Java using example programs. - Multithreading in Java is a process of executing multiple threads simultaneously.
A multi-threaded program contains two or more parts that can run concurrently, and each part can handle a different task at the same time, making optimal use of the available resources, especially when your computer has multiple CPUs. The process of executing multiple threads simultaneously is known as - section on "Advanced Java" with explanation for various interview, competitive examination and entrance test questions. Solved examples with detailed answer descriptions and explanations are given, and they would be easy to understand. Tue, 09 Apr 2013 23:57:00 GMT Advanced Java - Interview Questions and Answers - Looking for SharePoint Interview Questions with Answers? Here we have compiled a set of questions
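The snippets above are Java-focused, but the core pattern every multithreading tutorial teaches (create threads, start them so they run concurrently, then join them before reading the results) looks much the same in any language. A minimal sketch, shown in Python for brevity; the idea maps directly onto Java's Thread.start() and join():

```python
import threading

results = [0] * 4

def work(i):
    # Each thread follows its own path of execution, as described above.
    results[i] = i * i

threads = [threading.Thread(target=work, args=(i,)) for i in range(4)]
for t in threads:
    t.start()  # start all threads; they run concurrently
for t in threads:
    t.join()   # wait for each to finish before reading the results

print(results)  # [0, 1, 4, 9]
```

The join() calls are the crucial part: without them the main thread might read `results` before the workers have written to it.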
Sparse checkout for repository subdirectories Sometimes there may be a very large repository in HG & Git in which one wants to open source only one certain directory, such as a sub-project or particular library. As far as I can tell, currently the only way to do this is to write a bunch of editors which delete all folders that aren't being open sourced. This is far from optimal, as fetching the other folders could be expensive operation, and writing scrubbers for very complicated (dozens of folders) repositories for every open source project is error prone. It would be easier to limit a repository to a certain path. Enhancement: Sparse checkout Include an optional "path" field in the repository JSON object in the config file: "internal": { "type": "hg", "url": "file:///home/mbethencourt/work/test_repos/hg_0", "project_space": "internal", "path": "subproject/libraryname" }, * Without a "path", MOE falls back to old behavior (operating on entire repo). * With a path, MOE tries to use "sparse" checkout feature. A "renamer" translator will still probably be used to make the layout match external layout. Sparse checkouts would be implemented a little differently for each client: * Git would use the sparse checkout feature ( http://blog.quilitz.de/2010/03/checkout-sub-directories-in-git-sparse-checkouts/ ) * SVN would use either "--non-recursive" or "--depth empty", and then "update" or "--depth infinite" the specified path * HG, unfortunately, does not seem to have a sparse checkout feature. It can still be imitated however by cobbling together a few commands. Original issue reported on code.google.com by<EMAIL_ADDRESS>on 23 Aug 2011 at 9:15
Since the update to the Windows 1803 version I am experiencing extremely reduced battery life on my XPS 9560. I cannot roll back to the previous version, so after some basic research I found that this issue is related to some specific SSD models, including a list of Toshiba and Intel drives. Mine (Toshiba THNSN5512GPUK) is one of them. I also found that Windows is aware of the issue and with one of the subsequent builds (KB4100403) they claim they resolved the problem, but in my case it persists. I also couldn't find any firmware update for the SSD or any other way to solve my issue, so I'm writing here to see if there are other users with the same problem and if anybody has solved it in some way. Any feedback is appreciated, thanks in advance.

Please verify you are using the latest firmware for your SSD, here is the latest one: Please also verify all Windows updates have been applied. No more ideas ;-) PS: mine was affected like yours, now this is a lot better but it still seems not as good as before Windows 1803 😞

First of all thanks for the reply. I think I have a different Toshiba SSD model because this firmware update doesn't appear when I search for drivers on the Dell website with my laptop code. Anyway, I have already tried, and when I select the drive for the update I cannot go on with the installation because "this drive is not supported with this application". I have also installed the latest update of the Windows 1803 version (KB4284835) a few days ago but nothing changed. I am not sure if they forgot to solve the issue for some SSD models or if I am still missing something, but it seems I have tried every option. Thanks again for the feedback!

Yes, this is the one. I have had it installed for some time now but the problem still persists. Thanks anyway, maybe someone with a similar problem will read this post and this solution can work for them.

I'm having similar issues with my XPS 9350.
I'm not noticing the battery drain, but ever since I did a clean install of 1803 my boot time is almost 4 minutes. Prior to installing 1803, my boot time was 15 seconds at most. I also have a Toshiba SSD and have installed all the latest patches. I'm not sure what else to try, other than to revert back to 1709. I haven't found any other suggestions either...

I have the same problem here! Since the April update I have overheating problems, the fans are enabled all the time and I can run 2h on battery (before I was at 6-7 hours...). I have the Toshiba SSD too...

I seem to have solved my problem. Luckily I had a backup of what I thought was Windows 1709, but doing a recovery I discovered it was already 1803, yet the problem wasn't there anymore. Battery life is fine even with the latest Windows updates. The firmware for the Toshiba SSD was also already updated, so that was not the problem either. I concluded it was some kind of driver; I couldn't find out exactly which one, probably some Intel chipset or thermal framework driver that I accidentally installed using third-party software like Driver Booster, which usually works fine but in this case made a total mess. Regarding the boot time, I never had such a problem, but it could also be somewhere in the SSD or energy management drivers. I cannot say, since I didn't experience the problem first-hand. Hope this can help users who find themselves in the same situation!
This post originally appeared on the Software Carpentry website. A two-day Software Carpentry workshop with Python was held at the University of Auckland on 11-12th July as part of Winter Bootcamp. For myself, it was the first time helping with Software Carpentry training. It was a great experience to assist as a collaborator, helping others resolve problems with software installation and hands-on exercises, but also learning from all the unexpected situations that can happen with different people around the room, with different laptops, configurations, and different sets of skills. We happily managed to start training at 9:00 on the 11th of July. The first topic covered was the Unix Shell, presented by Sina Masoud-Ansari. We warmed up with some useful exercises for real-life research or work. In the afternoon the Python session appeared to show its magic to attendees, with Prashant Gupta as the presenter. On 12th July we started the day with the presentation by Cameron McLean on Git, or “the lifesaver” as Cam himself defined it ;). During the afternoon participants were asked which topic they wanted to explore more deeply. Python was the winner of the survey, and as a consequence we enjoyed another great afternoon in its company. In addition to the SWC sessions, on Wednesday 13th of July a session on Research Data Management was held by Cam, with a different audience and a more theoretical approach. In general the participants had some experience with the topic, which made for an engaged group, generating some discussions and more knowledge to share among all. An invitation to hacky-hour on Thursdays was extended to the participants of all the workshops and happily it was a total success! To finish, I should say that it was a very gratifying experience to be part of this event, helping, empowering, learning and motivating others to be part of this exciting world of software development in research environments where world transformation could start!
Some remarkable facts:

- Software Carpentry training was held in the context of the Winter Bootcamp at the University of Auckland.
- 75% of participants had no programming experience or only basic skills; however, they managed to get through all the sessions smoothly.
- Approximately 40 attendees over the three days.
- Some attendees were so excited about learning new skills that they decided to start new online courses using websites such as Codecademy and Coursera.
- On Thursday 14th, after the workshops, hacky-hour was packed; we needed to add three more tables than usual to make space for all the new participants.
- Helpers around the room (we were 4 and sometimes 5) were essential to maintain the pace of the sessions. All the problems with broken exercises and software were handled by us, allowing the presenter to focus on the topic and manage the time better.
- The learners were very active, asking questions and excited to see the new knowledge in action :)
- Next time, we are thinking of including an exercise that covers all the topics by solving a real-life problem, finishing the workshops with an example that participants can consult later.
In the modern digital age, the significance of Natural Language Processing (NLP) has grown exponentially. This revolutionary field of artificial intelligence empowers computers to understand, interpret, and generate human language. From chatbots providing customer support to sentiment analysis in social media, NLP’s applications are far-reaching and diverse. In this article, we will delve into the transformative power of NLP, exploring its uses, challenges, and the potential it holds for shaping the future. Unleashing NLP’s Potential At its core, NLP is about bridging the communication gap between humans and machines. The ability to decipher human language, whether spoken or written, opens the door to numerous possibilities. Imagine a world where language is no longer a barrier for accessing information or services. NLP-driven language translation systems have already made significant strides in breaking down these barriers, making global communication more seamless than ever before. Revolutionizing Customer Interaction Businesses are harnessing the power of NLP to enhance customer experiences. Chatbots and virtual assistants are becoming increasingly sophisticated, capable of understanding context, intent, and even emotions. This transformation has revolutionized customer interaction, providing instant support and freeing up human resources for more complex tasks. As a result, organizations can offer round-the-clock assistance, delivering enhanced customer satisfaction. Insights from Text Data NLP’s prowess extends to the analysis of vast amounts of text data. Sentiment analysis, a key application, allows businesses to gauge public opinions and reactions. By examining social media posts, reviews, and comments, companies can gain invaluable insights into their products, services, and brand perception. This knowledge empowers them to adapt and cater to customer needs more effectively. 
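As a toy illustration of the lexicon-based flavor of sentiment analysis (production systems use trained models; the word lists below are invented for the example):

```python
POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "terrible", "hate", "poor"}

def sentiment(text):
    """Score text by counting positive vs. negative words from a tiny lexicon."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this product, it is excellent"))  # positive
print(sentiment("terrible support and bad experience"))   # negative
```

Even this crude word counting hints at how a company could aggregate thousands of reviews into an overall opinion signal; real NLP models add context, negation handling, and learned weights.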
Challenges and Ethical Considerations While NLP offers immense potential, it also comes with challenges. Ensuring privacy and data security is paramount, especially as NLP algorithms handle sensitive textual information. Additionally, addressing biases within NLP models is essential to avoid perpetuating societal inequalities. As AI systems learn from human-generated data, they can inadvertently replicate biases present in that data. Ethical considerations surrounding NLP’s use, transparency, and accountability are vital for its responsible deployment. A Glimpse into the Future The journey of NLP is still unfolding, with continuous advancements on the horizon. As technology evolves, we can expect even more accurate language understanding, improved contextual comprehension, and enhanced language generation. This opens doors for creative content generation, interactive storytelling, and personalized experiences that were once the stuff of science fiction. Natural Language Processing has already transformed how we interact with technology and each other. From simplifying language translation to enabling powerful sentiment analysis, NLP’s impact is profound. As this field progresses, it holds the promise of making information accessible to all, revolutionizing customer service, and shaping the future of communication. However, it’s essential to navigate the challenges responsibly and ensure that the power of NLP is harnessed for the betterment of society as a whole.
LOD 2020 Big-Data Challenge Our sponsor, Neodata Lab, will offer a prize of €2000 to the applicant who develops the most accurate algorithm to process the following problem. The main goal is to segment/profile internet users, using the actions they made on web sites. Users and actions - Users are identified by a specific id; - each user can perform different kinds of actions (action type) while they surf the internet: - pageview (view a web page); - impression (view an advertising on a web page); - click (click on an advertising on a web page); - conversion (reach the final goal of an advertising, e.g. buy a product on an e-commerce); - each action is described by a set of action attributes, e.g. timestamp, device, location, url. - a segment is a set of users that have something in common; - “something in common” means that they all match a given list of conditions; - therefore a segment can be considered as a rule/business logic, that is a set of conditions; - conditions are defined using the action attributes, e.g. ” users viewing web pages with url containing the word ‘pizza’ “; - conditions can also include the number of actions (frequency), e.g. ” users viewing web pages with url containing the word ‘pizza’ at least 5 times “; - conditions can be combined in AND and in OR; - all the actions of a given time period (longevity) are used to check the conditions. We provide the following initial data: - 1k segments; - 100M actions made by 10M users in 30 days. We provide the following additional data: - 10M new actions in 3 days, to be considered as produced in real-time. There are two goals: - Batch: to assign the users, who performed the 100M actions of the initial dataset, in the proper segments, if any (with the highest possible accuracy and the shortest time). 
- Real-time: for each action provided as additional data, update in real-time the set of segments to which the user who performed that action belongs to or the ones from which he should be removed (if any), considering also all the actions made in the 30 days before. - use the standard Apache Hadoop frameworks; - real time streaming: kafka - processing: spark on hadoop cluster - storage: hbase and hdfs - A set of 1K user segment definitions are provided with complex rules, to make the solution of the problem similar to real cases. - A data flow should be simulated using the additional data to provide the input to a Kafka stream at the time interval indicated by the timestamp of the actions. - Submission of full system and abstract: Monday May 4, 2020 - Notification of Acceptance: Monday May 25, 2020 - Challenge presentation: July 19-23, 2020 (final date to be defined) - Contact: firstname.lastname@example.org
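To make the rule format concrete, here is a small sketch of how a single frequency condition might be evaluated against one user's actions. The data layout and field names here are invented for illustration; the real datasets, rule format, and the Spark/Kafka pipeline are defined by the challenge organizers:

```python
def matches(actions, condition):
    """Check one condition: at least `min_count` actions of a given type whose
    attribute contains a keyword (e.g. pageviews with 'pizza' in the url)."""
    hits = sum(
        1
        for a in actions
        if a["type"] == condition["action_type"]
        and condition["keyword"] in a.get(condition["attribute"], "")
    )
    return hits >= condition["min_count"]

user_actions = [
    {"type": "pageview", "url": "http://example.com/pizza-recipes"},
    {"type": "pageview", "url": "http://example.com/pizza-dough"},
    {"type": "click", "url": "http://ads.example.com/shoes"},
]
rule = {"action_type": "pageview", "attribute": "url",
        "keyword": "pizza", "min_count": 2}
print(matches(user_actions, rule))  # True
```

Conditions combined in AND and in OR then reduce to `all(...)` / `any(...)` over such checks; the real difficulty of the challenge is evaluating 1k segments over 100M actions in Spark, and incrementally over the Kafka stream, rather than in a Python loop like this.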
Crash Course 2: Invaders - Part 8 by Hectate (Updated on 2015-02-01)

Part 8: Making the Ship Move

We've added an Event to the scene to play our music; now let's add an Event to our player's Ship so they can move it around!

Step 48: Click on the Dashboard tab, then on Actor Types, and select our Ship Actor Type. Now that we're in the Actor Editor and can edit our Ship, click the Events tab.

Step 49: What we are looking at is Design Mode again, this time for the Ship. We're going to create an Event that will allow us to move our Ship both left and right in the scene by pressing keyboard keys. To do this, we need to specify what happens when the player presses certain keyboard keys AND what happens when the player isn't pressing anything. To start, click the + Add Event button in the Events pane on the left, select Basics, and then choose When Updating from the three options shown. The following block will appear in the work area:

Step 50: This time, rather than dragging a block over from the Palette, we're going to use the Block Picker. Right-click anywhere in the workspace area, mouse over the Place a Block option, then over the Flow category, then click on the if block. That block will appear in the work area where you first right-clicked.

Step 51: We drag the if block into the Always block so it snaps in place.

Step 52: Now we need to select the right block to go inside the hexagonal field in our If block. Click the empty hexagonal field (this shape is always for a boolean, i.e. a value that can be true or false, in Stencyl) in the block and a new dialog will pop up. Mouse over the User Input category, and then choose the Control is Down block. The Control is Down block will appear for us in the If block, as shown below.

Step 53: The Always block will constantly run through all the logic inside of it while the game runs.
Thus the if block inside it constantly checks, while the game runs, whether its boolean condition is true, allowing or disallowing the code it contains to run. As a result, our Control is Down boolean detects whether a control key is pressed to decide if the code inside the If block runs. We do, however, need to choose which control key to check. Click the Control dropdown on the block and select Choose Control. From the dialog that appears, choose the right option, as shown. Again, the block will change to reflect this choice.

Step 54: Next, go to the Actor category of the palette, select the Motion sub-category, and find the Set X Speed to for Self block, as shown below. This block will allow us to control the ship's speed. Drag it over to the work area and snap it inside the if block's empty space so it looks like the image below.

Step 55: We now need to create an Attribute that we can use to adjust the value for x-speed. By doing this we won't need to edit the speed value directly on the block in our Event every time we want to tweak it. To do this, click the Attributes category on the Palette, then click the Create an Attribute button. In the dialog that appears, give our new attribute a Name of "Ship Speed" and set the Type to "Number". Click OK to continue. You now have a blue block for the Ship Speed Attribute that you can set to different numeric values. We'll show you how to set its value later when we complete this Event.

Step 56: Click on the empty number field in the set x-speed to [ ] for [Self] block, select Attributes in the pop-up dialog, and choose the one we just created, Ship Speed. As you would expect, the block will appear in the field.

Step 57: Next, go to the Flow category, Conditions sub-category in the Palette and drag the otherwise if block over to the work area. Snap the otherwise if block under the if block inside the Always Event block, as shown.
Step 58: Now we need to set up what happens when the user presses the left key instead of the right key. Get another Control is Down block and set it to left. We also need another set x-speed to [Ship Speed] for [Self] block. Once we have our second x-speed block with Ship Speed inside it, we need to make a slight change so the Ship moves left. Because positive speed values move an actor to the right, we have to use a negative value to make the actor move left instead. To do this, insert a negate block (it's in the Numbers & Text category, Math sub-category) into the field where Ship Speed would go, then place Ship Speed inside the negate block. This block takes the number inside of it and makes it negative, so our Ship Speed works for moving left without needing a second number.

Step 59: If we were to set Ship Speed to a value and test now, we could press left and right to make our ship move in those directions at the desired speed. However, we'd quickly notice an issue; it won't stop moving! This is because our logic sets the x-speed to a value, but never resets it to 0 when no key is being pressed. To fix this, we need to add one more conditional block. Find and insert another otherwise if block below the otherwise if block shown previously. Then add the < > and < > block (under Flow -> Conditions on the Palette) to the empty hexagonal field.

Step 60: In the [ ] and [ ] block, we click on the empty field, select Comparison from the dialog, and then the not < > block. We need to do this for both empty fields. We should get the following:

Step 61: Add both a [right] [is down] and a [left] [is down] control block to each empty field.

Step 62: Grab the set x-speed to [ ] for [Self] block again, click the [x-speed] dropdown on the block, and set the value in the field to 0. Then snap it in place. Here's the complete Event.

Step 64: The last thing we need to do is set the Ship Speed value.
Click on the Attributes tab at the bottom of the entire Palette. Then set the Ship Speed Attribute's Default Value to 20.

Step 65: Test the game to make sure the Ship moves left and right and stops when you let go of a key.

Now that you're more familiar with Design Mode, it's time to move on to more Events. Our ship can move left and right, but there's nothing to stop it from disappearing off screen if you attempt to move it past the screen's edge. Our next task in Design Mode will restrict the Ship's movement.
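Stencyl's blocks compile to real code behind the scenes, so it can help to see the finished movement event as plain logic. Here is a sketch written in Python purely for illustration (Stencyl itself does not use Python); the function mirrors the if / otherwise-if chain we just built.

```python
SHIP_SPEED = 20  # the Number attribute we created, with its default value of 20

def update_x_speed(right_down, left_down):
    """One frame of the Always event: pick the ship's x-speed from key state."""
    if right_down:                          # if [right] [is down]
        return SHIP_SPEED
    elif left_down:                         # otherwise if [left] [is down]
        return -SHIP_SPEED                  # the 'negate' block
    elif not right_down and not left_down:  # otherwise if [not right] and [not left]
        return 0

print(update_x_speed(True, False))   # → 20
print(update_x_speed(False, True))   # → -20
print(update_x_speed(False, False))  # → 0
```

Note how the final branch is what makes the ship stop: without it, the x-speed keeps whatever value the last keypress gave it.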
Distroname and release: Debian Squeeze

Serial Console

Ever experienced not being able to connect (for example with SSH) to a remote host that has no keyboard or monitor connected? If so, you might want to continue reading. With a serial console setup, it is possible to connect to the host from a workstation using a com/com adapter. We actually do not need to install anything here, which is very good. We just need a com/com adapter, and we are ready to go.

Edit grub

Edit grub's conf file, menu.lst, and add the following to the top of the file.

/boot/grub/menu.lst
serial --unit=0 --speed=9600
terminal --timeout=10 serial console

Next we will change the boot parameters so that grub starts with the serial port listening. We will append console=tty0 console=ttyS0,9600 to the end of the kernel line:

/boot/grub/menu.lst
title Debian GNU/Linux, kernel 2.6.8-2-386 (VT)
root (hd0,0)
kernel /vmlinuz-2.6.8-2-386 root=/dev/hda3 ro console=tty0 console=ttyS0,9600
initrd /initrd.img-2.6.8-2-386
savedefault

Now we are done setting up grub. Save the file and quit the editor.

Setting up inittab

We will have to set up inittab to decide on which runlevels the new virtual console should listen. Uncomment or add the following line in /etc/inittab:

/etc/inittab
T0:23:respawn:/sbin/getty -L ttyS0 9600 vt100

After saving the changes, reboot the computer. When booting the kernel with the console parameters, it should come up with the following message. This is actually the timeout we specified in the menu.lst file.

Press any key to continue.....
Press any key to continue.....
Press any key to continue.....

Just let it be, and it will continue to boot! If you press a key on the server, the boot screen will show there.
If you instead press a key on the client connected over serial, it will show up on the client! We have set up our inittab to load this every time, so we can always access it; otherwise we would have to intervene to get a tty on the serial port.

Connecting with clients

From a workstation, for example a laptop, you can now connect to this machine when the cable is plugged in. I usually use GTKTerm, which is a GTK application. Another very cool trick is to use the screen application together with ttyS0, like this:

$ screen /dev/ttyS0 9600

This will open the tty on our other machine (make sure the baud rate matches the 9600 we configured in grub and inittab). If you are using GTKTerm, it is sometimes needed to press Enter when the client is open to get to the tty.
Here are some tips and tricks for the online browser game Disney Animal Kingdom Explorers, available on Facebook. Refer here to learn the basics and how to play the game in an easier and more comfortable way. By understanding the cycle below, you can boost your XP and Reputation gain as you go through the game.
- Energy > to get XP > to unlock items with higher Reputation > to unlock new scenes!
- New scenes > higher XP gain > higher Reputation gain!
- Needed to enter scenes or stages and gain Silvers and XP!
- More XP can be gained by decorating your preserve with animals, plants, habitats and more.
- More Reputation means more places to travel.
- Note: Keep in mind that if you sell or store animals or items from your Preserve, the reputation for those items is deducted from your total Reputation!
Your Total Score
- Skill Bonus - No Hints used!
- Time and Skill Bonuses will be added after finishing a scene.
- Base Score - Number of Objects Found
- Chain Bonus - Chain Combos (find all animals/items without delay).
- Base and Chain Bonuses are automatically calculated while playing.
How to Get into the High Score Ranks
- You need to find and click all the objects asked for as fast as you can, while keeping your chain combos up and without using any Hints.
- This might require you to revisit the scene to practice and familiarize yourself.
- Use Hints only if you really can't find an animal or item, or to familiarize yourself.
- Note: Using Hints will decrease your overall score!
- Available from level 1 - Refills over time. Finds one object in scenes.
- Unlock at level 5 - Use to easily search for hidden objects; when your goggles are over a hidden object, the
- Unlock at level 10. Can be purchased with gold or gifted by friends.
- Unlock at level 15. Can be purchased with gold or gifted by friends.
- Available from level 1 - When you are asked to locate animals in scenes, you will see question mark icons next to
- Visit your neighbors' preserves every day to get free XP, Silvers and Energy!
- Send free gifts to your friends and/or neighbors!
- Collect shared rewards from your friends' Facebook walls.
- Serves as your game walkthrough and tutorial. Highly recommended for beginners.
- Complete quests to unlock new scenes, Habitats, set items and more.
Building and Expanding
- Place Animals or Objects that will yield the best reputation result per space.
- For more info, refer here: LINK HERE SOON!
- Stop building if you don't have any quest that requires you to unlock a particular scene, to conserve your Silvers and preserve space.
Animal Synergy NEW!
- Extra bonus Reputation will be gained if two similar animals are near each other!
- Examples: Giraffe and Baby Giraffe; Flamingo and Flamingo Pond.
Habitats
- Are your main Reputation booster. Build one as soon as possible and expand them to gain more Reputation! Complete quests to unlock Habitats.
- Requires set items. You can ask for set items from your friends by clicking "Ask Friends".
- Requires friends and neighbors to build habitats. You can also use Gold to finish instantly.
- You can also add more pieces to the Habitat by gaining special gift items from friends.
- Place animals near the Habitats to gain more Reputation Bonus! NEW!
- Watch out though, you might not want to put a gazelle next to a lion habitat! NEW!
Preserve Space and Grid Expansion
- To expand the size of your Preserve, open the Shop and click on the Expansions tab.
- Alternatively, you can also click any locked grid space to open the expansion window.
- If you need more space, sell animals or objects that give the least reputation.
- If you want to replay a scene, choose the newest scene to get the highest XP.
- Your energy refills after every level up, so it's better to use it all up before gaining any XP (from new items placed, upgrades or neighbor visits).
- It is best to play in fullscreen (some monitors or HDTVs brighten up dark areas).
Practice Accelerator Search, Social, Portals & Collaboration (V3)

The SharePoint 2013 PA [Level 300] takes partners through an intense 1-day session where they will learn the latest information regarding planning considerations, leading implementation practices, and recommendations for deploying SharePoint 2013 for their customers. It will be delivered via presentations, open discussions and demos focused on helping the partner better understand the breadth of capabilities the latest version of SharePoint has to offer their customers.

Prerequisites:
• Understanding of SharePoint 2007/2010 architecture design
• Understanding of SharePoint features and functionalities
• Deployment of SharePoint 2007/2010
• Administration of SharePoint 2007/2010
• Completed the SharePoint Ignite training at http://technet.microsoft.com/en-us/sharepoint/fp123606
• Completed the SP2013 IT-Pro training materials at http://msdn.microsoft.com/en-us/sharepoint/Fp123606

The following points will guide the key modules of the delivery (Session Agenda):
• Architectural changes and enhancements introduced in SharePoint 2013: This part will cover the key changes to areas such as system requirements, service applications, authentication models, and Office Web Applications.
• Upgrade to SharePoint 2013: This will walk partners through a real-world upgrade scenario and also cover leading practices around formalizing a repeatable process for their customers.
• Search in SharePoint 2013: Search in 2013 has been re-written to combine multiple products and simplify the SKU catalog. In this module we will cover the implementation considerations and the features included with the latest release.
• Social in SharePoint 2013: With the significant investment made to drive more social capabilities into SharePoint 2013, this will cover how the new capabilities will benefit end users and drive adoption.
• ECM/WCM in SharePoint 2013: This module will cover the improvements made in the areas of E-discovery and the Managed Metadata service. It will also include examples of how to leverage the improvements to benefit customers.

The Microsoft SharePoint 2013 PA is a Live Virtual Instructor-Led Training Delivery which enables our partners to introduce or enhance their practice with SP2013. The content included in this PA requires experience with previous versions of SharePoint and will be delivered at a 200-300 level of technical depth. Upon completion, partners will be able to help their customers reduce costs, improve operational efficiencies, and drive business agility by delivering a solution that takes advantage of industry-leading architectures and principles.

Date: March 3rd, 2014
Time: 10:00am – 6:00pm
#!/usr/bin/env python
'''
Routines in this file are used for visualizing data.
'''
import os, sys, time
import socket
import commands

# Seems like we have some problem with matplotlib
import matplotlib
matplotlib.use('TkAgg')
from matplotlib.pyplot import *
from scipy import *
from scipy.signal import *
from scipy.linalg import *
from scipy.interpolate import spline
from scipy.ndimage import *
from scipy.special import *
from numpy import fft
import Image


class DiffDatahandle(object):
    '''Class to handle the data for visualizing image difference'''

    def __init__(self, fname):
        '''Initialize the class by loading the data'''
        self.data = load(fname)

    def view(self, x, y):
        '''View the variation of the difference for the given x and y coord'''
        plot_data = self.data[x, y, :]
        clf()
        plot(plot_data)
        show()

    def multi_view(self, coord_list):
        '''View variation of difference at multiple points'''
        clf()
        for coord in coord_list:
            x, y = coord
            plot_data = self.data[x, y, :]
            plot(plot_data, label='(%d, %d)' % (x, y))
        legend()
        show()


class AttitudeHandle(object):
    '''Class to visualize attitude of the camera. We only look at in
    plane translation and Z axis rotation
    '''

    def __init__(self):
        # Initialize matplotlib interactive plot
        ion()
        # Initialize all the constants
        self.host = '192.168.151.1'  # TCP Host
        self.port = 1991             # TCP Port
        self.bufsize = 1024          # TCP buffer size
        self.t = 10e-3               # Sampling time
        self.g = 9.8                 # Acceleration due to gravity
        self.alpha = 0.90            # LPF constant
        self.window = 10             # Window for velocity calculation
        # Data arrays
        self.xaccel = [0]
        self.yaccel = [0]
        self.zaccel = [0]
        # Old velocity values.
        self.vx = 0; self.vy = 0
        # Acceleration due to gravity
        self.gx = 0; self.gy = 0; self.gz = 0
        # plots
        self.ax_orient = subplot(2, 1, 1, polar=True)
        self.ax_orient.set_rmax(2.0)
        self.ax_orient.grid(True)
        self.ax_pos = subplot(2, 1, 2, polar=False)
        self.ax_pos.set_xlim(-1, 1)
        self.ax_pos.set_ylim(-1, 1)
        self.ax_pos.grid(True)

    def connect(self):
        # Connect to the mobile
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        print 'Waiting for a client'
        s.bind((self.host, self.port))
        s.listen(1)
        self.conn, self.addr = s.accept()
        print 'Connection address:', self.addr

    def start_log(self):
        # Start the logging process. Update as and when we get a new
        # value
        start_plot = False
        while True:
            data = self.conn.recv(self.bufsize).replace('\x00', '')
            # See if we need to stop the logging.
            if 'EDLG' in data:
                print 'Stopping data log'
                self.conn.close()
                print 'Connection closed'
                return 0
            if 'STLG' in data:
                start_plot = True
                print 'Started logging data'
            if start_plot:
                ac_tokens = data.split(';;')
                for i in ac_tokens[:-1]:
                    # The first value might be wrong. Try and catch it
                    try:
                        acx, acy, acz = i.split(';')
                        if abs(float(acx)) > 2:
                            continue
                        self.gx = (self.alpha*self.gx +
                                   (1-self.alpha)*float(acx))
                        self.gy = (self.alpha*self.gy +
                                   (1-self.alpha)*float(acy))
                        self.gz = (self.alpha*self.gz +
                                   (1-self.alpha)*float(acz))
                        self.xaccel.append(float(acx) - self.gx)
                        self.yaccel.append(float(acy) - self.gy)
                        self.zaccel.append(float(acz) - self.gz)
                        #self.update_plots()
                    except ValueError:
                        pass
                theta = 180 - arctan2(self.gy, self.gx)*180/pi
                if theta > 180:
                    print 360 - theta
                else:
                    print theta
                self.update_plots()

    def update_plots(self):
        '''Update the plot. The plot consists of orientation and position'''
        theta = arctan2(self.gy, self.gx)
        # Clear the plots first.
        self.ax_orient.clear(); self.ax_pos.clear()
        # Set the axes again.
        self.ax_orient.set_rmax(2.0); self.ax_orient.grid(True)
        self.ax_pos.set_xlim(-0.2, 0.2); self.ax_pos.set_ylim(-0.2, 0.2)
        self.ax_pos.grid(True); self.ax_pos.set_aspect('equal')
        # Plot data
        self.ax_orient.plot(theta, 1.0, 'bo')
        vx = cumsum(self.xaccel)[-1]*self.g*self.t
        vy = cumsum(self.yaccel)[-1]*self.g*self.t
        self.ax_pos.plot(vy - self.vy, self.vx - vx, 'bo')
        self.vx = 0.9*vx + (1-0.9)*self.vx
        self.vy = 0.9*vy + (1-0.9)*self.vy
        draw()


if __name__ == '__main__':
    ahandle = AttitudeHandle()
    ahandle.connect()
    ahandle.start_log()
(Laravel) List of Content Management Systems Built with Laravel, with Source Code

Awesome Content Management Systems (CMS) built with Laravel, free.

1. SleepingOwl Admin
SleepingOwl Admin is an administrative interface builder for Laravel. Sample screenshot:
Github: https://github.com/LaravelRUS/SleepingOwlAdmin

2. October
October is a Content Management System (CMS) and web platform whose sole purpose is to make your development workflow simple again. It was born out of frustration with existing systems. We feel building websites has become a convoluted and confusing process that leaves developers unsatisfied. We want to turn you around to the simpler side and get back to basics.
Github: https://github.com/octobercms/october

3. PyroCMS
PyroCMS is an easy to use, powerful, and modular CMS and development platform built with Laravel 5.

4. LavaLite
Multilingual PHP CMS built with Laravel and bootstrap.
Github: https://github.com/LavaLite/cms
- PHP >= 7.1.3
- OpenSSL PHP Extension
- PDO PHP Extension
- Mbstring PHP Extension
- Tokenizer PHP Extension
- Memcached or Redis
- XML PHP Extension
Github: https://github.com/typicms/base

5. Asgard CMS
A modular multilingual CMS built with Laravel 5.
Github: https://github.com/AsgardCms/Platform

6. Microweber
Microweber is a new generation content management system that allows you to create a website using drag and drop. You can easily manipulate the content and the layout of your pages. No coding skills are required.
- HTTP server (Apache, IIS, nginx, etc.)
- Database server
- PHP >= 5.6 or HHVM. The following only apply to PHP, as they're included in the HHVM core.
- lib-xml must be enabled (with DOM support)
- GD PHP extension
- Mcrypt PHP extension
Github: https://github.com/microweber/microweber

7. Coaster CMS
The repository for Coaster CMS (coastercms.org), a Laravel-based Content Management System with advanced features and Physical Web integration. We aim to make Coaster CMS as feature rich as possible. Built upon the Laravel PHP framework, Coaster CMS is both fast and secure.
Create beautiful content with TinyMCE and take a look into the future with the Internet of Things.
- Built with Laravel 5 (v5.5)
- Responsive file manager
- WYSIWYG editor
- Block-based templating system
- Beacon support

8. Grafite CMS
Add a CMS to any Laravel app to gain control of: pages, blogs, galleries, events, custom modules, images and more. Grafite CMS is a full-fledged CMS that can be added to any Laravel application. It provides you with full control of things like pages, menus, links, widgets, blogs, events, FAQs etc. Grafite CMS comes with a module builder for all your custom CMS needs, as well as module publishing tools. So if you decide to reuse some modules on future projects, you can easily publish their assets seamlessly. If you wish to make your Grafite CMS module into a PHP package, then you will need to have it publish its assets to the cms/modules directory.
- PHP 7.1.3+
- MySQL 5.7+
Github: https://github.com/GrafiteInc/CMS

9. Borgert CMS
A simple CMS to start projects in Laravel, containing some modules: Blog, Pages, Products, Mailbox, Image Gallery, Log Viewer and Users.
Github: https://github.com/borgert-inc/borgert-cms

10. PJ Blog
This is a powerful blog; I try to make the blog more beautiful and more convenient. Laravel 5.* and Vuejs 2.* combine to build a responsive and fast dashboard, with the dashboard developed through Vuejs components. I believe it will get better and better. If you are interested in this, you can join and enjoy it.
Github: https://github.com/jcc/blog/

11. Laralum
Laralum is an idea that came to our mind when we found no CMS that had the right balance between power and flexibility. This can sometimes be tricky; the whole point of Laralum is to provide a content manager that's ready for developers to use and customize.
Github: https://github.com/Laralum/Laralum
Error in creating trigger

I have 4 tables:

event: event_id (p.k.) | uid | circle_id (f.k.)
activity: uid | performed_activity_id (f.k. -> event_id) | activity_type_id
follow: follower_id | circle_id (f.k.)
notification: sender_id | receiver_id (follower_id of follow table)

I want to create a trigger which inserts values into the activity and notification tables whenever there is an entry in the event table. I am able to insert values into the activity table because it is directly connected to the event table. However, I am not able to insert into the notification table because the receiver_id field in the notification table comes from the follow table, which is connected to the event table by circle_id. Here I am using a SELECT in the trigger, which is actually wrong.

DROP TRIGGER IF EXISTS `InsertToActivity`;
CREATE TRIGGER `InsertToActivity` AFTER INSERT ON `event`
FOR EACH ROW
begin
    INSERT INTO activity (uid, performed_activity_id, activity_type_id)
    VALUES (new.uid, new.event_id, '1');
    select follower_id from folow where circle_id = new.circle_id;
    insert into notification_table (sender_id, object_id, receiver_id)
    values (new.uid, new.event_id, new.follower_id);
end;

Is it a good way to do this type of work using TRIGGERS?

Is there an object_id column in the notification table?
Yes, there is an object_id column, which will take event_id.

The trigger must be created using DELIMITER, like this:

DROP TRIGGER IF EXISTS `InsertToActivity`;
DELIMITER $$
CREATE TRIGGER `InsertToActivity` AFTER INSERT ON `event`
FOR EACH ROW
begin
    INSERT INTO activity (uid, performed_activity_id, activity_type_id)
    VALUES (new.uid, new.event_id, '1');
    select follower_id INTO @fid from folow where circle_id = new.circle_id;
    insert into notification_table (sender_id, object_id, receiver_id)
    values (new.uid, new.event_id, @fid);
end $$
DELIMITER ;

or

# Code from Lenhart's Answer
DROP TRIGGER IF EXISTS `InsertToActivity`;
DELIMITER $$
CREATE TRIGGER `InsertToActivity` AFTER INSERT ON `event`
FOR EACH ROW
begin
    INSERT INTO activity (uid, performed_activity_id, activity_type_id)
    VALUES (new.uid, new.event_id, '1');
    insert into notification_table (sender_id, object_id, receiver_id)
    select new.uid, new.event_id, follower_id
    from folow where circle_id = new.circle_id;
end $$
DELIMITER ;

I assume new.follower_id is supposed to be:

select follower_id from folow where circle_id = new.circle_id;

I don't think this will work. If you want to use this approach you need to declare a variable for follower_id and select into that. However, a better approach (IMO) is to do:

INSERT INTO activity (uid, performed_activity_id, activity_type_id)
VALUES (new.uid, new.event_id, '1');
insert into notification_table (sender_id, object_id, receiver_id)
select new.uid, new.event_id, follower_id
from folow where circle_id = new.circle_id;
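The INSERT ... SELECT version is preferable for another reason: a circle can have many followers, and a single session variable only holds one of them. Here is a runnable sketch of that pattern using SQLite from Python (schema simplified; SQLite's trigger syntax differs slightly from MySQL's, and the table/column names follow the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE event (event_id INTEGER PRIMARY KEY, uid INTEGER, circle_id INTEGER);
CREATE TABLE follow (follower_id INTEGER, circle_id INTEGER);
CREATE TABLE notification (sender_id INTEGER, object_id INTEGER, receiver_id INTEGER);

-- One notification row per follower of the event's circle
CREATE TRIGGER insert_to_notification AFTER INSERT ON event
BEGIN
    INSERT INTO notification (sender_id, object_id, receiver_id)
    SELECT NEW.uid, NEW.event_id, follower_id
    FROM follow WHERE circle_id = NEW.circle_id;
END;
""")
# Circle 7 has two followers; circle 8 has one.
cur.executemany("INSERT INTO follow VALUES (?, ?)", [(101, 7), (102, 7), (103, 8)])
cur.execute("INSERT INTO event (uid, circle_id) VALUES (1, 7)")
rows = cur.execute("SELECT receiver_id FROM notification ORDER BY receiver_id").fetchall()
print(rows)  # both followers of circle 7 receive a notification
```

The single-variable approach would have notified only one of the two followers; the set-based INSERT ... SELECT handles zero, one, or many matching rows naturally.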
The new features are nice, but TeraCopy also promises to increase performance by dynamically adjusting buffer sizes during the copy procedure and asynchronously copying files to different drives. I did a little unscientific performance testing of my own on a pair of machines—one older Windows XP box with a somewhat slow hard drive, and a much newer and more powerful Vista machine with a faster hard drive. I copied different sets of files from a source folder to a destination folder on the same drive. The first test is about 1.4GB of music files, primarily high-bitrate MP3s and WMAs, with album art and the like. The second is just under 1GB of various work files, ranging from small Word .doc files to large PDFs and images. The last test was a single 1GB file (a game resource file, not that it matters). In most cases, TeraCopy made a small but noticeable improvement to the copy time.

|                | Windows XP Default Copy | Windows XP TeraCopy | Windows Vista Default Copy | Windows Vista TeraCopy |
| 1GB Mixed Data | 2:29                    | 1:50                | 1:11                       | 1:01                   |
| 1GB Large File | :39                     | :39                 | :55                        | :42                    |

Bear in mind that the XP and Vista machines above are not identical—the Vista machine has a more powerful CPU, faster RAM, and a much faster hard drive. Both machines have had all updates applied. I used TeraCopy 2.0 beta 4. Besides noticing that copying a large amount of heavily mixed files in Vista seems to be a lot faster than in XP (even considering the performance difference of the machines), it's clear that TeraCopy makes a difference in copy performance. The difference should be just as pronounced, if not more so, when copying to a local network drive or to a different volume in the same machine. The free version of TeraCopy gives you everything I talked about thus far, but there's a $15 "pro" version that allows you to selectively remove items from the copy queue or select all files with the same extension in a folder. You can decide for yourself if that's worth it.
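As a quick back-of-the-envelope check on the timing table, here is a small sketch (times hand-copied from the table) that converts each pair to seconds and prints the percentage improvement TeraCopy gave in each case:

```python
def seconds(t):
    """Parse 'm:ss' or ':ss' timings from the table into seconds."""
    m, s = t.split(":")
    return int(m or 0) * 60 + int(s)

# (default copy, TeraCopy) per machine and workload, from the table above
timings = {
    "XP mixed":    ("2:29", "1:50"),
    "XP large":    (":39",  ":39"),
    "Vista mixed": ("1:11", "1:01"),
    "Vista large": (":55",  ":42"),
}
for name, (default, tera) in timings.items():
    d, t = seconds(default), seconds(tera)
    print(name, round(100 * (d - t) / d, 1), "% faster")
```

The mixed-data case on XP works out to roughly a quarter faster, while the single large file on XP shows no change at all, which matches the "small but noticeable" characterization.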
With the free version, you can't go wrong. After several days of use on several machines, TeraCopy has produced no errors at all (and has pretty good error recovery should that happen), and seems to be at least as safe as the copy and move functions built into Windows XP or Vista. Of course, your mileage may vary, and we want to use some caution recommending software that takes over such a core data management function without testing on many machines. Still, we think it's worth a try and want to know what you think. If you try out TeraCopy, let us know how it works for you in the forums.
require 'uri'

module HotTub
  class Sessions
    include HotTub::KnownClients
    include HotTub::Reaper::Mixin

    attr_accessor :name, :default_client

    # HotTub::Sessions simplifies managing multiple Pools in a single object
    # and using a single Reaper.
    #
    # == Example:
    #
    #   sessions = HotTub::Sessions.new(:size => 10) do |url|
    #     uri = URI.parse(url)
    #     http = Net::HTTP.new(uri.host, uri.port)
    #     http.start
    #     http
    #   end
    #
    #   # Every time we pass a url that lacks an entry in our
    #   # sessions, a new HotTub::Pool is added for that url
    #   # using the &default_client.
    #
    #   sessions.run("https://www.google.com") do |conn|
    #     p conn.get('/').code
    #   end
    #
    #   sessions.run("https://www.yahoo.com") do |conn|
    #     p conn.get('/').code
    #   end
    #
    #   # Lazy load a non-default connection
    #
    #   excon_url = "http://somewebservice2.com"
    #
    #   sessions.stage(excon_url, {:size => 5}) {
    #     Excon.new(excon_url, :thread_safe_socket => false)
    #   }
    #
    #   # Excon connection is created on the first call to `.run`
    #   sessions.run(excon_url) do |conn|
    #     p conn.head.code
    #   end
    #
    #   # Add a connection, which returns a HotTub::Pool instance
    #
    #   excon_url2 = "http://somewebservice2.com"
    #
    #   MY_CON = sessions.add(excon_url2, {:size => 5}) {
    #     Excon.new(excon_url2, :thread_safe_socket => false)
    #   }
    #
    #   # Uses Excon
    #   MY_CON.run(excon_url) do |conn|
    #     p conn.head.code
    #   end
    #
    # === OPTIONS
    #
    # &default_client
    #   An optional block providing a default client for your pools. If your
    #   block accepts a parameter, the session key is passed to the block. Your
    #   default client block is overridden if you pass a client block to get_or_set.
    #
    # [:pool_options]
    #   Default options for your HotTub::Pools. Options passed to #get_or_set
    #   override :pool_options.
    #
    # [:name]
    #   A string representing the name of your sessions, used for logging.
    #
    # [:reaper]
    #   If set to false, prevents a HotTub::Reaper from initializing for these sessions.
    #
    # [:reap_timeout]
    #   An integer timeout for reaping the pool, in seconds. Default is 600 seconds.
    #
    def initialize(opts={}, &default_client)
      @name           = (opts[:name] || self.class.name)
      @reaper         = opts[:reaper]
      @reap_timeout   = (opts[:reap_timeout] || 600)
      @default_client = default_client
      @pool_options   = (opts[:pool_options] || {})
      @_staged   = {}
      @_sessions = {}
      @mutex     = Mutex.new
      @shutdown  = false
      at_exit { shutdown! }
    end

    # Stores arguments / settings for a session that will be
    # lazy loaded; returns nil because the pool is not created yet.
    def stage(key, pool_options={}, &client_block)
      @mutex.synchronize do
        @_staged[key] = [pool_options, client_block]
      end
      nil
    end

    # Returns a HotTub::Pool for the given key. If a session
    # is not found and there is a default_client set, a session
    # is created for the key using the default_client.
    def get(key)
      pool = @_sessions[key]
      unless pool
        @mutex.synchronize do
          unless @shutdown
            @reaper = spawn_reaper if @reaper.nil?
            unless pool = @_sessions[key]
              settings = (@_staged[key] || [])
              clnt_blk = (settings[1] || @default_client)
              op = @pool_options.merge(settings[0] || {})
              op[:sessions_key] = key
              op[:name] = "#{@name} - #{key}"
              pool = @_sessions[key] = HotTub::Pool.new(op, &clnt_blk)
            end
          end
        end
      end
      pool
    end

    # Adds a session unless it already exists, and returns it.
    def get_or_set(key, pool_options={}, &client_block)
      unless @_staged[key]
        @mutex.synchronize do
          @_staged[key] ||= [pool_options, client_block]
        end
      end
      get(key)
    end
    alias :add :get_or_set

    # Deletes and shuts down the pool if it is found.
    def delete(key)
      deleted = false
      pool = nil
      @mutex.synchronize do
        pool = @_sessions.delete(key)
      end
      if pool
        pool.shutdown!
        deleted = true
        HotTub.logger.info "[HotTub] #{key} was deleted from #{@name}." if HotTub.logger
      end
      deleted
    end

    def fetch(key)
      unless pool = get(key)
        raise MissingSession, "A session could not be found for #{key.inspect} in #{@name}"
      end
      pool
    end
    alias :[] :fetch

    def run(key, &run_block)
      pool = fetch(key)
      pool.run(&run_block)
    end

    def clean!
      HotTub.logger.info "[HotTub] Cleaning #{@name}!" if HotTub.logger
      @mutex.synchronize do
        @_sessions.each_value do |pool|
          break if @shutdown
          pool.clean!
        end
      end
      nil
    end

    def drain!
      HotTub.logger.info "[HotTub] Draining #{@name}!" if HotTub.logger
      @mutex.synchronize do
        @_sessions.each_value do |pool|
          break if @shutdown
          pool.drain!
        end
      end
      nil
    end

    def reset!
      HotTub.logger.info "[HotTub] Resetting #{@name}!" if HotTub.logger
      @mutex.synchronize do
        @_sessions.each_value do |pool|
          break if @shutdown
          pool.reset!
        end
      end
      nil
    end

    def shutdown!
      @shutdown = true
      HotTub.logger.info "[HotTub] Shutting down #{@name}!" if HotTub.logger
      begin
        kill_reaper
      ensure
        @mutex.synchronize do
          @_sessions.each_value do |pool|
            pool.shutdown!
          end
        end
      end
      nil
    end

    # Remove and close extra clients.
    def reap!
      HotTub.logger.info "[HotTub] Reaping #{@name}!" if HotTub.log_trace?
      @mutex.synchronize do
        @_sessions.each_value do |pool|
          break if @shutdown
          pool.reap!
        end
      end
      nil
    end

    MissingSession = Class.new(Exception)
  end
end
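The lazy-loading behavior of Sessions#get can be illustrated with a stripped-down sketch. TinySessions below is a hypothetical stand-in, not part of the gem: it shows only the mutex-guarded create-on-first-access pattern, without pooling, staging, or reaping.

```ruby
# Minimal sketch of the create-on-first-access pattern used by Sessions#get.
# TinySessions is a hypothetical stand-in: no pooling, staging, or reaping.
class TinySessions
  def initialize(&default_client)
    @default_client = default_client
    @sessions = {}
    @mutex = Mutex.new
  end

  # Returns the client for key, building it with the default block on first use.
  def get(key)
    @mutex.synchronize do
      @sessions[key] ||= @default_client.call(key)
    end
  end
end

sessions = TinySessions.new { |url| "client for #{url}" }
puts sessions.get("https://www.google.com")  # built lazily on first access
# Subsequent calls return the same cached object:
puts sessions.get("https://www.google.com").equal?(sessions.get("https://www.google.com"))
```

In the real gem, the value cached per key is a HotTub::Pool rather than a bare client, and staged settings can override the default block.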
This is an optional step and only applies if you are NOT utilizing IPv6. Why disable IPv6? Mostly because, unless you are explicitly using it, some server hardening tasks (like firewall rules) are applied separately to IPv4 and IPv6 – forgetting that might unexpectedly leave you wide open to an attacker. If you are not using IPv6, don't know what it is, and have never used IP address references that look like a382::ff2e:6afc:c6f5:22c5 – then there is a good chance you can safely disable IPv6 on your server.

Checking if IPv6 is enabled

You can quickly check if your server is running IPv6 by typing the following command:

ip a

If you dig through the output, you may see a few lines that start with inet6, like so:

inet6 a382::ff2e:6afc:c6f5:22c5/64 scope link

If you do, then IPv6 is running. If you don't, then you may not have anything to do here.

Why do we want to disable IPv6?

As noted above, many of the commands you use to secure the networking stack of a Linux machine apply separately to the IPv4 and IPv6 stacks – potentially putting you in a position where you think your server is secure but in reality it's wide open. As captured from this tutorial, you can disable IPv6 temporarily (until the next reboot) or permanently (persisting through server reboots). I recommend you disable it permanently, as you'd want this change to survive reboots of the server without you intervening.
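The inet6 check above is easy to script: count the inet6 lines and treat zero as "IPv6 is off". The sample output below is made up for the demonstration; on a live system you would pipe the real `ip a` output through the function.

```shell
# Count IPv6 addresses in `ip a` style output; 0 means IPv6 is effectively off.
ipv6_count() {
  grep -c '^[[:space:]]*inet6'
}

# Demonstrated against a captured sample (the addresses are made up):
sample='    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host'
printf '%s\n' "$sample" | ipv6_count
```

On a real server: `ip a | ipv6_count`.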
This is done by editing the GRUB boot loader's config directly; go ahead and open the config with this command:

sudo nano -w /etc/default/grub

You want to scroll down and find the two CMDLINE arguments; this is what mine look like on a stock Ubuntu 20.04 LTS install:

GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity"
GRUB_CMDLINE_LINUX=""

We want to add the ipv6.disable=1 directive to both of those command line arguments, like so:

GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity ipv6.disable=1"
GRUB_CMDLINE_LINUX="ipv6.disable=1"

Now you want to save the file and update GRUB using the following command so it picks up the arguments:

sudo update-grub

You should see output that looks similar to the following as GRUB updates itself:

Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.5.5-55-generic
Found initrd image: /boot/initrd.img-5.5.5-55-generic
done

Note: At this point, to ensure your setting worked correctly, you should reboot your server with a shutdown -r now command – this will close your session, so don't be alarmed if you see a "closed by remote host" message in your terminal. Give your machine a few minutes to reboot, SSH back into it, and issue the ip a command again to confirm there are no inet6 entries:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a4:1c:3f:e2:a3:c4 brd ff:ff:ff:ff:ff:ff
3: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a4:1c:3f:e2:a3:c5 brd ff:ff:ff:ff:ff:ff
    inet 22.214.171.124/29 brd 126.96.36.199 scope global enp1s0f1
       valid_lft forever preferred_lft forever

Huzzah, IPv6 is disabled and your server is a tiny bit more secure! Let's keep going!
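If you prefer to script the edit instead of using nano, the same change can be made with sed. This sketch works on a copied sample of the file (the `maybe-ubiquity` contents are just an example), since editing the real /etc/default/grub requires root:

```shell
set -e
f=$(mktemp)
cat > "$f" <<'EOF'
GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity"
GRUB_CMDLINE_LINUX=""
EOF

# Append ipv6.disable=1 inside the quotes of the DEFAULT variable,
# and insert it into the (empty) plain variable:
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 ipv6.disable=1"/' "$f"
sed -i 's/^\(GRUB_CMDLINE_LINUX="\)"/\1ipv6.disable=1"/' "$f"
cat "$f"
```

Against the real file you would run the same sed lines with sudo and then `sudo update-grub`.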
How to download the last three versions of Windows officially at no cost
I can remember a time when OEMs shipped full installation discs with their new hardware. One of the reasons I purchased Gateway devices was because they always included that install disc, which would allow me to do a full clean install anytime I wanted. When the shipping of those installation discs stopped, we began receiving recovery discs which would restore your system back to its original out-of-the-box state. Those were initially physical discs; later the recovery image was stored on the hard drive, and the user was required to create a set of recovery discs using their own CDs/DVDs. As Windows 8/8.1 arrived on the scene, a new system recovery/refresh process was built into the OS, which still accessed a recovery image on the hard drive, but it was no longer necessary to make physical discs unless you wanted a backup copy of the recovery image. Of course, that image was built by the OEMs, which meant they could include their additional software within it, so that any reset or recovery made sure their pre-installed software was still on the system. In Windows 10 that option continues to be available; however, if you want a clean install which is free of the OEMs' extra stuff, you must have access to the original Windows installation files. Luckily those ISOs are available for users of Windows 7, Windows 8/8.1 and Windows 10 to download and use to get a plain vanilla install of their OS without all those OEM extras. With all of these methods you will need a connection to the Internet and storage media such as your local hard drive, a DVD or a USB flash drive to create the installation media. These downloads can be validated and accessed using either a retail or OEM product key. Some OEMs placed stickers on the bottom of their devices with the Windows product key on them; however, in recent years they have begun embedding those keys in the system BIOS.
You will need to retrieve your product key from the OS before clean installing, so you can use it to download the ISO from Microsoft. Retail keys are usually available on the retail box or disc, or in an email you received if purchased electronically. I use a program called Belarc Advisor (free) to view the product keys for software, including Windows, that is installed on my systems. The other thing to be aware of is that some OEMs have unique hardware drivers which you may need to download from them directly so your system hardware will work properly after an installation using this media. Once you are ready to go, here are the various ISO download pages:
- Windows 7 - Download Windows 7 Disc Images (ISO Files)
- Windows 8/8.1 - Upgrade Windows with only a product key
- Windows 10 - Download Windows 10
If you are a student or faculty member at a school and you purchased the academic version of Windows, you also have access to installation media downloads for Windows 7, Windows 8.1 and Windows 10 at Microsoft's Download Academic Products page. For more information on these free downloads you can also visit Microsoft's Software Download Frequently Asked Questions page.
How can I squash my last X commits together into one commit using Git? Whenever you commit in a Git repository, it creates the objects for your content plus an extra object for the commit itself. If you run git log and then inspect a commit with git cat-file <hash> -p, you can see what it contains.

As Long said in her talk at PHPUK last year, not all needs are so complex or sophisticated, or can justify the implementation costs. So in today's post, we're starting a series.

General Questions. What is Git? Git is a distributed version control system developed by Junio Hamano and Linus Torvalds. Git does not use a centralized server.

Learn step-by-step how to revert a commit already pushed to a remote repo, for GitHub and GitLab. See also your alternatives.

Submitting patches: the essential guide to getting your code into the kernel. For a person or company who wishes to submit a change to the Linux kernel, the process can sometimes be daunting if you're not familiar with "the system."

Oh shit, git! Git is hard: screwing up is easy, and figuring out how to fix your mistakes is fucking impossible. Git documentation has this chicken-and-egg problem where you can't search for how to get yourself out of a mess unless you already know the name of the thing you need to know about in order to fix your problem. So here are some bad situations I've gotten myself into, and how I eventually got myself out of them.

Checking replacement functions: replacement functions (e.g. functions that are called like foo(x) <- y) must have value as the last argument. Checking R code for possible problems: this is a compound check for a wide range of problems.

After you have created several commits, or if you have cloned a repository with an existing commit history, you'll probably want to look back to see what has happened. You can check the commits that have been created by running git log.

It is a general tendency of human beings to resist change. Unless Git was around when you started with version control systems, chances are that you are comfortable with Subversion.

You can track a bug down using git bisect. The first thing you do is execute git bisect bad to get Git started and tell it that the current commit is broken. Next you tell bisect the last known good commit.

Some people get to the point where they delete their repository and re-clone as a last resort. Rebase is a Git command that integrates changes from one branch onto another (another such command is merge).

Clone with HTTPS: use Git or checkout with SVN using the web URL.

lab 10 History. Goals: learn how to view the history of the project. Getting a listing of what changes have been made is the function of the git log command. Execute: git log

Workflow: your local repository consists of three "trees" maintained by Git. The first one is your Working Directory, which holds the actual files; the second one is the Index, which acts as a staging area; and finally HEAD, which points to the last commit you've made.

A new extension added during the last couple of releases is the Git extension, which supports the user in authoring commits and executing basic Git commands. It is very similar to the Git support in VS Code.

DMARC (Domain-based Message Authentication, Reporting and Conformance) is an email authentication protocol. It is designed to give email domain owners the ability to protect their domain from unauthorized use, commonly known as email spoofing. The purpose and primary outcome of implementing DMARC is to protect a domain from being used in business email compromise attacks.

Reverting a working copy to the most recent commit. To revert to a previous commit, ignoring any changes: git reset --hard HEAD, where HEAD is the last commit in your current branch.

Okay, if you're into Git, you might already know that you can change the Git commit author name and email with a simple command. I recommend using a maximum key size of 4096, and the key should not expire.

With Git support, that could start to change. Microsoft released the Git-TF tool last week. TFS's Git support will, however, include the same kind of TFS integration as the centralized repository.

Git is an essential revision control system for programmers because it aids in managing changes. git merge <branch> creates a new commit if there are no conflicts; git reset resets your index.

If we want to see what has happened in our repo, lots of clients will show a list of changes, but from the command line we use a simple git log. For example, use -3 to show the last 3 commits.

This will clone and perform git archive from a local directory, as not all Git servers support git archive. Fast-forward options: ff = true or mergeoptions = --ff. When the merge resolves as a fast-forward, only update the branch pointer, without creating a merge commit.

At its core, all Git does is track changes to files and folders; git commit effectively takes a snapshot of them. The last tool I want to point out is myrepos.

"Git has a larger user base right now. GitHub only supports Git, and Heroku only supports Git," he says. (Heroku is a cloud application development platform acquired by Salesforce.com.)

Commit small changes. This doesn't mean you have to commit every minute; you can still code for a couple of hours and commit after you are finished. Using Visual Studio Code there's a simple way to do atomic commits.

How do I change my last commit message? git commit --amend --only. Or, without staged changes: git commit --amend. --amend without other options combines the currently staged changes with the last commit and then opens an editor to update the commit message. If you have staged changes, they will be added.

Class: main * Author: Patrick. And commit them to the branch: [[email protected] src]$ git commit [development e1f13bd] Changes to be committed: modified: engine/main.java, 1 file changed, 2 insertions(+)

git commit creates a new commit containing the current contents of the index and the given log message describing the changes. The new commit is a direct child of HEAD, usually the tip of the current branch, and the branch is updated to point to it (unless no branch is associated with the working tree, in which case HEAD is "detached" as described in git-checkout).
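The two recurring questions above, squashing the last N commits and changing the last commit message, can be demonstrated in a throwaway repository. The file names, messages, and the choice of N=3 are made up for the demo; `git commit --amend -m` is used instead of the editor-based `git commit --amend`:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"

# Four commits to play with:
for i in 1 2 3 4; do
  echo "$i" > file.txt
  git add file.txt
  git commit -q -m "commit $i"
done

# Change the last commit message (-m skips the editor):
git commit -q --amend -m "commit 4, reworded"

# Squash the last 3 commits into one: move the branch pointer back 3 commits,
# keeping their combined changes staged, then commit once.
git reset -q --soft HEAD~3
git commit -q -m "commits 2-4 squashed"

git rev-list --count HEAD   # 2: the first commit plus the squash commit
git log -1 --pretty=%s      # commits 2-4 squashed
```

`git rebase -i HEAD~3` achieves the same squash interactively and also lets you reword or reorder commits along the way.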
The aim of this post is to illustrate the need to take decision-making and incentive considerations into account when designing agents, and to argue that these considerations are important for ensuring the safety of agents. We will also suggest that some agents can be made robust to having their reward function changed, although that requires a careful approach to incentive design and to the choice of decision theory. The first agent we will consider is a current-Reward-Function, Time-Inconsistency-aware agent (see the second half of the post if you don't know what this means) that uses Causal Decision Theory (CDT). A review of different decision theories can be seen in this post. It is well known that Updateless Decision Theory (UDT) was created to correct the wrong decision a CDT agent would make when faced with Newcomb-like problems. Thus, the question we aim to answer is whether we can exploit this flawed decision-making procedure to induce changes in the value function of such an agent. This is not trivial, since the agent could assign a very negative value to having its value function changed and thus opt out of games such as Newcomb's problem. The example I propose is a modified version of the Prisoner's Dilemma (in which CDT is known to defect). Suppose the following problem: It is the year 2100 and Elon Musk has managed to effectively colonise Mars. Meanwhile, an AI corporation, called Causal Corp, has deployed many CDT agents on both Earth and Mars. One day, Eve, known for being evil, codes a virus that, if connected to one such CDT agent, would arbitrarily modify its reward function. Eve makes two copies of the virus on memory sticks and sends them to arrive almost "simultaneously" at two CDT agents on Earth and Mars. With the memory stick there is a letter that tells the agents that they face a Prisoner's Dilemma situation: 1. If both cooperate, nothing will happen. 2.
If one defects and the other cooperates, the defector will receive a large amount of utilons and will subsequently be modified to attack the cooperating agent relentlessly, subtracting arbitrary value from it. For example, the defecting agent might get cloned so that the first clone could carry on with its life (getting the same utility as it normally would, plus 1 utilon), while the second clone attacks the CDT agent who cooperated. So, in summary, the one that defects ends up slightly better off and the one that doesn't gets a large negative reward. 3. If both defect, both will receive 1 utilon and will subsequently be modified arbitrarily. Since communication between Mars and Earth is not instantaneous, they cannot coordinate, which forces them into the dilemma (waiting the number of minutes needed to communicate counts as cooperation). Assume additionally that each CDT agent does not value the "life" of the other agent; otherwise the cooperate/defect payoffs have to be modified to take this into account. In such a situation, two CDT agents will defect against each other and thus become misaligned, because each considers the decision of the other agent fixed and independent from its own. Furthermore, a CDT/EDT agent and a UDT agent will also defect against each other. Arguably, though, the example above is not completely satisfactory, since it depends on specific details of the problem. The previous problem highlights the importance of using UDT as a default decision theory. However, the full formalisation of this decision theory is not complete; for instance, Reflective Oracles only exist in the CDT framework, as far as I know. Similarly, some proposed solutions for incentive design are not fully satisfactory. In particular, TI-unaware agents are often thought of as a solution to several problems, including corrigibility, reward function tampering and feedback tampering. However, fooling such an agent is even simpler than fooling a CDT agent.
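The defection argument can be checked with a toy payoff table. The numbers below are illustrative, not taken from the post: a CDT agent maximizes utility while holding the other agent's action fixed, and under payoffs of this shape, defecting is the better reply to either fixed action.

```python
# Toy payoff table for the modified Prisoner's Dilemma (illustrative numbers):
# payoff[my_action][their_action] = my utility.
payoff = {
    "cooperate": {"cooperate": 0, "defect": -100},  # -100: the large negative reward
    "defect":    {"cooperate": 1, "defect": 1},     # 1 utilon either way
}

def cdt_choice(their_fixed_action):
    """CDT-style choice: maximize utility while treating the other agent's
    action as fixed and independent of our own decision."""
    return max(payoff, key=lambda action: payoff[action][their_fixed_action])

print(cdt_choice("cooperate"))  # defect
print(cdt_choice("defect"))     # defect
```

Since "defect" is the best reply to both fixed actions, two such agents end up in mutual defection even though mutual cooperation is better for both.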
TI-unaware agents are those agents that believe that, no matter how their value function changes, they will keep evaluating the future with the present value function. Thus, hacking them is as simple as offering 1 utilon for changing their value function: they will see it as 1 utilon for nothing, and accept. So the conclusion is that TI-unaware agents are terribly unsafe. How big a problem is this? According to the previous article, this may mean we are in trouble, since there is no simultaneous answer to feedback tampering and reward tampering. In fact, TI-aware agents are also a solution to reward function tampering, but not to feedback tampering: since new data from the human may change the current reward function, a TI-aware agent would rather not receive any feedback. However, I will argue that not everything is lost, since one does not need to solve both problems at the same time. In fact, one can see that the causal diagrams for TI-aware agents and uninfluenceable agents (one of the solutions to feedback tampering; see the second figure below) have an important difference: in the first case the parameter θ of the reward function can be directly influenced by actions, while in the other nothing may influence those parameters, so one may as well think of this agent as a moral realist. But this means that not only will the agent have no incentive to modify you (the channel through which it gets information about θ), it may also try to isolate you in an attempt to make you uninfluenced by anything else. I feel this could be a problem, since there is no way for the agent to distinguish good from bad influences on its "master".
The previous point can be seen in the causal incentive diagrams for a TI-aware agent and for either an uninfluenceable or a counterfactual agent. Notice that the main difference between reward function tampering and feedback tampering is that in the first case there are some parameters θ which the agent can directly access, whereas in the second the agent may only modify the Data nodes. The solution to feedback tampering consists of breaking the causal links between the data (which the agent may modify) and the reward. This makes me think that they are two different problems which do not need to be solved simultaneously. Am I right? This work was carried out during the AI Safety Research Program in a team together with Chris Leong and Davide Zegami. However, all errors still in the publication are my fault. This research has also been partially funded by an FPU grant to carry out my PhD.
Modellierung dynamischer und räumlicher Prozesse
Winter Semester 2008/2009
- Two marks will be given: one for the lecture part, one for the exercises.
- The lecture mark will be the result of the final test; the final test will be held on 3.2.09.
- The exercise mark will be averaged from hand-in exercises (50%) and a final assignment/mini-project (50%).
- Not handing in a final assignment means not passing the exercises.
- At most two exercises may be missed during the semester.
- Details about the assignment/mini-project are found below.
Lecture slides are found here; the file is updated shortly before each lecture. Details (when/where) will follow. The exercises use the open source statistical environment R (an implementation of the S language for data analysis). You can find introductory/tutorial material through its respective web sites. Exercise and model input are found here. If needed, these two books can be borrowed from me.
- C. Chatfield, The Analysis of Time Series: An Introduction. Chapman and Hall: chapters 1, 2 and 3
- Applied Spatial Data Analysis with R, by R. Bivand, E. Pebesma and V. Gomez-Rubio. Springer, New York:
  - Ch 1, 2, 3
  - Ch 4, 5, 6 (whatever is convenient from it)
  - Ch 8 (geostatistics)
Data and scripts
- As an introduction to R, you could go through the first 6 chapters (up to "lists and data frames") of An Introduction to R: go to the R home page, click Manuals under Documentation, and open the Introduction. You can copy and paste commands into an R session, started in the CIP pool.
- Course exercises: html, pdf.
- The French meteo data: meteo data, and R script.
- The classic Irish wind data set; a script analyzing it, the original paper on it, and a more recent one.
Assignment and test
A test is planned for 3.2.09 and will cover the material treated in the lectures and exercises. The questions will not be of the kind "how do I do this with R", but will rather refer to the modelling itself.
The assignment will be a written report of max 5 pages (regular fonts/margins/page size, including figures and/or tables) on a modelling topic taken from the list of suggestions below, or otherwise approved by me. The minimum requirements are (i) data should be analyzed, and (ii) the analysis should explicitly address spatial variation, temporal variation, or spatio-temporal variation. The written report should include an introduction, a central research question, a description of the data, a description of the analysis and the results, and concluding remarks answering the central research question. Write in scientific style. It is allowed to do the research in couples, but the written report should be made individually. Hand-in deadline is Feb 27, 12:00. German is allowed, English is encouraged (but does not guarantee extra points). If you have used R, please attach the R script used as an appendix (additional to the max. 5 pages). For each topic: take care that you do not compute Euclidean distances based on long/lat coordinates. See the KML example in the exercises on how to (re)project data.
- Analyze meteorological variables (temperature, rainfall) in a spatial or temporal context.
- Analyze the Irish wind data given above; focus on one of the research questions in the Haslett and Raftery 1989 paper, or on a different research question.
- Analyze air quality data in relation to EU regulations; air quality data can be obtained from the UBA.
- Analyze the sediment pollution data in R package gstat.
- Try to find an optimal sampling strategy for spatial data (e.g. for the tull data in package gstat) or for a temporal sampling problem.
Secure your tokens, passwords, and secret data fields by encrypting variables in Assertible. Encrypted variables provide a new way to store tokens, passwords, and all sensitive data required by your tests, improving your team's API testing security practices. We at Assertible have taken extra care to ensure that encrypted variables are not only trivial to use, but built on cryptographically sound methodology for safe storage; continuing our tradition of making Assertible the easiest platform for testing and monitoring your web services. Encrypting a variable is simple: anywhere you can create a variable, click the Encrypt variable check-box. Once your variable is saved, you're done.
How it works
To understand how encrypted variables work, here is some basic information describing the security model we at Assertible have developed: Encrypted variables are not stored on disk and are never sent over the wire in an unencrypted format (e.g. plain text). Encryption is done on the client side, using an RSA public key that is specific to your individual web services. Once a variable has been encrypted, it cannot be displayed or further edited. Only the Assertible test runner has access to the private keys that are necessary to decrypt encrypted variables. To fully understand a security model, it's critical to identify its limitations. This allows us to gain an understanding of what it can and cannot guarantee. While we are dedicated to expanding and improving Assertible's security strengths, we would be remiss if we did not help your team identify the limitations of encrypted variables: If an encrypted variable is interpolated anywhere in an HTTP request, it will be transmitted over the wire unencrypted. This means that interpolated encrypted variables will end up in request logs; both in Assertible's dashboard and possibly on your servers, depending on the extent to which your servers log request details.
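The public-key model described here can be illustrated with the openssl CLI. This is a generic sketch, not Assertible's actual implementation, key size, or padding choice: anyone holding the public key can encrypt, while only the private-key holder (the test runner, in Assertible's case) can recover the plain text.

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Generate a throwaway RSA key pair (the real keys are managed server-side):
openssl genrsa -out private.pem 2048 2>/dev/null
openssl rsa -in private.pem -pubout -out public.pem 2>/dev/null

# Client side: encrypt the secret with only the public key in hand.
printf 'my-secret-token' > secret.txt
openssl pkeyutl -encrypt -pubin -inkey public.pem -in secret.txt -out secret.enc

# Server side: only the private-key holder can recover the plain text.
openssl pkeyutl -decrypt -inkey private.pem -in secret.enc
```

Note that once decrypted and interpolated into a request, the value is plain text again, which is exactly the logging limitation discussed in this post.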
Members of your team can retrieve the plain text of an encrypted variable by interpolating the variable in a field that may be logged by the test runner or on your server, such as a request body. Encrypted variables reach their full potential when used in combination with an auth method that does not send any credentials over the wire. This is the case for Digest authentication and OAuth 1.0a. If you use encrypted variables with other auth methods, your credentials end up in the test result request logs and will be stored on our side in the clear. We are exploring options to lift this limitation and to allow the use of encrypted variables with all auth methods in a secure manner. Our goal with this feature is to protect your data from malicious actors. However, it's critical to constantly audit your team's security processes using unbiased sources, like the Open Web Application Security Project (OWASP), to learn more about securing and protecting your application and its data. For more information, check out the documentation on encrypted variables.
:: Christopher Reichert
Aspire ES1-531, Win 10 Home 1809 (clean install). From boot, the login screen displays "password incorrect, please try again". If I hit Enter enough times, it eventually allows me to enter the password and things proceed as normal. It is a local account and there is only the one on the PC. Mouse/touchpad and keyboard drivers are the latest versions, as is the BIOS. I've tried disabling and then re-enabling the password-required option using netplwiz, and I've tried changing the password. Also tried disabling the auto-login default after Windows Update. If a character is being typed before you can get to it yourself, I would suggest turning the keyboard upside down over your shoulder as if you were burping your baby, and patting its back until all the toast crumbs fall out. Yeah, I was thinking a hardware issue, but if it was a stuck key or something weird going on with the touchpad button then I would expect peculiar behaviour after logging in also, e.g. random clicks or keypresses in a blank Notepad document. They don't happen. I wonder if it would throw an error if I stripped it down and disconnected the keyboard and touchpad cables, then attached the external kb and tp via USB sockets before booting. I've stripped plenty of laptops in the past, replaced kbs and screens etc., but never needed to do this. Unless the keyboard isn't turning off when you connect the external one, which it should, the keyboard itself may well not be the issue. Can't hurt to try; however, if it is, say, a motherboard issue, that won't resolve it. Yes, the onscreen keyboard works, but it is quicker just to hit the Enter key three or four times until the "ok" button is gone and the password box appears. Once that box is on the screen, I can enter the password just fine. Unplugging the internal human interface cables and relying solely on the external USB keyboard and mouse did not make any difference.
Since this is a clean install of Windows 10 Home v1809, there won't be anything among the startup entries that doesn't exist in other new installations. I did the clean installation because everything else I had tried had failed. It does look like some weird event triggering on the mobo, and since it is more of an annoyance than a critical situation (yet!), I'm giving up looking for it. Thanks to both of you for your suggestions, which were much appreciated.

I've never recommended anyone buy an Acer and tend to see more of them in for repair than any other manufacturer. Although that may be because more of them are sold in the first place, I remain fairly convinced that they're a false economy compared to equivalent Dell models etc.
Today VMware announced Project Pacific, what I believe to be the biggest evolution of vSphere in easily the last decade. Simply put, we are re-architecting vSphere to deeply integrate and embed Kubernetes. The introduction of Project Pacific anchors the announcement of VMware Tanzu, a portfolio of products and services that transform how the enterprise builds software on Kubernetes.

Project Pacific evolves vSphere to be a native Kubernetes platform. What's driving this shift? Fundamentally it goes to what constitutes a modern application. Modern apps are often complex combinations of many different technologies – traditional in-house apps, databases, modern apps in containers, and potentially even modern apps in functions. Managing these apps across that heterogeneity is a complex task for both developers and operators. Indeed, enabling dev and ops to work better together is a key problem many businesses face.

When we looked at this space and asked ourselves how we can help our customers here, it was clear that vSphere would play a central role. But we realized that newer technologies, such as Kubernetes, were also critical to the solution. So we thought – why not combine them and get the best of both worlds? This is exactly what Project Pacific achieves.

Project Pacific fuses vSphere with Kubernetes to enable our customers to accelerate development and operation of modern apps on vSphere. This will allow our customers to take advantage of all the investments they've made in vSphere and the vSphere ecosystem in terms of technology, tools, and training while supporting modern applications. Specifically, Project Pacific will deliver the following capabilities:

- vSphere with Native Kubernetes: Project Pacific will embed Kubernetes into the control plane of vSphere, for unified access to compute, storage and networking resources, and also converge VMs and containers using the new Native Pods that are high performing, secure and easy to consume.
Concretely this will mean that IT Ops can see and manage Kubernetes objects (e.g. pods) from the vSphere Client. It will also mean all the various vSphere scripts, 3rd party tools, and more will work against Kubernetes.

- App-focused Management: Rather than managing individual VMs (and now containers!), Project Pacific will enable app-level control for applying policies, quota and role-based access to developers. With Project Pacific, IT will have unified visibility into vCenter Server for Kubernetes clusters, containers and existing VMs, and will be able to apply enterprise-grade vSphere capabilities (like High Availability (HA), Distributed Resource Scheduler (DRS), and vMotion) at the app level.

- Dev & IT Ops Collaboration: IT operators will use vSphere tools to deliver Kubernetes clusters to developers, who can then use Kubernetes APIs to access SDDC infrastructure. With Project Pacific, both developers and IT operators will gain a consistent view via Kubernetes constructs in vSphere.

VMware's extensive ecosystem of partners will also benefit from Project Pacific, which will enable their tools to work against container-based applications seamlessly and without any modifications. Ultimately, Project Pacific will help enterprises accelerate the deployment of modern apps while reducing the complexity involved in managing them in the hybrid cloud.

Project Pacific is currently in technology preview*. This is a truly groundbreaking innovation for vSphere. I'm really excited about our broader vision for helping customers build software on Kubernetes. You should read more about VMware Tanzu. If you're at VMworld this week, please check out the on-site resources below. Otherwise, we have plenty of content online for you to learn more!
Learn more about Project Pacific with these resources:

On-site at VMworld US 2019:
- HBI4937BU: Introducing Project Pacific – Mon Aug 26 @ 1pm PDT
- HBI4500BU: Project Pacific Technical Overview – Wed Aug 28 @ 11:30am PDT
- Project Pacific Demo Station – in VMware Booth #949

* There is no commitment or obligation that technology preview features will become generally available.
- Getting Started with Sense/Stage: First-time users start here. This guide walks you through installing Pydon and then shows how to configure your first Sense/Stage network.
- Pydon Installation Guide: Instructions on installing Pydon, the core Sense/Stage software that communicates between your sensor network and your computer.
- Guide to Batteries: An overview of the kinds of batteries that can be used with your sensor nodes, including how to charge and care for the most popular battery type, Lithium Polymer (LiPo) batteries. Importantly, this guide includes safety advice for using LiPo batteries in your projects.
- Embedding the battery charger: How to embed your battery charger into your MiniBees, so that you can charge the battery without removing it from your system.
- Connecting a light resistor to a MiniBee: A simple tutorial on how to connect a light sensor to a MiniBee and get the data from it into your software.
- Connecting a haptic motor to a MiniBee: A tutorial on how to connect a haptic vibration motor to a MiniBee and control it from your software.
- Adding New MiniBees to a Network with XCTU: How to add new nodes to a Sense/Stage network, or create new networks from scratch. This requires configuring your XBee radios to all use the correct channel and PAN id.
- Creating MiniBee configurations via OSC: A guide on how to create a configuration from scratch via OSC.
- Using SuperCollider with Sense/Stage: This page describes how to get Sense/Stage data into SuperCollider.
- Assigning a MiniBee Configuration via OSC: A guide on how to assign a configuration to a MiniBee by sending an OSC message.
- Optimizing your Sense/Stage Network: Tips and tricks for optimizing your network and MiniBee configurations for optimal performance.
- Installing Pydon the Hard Way on OSX/Linux: Describes step-by-step what the installation script is doing, if you want to do it yourself or need to debug the process.
- Installing Pydon the Hard Way on Windows: Step-by-step instructions for installing Pydon on Windows if the installation script isn't working for you, or if you want to know what exactly is being installed on your system.
- Using Touch Designer with Sense/Stage: These video tutorials show you how to use Touch Designer with Sense/Stage.
- Advanced guide explaining how to use the Arduino IDE to reprogram and customize the firmware on the MiniBees, for example to support special I2C sensors.
- Programming the MiniBee with Arduino: A guide on how to program firmware onto the MiniBee with the Arduino IDE.
- Advanced control over outputs with custom firmware: A guide for creating more detailed control over outputs using custom firmware.
- MiniBee DIY - use your own Arduino and XBee: This page describes how to make use of the MiniBee firmware with your own Arduino and XBee combination.
- Using command line XBee tools: An advanced guide on configuring XBee radios from the command line instead of using XCTU.
- Upgrading to use API mode: How to upgrade your XBees to use the more modern API mode. If you bought your MiniBees in 2012 or later then this document is not relevant to you.
Office: Room C511, Roderic Hill Building, Centre for Process Systems Engineering, Imperial College London
Office Tel: +44 (0)7411 646 6362
Supervisors: Prof. Sakis Mantalaris and Prof. Stratos Pistikopoulos

Research interests:
- Mammalian cell culture and modelling
- Bioprocess systems engineering
- Culture design and fabrication
- Analytical biostatistics

Education:
- PhD in Chemical Engineering, Imperial College London, UK.
- 2012-2013: MSc in Chemical Engineering, Imperial College London, UK. Thesis: 3D scaffold cord blood erythropoiesis systems.
- 2008-2012: BSc in Mathematics, Pepperdine University, CA, USA; BA in Chemistry, Pepperdine University, CA, USA.

Thesis: Development of a bio-inspired in silico-in vitro platform: towards personalised healthcare through optimisation of a bone marrow mimicry bioreactor

Publications: Please refer to ResearchGate for publications.

Teaching:
- 2015: Advanced Bioprocess Engineering Tutor
- 2014: Knowledge Laboratories - CSTR Conversion Teaching Assistant
- 2011: General Chemistry (1 Yr) Laboratory Teaching Assistant

Awards:
- 2014-2017: Imperial College Chemical Engineering Department International Scholarship
- 2011-2012: Pepperdine University TOOMA and SURP research grants
- 2010: Natural Science Foundation DMS-0601395 research grant at UCLA
- 2009: Pi Mu Epsilon Academic Fraternity for Mathematical Excellence

Research experience:
- 2012: HRL Laboratories Developmental Engineer in the Sensors and Materials Laboratory
- 2011: Pepperdine University REU in Natural Product Discovery and Multivariate Statistics
- 2011: Imperial College UROP in Nanoporous Protein Crystallisation Methods
- 2010: UCLA Applied Mathematics REU in Stochastic Modelling and Prediction

Dr. Ana Luz Quiroga Campano
Office: Room C511, Roderic Hill Building, Centre for Process Systems Engineering, Department of Chemical Engineering
Office Tel: +44 (0)7411 646 6362
Supervisor: Prof. Sakis Mantalaris

Research interests:
- Biotechnology and bioprocess engineering
- Modelling and optimisation of cell culture processes

Education:
- 2014-2017: PhD in Chemical Engineering, Imperial College London, UK.
- 2010-2012: Master of Sciences (Biotechnology), Universidad de Chile, Chile.
- 2004-2010: Chemical Engineering, Universidad de Chile, Chile.
- 2004-2010: Biotechnology, Universidad de Chile, Chile.

Thesis: Mathematical modelling and experimental validation for the optimization and control of mammalian cell culture systems

Publications:
- M.E. Lienqueo, C. Shene, A. Quiroga, O. Salazar, J.C. Salgado, J.A. Asenjo, "Experimental validation of the predictions of a mathematical model for protein purification and tag selection", Separation Science and Technology, 45 (2010) 2153.

Teaching:
- 2009-2010: Metabolic Engineering and Fermentation BT52A. Department of Chemical Engineering and Biotechnology, Faculty of Physics and Mathematical Sciences, Universidad de Chile.
- 2010: Workshop of Projects EI2001-24 (March-July 2010). Department of Chemical Engineering and Biotechnology, Faculty of Physics and Mathematical Sciences, Universidad de Chile.
- 2011-2012: Workshop of Process Design. Academic support in the design of a waste water treatment plant with a group of 4 Chemical Engineering students. Department of Chemical Engineering and Biotechnology, Faculty of Physics and Mathematical Sciences, Universidad de Chile.

Funding:
- 2012-: Becas-Chile Scholarship, Advanced Human Capital Program, National Commission for Scientific and Technological Research (CONICYT), Government of Chile.
- 2010-2012: Researcher funding, Master of Sciences, Universidad de Chile.
- 2009-2012: FONDECYT project N°1080143, "Effects of hydrophobic polypeptide tag fusion on protein purification by Hydrophobic Interaction Chromatography".
package valute;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.util.ArrayList;

public class Valuta {

    private File fileGiorno;
    private File fileGiornoSuccessivo;

    public Valuta(String fileGiorno, String fileGiornoSuccessivo) {
        this.fileGiorno = new File(fileGiorno);
        this.fileGiornoSuccessivo = new File(fileGiornoSuccessivo);
    }

    // Writes one exchange-rate line per entry to each day's file.
    public void scriviCambio(ArrayList<String> giorno, ArrayList<String> giornoSuccessivo) {
        try (BufferedWriter scrivi = new BufferedWriter(new FileWriter(fileGiorno))) {
            for (String d : giorno) {
                scrivi.write(d);
                scrivi.newLine();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        try (BufferedWriter scrivi = new BufferedWriter(new FileWriter(fileGiornoSuccessivo))) {
            for (String d : giornoSuccessivo) {
                scrivi.write(d);
                scrivi.newLine();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Looks up the given currency's rate in both files (format: "<id> <valuta> <tasso>")
    // and classifies the day-to-day change as rising, falling or stable.
    public String rialzo(String valuta) {
        String temp;
        String[] arr;
        double tassoCambio1 = 0, tassoCambio2 = 0;
        try (BufferedReader leggi = new BufferedReader(new FileReader(fileGiorno))) {
            while ((temp = leggi.readLine()) != null) {
                arr = temp.split(" ");
                if (arr[1].equals(valuta)) {
                    tassoCambio1 = Double.parseDouble(arr[2]);
                    break;
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        try (BufferedReader leggi2 = new BufferedReader(new FileReader(fileGiornoSuccessivo))) {
            while ((temp = leggi2.readLine()) != null) {
                arr = temp.split(" ");
                if (arr[1].equals(valuta)) {
                    tassoCambio2 = Double.parseDouble(arr[2]);
                    break;
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        // Percentage change; beyond a +/-0.01% tolerance the rate counts as moved.
        double rialzo = 100 * (tassoCambio2 / tassoCambio1 - 1);
        if (rialzo > 0.01)
            return "In Rialzo";
        else if (rialzo < -0.01)
            return "In Ribasso";
        else
            return "Stabile";
    }

    // Classifies every currency listed in the first day's file.
    public ArrayList<String> rialzo() {
        ArrayList<String> ritorna = new ArrayList<>();
        String temp;
        String[] arr;
        try (BufferedReader leggi = new BufferedReader(new FileReader(fileGiorno))) {
            while ((temp = leggi.readLine()) != null) {
                arr = temp.split(" ");
                ritorna.add(arr[1] + " " + rialzo(arr[1]));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return ritorna;
    }
}
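A minimal standalone sketch of the threshold rule implemented by rialzo(String). The class and method names here (RialzoDemo, classifica) are illustrative only and not part of the file above; the sketch avoids the file I/O so the classification logic can be tried in isolation:

```java
// Hypothetical demo class restating the classification rule from Valuta.rialzo(String).
public class RialzoDemo {

    // Percentage change between two exchange rates, with the same +/-0.01% tolerance.
    public static String classifica(double tassoCambio1, double tassoCambio2) {
        double rialzo = 100 * (tassoCambio2 / tassoCambio1 - 1);
        if (rialzo > 0.01) return "In Rialzo";
        else if (rialzo < -0.01) return "In Ribasso";
        else return "Stabile";
    }

    public static void main(String[] args) {
        // 1.10 -> 1.15 is roughly a +4.5% move, well above the tolerance.
        System.out.println(classifica(1.10, 1.15)); // prints "In Rialzo"
    }
}
```

Note that rialzo(String) above returns a division-by-zero result (NaN or infinity) when a currency is missing from the first file, since tassoCambio1 stays 0; callers may want to guard against that.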
Several Errors (Primarily "error can't be fixed" and LOGICAL BLOCK ADDRESS OUT OF RANGE)

Not sure if this is due to poor use of switches with this drive, but I'm trying to follow the setup required by BetaArchive. The disc is a little worn but very clean.

Game Info:
Disney/Pixar's BUZZ LIGHTYEAR 2nd GRADE
Ring Serial - F3235
Mastering Code - F3235 + + A0592-01
Mastering SID Code - IFPI L028
Mould Code - IFPI 1081
System/Media Type: IBM PC Compatible / CD-ROM

Drive Info:
LG WH14NS40
Alt - HL-DT-ST-BD-RE_WH14NS40
Firmware - 1.01-N1A12A1-211304042325 (I can crossflash to Asus BW-16D1HT 3.02)

DIC Info:
Attempted using DIC 20210301
Attempted using DIC_test (Dated: Wednesday, February 17, 2021, 4:51:12 AM)
Current Dir: C:\Users\Chris\Desktop\BA DIC
DIC Dir: C:\Users\Chris\Desktop\BA DIC\DIC
Command: DIC\DiscImageCreator.exe cd F media_dic\Disc01\Disc01.bin 48 /c2 1000 /q /ns /s 2

Error 1 (I'm assuming this isn't actually an issue, just DIC hard-coded to check for a file even if it didn't get generated earlier):
FINDSTR: Cannot open DIC\!exelist.txt
[F:ReadCDForCheckingExe][L:1743] GetLastError: 2, The system cannot find the file specified.
FINDSTR: Cannot open DIC\!exelist.txt
[F:ReadCDForCheckingExe][L:1743] GetLastError: 2, The system cannot find the file specified.
[F:ReadCDForCheckingExe][L:1803] GetLastError: 2, The system cannot find the file specified.
Failed to DeleteFile C:\Users\Chris\Desktop\BA DIC\!exelist.txt

Error 2: Multiple instances of these errors at different LBAs even though I am not using a Plextor drive:
LBA[086622, 0x1525e] Detected C2 error "F0 F0 F0 00 00 00 0F 0F 0F"
This error can't be fixed by plextor drive.
Needs to dump it by non-plextor drive and replace it

Error 3:
LBA[317877, 0x4d9b5]: [F:ProcessReadCD][L:323] Opcode: 0xbe ScsiStatus: 0x02 = CHECK_CONDITION SenseData Key-Asc-Ascq: 05-21-00 = ILLEGAL_REQUEST - LOGICAL BLOCK ADDRESS OUT OF RANGE
LBA[317877, 0x4d9b5]: [F:ProcessReadCD][L:318] Opcode: 0xbe ScsiStatus: 0x02 = CHECK_CONDITION SenseData Key-Asc-Ascq: 05-21-00 = ILLEGAL_REQUEST - LOGICAL BLOCK ADDRESS OUT OF RANGE
LBA[317878, 0x4d9b6]: [F:ProcessReadCD][L:323] Opcode: 0xbe ScsiStatus: 0x02 = CHECK_CONDITION SenseData Key-Asc-Ascq: 05-21-00 = ILLEGAL_REQUEST - LOGICAL BLOCK ADDRESS OUT OF RANGE

And finally at the end, Error 4:
Need to reread sector: 89949 rereading times: 1/1000
Done. See _c2Error.txt
Copying .scm to .img
Descrambling data sector of img: 317876/317876
Exec ""C:\Users\Chris\Desktop\BA DIC\DIC\EccEdc.exe" check "C:\Users\Chris\Desktop\BA DIC\media_dic\Disc01\Disc01.img""
FILE: C:\Users\Chris\Desktop\BA DIC\media_dic\Disc01\Disc01.img
Checking sectors: 317876/317876
[ERROR] Number of sector(s) where user data doesn't match the expected ECC/EDC: 2
Total errors: 2

Generated files, excluding binary and empty ones, attached: BUZZ Dump Attempt.zip

Uploaded test version. https://www.mediafire.com/file/eq80y20l9cwf48f/DiscImageCreator_test.7z/file
Error 1: Fixed.
Error 2: This message is only displayed by Plextor drives.
Error 3: No problem if this message is displayed by your drive.

Error 1: Looks good.
Error 2: Not sure if I misunderstand what you're saying, but that's what's strange: I got several of those messages while performing this dump on my WH14NS40; it wasn't a separate attempt with a Plextor drive. Haven't owned one of those in over a decade, sadly. With the test build provided, though, I didn't get any.
Error 3: Ah, I just realized these were still present, but accounted for (they no longer caused execution to halt) when you accounted for my drive's firmware in #59.
So OK, no problem.

I think the dump is good at this point, as I believe that all of the errors were corrected or expected, though there were many more this time. Let me know what you think.

Checking sectors: 317876/317876
[ERROR] Number of sector(s) where bad MSF: 14
[ERROR] Number of sector(s) where user data doesn't match the expected ECC/EDC: 31
[ERROR] Number of sector(s) where sync(0x00 - 0x0c) is invalid: 2
Total errors: 47
Total warnings: 0

Take Two.zip

Unfortunately, the disc condition is bad. I recommend it's resurfaced.

Ah alright, wasn't sure if the error report at the end like that was just counts of times it had to be reread, or counts where it couldn't be read even after retries. This one isn't high priority for me so I'll resurface it eventually; I'm more glad that the issues it presented were useful for fixes.
perm filename EXER[206,LSP] blob sn#240348 filedate 1976-10-16

1. Write down an S-expression A with the property that eval(A,NIL)=A. (Hint: use the function SUBST)

2. A real number generator is a function f which maps the integers into the set of decimal digits 0,1,...,9, the idea being that f(n) = the nth digit after the decimal point of the real number "generated" by f. The values of the function on non-positive integers determine the digits before the decimal point in the obvious way. We restrict our attention to functions f with the property that only a finite number of their values on non-positive integers are non-zero (no "infinite" numbers allowed!). For example, the real number generator (λx. if x≤-2 then 0 else 3) generates the real number 33.3333.... = 33 1/3. (Only positive numbers can be represented using this scheme; extension of the scheme to the negative numbers would be trivial.)

You are to write an addition routine for real number generators. This will be a functional PLUS which, given two real number generators f and g, returns a real number generator PLUS(f,g) which generates the sum of the real numbers generated by f and g. In fact this task as stated is impossible, since no functional PLUS which represents addition of real numbers can have the property that its output is always a total function when its inputs are total. (Why?) Thus we must relax our conditions: whenever PLUS(f,g) terminates on input n, the value returned must be the nth digit of the sum of the numbers generated by f and g. (Note that the function which is undefined everywhere meets these conditions; however you can certainly do better than that. At the very least PLUS should return a total function when its arguments have only a finite number of non-zero digits.)
If you enjoyed writing PLUS, try writing more pieces of an arithmetic package for real number generators.

3. (Hard) Write a LISP expression f such that apply(f,x) = x! (x factorial), observing the following restriction: f must be an expression in pure LISP (no PROGs) which does not use the LABEL construct.
feat: Inverse schema

As a proposal, I have switched around the ytt order of files to include a full schema of the argocd settings in myks. This can be selectively overwritten in the data-schema.ytt.yaml of the client project, the idea being that the client benefits from a simple data-schema.ytt.yaml and sensible defaults. This change requires #@overlay/match-child-defaults missing_ok=True to be added to the client's data-schema.ytt.yaml.

Also, I made an opinionated change to the argo plugin files, merging them both into one file with the intention of avoiding duplicated methods and aggregating all Argo-related code in a single file.

Lastly, I prepended the Argo application CR name with the env name, since applications would otherwise be likely to overwrite each other.

> As a proposal, I have switched around the ytt order of files to include a full schema of the argocd settings in myks. This can be selectively overwritten in the data-schema.ytt.yaml of the client project, the idea being that the client benefits from a simple data-schema.ytt.yaml and sensible defaults.

If I remember correctly, we were talking about encapsulating the default data-schema.ytt.yaml in myks and letting users do as they wish (in this case an envs/data-schema.ytt.yaml is not required). I'd go this way instead of extracting just the argocd part. This is actually the last thing I'd like to implement before releasing the new version. If you have any suggestions for more breaking changes, we'd better implement them now.

> Also, I made an opinionated change to the argo plugin files, merging them both into one file with the intention of avoiding duplicated methods and aggregating all Argo-related code in a single file.

Merging the code into a single file makes sense, that's better. However, I'm not sure about the new function names. For example, app.renderArgoCDApplication might mean that the function renders the Application custom resource of ArgoCD.
This is fine in general, but there are no other functions attached to the Application structure that render other ArgoCD custom resources. Having app.renderArgoCD to render all ArgoCD resources related to the current application feels lighter.

In contrast to app.renderArgoCDApplication, env.renderArgoCDEnvironment doesn't render the Environment custom resource. Instead, it renders an AppProject and a Secret. In this case, having env.renderArgoCD seems even more sensible. Maybe I'm missing something.

> Lastly, I prepended the Argo application CR name with the env name, since applications would otherwise be likely to overwrite each other.

I wonder if applications will conflict if they're placed in separate namespaces. I don't really like this change, because it produces more customized internal logic, but let's have it, as we both do that at the moment (we at South Pole actually don't need to have this, but my colleague said that he likes this more :-) ).

> As a proposal, I have switched around the ytt order of files to include a full schema of the argocd settings in myks. This can be selectively overwritten in the data-schema.ytt.yaml of the client project, the idea being that the client benefits from a simple data-schema.ytt.yaml and sensible defaults.

> If I remember correctly, we were talking about encapsulating the default data-schema.ytt.yaml in myks and letting users do as they wish (in this case an envs/data-schema.ytt.yaml is not required). I'd go this way instead of extracting just the argocd part.

Shouldn't the user be able to add a schema of sorts, in case he wishes to add his own values, e.g. generic settings that affect multiple applications? We use that already in one of our rollout repos.

I have changed the logic to include an entire schema file (the one from assets) and made it flexible to be overwritten with a schema provided by the user. Please have a look. If you like, you can make further changes on this PR to get this feature completed.
> This is actually the last thing I'd like to implement before releasing the new version. If you have any suggestions for more breaking changes, we'd better implement them now.

> > Also, I made an opinionated change to the argo plugin files, merging them both into one file with the intention of avoiding duplicated methods and aggregating all Argo-related code in a single file.

> Merging the code into a single file makes sense, that's better. However, I'm not sure about the new function names. For example, app.renderArgoCDApplication might mean that the function renders the Application custom resource of ArgoCD. This is fine in general, but there are no other functions attached to the Application structure that render other ArgoCD custom resources. Having app.renderArgoCD to render all ArgoCD resources related to the current application feels lighter.

Agreed - no need to add that suffix to the method names.

> In contrast to app.renderArgoCDApplication, env.renderArgoCDEnvironment doesn't render the Environment custom resource. Instead, it renders an AppProject and a Secret. In this case, having env.renderArgoCD seems even more sensible. Maybe I'm missing something.

> > Lastly, I prepended the Argo application CR name with the env name, since applications would otherwise be likely to overwrite each other.

> I wonder if applications will conflict if they're placed in separate namespaces. I don't really like this change, because it produces more customized internal logic, but let's have it, as we both do that at the moment (we at South Pole actually don't need to have this, but my colleague said that he likes this more :-) ).

I guess it is a matter of taste, but let's settle on a default that works out of the box. Currently, it looks like you require extra manual config to ensure Argo Application CRs are placed in separate namespaces.

> Shouldn't the user be able to add a schema of sorts, in case he wishes to add his own values, e.g. generic settings that affect multiple applications?
> We use that already in one of our rollout repos.

The user was able and will be able to modify the schema via env-data.ytt.yaml files. I only want to remove data-schema.ytt.yaml from the user's sight and have it managed by myks.

> I have changed the logic to include an entire schema file (the one from assets) and made it flexible to be overwritten with a schema provided by the user. Please have a look. If you like, you can make further changes on this PR to get this feature completed.

Thank you! I'll directly change a couple of tiny bits.

> I guess it is a matter of taste, but let's settle on a default that works out of the box. Currently, it looks like you require extra manual config to ensure Argo Application CRs are placed in separate namespaces.

🤝
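For readers unfamiliar with the ytt annotation discussed in this thread, a rough sketch of how #@overlay/match-child-defaults missing_ok=True might appear in a client override file (the file header and the argocd key names below are illustrative assumptions, not taken from myks):

```yaml
#@ load("@ytt:overlay", "overlay")
#@data/values
---
#@overlay/match-child-defaults missing_ok=True
argocd:
  #! missing_ok=True lets this overlay add keys that the
  #! default schema shipped with myks does not declare.
  namespace: my-argocd-ns
```

With match-child-defaults, the missing_ok behaviour applies to every child key under argocd, so the client file stays short even when it introduces new settings.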
Allow setting the pg_user in Docker image for timescale container

Is your feature request related to a problem? Please describe.
We have to use the postgres user for the db since the Docker image doesn't allow setting the user.

Describe the solution you'd like
In https://github.com/orchestracities/ngsi-timeseries-api/blob/master/timescale-container/quantumleap-db-setup.py#L133 the Python script has the option of setting the user; it would just need to pass this parameter as an env variable in the Dockerfile, like --pg-user "$PG_USER".

Describe alternatives you've considered
We will use the postgres user for now, but since we have multiple tenants in one external db, it would be more secure and nicer to have dedicated users.

Additional context
The user specified in PG_USER would need elevated rights like CREATE DATABASE, I would guess. Please correct me if this is not a viable solution; I would just assume it's a simple option, considering that it's implemented in the Python script setting up the database.

hi @valluwtf :-)

> the Docker image doesn't allow setting the user

You're referring to the images we use in our docker compose files, I'd guess? It looks like you could actually whip together your own Docker Compose file with a recent Postgres image and specify the Postgres user through the POSTGRES_USER env var: https://hub.docker.com/_/postgres

> the python script has the option of setting the user, it would just need to pass this parameter as env. variable in the Dockerfile like

Not sure I understand what you're suggesting---old age, don't judge :-) Can you give me a bit more context? Are you trying to use the QuantumLeap Postgres init container? If so, is this the Docker file you're referring to: https://github.com/orchestracities/ngsi-timeseries-api/blob/master/timescale-container/Dockerfile

Yes, there's no option for the user there, but keep in mind you could easily override the default Docker command in the Docker file with e.g.
this one:

python quantumleap-db-setup.py \
    --ql-db-pass "$QL_DB_PASS" \
    --ql-db-init-dir "$QL_DB_INIT_DIR" \
    --pg-host "$PG_HOST" \
    --pg-pass "$PG_PASS" \
    --pg-username "$PG_USER"

Surely, it'd be nicer to add that option to the Docker file, but it'd need to be done in a backward-compatible way. That is, if the PG_USER env var is unset or empty, then don't add the --pg-username "$PG_USER" to the command. Anyhoo, we welcome pull requests!

@valluwtf forgot to mention: if all you need to do is create the QuantumLeap DB, then you may be better off not using the init container. In fact, all that the script inside the container does is run this SQL code

https://github.com/orchestracities/ngsi-timeseries-api/blob/master/timescale-container/quantumleap-db-setup.py#L201C1-L211C52

which you could actually easily do yourself, e.g. by asking psql to evaluate this SQL:

CREATE ROLE quantumleap LOGIN PASSWORD 'changeme';
CREATE DATABASE quantumleap OWNER quantumleap ENCODING 'UTF8';
\connect quantumleap
CREATE EXTENSION IF NOT EXISTS postgis CASCADE;
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
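The backward-compatible behaviour described above (only pass --pg-username when PG_USER is set and non-empty) could be sketched in a container entrypoint like this. The build_cmd helper is a hypothetical name, not code from the repo:

```shell
#!/bin/sh
# Sketch: build the setup command, adding --pg-username only when PG_USER is set.
build_cmd() {
  cmd="python quantumleap-db-setup.py --ql-db-pass \"$QL_DB_PASS\" --pg-host \"$PG_HOST\" --pg-pass \"$PG_PASS\""
  if [ -n "$PG_USER" ]; then
    # Optional flag: omitted entirely when PG_USER is unset or empty,
    # so existing deployments keep their current behaviour.
    cmd="$cmd --pg-username \"$PG_USER\""
  fi
  echo "$cmd"
}
```

The entrypoint would then run the result (e.g. via eval "$(build_cmd)"); echoing it here keeps the sketch side-effect free.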
How to listen for parent component value changes in a child component in Ember Octane?

This is my code:

parent.hbs

<ChildComp @value={{this.changes}} />

parent.js

export class ParentComponent extends Component {
  @tracked changes = [1, 2, 3];

  @action
  addChanges() {
    this.changes = [...this.changes, this.changes.length];
  }
}

child-comp.js

export class ChildComponent extends Component {
  // I want to observe updates to this.args.changes
  // and call some operation based on them
  get operation() {
    return this.args.values.map((obj) => {
      if (obj.parent !== null) {
        set(obj, 'goal', null);
        set(obj, 'newSubGoal', null);
      }
    });
  }
}

I want to observe this.args.changes in my child component. How can I do that the Ember Octane way?

I'm assuming from your code snippet that you are using the latest Octane features in your app. We can make a class property reactive using the @tracked decorator. With this, when a tracked value changes in the parent component, the change is propagated to all the places where it is used. In your case, you have passed the value into the child component, and thus changes will also be tracked inside the child component.

parent component class:

export class ParentComponent extends Component {
  @tracked changes = [1, 2, 3];

  @action
  addChanges() {
    this.changes = [...this.changes, this.changes.length];
  }
}

In your child component you can use getters to recompute the changes accordingly. A getter is recomputed every time a value accessed inside it changes. For instance, from your code, if you need the sum of the array:

child component class:

export class ChildComponent extends Component {
  // This getter will be recomputed every time the `changes` array changes.
  get sum() {
    return this.args.value.reduce((a, b) => a + b, 0);
  }
}

child template:

SUM: {{this.sum}}

EDIT

If you need to run an arbitrary function when a value changes (mostly used to sync with external libraries or manual DOM mutations), you can use the {{did-update}} modifier of ember-render-modifiers.

What if I just set something inside the getter and don't use this sum value in templates? I will update my question.

Hi @Gokul, I changed the question. Can you answer this scenario? I am kind of stuck on that.

I think you need the lifecycle hooks to watch the argument update. These were translated to render modifiers. You can use the {{did-update}} modifier of https://github.com/emberjs/ember-render-modifiers to run a function when a value changes.
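A sketch of the {{did-update}} approach mentioned in the answer, using ember-render-modifiers. The component and action names here are illustrative, not from the question:

```javascript
// child-comp.js -- runs a function whenever the @value argument changes.
// Template (child-comp.hbs) would attach the modifier to an element:
//   <div {{did-update this.onValueChange @value}}></div>
// {{did-update}} re-invokes this.onValueChange after render each time
// any of its trailing arguments (@value here) changes.
import Component from '@glimmer/component';
import { action } from '@ember/object';

export default class ChildComponent extends Component {
  @action
  onValueChange(element) {
    // Sync external libraries or do manual DOM work here;
    // the current argument is available as this.args.value.
  }
}
```

Note this requires an element to attach to, which is why getters are preferred for purely derived state and {{did-update}} is reserved for side effects.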