The Best Binary Options Signals Providers for 2020 • Benzinga

Best Binary Options Autotrading Robot for 2017, Software To Use?

submitted by mykosonai to binaryreviewpanther

7 Figure Club Review 2015 - Is 7 Figure Club SCAM Or LEGIT? How Does 7 Figure Club Software Work?? Best Binary Options Trading System Is 7figureclub.co For Real? The Truth About 7 Figure Club By Martin Taylor Review

7 Figure Club Review 2015 - 7 FIGURE CLUB?? Learn the SECRETS about 7 Figure Club in this 7 Figure Club review! So what is the 7 Figure Club software all about? Does 7 Figure Club actually work? Is 7 Figure Club a scam, or does it really work?
To find answers to these questions, continue reading my in-depth and truthful 7 Figure Club review below.
7 Figure Club Description:
Name: 7 Figure Club
Niche: Binary Options.
This is the ONLY solution to beat the brokers at their own game, and for the first time you have the opportunity to see it done LIVE in front of your eyes. All you need to do is COPY exactly what you are shown here:
Official Web site: CLICK HERE NOW!!!
Exactly what is 7 Figure Club?
7 Figure Club is essentially a binary options trading software that is designed to help traders win and forecast market trends with binary options. The software also provides analyses of market conditions so that traders know exactly what their next step should be. It offers various secret methods that ultimately help traders without the use of any complex trading indicators or graph-following.
7 Figure Club Binary Options Trading Strategy
Start by testing the 7 Figure Club trading strategy with small trades. After you see it working, you can implement the method with regular-sized lots. This approach will pay off over time. Every Forex binary options trader should choose an account type that is in accordance with their needs and expectations. A larger account does not mean a larger earnings potential, so it is a good idea to start small and slowly add to your account as your returns increase based on the trading choices you make.
Binary Options Trading
To help you trade binary options properly, it is necessary to understand the basics of binary options trading. Currency trading, or foreign exchange, is based on the perceived value of two currencies relative to one another, and is affected by the political stability of the country, inflation, and interest rates, among other things. Keep this in mind as you trade and learn more about binary options to maximize your learning experience.
7 Figure Club Summary
In summary, there are some obvious ideas that have been tested over time, along with some newer techniques that you may not have considered. Hopefully, as long as you follow what we suggest in this article, you can either get started with trading with 7 Figure Club or improve on what you have already done.
This fortune 500 trader is showing people LIVE on camera how he uses this fully automated trading tool to Generate Massive Profits!
There Are Only A Limited Number Of Spaces Available
So Act Now Before It's Too Late
Click Here To Claim Your 7 Figure Club LIFETIME User License!!
7 Figure Club is 100% LEGIT and an easy way to generate massive profits from trading, using the very SAME system the big dogs on Wall Street are using.
It's very quick and easy and NOTHING to do with any of the usual markets:
Click Here To Download 7 Figure Club Right NOW!
submitted by GarafanoRoehr55 to GarafanoRoehr

Certified Profits Review 2015 - Is Certified Profits SCAM Or LEGIT? How Does Certified Profits Software Work? Best Binary Options Trading System Is Certifiedprofits.com For Real? The Truth About Certified Profits By Philip Diamond Review

Certified Profits Review 2015 - CERTIFIED PROFITS?? Discover the SECRETS about Certified Profits in this Certified Profits review! So what is the Certified Profits software all about? Does Certified Profits actually work? Is the Certified Profits software a scam, or does it really work?
To find answers to these questions, continue reading my in-depth and honest Certified Profits review below.
Certified Profits Description:
Name: Certified Profits
Niche: Binary Options.
You'll be pretty blown away when you see how the Certified Profits software works!
But the amazing thing is it's so damn easy!
Official Web site: Access The NEW Certified Profits Software!! CLICK HERE NOW!!!
What is Certified Profits?
Certified Profits is generally a binary options trading software that is created to help traders win and forecast marketplace trends with binary options. The software also offers evaluations of market conditions so that traders know what their next step should be. It provides different secret methods that ultimately help traders without the use of any complicated trading indicators or graph-following.
Certified Profits Binary Options Trading Method
Start by testing the Certified Profits trading technique with small trades. After you see it working, you can implement the method with regular-sized lots. This approach will pay off over time. Every Forex binary options trader should select an account type that is in accordance with their requirements and expectations. A bigger account does not imply a bigger revenue potential, so it is a terrific idea to start small and gradually add to your account as your returns increase based on the trading choices you make.
Binary Options Trading
To help you trade binary options properly, it is important to understand the basics of binary options trading. Currency trading, or forex, is based on the perceived value of two currencies relative to one another, and is impacted by the political stability of the country, inflation, and interest rates, among other things. Keep this in mind as you trade and learn more about binary options to optimize your learning experience.
Certified Profits Summary
In summary, there are some obvious concepts that have been tested over time, as well as some newer methods that you might not have considered. Hopefully, as long as you follow what we recommend in this post, you can either start trading with Certified Profits or improve on what you have already done.
Every now and then something comes along that really shocks you.
I mean REALLY shocks you.
And I have to say this is definitely one of those moments,
I really think you're going to take action immediately once you've seen this.
Not only that, I KNOW you're going to be so much happier too.
Don't wait another second though, this might not be up for much longer.
So Act Now Before It's Too Late
Click Here To Claim Your Certified Profits LIFETIME User License!!
This "Hands-Free" Auto-Profits Software Actually Works!
Click Here To Download Certified Profits Software Right NOW!
submitted by PiaSotlo to RenkoPipScalper

Copyop Review - NEW Copy OP Trading Platform By Dave BEST Forex Binary Option Social Trading Network 2015 For Currency Pairs Without Using Automated Signals Software Bots Copy Professional Traders Copy-OP From Anyoption Binary Brokerage Reviewed

Copyop Review - NEW Copy OP Trading Platform By Dave BEST Forex Binary Option Social Trading Network 2015 For Currency Pairs
Copy Professional Traders Copy-OP From Anyoption Binary Brokerage Reviewed Start Copying The Most Successful Traders! Stop losing money on Trading Bots and Systems! Copy the BEST Traders on the market Now and start for FREE!
CLICK HERE!!
So What Is The CopyOp?
CopyOp is a binary options social trading network. CopyOp allows you to copy the trades of professional traders with years of trading experience. The interface is sleek and easy on the eyes, and care has obviously been taken to make navigating and comprehending trades as simple as possible. It basically operates on the idea that an asset's financial worth is either going to rise or fall; it gives you a complete overview of the trade and the indicators which will advise you on how to proceed. This is so much easier than needing to hunt down the trading information you need from numerous different trading websites. Instead, you'll have all the info you need in one place!
Click Here And Watch This Video!
CopyOp Review
Copy Op is a web-based software built for the real world; there are no assurances here that users are going to suddenly be raking in millions. No binary options trading software is going to provide easy fortunes overnight, so instead all it offers is helpful advice so that you can make the trade. Each trade takes place at a separate time period over the course of the day, which is especially useful to those working with limited time. The amazing thing about the Copy-Op platform is that you choose the sum you use for a trade, meaning you can trade whatever you're comfortable with. As for CopyOp itself, we were extremely reluctant to be taken in by its claims, and were actually put off by what the creators had touted as its benefits. But basically, CopyOp is a straightforward and convenient software. All that's required are a few clicks and you'll be investing right away!
CopyOp Binary Options Social Trading Platform
Click Here For More Information About Copyop!
submitted by QueletteBasta9 to CopyOp

What's new in macOS 11, Big Sur!

It's that time of year again, and we've got a new version of macOS on our hands! This year we've finally jumped off the 10.xx naming scheme and are going to 11! And with that, a lot has changed under the hood in macOS.
As with previous years, we'll be going over what's changed in macOS and what you should be aware of as a macOS and Hackintosh enthusiast.

Has Nvidia Support finally arrived?

Sadly, every year I have to answer the obligatory question: no, there is no new Nvidia support. Currently, Nvidia's Kepler line is the only natively supported generation.
However, macOS 11 makes some interesting changes to the boot process, specifically moving GPU drivers into stage 2 of booting. This is relevant because of Apple's initial reason for killing off Web Drivers: Secure Boot. Secure Boot cannot work with Nvidia's Web Drivers due to how early Nvidia's drivers have to initialize, and thus Apple refused to sign the binaries. With Big Sur, 3rd party GPUs could return; however, the chances are still super slim, if slightly higher than with 10.14 and 10.15.

What has changed on the surface

A whole new iOS-like UI

Love it or hate it, we've got a new UI more reminiscent of iOS 14, with hints of skeuomorphism (a somewhat subtle callback to previous Mac UIs, which had neat details in the icons).
You can check out Apple's site to get a better idea:

macOS Snapshotting

A feature initially baked into APFS back in 2017 with the release of macOS 10.13, High Sierra: macOS's main System volume has now become both read-only and snapshotted. What this means is:
However there are a few things to note with this new enforcement of snapshotting:

What has changed under the hood

Quite a few things actually! Both in good and bad ways unfortunately.

New Kernel Cache system: KernelCollections!

So for the past 15 years, macOS has been using the Prelinked Kernel as a form of kernel and kext caching. And with macOS Big Sur's new read-only, snapshot-based system volume, a new version of caching has been developed: KernelCollections!
How this differs to previous OSes:

Secure Boot Changes

With regards to Secure Boot, all officially supported Macs will now support some form of Secure Boot, even if there's no T2 present. This is done in 2 stages:
While technically these security features are optional and can be disabled after installation, many features including OS updates will no longer work reliably once disabled. This is due to the heavy reliance on snapshots for OS updates, as mentioned above, and so we highly encourage all users to ensure at minimum SecureBootModel is set to Default or higher.
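For OpenCore users, SecureBootModel lives under Misc -> Security in config.plist. A minimal sketch of the relevant fragment (key names per OpenCore's configuration documentation; the rest of the Misc section is omitted here):

```xml
<key>Misc</key>
<dict>
	<key>Security</key>
	<dict>
		<!-- "Default" lets OpenCore pick an appropriate Apple Secure Boot
		     model for your SMBIOS; "Disabled" opts out of Apple Secure Boot -->
		<key>SecureBootModel</key>
		<string>Default</string>
	</dict>
</dict>
```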

No more symbols required

This point is the most important part, as symbols are what we use for kext injection in OpenCore. Currently Apple has left symbols in place, seemingly for debugging purposes; however, this is a bit worrying, as Apple could outright remove symbols in later versions of macOS. For Big Sur's cycle we'll be good on that end, but we'll be keeping an eye on future releases of macOS.

New Kernel Requirements

With this update, the AvoidRuntimeDefrag Booter quirk in OpenCore broke, and because of this the macOS kernel will fall flat when trying to boot. The reason is that cpu_count_enabled_logical_processors requires the MADT (APIC) table, so OpenCore will now ensure this table is made accessible to the kernel. Users will, however, need a build of OpenCore 0.6.0 with commit bb12f5f or newer to resolve this issue.
Additionally, both Kernel Allocation requirements and Secure Boot have also broken with Big Sur due to the new caching system discussed above. Thankfully these have also been resolved in OpenCore 0.6.3.
To check your OpenCore version, run the following in terminal:
nvram 4D1FDA02-38C7-4A6A-9CC6-4BCCA8B30102:opencore-version
If you're not up-to-date and running OpenCore 0.6.3+, see here on how to upgrade OpenCore: Updating OpenCore, Kexts and macOS

Broken Kexts in Big Sur

Unfortunately, with the aforementioned KernelCollections, some kexts have broken or have been hindered in some way. The main kexts that currently have issues are anything relying on Lilu's userspace patching functionality:
Thankfully, most important kexts rely on the kernelspace patcher, which is now in fact working again.

MSI Navi installer Bug Resolved

For those receiving boot failures in the installer due to having an MSI Navi GPU installed, macOS Big Sur has finally resolved this issue!

New AMD OS X Kernel Patches

For those running on AMD-Based CPUs, you'll want to also update your kernel patches as well since patches have been rewritten for macOS Big Sur support:

Other notable Hackintosh issues

Several SMBIOS have been dropped

Big Sur dropped a few Ivy Bridge and Haswell based SMBIOS from macOS, so see below that yours wasn't dropped:
If your SMBIOS was supported in Catalina and isn't included above, you're good to go! We also have a more in-depth page here: Choosing the right SMBIOS
For those wanting a simple translation for their Ivy and Haswell Machines:

Dropped hardware

Currently only certain hardware has been officially dropped:

Extra long install process

Due to the new snapshot-based OS, installation now takes some extra time for sealing. If you get stuck at Forcing CS_RUNTIME for entitlement, do not shut down: doing so will corrupt your install and break the sealing process, so please be patient.

X79 and X99 Boot issues

With Big Sur, IOPCIFamily went through a decent rewrite, causing many X79 and X99 boards to fail to boot as well as kernel panic in IOPCIFamily. To resolve this issue, you'll need to disable the unused uncore bridge:
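The usual way to disable the bridge is a small SSDT. A rough sketch of the idea, assuming the unused uncore bridge sits at `\_SB.UNC0` (the actual device path varies by board, so check your own ACPI tables rather than treating this as a drop-in file):

```asl
DefinitionBlock ("", "SSDT", 2, "HACK", "UNC", 0x00000000)
{
    External (_SB.UNC0, DeviceObj)

    Scope (\_SB.UNC0)
    {
        Method (_STA, 0, NotSerialized)
        {
            // Hide the unused uncore bridge from macOS only;
            // other operating systems still see the device as before
            If (_OSI ("Darwin"))
            {
                Return (Zero)
            }
            Else
            {
                Return (0x0F)
            }
        }
    }
}
```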
You can also find prebuilts here for those who do not wish to compile the file themselves:

New RTC requirements

With macOS Big Sur, AppleRTC has become much more picky about whether your OEM correctly mapped the RTC regions in your ACPI tables. This is mainly relevant on Intel's HEDT series boards; I documented how to patch said RTC regions in OpenCorePkg:
For those having boot issues on X99 and X299, this section is super important; you'll likely get stuck at PCI Configuration Begin. You can also find prebuilts here for those who do not wish to compile the file themselves:

SATA Issues

For some reason, Apple removed the AppleIntelPchSeriesAHCI class from AppleAHCIPort.kext. Due to the outright removal of the class, trying to spoof to another ID (generally done by SATA-unsupported.kext) can fail for many and create instability for others.
* A partial fix is to block Big Sur's AppleAHCIPort.kext and inject Catalina's version with any conflicting symbols patched. You can find a sample kext here: Catalina's patched AppleAHCIPort.kext
* This will work in both Catalina and Big Sur, so you can remove SATA-unsupported if you want. However, we recommend setting the MinKernel value to 20.0.0 to avoid any potential issues.
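As a sketch, the corresponding `Kernel -> Add` entry in OpenCore's config.plist might look like the following (the bundle and executable paths are the conventional ones for this kext; verify them against your copy before use):

```xml
<dict>
	<key>BundlePath</key>
	<string>AppleAHCIPort.kext</string>
	<key>Comment</key>
	<string>Patched Catalina AppleAHCIPort, injected on Big Sur and newer only</string>
	<key>Enabled</key>
	<true/>
	<key>ExecutablePath</key>
	<string>Contents/MacOS/AppleAHCIPort</string>
	<key>MaxKernel</key>
	<string></string>
	<!-- Darwin 20.0.0 corresponds to macOS 11, Big Sur -->
	<key>MinKernel</key>
	<string>20.0.0</string>
	<key>PlistPath</key>
	<string>Contents/Info.plist</string>
</dict>
```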

Legacy GPU Patches currently unavailable

Due to major changes in many frameworks around GPUs, those using ASentientBot's legacy GPU patches are currently out of luck. We recommend that users with these older GPUs either stay on Catalina until further developments arise or buy an officially supported GPU.

What’s new in the Hackintosh scene?

Dortania: a new organization has appeared

As many of you have probably noticed, a new organization focusing on documenting the hackintoshing process has appeared. Originally under my alias, Khronokernel, I started to transition my guides over to this new family as a way to concentrate the vast amount of information around Hackintoshes to both ease users and give a single trusted source for information.
We work quite closely with the community and developers to ensure information is correct, up-to-date, and of the best standard. While not perfect in every way, we hope to be the go-to resource for reliable Hackintosh information.
And for the times our information is either outdated, missing context or generally needs improving, we have our bug tracker to allow the community to more easily bring attention to issues and speak directly with the authors:

Dortania's Build Repo

For those who either want to run the latest builds of a kext or need an easy way to test old builds of something, Dortania's Build Repo is for you!
Kexts here are built right after commit, and currently supports most of Acidanthera's kexts and some 3rd party devs as well. If you'd like to add support for more kexts, feel free to PR: Build Repo source

True legacy macOS Support!

As of OpenCore's latest version, 0.6.2, you can now boot every x86-based build of OS X/macOS! A huge achievement on @Goldfish64's part: we now support every major version of kernel cache, both 32- and 64-bit. This means machines like Yonah and newer should work great with OpenCore, and you can even relive the old days of OS X, like OS X 10.4!
And the Dortania guides have been updated accordingly to accommodate builds of those eras; we hope you get as much enjoyment going back as we did working on this project!

Intel Wireless: More native than ever!

Another amazing step forward for the Hackintosh community: near-native Intel Wifi support! Thanks to the endless work of many contributors to the OpenIntelWireless project, we can now use Apple's built-in IO80211 framework to get near-identical support to that of Broadcom wireless cards, including features like network access in recovery and control center support.
For more info on the developments, please see the itlwm project on GitHub: itlwm

Clover's revival? A Frankenstein of a bootloader

As many in the community have seen, a new bootloader popped up back in April of 2019 called OpenCore. This bootloader was made by the same people behind projects such as Lilu, WhateverGreen, AppleALC and many other extremely important utilities for both the Mac and Hackintosh communities. OpenCore's design was properly thought out, with security auditing and a proper roadmap laid down; it was clear that this was to be the next stage of hackintoshing for the years we have left with x86.
And now let's bring this back to the old crowd favorite, Clover. Clover has been having a rough time recently, both community-wise and stability-wise. With many devs jumping ship to OpenCore and Clover's stability breaking more and more with C++ rewrites, it was clear Clover was on its last legs. Interestingly enough, the community didn't want Clover to die, similarly to how Chameleon lived on through Enoch. And thus, we now have the Clover OpenCore integration project (now merged into master with r5123+).
The goal is to combine OpenCore into Clover, allowing the project to live a bit longer, as Clover in its current state can no longer boot macOS Big Sur or older versions of OS X such as 10.6. As of writing, this project seems a bit confusing, as there is little reason to actually support Clover. Many of Clover's properties have feature parity in OpenCore, and trying to combine both C++ and C ruins many of the features and benefits either language provides. The main feature OpenCore does not support is macOS-only ACPI injection; however, the reasoning is covered here: Does OpenCore always inject SMBIOS and ACPI data into other OSes?

Death of x86 and the future of Hackintoshing

With macOS Big Sur, a big turning point is about to happen for Apple and their Macs. As we know, Apple will be shifting to in-house-designed Apple Silicon Macs (really just ARM), and thus x86 machines will slowly be phased out of their lineup within 2 years.
What does this mean for both x86-based Macs and hackintoshing in general? Well, we can expect about 5 years of proper OS support for the iMac20,x series, which released earlier this year, with an extra 2 years of security updates. After that, Apple will most likely stop shipping x86 builds of macOS, and hackintoshing as we know it will have passed away.
For those still in denial and hope something like ARM Hackintoshes will arrive, please consider the following:
So while we may be heartbroken that the journey is coming to a stop in the somewhat near future, hackintoshing will still be a piece of Apple's history. So enjoy it now while we still can, and we here at Dortania will continue supporting the community with our guides till the very end!

Getting ready for macOS 11, Big Sur

This will be your short rundown if you skipped the above:
For the last 2, see here on how to update: Updating OpenCore, Kexts and macOS
In regards to downloading Big Sur, currently gibMacOS on macOS or Apple's own software updater are the most reliable methods for grabbing the installer. Windows and Linux support is still unknown, so please stand by as we continue to look into the situation; macrecovery.py may be more reliable if you require the recovery package.
And as with every year, the first few weeks to months of a new OS release are painful for the community. We highly advise first-time installers to stay away from Big Sur. The reason is that we cannot determine whether issues are Apple-related or specific to your machine, so it's best to install and debug a machine on a known working OS before testing out the new and shiny.
For more in-depth troubleshooting with Big Sur, see here: OpenCore and macOS 11: Big Sur
submitted by dracoflar to hackintosh

Allow me to explain how traditional game "patching", as done by game developers on consoles and even PC, is not always required for games to run better on Stadia over time... Stadia engineers can do it on their own, ever improving the visual quality of individual library titles.

I've been mulling over how to write this post without it getting too wordy and turning people away from the topic... but I feel it's important for people to consider in regard to investing in game purchases on Stadia. Even though a years-old game is ported to Stadia by a 3rd-party publisher, it is not abandoned once the developer moves on; aside from changes that require game engine code, the Stadia team can take over tweaking the performance of the game as the Linux kernel, the Vulkan API, and eventually the hardware undergo improvements over time.
I've seen heated comments/reactions in these parts when people start noticing older games suddenly looking or performing better... even though there is no sign of a game patch from the developer or any announcement that such a thing has happened. (FFXV.) I'm here to explain how this is totally possible.
(Disclaimer: I've been a gaming platform tester for 13 years, on a platform based on the Gentoo Linux kernel. This year I have branched directly into OS kernel / package testing itself.)
A software package / game is made up of more than just game code and pretty graphics. Another fairly big piece of the puzzle is configuration files, especially in the Linux world. Another thing about Linux: it never sits still. It's open source and ever growing, improving through constant iteration by engineers around the world. This includes the Vulkan API itself. Stadia's platform and Vulkan API have likely undergone dozens if not hundreds of iterations in the past year alone. They are CONSTANTLY improving, even if ever so slightly.
For comparison, a gaming console is a completely sealed environment. Not only does the hardware never change, but the OS and base platform have very little wiggle room for improvement. Most significant improvements happen within the first few years of a new console's life, but often the gains from that never spill over into the games themselves... rather, they go into the platform's UI and menus, such as adding new features outside of the game. For anything to change about a game at all, a patch MUST be delivered to the console. There is no other option, because the config files of individual games can't be touched in any other way.
On PC you often have access to these config files (at the developer's discretion of what they choose to expose, of course). Many people know you can dig into these settings, adjust number values, and flip on/off flags to affect your game. But these configuration files have default values set by the developers that are expected to never really be touched by the players... so even when the developers do want to change something for the benefit of everyone, they need to issue a game patch.
Now on a cloud platform such as Stadia, when a game is delivered by a developer to the platform, their game engine code (binaries) cannot be altered by anyone but the game developer themselves, as usual... so if there are bugs in the code, or engine code improvements to be made, the developer must deploy a game patch, as we have seen and as people would expect. However, the configuration files which define how the game performs on the platform's hardware are completely exposed... and this is what the Stadia team most likely has FULL control over. So if the Vulkan API gets some improvements or code optimizations, and they can squeeze a little more performance out of a game, the Stadia team can go into these config files and adjust things accordingly.
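As a purely hypothetical sketch (the section names, keys and values below are all invented for illustration; no real Stadia game config is public), the kind of platform-side tweak being described boils down to editing a key-value file that the game binary reads at launch, with no change to the binary itself:

```python
import configparser

# A made-up graphics config of the sort a game might ship with.
# The game binary reads this at launch; the platform operator can
# edit it without touching the binary.
ORIGINAL = """
[graphics]
texture_quality = medium
render_scale = 0.85
shadow_resolution = 1024
"""

config = configparser.ConfigParser()
config.read_string(ORIGINAL)

# After a driver/Vulkan optimization frees up GPU headroom, the
# operator bumps the exposed settings -- no game patch required.
config["graphics"]["texture_quality"] = "high"
config["graphics"]["render_scale"] = "1.0"

print(config["graphics"]["texture_quality"])  # high
print(config["graphics"]["render_scale"])     # 1.0
```

The point of the sketch is only that the settings live outside the compiled game code, which is why such changes need no developer involvement.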
Not only configurations but also the graphical assets themselves (media) can be swapped out for higher-res assets. It's also very possible that publishers/devs provide Stadia with multiple quality tiers of their media: higher-res textures that can be swapped in once the platform is optimized enough to handle them, etc.
Why would the Stadia team take on the management of all the games in such a way? Because it's absolutely in their best interest to do so. It's also a big favor to the game publisher... Stadia does the work to improve the game, ultimately generating better reception and sales, which produces revenue for both Stadia and the publisher.
Cloud platforms are a new animal in the gaming world. Games on them can be maintained over time very differently from what we are used to on console and PC.
So naturally this turned into a wall of text but I couldn't do it any other way... some things simply need to be explained as clearly as possible to get across.
tl;dr: As the Stadia platform and Vulkan API constantly improve over time, Stadia engineers can tweak the configurations of ANY game to make it look/run better without the developers needing to be involved and patch the games.
submitted by Z3M0G to Stadia

RESULTS of the State of the Game Survey: September 2020

Hi all,

It’s time for the results!

Thank you to everyone who took the time to respond - we had over 1,750 responses, which is great! These insights wouldn’t be possible without your time and support.

As always, neither myself nor this survey are associated with Intelligent Systems or Nintendo in any way. Please direct feedback about the game itself to the official channels.

Now let’s get into it!
 
Previous Survey Results:
April_2020_State_of_the_Game_Survey

~ Demographics ~

53.8% began playing FE:H in February 2017, with 20.0% more joining during the first year of the game. 12.0% of respondents joined during the second year, 8.7% joined during the third, and 4.0% joined during the fourth year (the last ~7 months).

The age range breakdown of respondents is as follows:

75.8% of respondents identified as Male, 18.4% as Female, and 3.0% as Non-binary.

24.6% of respondents have never missed a daily login, while a further 38.8% have missed less than a month’s worth of logins, 11.7% missed 1-2 months, 9.9% missed 3-6 months, 5.8% missed 7-12 months, and 4.7% missed over a year’s worth.

33.5% report being F2P, while 28.7% have spent less than $100, 18.3% spent between $100 - $499, 7.3% spent between $500 - $999, and 8.7% have spent over $1000.

46.6% last spent money on FE:H during the fourth year of the game (the last 3 months), while 6.6% last spent money during the third year of the game, 5.8% last spent during the second year of the game, and 5.1% last spent money during the first year of the game.

~ Summoning ~

“Which of the following banners have you used orbs on at least once?”
  • (86.8%) A New Future (CYL 4)
  • (60.2%) Overseas Memories (3H Summer)
  • (59.8%) Dark Burdens (Fallen Heroes)
  • (57.9%) Legendary Heroes: Edelgard
  • (55.2%) Legendary Heroes: Corrin
  • (53.1%) Book IV Mid: Mirabilis and More
  • (52.9%) Hero Fest
  • (52.2%) Pirate’s Pride
  • (44.5%) Mythic Heroes: Hel
  • (44.2%) Mythic Heroes: Mila
  • (43.7%) Bridal Beloveds
  • (39.6%) Summer Passing (Sacred Stones Summer (mostly))
  • (37.5%) Legendary Heroes: Seliph
  • (31.1%) Light and Shadow (New Mystery)

“Which of the following banners did you use the most orbs on?”
  • (44.8%) A New Future (CYL 4)
  • (8.6%) Overseas Memories (3H Summer)
  • (5.9%) Legendary Heroes: Corrin
  • (5.8%) Dark Burdens (Fallen Heroes)
  • (5.5%) Pirate’s Pride
  • (4.9%) Legendary Heroes: Edelgard
  • (4.5%) Hero Fest
  • (3.5%) Mythic Heroes: Hel
  • (3.0%) Bridal Beloveds
  • (2.8%) Book IV Mid: Mirabilis and More
  • (2.5%) Summer Passing (Sacred Stones Summer (mostly))
  • (2.5%) Legendary Heroes: Seliph
  • (2.3%) Mythic Heroes: Mila
  • (1.7%) Light and Shadow (New Mystery)

“What was your favorite banner?”
  • (37.4%) A New Future (CYL 4)
  • (10.9%) Dark Burdens (Fallen Heroes)
  • (8.9%) Pirate’s Pride
  • (8.5%) Overseas Memories (3H Summer)
  • (5.7%) Hero Fest
  • (5.4%) Legendary Heroes: Corrin
  • (3.3%) Legendary Heroes: Edelgard
  • (2.9%) Legendary Heroes: Seliph
  • (2.6%) Book IV Mid: Mirabilis and More
  • (2.6%) Bridal Beloveds
  • (2.5%) Summer Passing (Sacred Stones Summer (mostly))
  • (2.3%) Light and Shadow (New Mystery)
  • (1.5%) Mythic Heroes: Hel
  • (1.4%) Mythic Heroes: Mila

“Did you spend money specifically to summon on any of the banners below?”
  • (17.6%) A New Future (CYL 4)
  • (10.3%) Overseas Memories (3H Summer)
  • (8.9%) Legendary Heroes: Corrin
  • (6.8%) Dark Burdens (Fallen Heroes)
  • (6.6%) Pirate’s Pride
  • (6.5%) Legendary Heroes: Edelgard
  • (5.8%) Hero Fest
  • (5.1%) Bridal Beloveds
  • (4.9%) Mythic Heroes: Hel
  • (4.8%) Book IV Mid: Mirabilis and More
  • (4.8%) Mythic Heroes: Mila
  • (4.8%) Summer Passing (Sacred Stones Summer (mostly))
  • (3.4%) Light and Shadow (New Mystery)
  • (3.3%) Legendary Heroes: Seliph

~ Summoning Mechanics ~

33.7% spent orbs on the Hero Fest banner AFTER Intelligent Systems announced how they would be compensating players for the Hero Fest banner glitch, compared to 61.7% who did not.

30.5% say that knowing about the compensation for the Hero Fest banner glitch caused them to spend more orbs on the banner than they would have otherwise, compared to 41.5% who say it did not. 28.0% did not spend orbs on the Hero Fest banner.

34.3% feel positively or very positively about the quality of 4* focuses on regular banners, compared to 26.9% who feel negatively or very negatively.

69.7% feel positively or very positively about the quality of 4* focuses on seasonal banners, compared to 7.8% who feel negatively or very negatively.

53.8% report that the system guaranteeing a free 5* after 40 summons generally makes them summon more, while 5.4% report that it generally makes them summon less and 36.1% report no change in their summoning habits on New Heroes banners.

“If all New Heroes Banners used the permanent 40-summons-for-a-guaranteed-5* system that CYL4 used, how would your orb-spending habits on New Heroes banners change?”
  • (1.8%) I would spend fewer orbs than I did before
  • (22.3%) I would spend the same amount of orbs I usually do
  • (10.3%) I would spend more orbs than I did before
  • (62.2%) My spending would depend more on the Heroes offered

~ Choose Your Legends IV ~

“Which CYL4 Brave Heroes have you summoned, whether from the guaranteed choice banner or the regular banner?”
  • (78.0%) Dimitri
  • (73.4%) Claude
  • (65.7%) Edelgard
  • (56.6%) Lysithea

Of the summoning milestones on the CYL4 banner:
  • (20.2%) did not reach any of these summoning milestones
  • (79.7%) reached 40 summons
  • (41.0%) reached 80 summons
  • (19.8%) reached 120 summons
  • (11.1%) reached 160 summons

45.7% say that the free 5* hero at 40, 80, 120 and 160 summons caused them to spend more on CYL4 than they would have otherwise, while 50.3% say it did not.

22.8% say that the potential use of a new Brave Hero in future F2P Guides for content such as Hero Battles influenced their Brave Heroes summons, compared to 74.0% who say it did not.

“If you could only get ONE of the new Brave Heroes, which one would you choose?”
  • (36.8%) Dimitri
  • (28.9%) Edelgard
  • (22.9%) Claude
  • (7.8%) Lysithea

“Which Brave Hero do you believe is the overall strongest?”
  • (60.7%) Edelgard
  • (21.9%) Dimitri
  • (7.9%) Claude
  • (1.2%) Lysithea

“Which Brave Hero do you believe is the overall weakest?”
  • (61.2%) Lysithea
  • (13.7%) Claude
  • (7.0%) Dimitri
  • (1.7%) Edelgard

“Which Brave Hero do you believe has the best art?”
  • (32.9%) Claude
  • (27.3%) Dimitri
  • (20.1%) Lysithea
  • (13.3%) Edelgard

“Which set of Brave Heroes is your favorite overall?”
  • (24.2%) 1st CYL (Ike, Lucina, Lyn, Roy)
  • (19.4%) 2nd CYL (Ephraim, Celica, Hector, Veronica)
  • (11.2%) 3rd CYL (Alm, Camilla, Eliwood, Micaiah)
  • (39.9%) 4th CYL (Claude, Dimitri, Edelgard, Lysithea)

23.6% feel positively or very positively about the addition of Jorge as the CYL4 GHB hero, compared to 33.0% who feel negatively or very negatively.

86.3% believe CYL5 should add further protections against vote botting, compared to 4.4% who do not.

70.1% believe CYL5 should require Nintendo Account sign-in to vote, compared to 12.6% who do not.

~ Feh Pass and Resplendent Heroes ~

41.2% feel negatively about the addition of the Feh Pass (down 15.8% from the last survey), compared to 11.6% who feel positively (up 1.5% from the last survey). 46.1% are neutral (up 14.3% from the last survey).

40.2% have purchased the Feh Pass, compared to 59.8% who have not. This is a 9.5% increase compared to the last survey, following a 6.7% increase before that.

Of those who have subscribed to Feh Pass, 17.4% have purchased Resplendent Heroes separately (up 12.9% from the last survey), compared to 82.6% who have not.

“Which Resplendent Hero has your favorite art?”
  • (13.4%) Cordelia
  • (12.8%) Eliwood
  • (8.7%) Eirika
  • (8.4%) Olwen
  • (7.5%) Sophia
  • (7.3%) Minerva
  • (6.0%) Azura
  • (5.7%) Lyn
  • (5.2%) Ike
  • (4.1%) Sanaki
  • (4.0%) Roy
  • (3.7%) M!Robin
  • (2.3%) Hector
  • (1.6%) Linde
  • (1.3%) Alm

“Which Resplendent outfit theme is your favorite?”
  • (16.3%) Muspell
  • (15.0%) Askr
  • (14.8%) Nifl
  • (11.5%) Embla
  • (11.5%) Hel
  • (10.3%) Ljosalfheimr

~ Miscellaneous ~

15.8% feel positively about the introduction of Harmonized Heroes, compared to 31.3% who feel negatively.

29.5% have a Harmonized Hero, compared to 70.1% who do not.

14.6% feel positively or very positively about the Resonant Battles game mode, compared to 51.5% who feel negatively or very negatively.

4.6% say that the Resonant Battles game mode influenced them to pull for Harmonized Heroes, compared to 94.5% who say it has not.

34.8% believe the new Arena maps are better than the maps they replaced, while 7.4% believe they are worse, and 36.7% believe they are about the same.

“How often do you use Auto Dispatch in Aether Raids?”
  • (34.3%) All of them, always
  • (0.2%) All of them, in Light Season
  • (3.6%) All of them, in Astra season
  • (24.3%) Only sometimes
  • (37.6%) I never use it

“IV Mango” is the preferred term for Trait Fruit according to 32.2% of respondents, followed by “IVcado” at 28.9%, “Fruit” at 7.6%, and “Dragonfruit” at 6.6%. The remaining 24.7% prefer to just call them Trait Fruit.

39.3% say they will use their first Trait Fruits on a Heroic Grails unit, while 32.9% say they will use them on a Summonable unit, and 1.3% say they will use them on an Askr unit.

58.7% prefer Stat Boosts for Legendary Heroes, compared to 26.3% who prefer Pair-Up.

56.5% generally prefer Regular Duo Heroes, compared to 8.8% who prefer Harmonized Duo Heroes.

1.8% say that the update that raised the minimum hardware/software required to play the game affected their ability to play FE:H, compared to 95.8% who say it did not.

~ Recurring Miscellaneous ~

“Which game do you want a New Heroes banner from the most?”
  • (26.0%) Three Houses (-1.9%)
  • (9.7%) Radiant Dawn (+0.5%)
  • (7.7%) Sacred Stones (+0.2%)
  • (7.5%) Awakening (-3.1%)
  • (6.4%) Genealogy of the Holy War (-1.3%)
  • (6.1%) Path of Radiance (-0.9%)
  • (6.0%) Gaiden / Shadows of Valentia (+2.7%)
  • (5.9%) TMS #FE (+1.9%)
  • (5.4%) Blazing Blade (+1.3%)
  • (5.0%) Fates (+1.0%)
  • (4.2%) Thracia 776 (+0.8%)
  • (2.4%) Binding Blade (+0.6%)
  • (0.8%) Shadow Dragon and the Blade of Light / Shadow Dragon (-1.0%)
  • (0.8%) Mystery of the Emblem / New Mystery of the Emblem (-1.1%)

“How much do you care about your rank in the following modes?”
  • (2.90/5.00 average) Arena
  • (2.82/5.00 average) Aether Raids
  • (2.48/5.00 average) PvE game modes with player ranking boards
  • (1.82/5.00 average) Arena Assault

“How have recent changes to FE:H changed your opinion on the game as a whole?”
  • (39.3%) My opinion was positive and has stayed positive
  • (5.7%) My opinion used to be negative, but has turned positive
  • (40.1%) Neutral
  • (9.9%) My opinion used to be positive, but has turned negative
  • (5.1%) My opinion was negative and has stayed negative

~ Intelligent Systems Approval Ratings ~

The approval ratings are calculated by the proportion of Approve responses compared to the number of both Approve and Disapprove responses.

Percent who approve of the way Intelligent Systems is handling:
  • 74.6% - The addition of new heroes / characters to the game (+11.9)
  • 69.4% - The gacha mechanics and summoning banners (+5.5)
  • 59.2% - The story/plot (+9.4)
  • 85.2% - Unranked PvE game modes (Hero Battles, Forging Bonds, Tactics Drills, Lost Lore, Hall of Forms) (-1.2)
  • 50.7% - Ranked PvE game modes (Voting Gauntlets, Tempest Trials, Grand Conquest, Allegiance Battles, Rokkr Sieges, Mjolnir's Strike) (-2.6)
  • 34.6% - Arena (-6.2)
  • 48.0% - Arena Assault (+6.7)
  • 45.8% - Aether Raids (+12.7)

40.5% believe Intelligent Systems cares about its Free to Play userbase (up 10.1% from the last survey), while 34.7% do not. This continues the upward trend from the previous survey, bringing us to 8.8% down from where we were before the February drop.

42.9% approve of the way Intelligent Systems is handling Fire Emblem: Heroes as a whole (up 14.8% from the last survey), while 16.9% disapprove. This continues the upward trend from the previous survey, bringing us to only 2.5% down from where we were before the February drop.

A NOTE ABOUT METHODOLOGY: The overall approval ratings question above has traditionally been reported as the exact percentage of Approve responses out of all responses (Approve, Neutral, and Disapprove together). Note that this is different from the way approval is calculated for individual modes (the proportion of Approve responses compared to the number of both Approve and Disapprove responses), where Neutral responses are excluded. The calculation has been kept this way in order to maintain comparability with previous survey results.
For comparison's sake, the overall approval rating trend going by raw Approve percentage over the last 4 surveys is: 50.6% (Dec) -> 22.9% (Feb) -> 28.1% (Apr) -> 42.9% (Sept)
Whereas the overall approval rating trend going by proportion of Approve/Disapprove with the Neutrals excluded over the last 4 surveys is: 82.2% (Dec) -> 41.0% (Feb) -> 51.3% (Apr) -> 71.7% (Sept).
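To make the two calculations concrete, here is a small sketch (the function names are mine, using the September figures quoted above) showing how the same raw responses yield both the 42.9% raw figure and the 71.7% Neutral-excluded figure:

```python
def raw_approval(approve: float, neutral: float, disapprove: float) -> float:
    """Approve as a share of all responses (the headline method)."""
    return approve / (approve + neutral + disapprove) * 100

def excluded_approval(approve: float, disapprove: float) -> float:
    """Approve as a share of Approve + Disapprove only (the per-mode method)."""
    return approve / (approve + disapprove) * 100

# September overall figures: 42.9% approve, 16.9% disapprove,
# leaving 40.2% neutral.
approve, disapprove = 42.9, 16.9
neutral = 100 - approve - disapprove

print(round(raw_approval(approve, neutral, disapprove), 1))  # 42.9
print(round(excluded_approval(approve, disapprove), 1))      # 71.7
```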

~ Bonus Questions ~

“Who is your Favorite Hero added since the last survey?”
  • Dimitri (Brave) is the winner, followed by Edelgard (Brave), then Claude (Brave).
  • Full results here: [Graph]

“Who is your Most Wanted Hero added since the last survey?”
  • Tibarn (Pirate) is the winner, followed by Corrin (F, Legendary), then Micaiah (Duo, Bridal).
  • Full results here: [Graph].

“What would be the best Harmonized Hero (a pair of two heroes from different games) and why?”:

Rather than selecting a subset of responses this time, the link below is to a google sheet of almost all unique responses. I cleaned it up a little bit to remove “idk” type answers, duplicates, and partial string duplicates, so don’t worry if you don’t see your exact response in it.

[Full Responses].

~ Feedback ~

As always, I received lots of great feedback, both in your survey responses and in the thread itself. A heartfelt thank you to all participants for your encouragements and criticisms - these surveys wouldn’t be where they are without your feedback. But it’s not all serious; feedback messages also included:

  • #FloofMomGang #GiveLeoAGoodFuckingAltForOnce #NowiRefineWhen #TelliusNewHeroesPlz #ElinciaResplendentWhen #JusticeForDedue #PleaseRemoveLChromInstysIAmBeggingYouICantLiveLikeThisAnymore
  • “There once was a CYL4 banner / That hit my orbs hard like a hammer / The very next day / FloomMom Duo came our way / Now I'm stuck bartering with a loan planner”
  • bonk, go to survey jail”
  • “Am I also allowed to put in "Norne and Azura" for a Harmonized Hero pair? No reason.”
  • “Brace yourself. Winter (armours) are coming!”
  • “Brave Hector's refine has made me so very happy with it's inclusion. Go shove your bow up your butt Legendary Chrom.”
  • “Give me villager alts or give me death”
  • “I expect the next survey to come with +12 to attack, null follow up, and special cooldown reduction.”
  • “The true best Harmonized Hero would be Azura and Roy since it would make me uninstall the game and never want to play a gacha ever again”
  • “My headcanon for the dream storyline is that the evil fairies have the Summoner off picking up pebbles that look like orbs. Fredrickson would be proud.”
  • “Where's the most wanted unit to add to the game question so I can shout my want for Seteth into the void?”
  • “I no longer dab, for Legendary Seliph has finally appeared.”
  • And greetings from Argentina, the Bahamas, Brazil, Chile, Colombia, Finland, Germany, Greece, Hong Kong, Ireland, Russia, South Korea, Sweden, the UK, Vietnam, the Pacific Northwest, Alaska, Toronto, and St. Louis, as well as from many fictional locations!
And some personal/meta comments:
  • “Any chance we end up seeing another Super Serious Survey in the not-so-distant future?” -> I could not believe it’s been over a year since the last one! We’ll have to do one soon!
  • “Feels like the end of an era, not having to count all my five stars” -> I know, right? I may have it return in a side survey for the most hardcore of respondents at some point, since some people are asking about it and it would be good to get data on it every once in a while.
  • “I was looking through your Nornes skills and saw you haven't given her live for bounty yet! It's the best skill for her, what are you doing!?” -> I am a fraud :( I have given her Live for Honor though :P
  • “What do you hope for in FEH?” -> Norne alt, Resplendent Jaffar, and Shamir
  • Multiple people mentioned that they had returned after a long break and were surprised to see Norne instead of Azura! Welcome back!
  • I also missed a bunch of other possible Trait Fruit nicknames, which I knew would inevitably happen. Sorry!

Note: Please don’t ask me to feature your feedback comment; it’s the only guaranteed way to not have your comment added!

Finally, the suggestion to have separate options for serious vs non-serious feedback was a good idea, I’ll try that out on the next survey!

~ Closing Remarks ~

If you missed out on responding to this survey when it was available, consider subscribing to FEHSurveys. This subreddit serves as a place to organize FE:H-related surveys, make new releases more visible, and make it easier for users to see when surveys are active.

Thanks again to everyone who participated! I hope you find the results interesting, and if there’s anything else you think can be discovered from the data, let me know and I’ll do my best to oblige!
 
 
submitted by ShiningSolarSword to FireEmblemHeroes

Some Background and Thoughts on FPGAs

I have been lurking on this board for a few years and decided the other day to finally create an account so I could come out of lurk mode. As you might guess from my ID, I was able to retire at the beginning of this year on a significantly accelerated timetable, thanks to the 20x return on my AMD stock and option investments since 2016.
I spent my career working on electronics and software for the satellite industry. We made heavy use of FPGAs, more often than not Xilinx FPGAs, since they had a radiation-tolerant line. I thought I would summarize some of the ways they were used in and around the development process. My experience is going to be very different from the datacenter settings of the last few years; the AI and big data stuff was a pipe dream back then.
In the olden times of the 90s we used CPUs which, unlike modern processors, did not include much in the way of I/O or a memory controller. Computer board designs graduated from a CPU + a bunch of ICs (much like the original IBM PC design) to a CPU + a Xilinx FPGA + RAM + ROM and maybe a 5V or 3.3V linear voltage regulator. Those old FPGAs were programmed before they were soldered to the PCB, using a dedicated programming unit attached to a PC, pretty much the same way ROMs were programmed. At the time FPGA gate capacities were small enough that it was still feasible to design their implementation using schematics. An engineer would draw up logic gates and flip-flops, just as if using discrete logic ICs, then compile the design to the FPGA binary and burn it to the FPGA using a programmer box, like a ROM. If you screwed it up you had to buy another FPGA chip; they were not erasable. The advantage of using the FPGA was that it was common to implement a custom I/O protocol to talk to other FPGAs, on other boards, which might be operating A/D and D/A converters and digital I/O driver chips. As FPGA gate capacities increased, the overall board count could be decreased.
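As a rough illustrative sketch (in Python, not a hardware description language, and with names of my own invention), the basic trick that lets an FPGA realize any of those hand-drawn gates is the lookup table: each k-input logic element simply stores the truth table of whatever function the compiled design asked for:

```python
class LUT:
    """A k-input FPGA lookup table: one stored output bit per input combination."""

    def __init__(self, truth_table):
        self.table = list(truth_table)  # index = input bits packed into an integer

    def __call__(self, *inputs):
        index = 0
        for bit in inputs:  # pack input bits MSB-first into a table index
            index = (index << 1) | bit
        return self.table[index]

# "Configuring" the fabric is just loading different truth tables.
xor_gate = LUT([0, 1, 1, 0])     # 2-input XOR
and3_gate = LUT([0] * 7 + [1])   # 3-input AND

print(xor_gate(1, 0))      # 1
print(and3_gate(1, 1, 1))  # 1
```

Real devices wire thousands of such elements (plus flip-flops) through a configurable routing fabric, which is why the same silicon can be "rewired" into entirely different designs.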
With the advent of much larger FPGAs that were in-circuit re-programmable they began to be used for prototyping ASIC designs. One project I worked on was developing a radiation hardened PowerPC processor ASIC with specialized I/O. A Xilinx FPGA was used to test the implementation at approximately half-speed. The PowerPC core was licensed IP and surrounded with bits that were developed in VHDL. In the satellite industry the volumes are typically not high enough to warrant developing ASICs but they could be fabbed on a rad-hard process while the time large capacity re-programmable FPGAs were not. Using FPGAs for prototyping the ASIC was essential because you only had one chance to get the ASIC right, it was cost and schedule prohibitive to do any respins.
Another way re-programmable FPGAs were used was for test equipment and ground stations. The flight hardware had custom-designed ASICs of all sorts which generally created data streams that would be transmitted down from space. It was advantageous to test the boards without the full set of downlink and receiver hardware, so a commercial FPGA board in a PC would be used to hook into the data bus in place of the radio. Similarly, other test equipment would be made which emulated the data stream from the flight hardware so that the radio hardware could be tested independently. Finally, the ground stations would often use FPGAs to pull in the digital data stream from the receiver radio and process the data in real time. These FPGAs were typically programmed using VHDL, but as tools progressed it became possible to program the entire PC + FPGA board combination using LabVIEW or Simulink, which also handled the UI. In the 2000s it was even possible to build a real-time software-defined radio using these tools.
As FPGAs progressed they became much more sophisticated. Instead of only specifying whether an I/O pin was a digital input or output, you could choose between high-speed, low-speed, SerDes, analog, etc. Instead of having to interface with external RAM chips, they began to include banks of internal RAM. That is because FPGAs were no longer just gate arrays but included a quantity of "hard-core" functionality. The natural progression of FPGAs with hard cores brings them into direct competition with embedded processor SOCs. At the same time, embedded SOCs have gained flexibility in I/O pin assignment which is very similar to what FPGAs allow.
It is important to understand that in the modern era of chip design, the difference between the teams AMD and Xilinx have for chip design is primarily at the architecture level. Low-level design and validation are going to be largely the same (although they may be using different tools and best practices). There are going to be some synergies in process, and there is going to be some flexibility in having more teams capable of bringing chips to market. They are going to be able to commingle the best practices of the two companies, which is going to be a net boost to productivity for one side or the other or both. Furthermore, AMD will have access to Xilinx FPGAs for design validation at cost, and perhaps ahead of release, and Xilinx will be able to leverage AMD's internal server clouds. The companies will also have access to a greater number of Fellow-level architects and process gurus. AMD also has internally developed IP blocks that Xilinx could leverage, and vice versa. Going forward there would be savings on externally licensed IP blocks as well.
AI is all the rage these days but there are many other applications for generic FPGAs and for including field programmable gates in sophisticated SOCs. As the grand convergence continues I would not be surprised at all to see FPGA as much a key component to future chips as graphics are in an APU. If Moore’s law is slowing down then the ability to reconfigure the circuitry on the fly is a potential mitigation. At some point being able to reallocate the transistor budget on the fly is going to win out over adding more and more fixed functionality. Going a bit down the big.little path, what if a core could be reconfigured on the fly to be integer heavy or 64-bit float heavy within the same transistor budget? Instead of dedicated video encoders/decoders or AVX-512 that sits dark most of the time, the OS can gin it up on demand. In a laptop or phone setting this could be a big improvement.
If anybody has questions I'd be happy to answer. I'm sure there are a number of other posters here with a background in electronics and chip design who can weigh in as well.
submitted by RetdThx2AMD to AMD_Stock [link] [comments]

Help with RAID 6 recovery

Back around 2008, I built a machine with a 3ware RAID controller, and set up 15 1TB drives in RAID 6.
At some point in maybe 2010, I had 3 (or maybe only 2) drives fail due to (most likely) overheating. I was unable to rebuild the array at the time, even with swapping out the failed drive/s. I don't remember the details.
More than a decade later, I still have all 15 drives, in a box, labeled with their order, and the original 3ware controller, and a desiccant pack.
I have no idea if the drives still work, but I am finally ready to try to recover the data from them, assuming they still work.
After a bit of duckduckgo-ing, it appears that I really only have 2 options - use recovery software or use a recovery service where I ship out my drives. The data on these drives, while nice to have, is not worth me sending them to a 3rd party. I am, however, willing to spend a little money on the recovery software if I need to.
Based on my searching, it appears that there are 3 viable options: * https://www.diskinternals.com/raid-recovery/ * https://www.stellarinfo.com/article/raid6-data-recovery.php * http://www.freeraidrecovery.com/
The Diskinternals solution looks like it may be the easiest, but I'm not sure what to expect when I actually try to use it.
The Stellar one looks good as well - it has instructions with screenshots and I was able to find a video of someone actually using it. But it needs some technical parameters that I have no idea how to retrieve - maybe I could hook up the old controller and read them by accessing the controller from the bios? I will try that once I'm ready to get my hands dirty.
The ReclaiMe one appears to be easy and free, claiming that it will automatically determine the parameters that Stellar expects you to supply. Seems too good to be true, especially as a free product. Their site and their claims make me not trust them...
So to get started on this project, the very first thing I want to do is take some kind of image of each of the 15 drives. Do any of you have recommendations for the best way to do this? The first step in Diskinternals instructions (which are on this separate page for some reason - https://www.diskinternals.com/raid-recovery/raid-6-data-recovery/) list creating a "binary image" of the disk/s. Once I do this, then do I need to mount it somehow? Do I need some separate program to do that in Windows? I know that I can (and will) look this up, but taking an image of known corrupted drives for the purposes of RAID data recovery with specialized recovery software seems to be a pretty special case, and I want to make sure that the image I take is what will be needed to attempt the recovery. I don't know how many times I'll be able to read from these old drives.
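For what it's worth, a raw "binary image" is just a byte-for-byte copy of the device, and you can sanity-check the idea on a regular file first. This is only a sketch: imaging the real drives would use the block device (e.g. `if=/dev/sdX`) as the source, and ideally GNU ddrescue rather than plain dd, since ddrescue retries and maps around bad sectors instead of aborting.

```shell
# Demo of a raw byte-for-byte image, using a regular file as a stand-in
# for a real drive so nothing dangerous happens.
rm -rf ./img_demo && mkdir ./img_demo

# Fake "drive": 1 MiB of pseudo-random data
head -c 1048576 /dev/urandom > ./img_demo/fake_drive

# Take the image. On real failing hardware you would add
# conv=noerror,sync (or better, use ddrescue) to survive read errors.
dd if=./img_demo/fake_drive of=./img_demo/drive.img bs=64K status=none

# A raw image is byte-identical to the source
cmp ./img_demo/fake_drive ./img_demo/drive.img && echo "image matches"
```

Recovery tools then work against the image files instead of the aging disks, so each drive only has to survive one full read.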
I did a little searching before posting this about disk imaging/cloning - it seems like I need an image, not a clone. Clonezilla looks like the best option (and I've used it before). I've heard good things about Acronis, but their new pricing model turns me off. Most of the alternatives to Clonezilla (Acronis, Paragon, Macrium) don't have technical-enough language to earn my trust. I also took a look at isobuster, because that's a program I already have, but it looks like its ability to take raw images does not include HDDs.
A quick search of datahoarder using the search term "raid 6" didn't bring up any posts that had addressed this scenario - most were about swapping/rebuilding.
Any help, guidance, insight, etc. is appreciated. Thanks!
submitted by brainthinks to DataHoarder [link] [comments]

An introduction to Linux through Windows Subsystem for Linux

I'm working as an Undergraduate Learning Assistant and wrote this guide to help out students who were in the same boat I was in when I first took my university's intro to computer science course. It provides an overview of how to get started using Linux, guides you through setting up Windows Subsystem for Linux to run smoothly on Windows 10, and provides a very basic introduction to Linux. Students seemed to dig it, so I figured it'd help some people in here as well. I've never posted here before, so apologies if I'm unknowingly violating subreddit rules.

An introduction to Linux through Windows Subsystem for Linux

GitHub Pages link

Introduction and motivation

tl;dr skip to next section
So you're thinking of installing a Linux distribution, and are unsure where to start. Or you're an unfortunate soul using Windows 10 in CPSC 201. Either way, this guide is for you. In this section I'll give a very basic intro to some of the options you've got at your disposal, and explain why I chose Windows Subsystem for Linux among them. All of these have plenty of documentation online so Google if in doubt.

Setting up WSL

So if you've read this far I've convinced you to use WSL. Let's get started with setting it up. The very basics are outlined in Microsoft's guide here, I'll be covering what they talk about and diving into some other stuff.

1. Installing WSL

Press the Windows key (henceforth Winkey) and type in PowerShell. Right-click the icon and select run as administrator. Next, paste in this command:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart 
Now you'll want to perform a hard shutdown on your computer. This can become unnecessarily complicated because of Windows's fast startup feature, but here we go. First try pressing the Winkey, clicking on the power icon, and selecting Shut Down while holding down the shift key. Let go of the shift key and the mouse, and let it shut down. Great! Now open up Command Prompt and type in
wsl --help 
If you get a large text output, WSL has been successfully enabled on your machine. If nothing happens, your computer failed at performing a hard shutdown, in which case you can try the age-old technique of just holding down your computer's power button until the computer turns itself off. Make sure you don't have any unsaved documents open when you do this.

2. Installing Ubuntu

Great! Now that you've got WSL installed, let's download a Linux distro. Press the Winkey and type in Microsoft Store. Now use the store's search icon and type in Ubuntu. Ubuntu is a Debian-based Linux distribution, and seems to have the best integration with WSL, so that's what we'll be going for. If you want to be quirky, here are some other options. Once you type in Ubuntu three options should pop up: Ubuntu, Ubuntu 20.04 LTS, and Ubuntu 18.04 LTS.
![Windows Store](https://theshepord.github.io/intro-to-WSL/docs/images/winstore.png) Installing plain-old "Ubuntu" will mean the app updates whenever a new major Ubuntu distribution is released. The current version (as of 09/02/2020) is Ubuntu 20.04.1 LTS. The other two are older distributions of Ubuntu. For most use-cases, i.e. unless you're running some software that will break when upgrading, you'll want to pick the regular Ubuntu option. That's what I did.
Once that's done installing, again hit Winkey and open up Ubuntu. A console window should open up, asking you to wait a minute or two for files to de-compress and be stored on your PC. All future launches should take less than a second. It'll then prompt you to create a username and password. I'd recommend sticking to whatever your Windows username and password is so that you don't have to juggle two different username/password combinations, but it's up to you.
Finally, to upgrade all your packages, type in
sudo apt-get update 
And then
sudo apt-get upgrade 
apt-get is the Ubuntu package manager; it's what you'll be using to install additional programs on WSL.

3. Making things nice and crispy: an introduction to UNIX-based filesystems

tl;dr skip to the next section
The two above steps are technically all you need for running WSL on your system. However, you may notice that whenever you open up the Ubuntu app your current folder seems to be completely random. If you type in pwd (for Print Working Directory, 'directory' is synonymous with 'folder') inside Ubuntu and hit enter, you'll likely get some output akin to /home/. Where is this folder? Is it my home folder? Type in ls (for LiSt) to see what files are in this folder. Probably you won't get any output, because surprise surprise this folder is not your Windows home folder and is in fact empty (okay it's actually not empty, which we'll see in a bit. If you type in ls -a, a for All, you'll see other files but notice they have a period in front of them. This is a convention for specifying files that should be hidden by default, and ls, as well as most other commands, will honor this convention. Anyways).
So where is my Windows home folder? Is WSL completely separate from Windows? Nope! This is Windows Subsystem for Linux after all. Notice how, when you typed pwd earlier, the address you got was /home/. Notice that forward-slash right before home. That forward-slash indicates the root directory (not to be confused with the /root directory), which is the directory at the top of the directory hierarchy and contains all other directories in your system. So if we type ls /, you'll see what are the top-most directories in your system. Okay, great. They have a bunch of seemingly random names. Except, shocker, they aren't random. I've provided a quick run-down in Appendix A.
For now, though, we'll focus on /mnt, which stands for mount. This is where your C drive, which contains all your Windows stuff, is mounted. So if you type ls /mnt/c, you'll begin to notice some familiar folders. Type in ls /mnt/c/Users, and voilà, there's your Windows home folder. Remember this filepath, /mnt/c/Users/. When we open up Ubuntu, we don't want it tossing us in this random /home/ directory, we want our Windows home folder. Let's change that!
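The dotfile convention and the `ls` vs. `ls -a` behavior described above are easy to try out safely. A quick sketch using a throwaway directory (the directory and file names here are made up for the demo):

```shell
# The "files starting with a dot are hidden" convention in action.
rm -rf ./ls_demo && mkdir ./ls_demo
touch ./ls_demo/visible.txt ./ls_demo/.hidden_config

ls ./ls_demo      # shows only visible.txt
ls -a ./ls_demo   # also shows . .. and .hidden_config
```

This is exactly why your configuration files (`.bashrc` and friends, which we'll meet later) don't show up in a plain `ls` of your home folder.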

4. Changing your default home folder

Type in sudo vim /etc/passwd. You'll likely be prompted for your Ubuntu's password. sudo is a command that gives you root privileges in bash (akin to Windows's right-click then selecting 'Run as administrator'). vim is a command-line text-editing tool, which out-of-the-box functions kind of like a crummy Notepad (you can customize it infinitely though, and some people have insane vim setups. Appendix B has more info). /etc/passwd is a plaintext file that historically was used to store passwords back when encryption wasn't a big deal, but now instead stores essential user info used every time you open up WSL.
Anyway, once you've typed that in, your shell should look something like this: ![vim /etc/passwd](https://theshepord.github.io/intro-to-WSL/docs/images/vim-etc-passwd.png)
Using arrow-keys, find the entry that begins with your Ubuntu username. It should be towards the bottom of the file. In my case, the line looks like
theshep:x:1000:1000:,,,:/home/pizzatron3000:/bin/bash 
See that cringy, crummy /home/pizzatron3000? Not only do I regret that username to this day, it's also not where we want our home directory. Let's change that! Press i to initiate vim's -- INSERT -- mode. Use arrow-keys to navigate to that section, and delete /home/ by holding down backspace. Remember that filepath I asked you to remember? /mnt/c/Users/. Type that in. For me, the line now looks like
theshep:x:1000:1000:,,,:/mnt/c/Users/lucas:/bin/bash 
Next, press esc to exit insert mode, then type in the following:
:wq 
The : tells vim you're inputting a command, w means write, and q means quit. If you've screwed up any of the above sections, you can also type in :q! to exit vim without saving the file. Just remember to exit insert mode by pressing esc before inputting commands, else you'll instead be writing to the file.
Great! If you now open up a new terminal and type in pwd, you should be in your Window's home folder! However, things seem to be lacking their usual color...

5. Importing your configuration files into the new home directory

Your home folder contains all your Ubuntu and bash configuration files. However, since we just changed the home folder to your Window's home folder, we've lost these configuration files. Let's bring them back! These configuration files are hidden inside /home/, and they all start with a . in front of the filename. So let's copy them over into your new home directory! Type in the following:
cp -r /home//. ~ 
cp stands for CoPy, -r stands for recursive (i.e. descend into directories), the . at the end is cp-specific syntax that lets it copy anything, including hidden files, and the ~ is a quick way of writing your home directory's filepath (which would be /mnt/c/Users/) without having to type all that in again. Once you've run this, all your configuration files should now be present in your new home directory. Configuration files like .bashrc, .profile, and .bash_profile essentially provide commands that are run whenever you open a new shell. So now, if you open a new shell, everything should be working normally. Amazing. We're done!
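If you want to see why that trailing `/.` matters, here's the same copy pattern on throwaway directories (all names invented for the demo): the `/.` makes cp include the hidden dotfiles along with everything else.

```shell
# Sanity check of the `cp -r <dir>/. <dest>` pattern with hidden files.
rm -rf ./old_home ./new_home && mkdir ./old_home ./new_home
touch ./old_home/.bashrc ./old_home/.profile ./old_home/notes.txt

cp -r ./old_home/. ./new_home

ls -a ./new_home   # .bashrc and .profile came along for the ride
```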

6. Tips & tricks

Here are two handy commands you can add to your .profile file. Run vim ~/.profile, then, type these in at the top of the .profile file, one per line, using the commands we discussed previously (i to enter insert mode, esc to exit insert mode, :wq to save and quit).
alias rm='rm -i' makes it so that the rm command will always ask for confirmation when you're deleting a file. rm, for ReMove, is like a Windows delete except literally permanent and you will lose that data for good, so it's nice to have this extra safeguard. You can type rm -f to bypass. Linux can be super powerful, but with great power comes great responsibility. NEVER NEVER NEVER type in rm -rf /, this is saying 'delete literally everything and don't ask for confirmation', your computer will die. Newer versions of rm fail when you type this in, but don't push your luck. You've been warned. Be careful.
export DISPLAY=:0 if you install XLaunch VcXsrv, this line allows you to open graphical interfaces through Ubuntu. The export sets the environment variable DISPLAY, and the :0 tells Ubuntu that it should use the localhost display.
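The `rm -i` safeguard is easy to demo non-interactively by piping an answer into the prompt (throwaway file names; nothing real gets deleted):

```shell
# rm -i asks before deleting; answering "n" means the file survives.
rm -rf ./rm_demo && mkdir ./rm_demo
touch ./rm_demo/precious.txt

echo n | rm -i ./rm_demo/precious.txt 2>/dev/null
[ -f ./rm_demo/precious.txt ] && echo "still there"

echo y | rm -i ./rm_demo/precious.txt 2>/dev/null
[ -f ./rm_demo/precious.txt ] || echo "gone"
```

With the alias in place, every plain `rm` you type gets this one-keystroke chance to back out.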

Appendix A: brief intro to top-level UNIX directories

tl;dr only mess with /mnt, /home, and maybe maybe /usr. Don't touch anything else.
  • bin: binaries, contains Ubuntu binary (aka executable) files that are used in bash. Here you'll find the binaries that execute commands like ls and pwd. Similar to /usr/bin, but bin gets loaded earlier in the booting process so it contains the most important commands.
  • boot: contains information for operating system booting. Empty in WSL, because WSL isn't an operating system.
  • dev: devices, provides files that allow Ubuntu to communicate with I/O devices. One useful file here is /dev/null, which is basically an information black hole that automatically deletes any data you pass it.
  • etc: no idea why it's called etc, but it contains system-wide configuration files
  • home: equivalent to Window's C:/Users folder, contains home folders for the different users. In an Ubuntu system, under /home/ you'd find the Documents folder, Downloads folder, etc.
  • lib: libraries used by the system
  • lib64 64-bit libraries used by the system
  • mnt: mount, where your drives are located
  • opt: third-party applications that (usually) don't have any dependencies outside the scope of their own package
  • proc: process information, contains runtime information about your system (e.g. memory, mounted devices, hardware configurations, etc)
  • run: directory for programs to store runtime information.
  • srv: server folder, holds data to be served in protocols like ftp, www, cvs, and others
  • sys: system, provides information about different I/O devices to the Linux Kernel. If dev files allows you to access I/O devices, sys files tells you information about these devices.
  • tmp: temporary, these are system runtime files that are (in most Linux distros) cleared out after every reboot. It's also sort of deprecated for security reasons, and programs will generally prefer to use run.
  • usr: contains additional UNIX commands, header files for compiling C programs, among other things. Kind of like bin but for less important programs. Most of everything you install using apt-get ends up here.
  • var: variable, contains variable data such as logs, databases, e-mail etc, but that persist across different boots.
Also keep in mind that all of this is just convention. No Linux distribution needs to follow this file structure, and in fact almost all will deviate from what I just described. Hell, you could make your own Linux fork where /mnt/c information is stored in tmp.

Appendix B: random resources

EDIT: implemented various changes suggested in the comments. Thanks all!
submitted by HeavenBuilder to linux4noobs [link] [comments]

C++ Lesson 0 - Development Environment

Ok, so for my first "tutorial", I’m going to teach you how to set up your development environment. I’ll run through the installations of compilers and editors for Linux, macOS and Windows platforms, but first I want to start with my setup as that is what I recommend. I use a Raspberry Pi 4 that I can remote into to compile and develop code on. It is a very capable machine if you ditch Raspberry Pi OS and instead use Ubuntu. If you guys want, I can make another post telling you how to set one up and configure it for remote access.
Windows:
As sad as it is, Windows holds roughly 80% of the market share when it comes to computer operating systems. It’s actually kind of stupid how such buggy software can be so popular, but that’s just my opinion. Setting up your environment on Windows is actually remarkably easy. There is an IDE called Code::Blocks. Simply go to this link:
http://sourceforge.net/projects/codeblocks/files/Binaries/20.03/Windows/codeblocks-20.03mingw-setup.exe
download the installer and run it. Then you’re done; you can start writing code straight away. For this series I will be working with the Unix command line, so when I compile the code using the command line, you can simply click the “build and run” option in the IDE and it will automatically compile and run your code.
Linux:
If you’re a Linux user, you probably already know how to set up your environment, but just in case you don’t, I will run through it now. The first thing you want to do is update your repositories and upgrade the installed packages. So, enter the following two commands:
sudo apt-get update
sudo apt-get upgrade
Please note that the sudo command is essentially asking the terminal for root privileges. You need to enter your password in order to execute the commands. Now we need to install g++. This is done with:
sudo apt-get install g++
when prompted to, press ‘y’ and hit enter.
Let the command execute and GCC will be installed. Now, as I mentioned, I will be using the command line. If you want to just copy and past the commands I use to compile and run the code, you will need to be in the same working directory as me. To create this directory, enter the following command:
mkdir ~/cpp_code
This will create a folder called ‘cpp_code’ in the home directory. This is the base folder for our tutorial series, and each lesson will have its own folder. Note that you don't need sudo here: the folder lives in your own home directory, and creating it as root would leave it owned by root.
Your editor. Simple. Use whatever text editor you want. I know some purists out there will demand that using a command-line text editor is the only way to code. Poppycock. Why overcomplicate matters unnecessarily? Every operating system has a text editor installed, and that will suffice when writing C++ code. You just have to make sure that all the files are saved with the extension ‘.cpp’.
macOS:
Finally, all you Mac users out there. Thankfully, macOS is built upon unix, which means quite a few of the commands are the same as the Linux system. The first thing we have to do, however, is install a package manager. Homebrew is by far the best and is the one I use. To install it, execute the following command:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
You will be asked to enter your password and then the installation will start. Once this has been completed, you can use the command ‘brew’ to install applications through the command line. So now you can install ‘gcc’ using the following command:
brew install gcc
And then you’re done. Now use the same commands as the Linux users to set up the development folders. Now, your editor. You can simply use the text editor built into the macOS system to write your code. Just make sure you save all the files with the ‘.cpp’ extension.
And we’re done. Your systems should now be able to write, compile and run C++ code. This was my first ever tutorial so I would greatly appreciate any feedback on what I did right and wrong and how I could improve. Also, please ask as many questions as possible. Thank you.
submitted by Armature89 to ProgrammingBuddies [link] [comments]

NASPi: a Raspberry Pi Server

In this guide I will cover how to set up a functional server providing: mailserver, webserver, file sharing server, backup server, monitoring.
For this project a dynamic domain name is also needed. If you don't want to spend money registering a domain name, you can use services like dynu.com or duckdns.org. Between the two, I prefer dynu.com, because you can set every type of DNS record (TXT records are only available after 30 days, but that's a fair trade for not spending ~15€/year on a domain name), needed for the mailserver specifically.
Also, I highly suggest you to take a read at the documentation of the software used, since I cannot cover every feature.

Hardware


Software

(minor utilities not included)

Guide

First thing first we need to flash the OS to the SD card. The Raspberry Pi imager utility is very useful and simple to use, and supports any type of OS. You can download it from the Raspberry Pi download page. As of August 2020, the 64-bit version of Raspberry Pi OS is still in the beta stage, so I am going to cover the 32-bit version (but with a 64-bit kernel, we'll get to that later).
Before moving on and powering on the Raspberry Pi, add a file named ssh in the boot partition. Doing so will enable the SSH interface (disabled by default). We can now insert the SD card into the Raspberry Pi.
Once powered on, we need to attach it to the LAN, via an Ethernet cable. Once done, find the IP address of your Raspberry Pi within your LAN. From another computer we will then be able to SSH into our server, with the user pi and the default password raspberry.

raspi-config

Using this utility, we will set a few things. First of all, set a new password for the pi user, using the first entry. Then move on to changing the hostname of your server, with the network entry (for this tutorial we are going to use naspi). Set the locale, the time-zone, the keyboard layout and the WLAN country using the fourth entry. At last, enable SSH by default with the fifth entry.

64-bit kernel

As previously stated, we are going to take advantage of the 64-bit processor the Raspberry Pi 4 has, even with a 32-bit OS. First, we need to update the firmware, then we will tweak some config.
$ sudo rpi-update
$ sudo nano /boot/config.txt
arm_64bit=1 
$ sudo reboot

swap size

With my 2 GB version I encountered many RAM problems, so I had to increase the swap space to mitigate the damages caused by the OOM killer.
$ sudo dphys-swapfile swapoff
$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=1024 
$ sudo dphys-swapfile setup
$ sudo dphys-swapfile swapon
Here we are increasing the swap size to 1 GB. According to your setup you can tweak this setting to add or remove swap. Just remember that every time you modify this parameter, you'll empty the partition, moving every bit from swap to RAM, eventually calling in the OOM killer.

APT

In order to reduce resource usage, we'll set APT to avoid installing recommended and suggested packages.
$ sudo nano /etc/apt/apt.conf.d/01norecommend
APT::Install-Recommends "0";
APT::Install-Suggests "0"; 

Update

Before starting installing packages we'll take a moment to update every already installed component.
$ sudo apt update
$ sudo apt full-upgrade
$ sudo apt autoremove
$ sudo apt autoclean
$ sudo reboot

Static IP address

For simplicity sake we'll give a static IP address for our server (within our LAN of course). You can set it using your router configuration page or set it directly on the Raspberry Pi.
$ sudo nano /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.0.5/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1 
$ sudo reboot

Emailing

The first feature we'll set up is the mailserver. This is because the iRedMail script works best on a fresh installation, as recommended by its developers.
First we'll set the hostname to our domain name. Since my domain is naspi.webredirect.org, the domain name will be mail.naspi.webredirect.org.
$ sudo hostnamectl set-hostname mail.naspi.webredirect.org
$ sudo nano /etc/hosts
127.0.0.1 mail.naspi.webredirect.org localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 naspi 
Now we can download and setup iRedMail
$ sudo apt install git
$ cd /home/pi/Documents
$ sudo git clone https://github.com/iredmail/iRedMail.git
$ cd /home/pi/Documents/iRedMail
$ sudo chmod +x iRedMail.sh
$ sudo bash iRedMail.sh
Now the script will guide you through the installation process.
When asked for the mail directory location, set /var/vmail.
When asked for webserver, set Nginx.
When asked for DB engine, set MariaDB.
When asked for, set a secure and strong password.
When asked for the domain name, set yours, but without the mail. subdomain.
Again, set a secure and strong password.
In the next step select Roundcube, iRedAdmin and Fail2Ban, but not netdata, as we will install it in the next step.
When asked for, confirm your choices and let the installer do the rest.
$ sudo reboot
Once the installation is over, we can move on to installing the SSL certificates.
$ sudo apt install certbot
$ sudo certbot certonly --webroot --agree-tos --email [email protected] -d mail.naspi.webredirect.org -w /var/www/html/
$ sudo nano /etc/nginx/templates/ssl.tmpl
ssl_certificate /etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem; 
$ sudo service nginx restart
$ sudo nano /etc/postfix/main.cf
smtpd_tls_key_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
smtpd_tls_cert_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/cert.pem
smtpd_tls_CAfile = /etc/letsencrypt/live/mail.naspi.webredirect.org/chain.pem 
$ sudo service postfix restart
$ sudo nano /etc/dovecot/dovecot.conf
ssl_cert = </etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem 
$ sudo service dovecot restart
Now we have to tweak some Nginx settings in order to not interfere with other services.
$ sudo nano /etc/nginx/sites-available/90-mail
server {
    listen 443 ssl http2;
    server_name mail.naspi.webredirect.org;
    root /var/www/html;
    index index.php index.html;
    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
}
server {
    listen 80;
    server_name mail.naspi.webredirect.org;
    return 301 https://$host$request_uri;
} 
$ sudo ln -s /etc/nginx/sites-available/90-mail /etc/nginx/sites-enabled/90-mail
$ sudo rm /etc/nginx/sites-*/00-default*
$ sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    server_names_hash_bucket_size 64;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/conf-enabled/*.conf;
    include /etc/nginx/sites-enabled/*;
} 
$ sudo service nginx restart

.local domain

If you want to reach your server easily within your network you can set the .local domain to it. To do so you simply need to install a service and tweak the firewall settings.
$ sudo apt install avahi-daemon
$ sudo nano /etc/nftables.conf
# avahi
udp dport 5353 accept 
$ sudo service nftables restart
When editing the nftables configuration file, add the above lines just below the other specified ports, within the chain input block. This is needed because avahi communicates via the 5353 UDP port.

RAID 1

At this point we can start setting up the disks. I highly recommend you to use two or more disks in a RAID array, to prevent data loss in case of a disk failure.
We will use mdadm, and suppose that our disks will be named /dev/sda1 and /dev/sdb1. To find out the names issue the sudo fdisk -l command.
$ sudo apt install mdadm
$ sudo mdadm --create -v /dev/md/RED -l 1 --raid-devices=2 /dev/sda1 /dev/sdb1
$ sudo mdadm --detail /dev/md/RED
$ sudo -i
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ exit
$ sudo mkfs.ext4 -L RED -m .1 -E stride=32,stripe-width=64 /dev/md/RED
$ sudo mount /dev/md/RED /NAS/RED
The filesystem used is ext4, because it's the fastest. The RAID array is located at /dev/md/RED, and mounted to /NAS/RED.

fstab

To automount the disks at boot, we will modify the fstab file. Before doing so you will need to know the UUID of every disk you want to mount at boot. You can find out these issuing the command ls -al /dev/disk/by-uuid.
$ sudo nano /etc/fstab
# Disk 1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /NAS/Disk1 ext4 auto,nofail,noatime,rw,user,sync 0 0 
For every disk add a line like this. To verify the functionality of fstab issue the command sudo mount -a.

S.M.A.R.T.

To monitor your disks, the S.M.A.R.T. utilities are a super powerful tool.
$ sudo apt install smartmontools
$ sudo nano /etc/default/smartmontools
start_smartd=yes 
$ sudo nano /etc/smartd.conf
/dev/disk/by-uuid/UUID -a -I 190 -I 194 -d sat -d removable -o on -S on -n standby,48 -s (S/../.././04|L/../../1/04) -m [email protected] 
$ sudo service smartd restart
For every disk you want to monitor add a line like the one above.
About the flags:
· -a: full scan.
· -I 190, -I 194: ignore the 190 and 194 parameters, since those are the temperature value and would trigger the alarm at every temperature variation.
· -d sat, -d removable: removable SATA disks.
· -o on: offline testing, if available.
· -S on: attribute saving, between power cycles.
· -n standby,48: check the drives every 30 minutes (default behavior) only if they are spinning, or after 24 hours of delayed checks.
· -s (S/../.././04|L/../../1/04): short test every day at 4 AM, long test every Monday at 4 AM.
· -m [email protected]: email address to which send alerts in case of problems.

Automount USB devices

Two steps ago we set up the fstab file in order to mount the disks at boot. But what if you want to mount a USB disk immediately when plugged in? Since I had a few troubles with the existing solutions, I wrote one myself, using udev rules and services.
$ sudo apt install pmount
$ sudo nano /etc/udev/rules.d/11-automount.rules
ACTION=="add", KERNEL=="sd[a-z][0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="automount@%k.service" 
$ sudo chmod 0777 /etc/udev/rules.d/11-automount.rules
$ sudo nano /etc/systemd/system/automount@.service
[Unit]
Description=Automount USB drives
BindsTo=dev-%i.device
After=dev-%i.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/automount %I
ExecStop=/usr/bin/pumount /dev/%I
$ sudo chmod 0777 /etc/systemd/system/automount@.service
$ sudo nano /usr/local/bin/automount
#!/bin/bash
PART=$1
FS_UUID=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $3}'`
FS_LABEL=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $2}'`
DISK1_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
DISK2_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
if [ "${FS_UUID}" == "${DISK1_UUID}" ] || [ "${FS_UUID}" == "${DISK2_UUID}" ]; then
    # known disk: mount it through fstab and open up its permissions
    sudo mount -a
    sudo chmod 0777 /NAS/${FS_LABEL}
else
    # unknown disk: mount by label if one exists, otherwise by partition name
    if [ -z "${FS_LABEL}" ]; then
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${PART}
    else
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${FS_LABEL}
    fi
fi
$ sudo chmod 0777 /usr/local/bin/automount
The udev rule triggers when the kernel announces that a USB device has been plugged in, calling a service which is kept alive as long as the device remains plugged in. The service, when started, calls a bash script which will try to mount any known disk using fstab; otherwise the disk is mounted to a default location, using its label if available (the partition name is used otherwise).
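The label-or-partition-name fallback in the script can be sketched in isolation with stub values standing in for what lsblk would report:

```shell
# Sketch of the script's mount-point fallback, with stubbed lsblk results.
PART="sda1"
FS_LABEL=""          # pretend the partition has no filesystem label
if [ -z "${FS_LABEL}" ]; then
    TARGET="/media/${PART}"       # fall back to the partition name
else
    TARGET="/media/${FS_LABEL}"   # prefer the label when present
fi
echo "${TARGET}"
```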

Netdata

Let's now install netdata. For this another handy script will help us.
$ bash <(curl -Ss https://my-netdata.io/kickstart.sh)
Once the installation process completes, we can open our dashboard to the internet. We will use nginx as a reverse proxy:
$ sudo apt install python-certbot-nginx
$ sudo nano /etc/nginx/sites-available/20-netdata
upstream netdata {
    server unix:/var/run/netdata/netdata.sock;
    keepalive 64;
}
server {
    listen 80;
    server_name netdata.naspi.webredirect.org;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://netdata;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/20-netdata /etc/nginx/sites-enabled/20-netdata
$ sudo nano /etc/netdata/netdata.conf
# NetData configuration
[global]
    hostname = NASPi
[web]
    allow netdata.conf from = localhost fd* 192.168.* 172.*
    bind to = unix:/var/run/netdata/netdata.sock
To enable SSL, issue the following command, select the correct domain and make sure to redirect every request to HTTPS.
$ sudo certbot --nginx
Now configure the alarm notifications. I suggest reading through the stock file before modifying it, and enabling every notification service you'd like. It takes some time, but it's worth it.
$ sudo nano /etc/netdata/health_alarm_notify.conf
# Alarm notification configuration

# email global notification options
SEND_EMAIL="YES"
# Sender address
EMAIL_SENDER="NetData [email protected]"
# Recipients addresses
DEFAULT_RECIPIENT_EMAIL="[email protected]"

# telegram (telegram.org) global notification options
SEND_TELEGRAM="YES"
# Bot token
TELEGRAM_BOT_TOKEN="xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# Chat ID
DEFAULT_RECIPIENT_TELEGRAM="xxxxxxxxx"

###############################################################################
# RECIPIENTS PER ROLE

# generic system alarms
role_recipients_email[sysadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sysadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# DNS related alarms
role_recipients_email[domainadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[domainadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# database servers alarms
role_recipients_email[dba]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[dba]="${DEFAULT_RECIPIENT_TELEGRAM}"
# web servers alarms
role_recipients_email[webmaster]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[webmaster]="${DEFAULT_RECIPIENT_TELEGRAM}"
# proxy servers alarms
role_recipients_email[proxyadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[proxyadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# peripheral devices
role_recipients_email[sitemgr]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sitemgr]="${DEFAULT_RECIPIENT_TELEGRAM}"
$ sudo service netdata restart

Samba

Now, let's set up the real NAS part of this project: the disk-sharing system. First we'll set up Samba, for sharing within your LAN.
$ sudo apt install samba samba-common-bin
$ sudo nano /etc/samba/smb.conf
[global]
# Network
workgroup = NASPi
interfaces = 127.0.0.0/8 eth0
bind interfaces only = yes
# Log
log file = /var/log/samba/log.%m
max log size = 1000
logging = file [email protected]
panic action = /usr/share/samba/panic-action %d
# Server role
server role = standalone server
obey pam restrictions = yes
# Sync the Unix password with the SMB password.
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
security = user

#======================= Share Definitions =======================

[Disk 1]
comment = Disk1 on LAN
path = /NAS/RED
valid users = NAS
force group = NAS
create mask = 0777
directory mask = 0777
writeable = yes
admin users = NASdisk
$ sudo service smbd restart
Now let's add a user for the share:
$ sudo useradd NASbackup -m -G users,NAS
$ sudo passwd NASbackup
$ sudo smbpasswd -a NASbackup
Finally, let's open the needed ports in the firewall:
$ sudo nano /etc/nftables.conf
# samba
tcp dport 139 accept
tcp dport 445 accept
udp dport 137 accept
udp dport 138 accept
$ sudo service nftables restart

NextCloud

Now let's set up the service to share disks over the internet. For this we'll use NextCloud, which is very similar to Google Drive, but open-source.
$ sudo apt install php-xmlrpc php-soap php-apcu php-smbclient php-ldap php-redis php-imagick php-mcrypt
First of all, we need to create a database for NextCloud.
$ sudo mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER nextcloud@localhost IDENTIFIED BY 'password';
GRANT ALL ON nextcloud.* TO nextcloud@localhost IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EXIT;
Then we can move on to the installation.
$ cd /tmp && wget https://download.nextcloud.com/server/releases/latest.zip
$ sudo unzip latest.zip
$ sudo mv nextcloud /var/www/nextcloud/
$ sudo chown -R www-data:www-data /var/www/nextcloud
$ sudo find /var/www/nextcloud/ -type d -exec sudo chmod 750 {} \;
$ sudo find /var/www/nextcloud/ -type f -exec sudo chmod 640 {} \;
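The effect of the two find commands can be sketched on a scratch directory (directory and file names below are placeholders): directories get 750 and stay traversable by the owner, while files get 640 and lose the execute bit.

```shell
# Sketch: the same 750/640 permission scheme applied to a throwaway tree.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/nextcloud/config"
touch "$DEMO/nextcloud/config/config.php"
find "$DEMO/nextcloud" -type d -exec chmod 750 {} \;
find "$DEMO/nextcloud" -type f -exec chmod 640 {} \;
# directories remain traversable; files are not executable
[ -x "$DEMO/nextcloud/config" ] && [ ! -x "$DEMO/nextcloud/config/config.php" ] && echo "permissions applied"
rm -rf "$DEMO"
```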
$ sudo nano /etc/nginx/sites-available/10-nextcloud
upstream nextcloud {
    server 127.0.0.1:9999;
    keepalive 64;
}
server {
    server_name naspi.webredirect.org;
    root /var/www/nextcloud;
    listen 80;
    add_header Referrer-Policy "no-referrer" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Download-Options "noopen" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Permitted-Cross-Domain-Policies "none" always;
    add_header X-Robots-Tag "none" always;
    add_header X-XSS-Protection "1; mode=block" always;
    fastcgi_hide_header X-Powered-By;
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
    rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
    location = /.well-known/carddav { return 301 $scheme://$host:$server_port/remote.php/dav; }
    location = /.well-known/caldav { return 301 $scheme://$host:$server_port/remote.php/dav; }
    client_max_body_size 512M;
    fastcgi_buffers 64 4K;
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
    location / { rewrite ^ /index.php; }
    location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ { deny all; }
    location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) { deny all; }
    location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
        fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
        set $path_info $fastcgi_path_info;
        try_files $fastcgi_script_name =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass nextcloud;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }
    location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
        try_files $uri/ =404;
        index index.php;
    }
    location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463";
        add_header Referrer-Policy "no-referrer" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Download-Options "noopen" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Permitted-Cross-Domain-Policies "none" always;
        add_header X-Robots-Tag "none" always;
        add_header X-XSS-Protection "1; mode=block" always;
        access_log off;
    }
    location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap)$ {
        try_files $uri /index.php$request_uri;
        access_log off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/10-nextcloud /etc/nginx/sites-enabled/10-nextcloud
Now enable SSL and redirect everything to HTTPS
$ sudo certbot --nginx
$ sudo service nginx restart
Immediately after, navigate to your NextCloud page and complete the installation process, providing the database details and the location of the data folder, which is simply where the files you save on NextCloud will live. Because it might grow large, I suggest specifying a folder on an external disk.

Minarca

Now on to the backup system. For this we'll use Minarca, a web interface based on rdiff-backup. Since binaries are not available for our OS, we'll need to compile it from source. It's not a big deal; even our small Raspberry Pi 4 can handle the process.
$ cd /home/pi/Documents
$ sudo git clone https://gitlab.com/ikus-soft/minarca.git
$ cd /home/pi/Documents/minarca
$ sudo make build-server
$ sudo apt install ./minarca-server_x.x.x-dxxxxxxxx_xxxxx.deb
$ sudo nano /etc/minarca/minarca-server.conf
# Minarca configuration.

# Logging
LogLevel=DEBUG
LogFile=/var/log/minarca/server.log
LogAccessFile=/var/log/minarca/access.log

# Server interface
ServerHost=0.0.0.0
ServerPort=8080

# rdiffweb
Environment=development
FavIcon=/opt/minarca/share/minarca.ico
HeaderLogo=/opt/minarca/share/header.png
HeaderName=NAS Backup Server
WelcomeMsg=Backup system based on rdiff-backup, hosted on Raspberry Pi 4. [docs](https://gitlab.com/ikus-soft/minarca/-/blob/master/doc/index.md)
DefaultTheme=default

# Enable Sqlite DB Authentication.
SQLiteDBFile=/etc/minarca/rdw.db

# Directories
MinarcaUserSetupDirMode=0777
MinarcaUserSetupBaseDir=/NAS/Backup/Minarca/
Tempdir=/NAS/Backup/Minarca/tmp/
MinarcaUserBaseDir=/NAS/Backup/Minarca/
$ sudo mkdir /NAS/Backup/Minarca/
$ sudo chown minarca:minarca /NAS/Backup/Minarca/
$ sudo chmod 0750 /NAS/Backup/Minarca/
$ sudo service minarca-server restart
As always we need to open the required ports in our firewall settings:
$ sudo nano /etc/nftables.conf
# minarca
tcp dport 8080 accept
$ sudo service nftables restart
And now we can open it to the internet:
$ sudo nano /etc/nginx/sites-available/30-minarca
upstream minarca {
    server 127.0.0.1:8080;
    keepalive 64;
}
server {
    server_name minarca.naspi.webredirect.org;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://minarca;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
    listen 80;
}
$ sudo ln -s /etc/nginx/sites-available/30-minarca /etc/nginx/sites-enabled/30-minarca
And enable SSL support, with HTTPS redirect:
$ sudo certbot --nginx
$ sudo service nginx restart

DNS records

As a last step, you will need to set up your DNS records to avoid having your mail rejected or sent to spam.

MX record

name: @
value: mail.naspi.webredirect.org
TTL (if present): 90

PTR record

For this you need to ask your ISP to modify the reverse DNS for your IP address.

SPF record

name: @
value: v=spf1 mx ~all
TTL (if present): 90

DKIM record

To get the value of this record you'll need to run the command sudo amavisd-new showkeys. The value is between the parentheses (it should start with v=DKIM1); remember to remove the double quotes and the line breaks.
name: dkim._domainkey
value: v=DKIM1; p= ...
TTL (if present): 90

DMARC record

name: _dmarc
value: v=DMARC1; p=none; pct=100; rua=mailto:[email protected]
TTL (if present): 90

Router ports

If you want your site to be accessible from the internet, you need to open some ports on your router. Here is a list of the mandatory ports; you can also choose to open others, for instance port 8080 if you want to use Minarca outside your LAN.

mailserver ports

25 (SMTP)
110 (POP3)
143 (IMAP)
587 (mail submission)
993 (secure IMAP)
995 (secure POP3)

ssh port

If you want to open your SSH port, I suggest moving it to something other than port 22 (the default) to mitigate attacks from the outside.

HTTP/HTTPS ports

80 (HTTP)
443 (HTTPS)

The end?

And now the server is complete. You have a mailserver capable of receiving and sending emails, a monitoring system, a cloud server to have your files wherever you go, a Samba share to reach your files from every computer at home, a backup server for every device you own, and a webserver in case you ever want a personal website.
But now you can do whatever you want, add things, tweak settings and so on. Your imagination is your only limit (almost).
EDIT: typos ;)
submitted by Fly7113 to raspberry_pi [link] [comments]

How to generate (relative) secure paper wallets and spend them (Newbies)

How to generate (relatively) secure paper wallets

Everyone is invited to suggest improvements, make it easier or more robust, provide alternatives, comment on what they like or dislike, and also criticize it.
Also, this is a disclaimer: I'm new to all of this. First, I didn't buy a hardware wallet because none are produced in my country and I couldn't trust that an imported one hadn't been tampered with. So the other way was to generate a wallet myself (not your keys, not your money). I've spent several weeks reading about various ways of generating wallets (including Glacier). As of now, I think this is THE BEST METHOD for a non-technical person: high security, low cost, and not that lengthy.
FAQs: Why didn't I use Coleman's BIP 39 mnemonic method? Basically, I don't know how to audit the code. As a downside, we will have to write down our keys very accurately, keeping in mind that a mistype is fatal, and so is destruction or loss of the key. The user has to protect the key from loss, theft, and destruction.
Lets start
You'll need:
Notes: We will be following the https://www.swansontec.com/bitcoin-dice.html guidelines. We will create our own random key instead of downloading the BitAddress JavaScript, for safety reasons. Following this guideline lets you audit the code that creates the public key and Bitcoin address. It's simple and short, and you can always test the code by inputting a known private key and checking whether the generated Bitcoin address is legit. This process is done offline, so your private key never touches the internet.
Steps
1. Download the bitcoin-bash-tools and dice2key scripts from GitHub, the latest Ubuntu distribution, and LiLi, a tool to install Ubuntu on our flash drive (easier than what is proposed on Swansontec)

2. Install the live environment in a CD or USB, and paste the tools we are going to use inside of it (they are going to be located in file://cdrom)

  • Open up LiLi and insert your flash drive.

  • Make sure you’ve selected the correct drive (click refresh if drive isn’t showing).
  • Choose “ISO/IMG/ZIP” and select the Ubuntu ISO file you’ve downloaded in the previous step.
  • Make sure only “Format the key in FAT32” is selected.
  • Click the lightning bolt to start the format and installation process
  • Source: https://99bitcoins.com/bitcoin-wallet/pape

3. Open the Ubuntu environment on an offline computer that will never touch the internet again (some malware can infect the BIOS, so doing this on your regular computer is not safe, to my understanding)

Restart your computer. Pressing F12 or F1 during the boot-up process will let you choose to run the operating system from your flash drive or CD. After the Ubuntu operating system loads, choose the “try Ubuntu” option.
4. Roll the dice 100 times and convert the rolls into a 32-byte hexadecimal number using dice2key

To generate a Bitcoin private key, run the following command to convert the dice rolls into a 32-byte hexadecimal number:
source dice2key (100 six-sided dice rolls)

5. Run newBitcoinKey 0x followed by your private key, and it will give you your public key, Bitcoin address, and WIF. Save the private key and Bitcoin address, and check several times that you handwrote them correctly; you can verify by re-entering them in the console from your paper. (I recommend writing down the private key, which is in hex, rather than the WIF, since the WIF is case-sensitive and easier to lose or miscopy; from the private key you can always regenerate the WIF, which will let you transfer your funds.) If you lose your key, you lose your funds. Be careful.
If auditing the code is not enough for you, you can also test it by inputting a known private key and checking that the generated Bitcoin address matches.
I recommend generating several keys and addresses, as this process is not super easy to do. Remember that you should never reuse a paper wallet (meaning you should empty all of the funds from an address when making a payment from it). As such, a couple of addresses come in handy.
    At this point, there should be no way for information to leak out of the live CD environment. The live CD doesn't store anything on the hard disk, and there is no network connection. Everything that happens from now on will be lost when the computer is rebooted.
    Now, start the "Terminal" program, and type the following command:
source ~/bitcoin.sh
This will load the address-calculation script. Now, use the script to find the Bitcoin address for your private key:
newBitcoinKey 0x(your dice digits)
Replace the part that says "(your dice digits)" with the 64 digits found by rolling your pair of hexadecimal dice 32 times. Be sure there is no space between the "0x" and your digits. When all is said and done, your terminal window should look like this:
[email protected]:~$ source ~/bitcoin.sh
[email protected]:~$ newBitcoinKey 0x8010b1bb119ad37d4b65a1022a314897b1b3614b345974332cb1b9582cf03536
---
secret exponent: 0x8010B1BB119AD37D4B65A1022A314897B1B3614B345974332CB1B9582CF03536
public key:
    X: 09BA8621AEFD3B6BA4CA6D11A4746E8DF8D35D9B51B383338F627BA7FC732731
    Y: 8C3A6EC6ACD33C36328B8FB4349B31671BCD3A192316EA4F6236EE1AE4A7D8C9
compressed:
    WIF: L1WepftUBemj6H4XQovkiW1ARVjxMqaw4oj2kmkYqdG1xTnBcHfC
    bitcoin address: 1HV3WWx56qD6U5yWYZoLc7WbJPV3zAL6Hi
uncompressed:
    WIF: 5JngqQmHagNTknnCshzVUysLMWAjT23FWs1TgNU5wyFH5SB3hrP
    bitcoin address:
[email protected]:~$
The script produces two public addresses from the same private key. The "compressed" address format produces smaller transaction sizes (which means lower transaction fees), but it's newer and not as well supported as the original "uncompressed" format. Choose which format you like, and write down the "WIF" and "bitcoin address" on a piece of paper. The "WIF" is just the private key, converted to a slightly shorter format that Bitcoin wallet apps prefer.
    Double-check your paper, and reboot your computer. Aside from the copy on the piece of paper, the reboot should destroy all traces of the private key. Since the paper now holds the only copy of the private key, do not lose it, or you will lose the ability to spend any funds sent to the address!
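The dice-to-number conversion at the heart of this procedure can be sketched with far fewer rolls. This is only an illustration of the idea, under the assumption that each roll 1-6 maps to a base-6 digit with 6 treated as 0; audit dice2key itself for its exact mapping:

```shell
# Sketch: 8 dice rolls (a real key needs 100) read as a base-6 number,
# printed in hexadecimal. Assumed mapping: 1-5 stay as-is, 6 becomes 0.
ROLLS="1 2 3 4 5 6 1 2"
DIGITS=$(printf '%s' "$ROLLS" | tr -d ' ' | tr '6' '0')
printf '%X\n' "$((6#$DIGITS))"
```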
    Conclusion
    With this method you are creating an airgapped environment that will never touch the internet. Also, we are checking that the code we use its not tampered. If this is followed strictly I see virtually no chances of your keys being hacked.
    How to spend your funds from a securely generated paper wallet.
    Almost all tutorials seen online, will let you import or sweep you private keys into the desktop wallet or mobile wallet which are hot wallets. In the meantime, you are exposed and all of your work to secure the cold storage is being thrown away. This method will let you sign the transaction offline (you will not expose your private key in an online system).
    You'll need:
The source of this method is CryptoGuide on YouTube (https://www.youtube.com/watch?v=-9kf9LMnJpI&t=86s). Basically you can follow his video, as it is foolproof. Please verify that the Electrum download is signed.
    The summarized steps are:
1. Download Electrum on both devices and check that it's signed, for safety.
2. Disconnect your phone from the internet (flight mode = all connections off) and input your private key into Electrum.
3. Generate the transaction on your desktop and export it via QR (never leave unspent BTC or you will lose them).
4. On your phone, open Electrum > Send > QR and scan the transaction exported from the desktop.
5. Sign the transaction on your phone.
6. Export the signed transaction as a QR code.
7. Load the signed transaction into the desktop Electrum and broadcast it to the network.
8. Wait for 3 confirmations before connecting your phone to the internet again.
    Ideas for improvement:
    So thats it. I hope someone can find this helpful or help in creating a better method. If you like, you can donate at 1Che7FG93vDsbes6NPBhYuz29wQoW7qFUH
    submitted by Heron-Express to Bitcoin [link] [comments]

    Version Control in Game Development: 10 Vague Reasons to Use It

    Version Control in Game Development: 10 Vague Reasons to Use It
    Whether you’re a AAA development shop or an indie programmer, building a game will surely take more than just a couple of weekends. Many things can happen between the inception of the game and the time it will be released. To track and manage these changes, developers use version (source) control. Let's talk about version control, branching, and how to select the best version control system.

    https://preview.redd.it/br064yidj0z51.jpg?width=2190&format=pjpg&auto=webp&s=16b91701114c2e185a7e33bde1bebf2634cb396e
The software development process is a long and arduous road. Changes might be introduced to the game mechanics, the admin part of the game, or practically anywhere, especially if you develop a GaaS product.
    These changes need to be tracked. Indeed, you don’t want to simply copy the entire folder of the game project and save it under a different name (like mycoolgame_v02). You will need version management. That’s what version control systems are for.

    What is version control?

    Version control is the practice of tracking and managing changes to the code base. Version control systems provide a running history of how the code changes. Using version control tools also helps to resolve conflicts when merging contributions from multiple sources.

    What is source control?

    Source control and version control are practically interchangeable, but to put a fine point to it, version control is a more general term. Source control systems typically manage mostly textual data — source control typically means source code or program code. On the other hand, version control refers not only to the source code but also to the other assets of the game app, like images, audio, and video resources.

    Branching

    When you think of a branch, you’d typically picture a fork-like structure. Initially, there’s only one path, but then the paths diverge. That’s essentially what a branch is in source control lingo.
    As you build your game app and expose it to testers, QA, and other stakeholders, they will give input that may force you to introduce changes to the game’s source. Most of the time, the changes will be small, but the changes will sometimes be massive. These large changes are inflection points to the development process. This is typically where you decide to branch.
    The purpose of branching in version control is to achieve code isolation. You’re branching probably because the new branch represents the next version of the game, or it could be something smaller, like “let’s fix bug number 12345”. Whatever branching method you choose, you’ll need a version control.
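The code-isolation idea can be sketched in a few commands. Git is used here purely as a common illustration (the file and branch names are made up); any of the systems discussed below offers an equivalent workflow:

```shell
# Sketch: isolating a bug fix on its own branch, leaving the trunk untouched.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com
git config user.name You
echo "v1 mechanics" > gameplay.txt
git add gameplay.txt && git commit -qm "initial mechanics"
git switch -qc fix/bug-12345          # the branch isolates the fix
echo "patched mechanics" > gameplay.txt
git commit -qam "fix bug 12345"
git switch -q -                       # back on the trunk,
cat gameplay.txt                      # the original content is intact
```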

    https://preview.redd.it/693agxrej0z51.png?width=640&format=png&auto=webp&s=1a9672b8137f9a53968d6b4159269559b67db644

    Why use version control in game projects?

    #1 - Code backup

Source control, especially with a remote repository, is a backup for your code. You don't want your hard drive to be a single point of failure, do you? What happens to 10 months of coding work if the drive gets fried? What if your server dies? Do you have an automated backup?

    #2 - Better team collaboration

    Share the code with other contributors and still be in sync with each other. If you’re not using source control, how will you work with other developers? Do you really want to use Dropbox or Google Drive to share source codes? How will you track each other’s changes? Version control systems take care of synching and resolving conflicts or differences with codes from multiple contributors.

    #3 - Roll back to the previous version

    Version control systems are a retreat strategy. Have you ever made breaking changes to the code and realized what a colossal mistake it was? If you ever want to go back, it’s a cinch to do that in a version control system.

    #4 - Experiments with zero risks

It makes experimentation easy. Do you want to try something radical, but you don't want to clutter or pollute your codebase? Branch. If the idea doesn't pan out, just leave the branch and go back to the trunk.

    #5 - Full audit trail

    Provides an audit trail for the codebase. You can go back to previous versions of the code to find out when and where the bugs first crept in.

    #6 - Better release management

Monitor the progress of the code. You can see how much work is being done, by whom, where, and when.

    #7 - Code comparison and analysis

    You can compare versions of your code. When you learn how to use diffing techniques, you can compare versions of your code in a side-by-side fashion.
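The underlying idea is what the stock diff tool does; version control UIs build their side-by-side views on the same mechanism. A minimal sketch with made-up config files:

```shell
# Sketch: comparing two versions of a (hypothetical) game config with diff.
printf 'speed=10\njump=5\n' > settings_v1.cfg
printf 'speed=12\njump=5\n' > settings_v2.cfg
diff settings_v1.cfg settings_v2.cfg || true   # diff exits 1 when files differ
```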

    #8 - Manage different versions of the game

    Maintain multiple versions of your product. Branching strategies should help you maintain different versions of your game/product. It is a common practice for the developers to have at least a production version (free from bugs, well-tested) and a work-in-progress development version.

    #9 - Scaling the game projects and companies

Are you an indie developer? Or are you employed by one of the game giants like Ubisoft, Tencent, or King? Whatever project you are involved in at the moment, you may reach the point where you need to deal with more teammates, run more tests, and fix more bugs. Version control software is an indispensable part of your game's growth.

    #10 - Facilitate the continuous game updates

    Thinking about the previous point, how often do you plan to release your game updates? Do you plan to do it once a year, monthly or weekly?
    The more frequently you update your game, the more likely you’ll need to do the feature branching or release branching to minimize bugs and achieve flawless user experience. Not to mention if you select the games-as-a-service model.

    What to consider when selecting version control systems

    If you’re about to start a project and deciding which version control system to use, you might want to consider the following.
1. Ability to support game projects. Some version control platforms are better suited for application development, where most of the assets are textual (source code), and some are better at handling binary files (audio, video, image assets). Make sure your source control system can handle both.
    2. User experience. The source control platform must be supported by tools. If the platform is a CLI-only (command-line interface), it might be popular amongst developers, but non-dev people (artists, designers) might have difficulty using it. The tools have to be friendly to everybody.
    3. Ecosystem of tools and integrations. Does your CI/CD platform support it? Can Jenkins pull from this repo? Your version control system must play nice with the CI/CD apps in the age of continuous integration. Other questions to ask might be;
    • Can you hook it up with Unreal/Unity?
    • Do our IDEs support it?
    • Is it easy to connect it with Trello? Jira?
4. Hosted or on-premise. Are there companies offering a hosted solution for this version control system? Or do you have to provision a server yourself and find a data center where to park it? Hosting an on-premise source control system has advantages, but it also carries lots of baggage like IT personnel cost, capital cost, depreciation cost, etc. In contrast, a hosted solution lets you avoid all those in exchange for a fee.
5. Single-file versioning ability. Can you check out only a single file, or do you have to download everything? Some version control systems force developers to download all the updates from a central server before you can share or see any change. This might be sensible for application code, but it may not make sense for a game app where some of the assets are large binary files.
6. Access control. Does the system let you control who has access to what? How granular is the control? Can you assign rights down to the file level? Can you assign read but not write privileges to users for particular files?
    Some common version control systems are better at handling some of the things we stated above, and some are better at managing others. You may need to do a comparison matrix to select amongst the version control options.

    If you ask an application developer for recommendation, I’m almost sure they’ll tell you Git, Subversion, or CVS. These are heavy favorites of app devs. They’re open-source software and great at handling textual data, but they may be ill-suited for a game development project because of the way they handle BLOBS or binary files (which a game app has lots of).
If you ask a game developer, you'll get a different recommendation; game development projects have very different version control needs than application development projects. Should it be standalone software or a built-in feature of your database or CMS platform?
    How many people are involved in game development? How many databases? How are localization and content delivery done?
    Gridly features built-in version control, which lets you branch content datasets, tweak them in isolation, and merge them back into the master branch. Sign up for free and make your first branch.
    submitted by LocalizeDirectAB to u/LocalizeDirectAB [link] [comments]


    We have tried, tested, and reviewed the many types of software and know which companies offer the best binary robot trading experience and which software outshines the others. We believe that investing apps are a great way to save time and make money, and to assist you in the quest to become the best binary options robot trader, our advice and recommendations are designed to make this happen.

    Best Binary Options Software. Jay Hawk, Contributor, Benzinga. March 14, 2019; updated July 22, 2020. Benzinga Money is a reader-supported publication. We may earn a commission when you click on ...

    MetaTrader 5 is an advanced multi-asset trading software that includes Forex, CFDs, ... When choosing the best binary options provider, make sure to take into consideration which assets are available to trade. Most brokers list their asset index on their websites for everyone to see; the bigger their list of assets, the more opportunities you have to make a profit.

    I never knew about the possible differences between binary options trading and forex trading. Through this article, however, you can learn about those differences, and about which trading platform you should choose to earn maximum profits.

    Binary options robots and auto-trading software have helped thousands of traders make more efficient trading investments; it is claimed that traders can earn approximately 80% of profits using a binary option robot.

    Here we can really see the impression that the established binary options strategies and methods have been having on the market. In this table you will see what the binarytoday.com readers believe is the best binary options signals service and the best binary options software service.
Conclusion on the best Binary Options Trading Software 2019: IQ Option is currently the best binary options trading software for the private trader. On this page, I have given you a great overview of the platform. Due to time constraints, however, I was not able to provide complete details of all the functions, so you can open a free demo account to test the platform yourself.

Binary options trading software: only the best! Our task is to create the definitive resource on binary options trading software for the modern entrepreneur. Find a wealth of interesting trading features here and stay informed about the latest developments in the binary options trading arena.

5 best automated binary options trading robots: let’s review five of the most popular binary options robots and see how they perform. We compiled the best binary option robot list based on their online presence. Do they really deliver? We will find out. Learn about the best binary options signal providers by signal volume, communication with users, and more.


    Free Binary Options Trading Signals - Best Live Signal Software For Binary Traders Online Review

    http://binaryrobotsignup.com With Binary Option Robot you get the best auto-trading robot software there is. Binary Options Robot - Automated Binary Options Trading Using Binary Option Robot. Test Binary Options Robot here: http://track.logic.expert/67b0b668-c6a4-42...

    This software is 100% user friendly. Using this software you can work with 11 assets and select a trading time of 1 minute to 60 minutes. Frequently Asked Questions: Is this the full version of ...

    Home Online Earners is a smart and automatic binary options trading software that is based on a web-based platform to make it more conv...

    For a free live signal, please visit: https://www.amtradingtips.com. Contact email: [email protected]. For more updates, join the Telegram channel: https://t.me/...

    Best live automated signal binary options trading software app, free download for binary traders. The Binary Options Trading Signals analysis formula follows a series of logical steps ...
