Originally posted in our forums by tech guru Raja, this guide is aimed at those looking to maximize overclocking and tweaking performance on the new Rampage IV Black Edition. It suits anyone from beginner to advanced, so everyone should get something out of it!
Memory Overclocking Guide For Rampage IV Black Edition
This guide provides basic information on how to overclock the Rampage IV Black Edition (R4BE) motherboard, along with information on overclocking in general. Whether you’re using an R4BE motherboard or not, there are valuable tips in this guide that will help you understand the process of stable overclocking on any platform – read on!
For those of you using Ivy-E, there’s already a basic processor overclocking guide here. We’d recommend reading through that overclocking guide in its entirety before continuing. In this memory guide, we provide further information on settings and the basic methodology to help obtain a stable system overclock.
The R4BE is a high-performance motherboard based on the X79 chipset, tuned to extract maximum performance from Intel’s Ivy-E processors. Our engineers have tweaked trace layout and components to improve quad-channel DRAM overclocking potential over first-generation X79 motherboards. DRAM signal-line matching is improved between channels and slots, impedance has been changed to suit Ivy-E processors, and DRAM power components have been upgraded over the Rampage IV Extreme to provide even better regulation. As a result, there are certain situations in which the R4BE can run tighter DRAM timings than the R4E or other X79 boards at equivalent voltages.
Success, as always, depends largely on having a good combination of parts; the rest comes down to a systematic approach to getting from stock settings to an intelligently tuned overclock.
Methodology Basics
This guide assumes you are running UEFI BIOS version 0507 or later.
By default, UEFI parameters that relate to overclocking have automated scaling routines embedded in the background that will change/increase voltages to facilitate overclocking. In other words, we can simply change the processor multiplier ratio (within realistic limits), apply XMP for DRAM and let the board do the rest. However, there are processors and DRAM kits that may need special tuning for full stability. Either the CPU needs more voltage than average for a given frequency, or perhaps the DRAM modules being used aren’t stable at the timings or memory controller voltages the board applies on Auto. It’s the methodology for getting through these situations we’ll focus on today.
One common mistake we deal with on forums relates to users trying to push processors too far, too soon. New users come to a forum, have a look around, and focus on copying the best overclocking results they can find, expecting their own processors to reach the same frequency as a matter of course. Unfortunately, the system ends up unstable, with the user frustrated and unsure of why there is an issue or what needs to be adjusted.
We can therefore conclude that a non-systematic approach to overclocking leaves us with no clue about what to adjust when faced with instability: is the system unstable because CPU core voltage is too low? Is it related to DRAM? Could it be that the processor’s memory controller (IMC) can’t handle high DRAM speeds? It could be any and all of these at the same time, or it could simply come down to having a worse CPU sample than “KingD”, who was either really lucky or purchased multiple CPUs to find a good one. That’s why we advise a systematic and gradual approach – use the results of others as a reference, but don’t be fooled into thinking your parts can do the same simply by copying settings.
So what’s systematic and gradual in the context of overclocking? Well, it includes methods that focus on certain parts of the system to evaluate overclocking potential before shooting for the moon. We can use different types of stress tests to focus on the CPU cores, and we can also use programs that focus more on memory to get a feel for what is possible with our combination of parts. Once we’ve got a feel for how things react, we can add more comprehensive forms of stress testing to evaluate the system as a whole.
We should stress that it is imperative to perform a basic evaluation of stability before attempting any type of overclocking. Leave everything at default parameters and check that the system is stable and working as it should be. A tool like ROG Realbench is perfect for this task, allowing us to check stability in a manner akin to real-world applications and system loads.
We should also install a third-party temperature application at this stage to check that CPU and system temperatures are within comfortable bounds at idle and when the system is under load. If anything is running too hot or is unstable, it’s pointless trying to overclock the system without attending to the issue first.
If the system is stable at stock speeds and has suitable headroom in terms of temperature, we can move on to overclocking. The process is best started by isolating one side of the processor before overclocking the system as a whole. As an example, we can use various memory stress tests to evaluate how good our memory modules are and how good the memory controller in our CPU sample is. This is especially important for those of us who purchase high-speed memory kits. It’s possible that the processor is perfectly stable at high memory speeds with little manual adjustment of voltages, but it’s also possible that the memory controller in our sample needs voltage or memory timing adjustments to be stable. At worst, the memory controller could be completely unstable at the desired operating frequency regardless of adjustments, in which case we have to accept a lower operating point. The latter can be a painful realization if one has purchased an expensive, high-performance memory kit. Such things do happen.
Once we’ve determined the memory can run at a given frequency, we can set a lower memory operating frequency for the sake of isolating the CPU to find its overclocking potential. This eliminates guesswork, as the memory will almost certainly be unconditionally stable at the lower operating speed, giving us fewer variables to fight against while we evaluate the overclocking potential of the CPU cores. At this stage, we use a stress-test routine that focuses primarily on the CPU. We increase the CPU multiplier ratio by 1 and then see what kind of voltage the CPU needs to be stable. Once the required voltage for stability at the new frequency is found, we increase the multiplier ratio again and repeat the stability-testing and voltage-tuning process.
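There’s no real API for any of this, of course – settings are applied in UEFI and tested after a reboot – but the methodology maps naturally onto a simple loop. The sketch below is purely illustrative Python: `run_cpu_stress_test`, its result fields, and the step/limit values are hypothetical placeholders, not R4BE settings or recommendations for any particular CPU.

```python
# Purely illustrative sketch of the step-and-test loop described above.
# run_cpu_stress_test() is a hypothetical stand-in for "apply settings in
# UEFI, reboot, run a CPU-focused stress test and log the outcome".

VCORE_STEP = 0.01     # volts added after each failed stability run (example)
VCORE_LIMIT = 1.40    # example voltage ceiling
TEMP_LIMIT_C = 80     # example load-temperature ceiling

def find_stable_vcore(multiplier, vcore, run_cpu_stress_test):
    """Raise core voltage until this multiplier passes the stress test.
    Returns the stable voltage, or None if a safety limit is reached."""
    while vcore <= VCORE_LIMIT:
        result = run_cpu_stress_test(multiplier, vcore)
        if result.stable and result.max_temp_c <= TEMP_LIMIT_C:
            return vcore
        if result.max_temp_c > TEMP_LIMIT_C:
            return None               # out of cooling headroom - back off
        vcore += VCORE_STEP           # unstable but cool enough: add voltage
    return None                       # voltage ceiling reached

def walk_multipliers(start_mult, start_vcore, run_cpu_stress_test):
    """Step the multiplier by 1 at a time, keeping notes as we go."""
    notes = {}                        # multiplier -> voltage that proved stable
    mult, vcore = start_mult, start_vcore
    while True:
        stable_v = find_stable_vcore(mult, vcore, run_cpu_stress_test)
        if stable_v is None:
            break                     # the previous entry in notes is our keeper
        notes[mult] = stable_v
        mult, vcore = mult + 1, stable_v
    return notes
```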
Keeping notes is valuable at this stage and will reveal a near-linear pattern of voltage versus frequency. Eventually, we arrive at a point where we’ve either run out of cooling potential (temps are too high), or we need a huge voltage increase to get the processor stable. This is where I personally back off and select the lower operating point. Why? Because the current drawn by the processor is proportional to both voltage and operating frequency. Choosing the lower point is kinder to the processor from a longevity standpoint, and there’s a better chance the system will remain stable over the long term.
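To see why that last jump is so costly, recall the standard approximation for CMOS dynamic power – it scales linearly with frequency but with the square of voltage:

```latex
P_{dyn} \approx C \cdot V^2 \cdot f \qquad \text{(C = effective switched capacitance)}
```

So, using purely illustrative numbers, moving from 1.20 V at 4.0 GHz to 1.35 V at 4.4 GHz multiplies dynamic power by roughly (1.35/1.20)² × (4.4/4.0) ≈ 1.27 × 1.10 ≈ 1.39 – about 39% more heat and current for a 10% frequency gain.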
A Few Overclocking Technicalities Analogized
Why does a system become unstable when it is overclocked? There are numerous reasons, actually – more than one could cover in a single article, and many require an electrical engineering background to both write and understand. Electrical engineers we are not… well, most of us (including me) are not, anyway, so we’re going to try and keep things simple.
Fundamentally, the role of a processor is to calculate, write and read data. At the core level, this data is represented and moved around the system as 1’s and 0’s (binary patterns). Let’s look at a crude visual representation of how data is represented at the electrical level:
![DQ signal waveform]()
The “wavy” line is the signal alternating between a high and low voltage to represent 1 and 0. In this brief example the data pattern being transmitted from the memory bus to the processor is 101010.
- VOH (voltage output high) is the high output voltage level of the transmitter that represents a logic 1, while VOL (voltage output low) is the low output voltage level representing a logic 0.
- VREF is the reference voltage. The reference is typically set at the midpoint between VOL and VOH (see the relation just after this list).
- VDDQ (not shown, and known as DRAM Voltage on motherboards) is the supply voltage for VOH and VOL, and the voltage from which VREF is derived.
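Putting those definitions together – and assuming the usual midpoint convention, since exact levels vary by signaling standard:

```latex
V_{REF} = \frac{V_{OH} + V_{OL}}{2} \approx 0.5 \cdot V_{DDQ}
```

With the example levels used below (VOH = 0.8 × VDDQ, VOL = 0.2 × VDDQ), the midpoint works out to exactly 0.5 × VDDQ.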
Most signal stages have three states: VOH, VOL and “off” – known as tri-state transceivers. Typically, the “off” state will be a certain level lower than VOL but above ground potential. The off state is required to prevent inadvertent data transmission. This standing voltage (bias) in the “off” state is attached to a compensation network that holds the voltage below VOL when the signal line is not transmitting. Compensation is usually in the form of a resistor to ground, but can be something more elaborate if required. The reason a certain level of bias is present, and the line is not at ground potential in the “off” state, comes down to a number of factors that fall outside the scope of this article.
For this example, let us assume VOH is around 80% of VDDQ, VOL is 20% of VDDQ, and VREF is 50% of VDDQ. The signal swings between VOL and VOH to represent data as a 1 or 0, while a strobe compares the signal against VREF. If the signal voltage is higher than VREF, the receiver determines it to be a logic 1; if it is below VREF, it is interpreted as a logic 0. Needless to say, the process of transferring the data and interpreting it accurately requires that the transmitter and receiver be in close timing sync. If there is a lack of synchronization between the transmitted signal and the strobe, the data could be read erroneously.
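To make that comparison concrete, here’s a minimal sketch in Python of the receive-side decision just described. The voltage samples, the 80%/20%/50% levels and the strobe timing are all illustrative assumptions carried over from the example above, not measurements from real hardware:

```python
# Minimal sketch of strobe-based data recovery: at each strobe instant the
# receiver compares the sampled line voltage against VREF to decide 1 or 0.
VDDQ = 1.5                 # example DRAM voltage in volts
VOH = 0.8 * VDDQ           # logic-1 output level from the example above
VOL = 0.2 * VDDQ           # logic-0 output level
VREF = 0.5 * VDDQ          # reference: midpoint between VOH and VOL

def decode(samples_at_strobe):
    """Interpret each strobed voltage sample as a bit by comparing to VREF."""
    return [1 if v > VREF else 0 for v in samples_at_strobe]

# Ideal samples for the pattern 101010, with a little noise added:
samples = [VOH - 0.03, VOL + 0.02, VOH - 0.05, VOL + 0.04, VOH, VOL]
print(decode(samples))     # -> [1, 0, 1, 0, 1, 0]

# If the strobe fires while the signal is mid-transition (poor timing sync),
# the sampled voltage can sit near VREF and the bit may be read incorrectly:
mistimed = 0.48 * VDDQ     # sampled during a falling edge, just below VREF
print(decode([mistimed]))  # -> [0], even if a 1 was being transmitted
```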
In an ideal world, the signal waveform would be perfectly symmetrical as it transitions between high and low states, never crossing above VOH or below VOL. The keen-eyed among you will notice in the diagram above that the signal varies slightly from one transition to the next. That’s mostly because I’m crap at drawing, but in this case, fortunately, the rendition fits! The waveforms are non-symmetrical and have different levels of excursion past VOH or VOL (overshoot and undershoot). There are various reasons why these issues occur: power supply fluctuation, jitter, impedance issues and noise, to name a few. We won’t go into all the factors leading to these problems, as many fall outside the scope of this article. However, we can break down what happens as a result of them by using a real-world analogy in a context we can relate to.